With the AI landscape changing so rapidly, choosing the right tooling for AI enablement is increasingly important. In our team’s last engagement, the customer was an Independent Software Vendor (ISV) that had grown through acquisitions; many of their products were poorly integrated, making more traditional approaches to cross-product functionality very difficult to achieve. AI was seen as a way to bridge the gap between these many systems. The customer was seeking to build a general AI orchestrator that could be easily customized by adding or restricting functionality, based on the selection of products purchased by their customers as well as the roles and capabilities of their customers’ end users. While building a general AI solution is not usually recommended, our team found success using Semantic Kernel as our tool of choice, and we identified valuable learnings to share back with the ISE community.
The Semantic Kernel SDK empowers developers to harness AI capabilities through a common plugin representation, automatic orchestration of those plugins, access to observability data on the plugins called, and much more (available via NuGet for .NET and pip for Python). For these reasons, along with having contact with the Semantic Kernel team here at Microsoft, our dev crew found the SDK to be a flexible and powerful tool with which to approach a challenging AI enablement problem.
The next section details our team’s key learnings specific to using Semantic Kernel, along with code snippets using the C# version of the Semantic Kernel SDK. The final section walks through some general pointers for implementing AI solutions based on the broader insights this customer engagement provided us. This blog post is meant to help inform others on how the Semantic Kernel SDK might be leveraged, as well as enhance current understanding of implementing AI solutions.
Key Learnings
Function invocation filters allow custom logic to be executed when a function is invoked. This feature can be incredibly helpful when debugging or during prompt engineering, as a filter can be added to the kernel to capture the FunctionInvocationContext. This context contains useful information on which function was called and with what arguments. Here is our implementation of a function invocation filter that captures the `Context`s:

ContextCaptureInvocationFilter.cs

```csharp
// -----------------------------------------------------------------------
//
// Copyright (c) Microsoft. All rights reserved. Licensed under the MIT license.
// See LICENSE file in the project root for full license information.
//
// -----------------------------------------------------------------------

#pragma warning disable SKEXP0001 // Disables the SKEXP0001 (Experimental) warning for stylecop purposes.

namespace Heartland.Ai.Generative.Orchestrator.Prompts.KernelBuilders
{
    using System.Diagnostics;
    using System.Diagnostics.CodeAnalysis;
    using Microsoft.Extensions.Logging;
    using Microsoft.SemanticKernel;

    /// <summary>
    /// Class ContextCaptureInvocationFilter.
    /// Implements the <see cref="IContextCaptureInvocationFilter"/>.
    /// </summary>
    public class ContextCaptureInvocationFilter : IContextCaptureInvocationFilter
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="ContextCaptureInvocationFilter"/> class.
        /// </summary>
        /// <param name="activitySource">The activity source.</param>
        /// <param name="logger">The logger.</param>
        public ContextCaptureInvocationFilter(ActivitySource activitySource, ILogger logger)
        {
            this.ActivitySource = activitySource;
            this.Logger = logger;
        }

        /// <inheritdoc/>
        public IList<FunctionInvocationContext> Contexts { get; } = new List<FunctionInvocationContext>();

        private ActivitySource ActivitySource { get; }

        private ILogger Logger { get; }

        /// <inheritdoc/>
        public async Task OnFunctionInvocationAsync(
            FunctionInvocationContext context,
            Func<FunctionInvocationContext, Task> next)
        {
            if (!this.Contexts.Contains(context))
            {
                this.Contexts.Add(context);
            }

            using (var activity = this.ActivitySource.StartActivity($"{context.Function.PluginName}:{context.Function.Name}"))
            {
                this.Logger.LogDebug("Starting invocation. Skill: {skill}; Function: {function}", context.Function.PluginName, context.Function.Name);
                try
                {
                    await next(context);
                }
                catch (Exception ex)
                {
                    this.Logger.LogError(ex, "Error during invocation. Skill: {skill}; Function: {function}", context.Function.PluginName, context.Function.Name);
                    activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
                    throw;
                }

                this.Logger.LogInformation("Invocation completed. Skill: {skill}; Function: {function}", context.Function.PluginName, context.Function.Name);
            }
        }
    }
}

#pragma warning restore SKEXP0001
```

`IContextCaptureInvocationFilter` implements the `IFunctionInvocationFilter` interface. For each request in which we want to enable telemetry, this filter is added to the request’s ephemeral kernel, and the filter’s `Context`s are added to the response object. Using the kernel `Context` along with Semantic Kernel’s `ChatHistory` object makes debugging Semantic Kernel invocations much easier.
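To show how this fits together, here is a sketch of attaching the filter to a request’s ephemeral kernel; the `request` and `response` objects are hypothetical placeholders for our request-handling types, not part of the SDK:

```csharp
// Sketch: attach the capture filter to the request's ephemeral kernel.
var filter = new ContextCaptureInvocationFilter(activitySource, logger);
kernel.FunctionInvocationFilters.Add(filter);

var result = await kernel.InvokePromptAsync(request.Prompt);

// filter.Contexts now holds one FunctionInvocationContext per function call,
// which can be serialized onto the response object for debugging.
response.SemanticKernelContext = new { filter.Contexts };
```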
In the following example response, you can see everything the `Context` provides, as well as the additional details returned from `ChatHistory`, such as tokens used or any prompt safety filters applied.

response.json
{ "responseId": "3fa85f64-5717-4562-b3fc-2c963f66afa6", "request": { "requestId": "3fa85f64-5717-4562-b3fc-2c963f66afa6", "ignoreHistory": true, "persona": "MathTeacher", "prompt": "What is 1+1?" }, "responseText": "1 plus 1 equals 2.", "chatHistory": [ { "role": { "label": "system" }, "items": [ { "$type": "TextContent", "text": "You are a helpful 2nd grade Math teacher that helps with basic math problems" } ] }, { "role": { "label": "user" }, "items": [ { "$type": "TextContent", "text": "What is 1+1?" } ] }, { "role": { "label": "assistant" }, "items": [ { "$type": "FunctionCallContent", "id": "call_5TUUxnInjz7gkoUM13CmqYGw", "pluginName": "MathSkill", "functionName": "Add", "arguments": { "number1": "1", "number2": "1" } } ], "modelId": "gpt-4", "metadata": { "Id": "chatcmpl-AAia5d2ayVDI3jSBlwNB8iFyJZTya", "Created": "2024-09-23T19:00:13+00:00", "PromptFilterResults": [ { "promptIndex": 0, "contentFilterResults": { "sexual": { "filtered": false, "severity": {} }, "violence": { "filtered": false, "severity": {} }, "hate": { "filtered": false, "severity": {} }, "selfHarm": { "filtered": false, "severity": {} }, "profanity": null, "customBlocklists": null, "error": null, "jailbreak": null, "indirectAttack": null } } ], "SystemFingerprint": "fp_5603ee5e2e", "Usage": { "completionTokens": 22, "promptTokens": 275, "totalTokens": 297 }, "ContentFilterResults": { "sexual": null, "violence": null, "hate": null, "selfHarm": null, "profanity": null, "customBlocklists": null, "error": null, "protectedMaterialText": null, "protectedMaterialCode": null }, "FinishReason": "tool_calls", "FinishDetails": null, "LogProbabilityInfo": null, "Index": 0, "Enhancements": null, "ChatResponseMessage.FunctionToolCalls": [ { "name": "MathSkill-Add", "arguments": "{\"number1\":1,\"number2\":1}", "id": "call_5TUUxnInjz7gkoUM13CmqYGw" } ] } }, { "role": { "label": "tool" }, "items": [ { "$type": "TextContent", "text": "2", "metadata": { "ChatCompletionsToolCall.Id": 
"call_5TUUxnInjz7gkoUM13CmqYGw" } }, { "$type": "FunctionResultContent", "callId": "call_5TUUxnInjz7gkoUM13CmqYGw", "pluginName": "MathSkill", "functionName": "Add", "result": "2" } ], "metadata": { "ChatCompletionsToolCall.Id": "call_5TUUxnInjz7gkoUM13CmqYGw" } }, { "role": { "label": "assistant" }, "items": [ { "$type": "TextContent", "text": "1 plus 1 equals 2.", "modelId": "gpt-4", "metadata": { "Id": "chatcmpl-AAia7q1hFUPxRNASeiY1TDQfMCAu8", "Created": "2024-09-23T19:00:15+00:00", "PromptFilterResults": [ { "promptIndex": 0, "contentFilterResults": { "sexual": { "filtered": false, "severity": {} }, "violence": { "filtered": false, "severity": {} }, "hate": { "filtered": false, "severity": {} }, "selfHarm": { "filtered": false, "severity": {} }, "profanity": null, "customBlocklists": null, "error": null, "jailbreak": null, "indirectAttack": null } } ], "SystemFingerprint": "fp_5603ee5e2e", "Usage": { "completionTokens": 9, "promptTokens": 308, "totalTokens": 317 }, "ContentFilterResults": { "sexual": { "filtered": false, "severity": {} }, "violence": { "filtered": false, "severity": {} }, "hate": { "filtered": false, "severity": {} }, "selfHarm": { "filtered": false, "severity": {} }, "profanity": null, "customBlocklists": null, "error": null, "protectedMaterialText": null, "protectedMaterialCode": null }, "FinishReason": "stop", "FinishDetails": null, "LogProbabilityInfo": null, "Index": 0, "Enhancements": null } } ], "modelId": "gpt-4", "metadata": { "Id": "chatcmpl-AAia7q1hFUPxRNASeiY1TDQfMCAu8", "Created": "2024-09-23T19:00:15+00:00", "PromptFilterResults": [ { "promptIndex": 0, "contentFilterResults": { "sexual": { "filtered": false, "severity": {} }, "violence": { "filtered": false, "severity": {} }, "hate": { "filtered": false, "severity": {} }, "selfHarm": { "filtered": false, "severity": {} }, "profanity": null, "customBlocklists": null, "error": null, "jailbreak": null, "indirectAttack": null } } ], "SystemFingerprint": "fp_5603ee5e2e", "Usage": { 
"completionTokens": 9, "promptTokens": 308, "totalTokens": 317 }, "ContentFilterResults": { "sexual": { "filtered": false, "severity": {} }, "violence": { "filtered": false, "severity": {} }, "hate": { "filtered": false, "severity": {} }, "selfHarm": { "filtered": false, "severity": {} }, "profanity": null, "customBlocklists": null, "error": null, "protectedMaterialText": null, "protectedMaterialCode": null }, "FinishReason": "stop", "FinishDetails": null, "LogProbabilityInfo": null, "Index": 0, "Enhancements": null } } ], "semanticKernelContext": { "Contexts": [ { "CancellationToken": { "IsCancellationRequested": false, "CanBeCanceled": false, "WaitHandle": { "Handle": { "value": 3936 }, "SafeWaitHandle": { "IsInvalid": false, "IsClosed": false } } }, "Kernel": { "Plugins": [ [ { "Name": "Sqrt", "PluginName": "MathSkill", "Description": "Take the square root of a number", "Metadata": { "Name": "Sqrt", "PluginName": "MathSkill", "Description": "Take the square root of a number", "Parameters": [ { "Name": "number1", "Description": "The number to take a square root of", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } } ], "ReturnParameter": { "Description": "", "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, "AdditionalProperties": {} }, "ExecutionSettings": null }, { "Name": "Add", "PluginName": "MathSkill", "Description": "Add two numbers", "Metadata": { "Name": "Add", "PluginName": "MathSkill", "Description": "Add two numbers", "Parameters": [ { "Name": "number1", "Description": "The first number to add", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", 
"Schema": { "RootElement": { "ValueKind": 1 } } }, { "Name": "number2", "Description": "The second number to add", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } } ], "ReturnParameter": { "Description": "", "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, "AdditionalProperties": {} }, "ExecutionSettings": null }, { "Name": "Subtract", "PluginName": "MathSkill", "Description": "Subtract two numbers", "Metadata": { "Name": "Subtract", "PluginName": "MathSkill", "Description": "Subtract two numbers", "Parameters": [ { "Name": "number1", "Description": "The first number to subtract from", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, { "Name": "number2", "Description": "The second number to subtract away", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } } ], "ReturnParameter": { "Description": "", "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, "AdditionalProperties": {} }, "ExecutionSettings": null }, { "Name": "Multiply", "PluginName": "MathSkill", "Description": "Multiply two numbers. When increasing by a percentage, don't forget to add 1 to the percentage.", "Metadata": { "Name": "Multiply", "PluginName": "MathSkill", "Description": "Multiply two numbers. 
When increasing by a percentage, don't forget to add 1 to the percentage.", "Parameters": [ { "Name": "number1", "Description": "The first number to multiply", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, { "Name": "number2", "Description": "The second number to multiply", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } } ], "ReturnParameter": { "Description": "", "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, "AdditionalProperties": {} }, "ExecutionSettings": null }, { "Name": "Divide", "PluginName": "MathSkill", "Description": "Divide two numbers", "Metadata": { "Name": "Divide", "PluginName": "MathSkill", "Description": "Divide two numbers", "Parameters": [ { "Name": "number1", "Description": "The first number to divide from", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, { "Name": "number2", "Description": "The second number to divide by", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } } ], "ReturnParameter": { "Description": "", "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, "AdditionalProperties": {} }, "ExecutionSettings": null } ] ], 
"FunctionInvocationFilters": [], "PromptRenderFilters": [], "AutoFunctionInvocationFilters": [], "Services": {}, "Culture": "(Default)", "LoggerFactory": {}, "ServiceSelector": {}, "Data": {} }, "Function": { "Name": "Add", "PluginName": "MathSkill", "Description": "Add two numbers", "Metadata": { "Name": "Add", "PluginName": "MathSkill", "Description": "Add two numbers", "Parameters": [ { "Name": "number1", "Description": "The first number to add", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, { "Name": "number2", "Description": "The second number to add", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } } ], "ReturnParameter": { "Description": "", "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, "AdditionalProperties": {} }, "ExecutionSettings": null }, "Arguments": { "number1": "1", "number2": "1" }, "Result": { "Function": { "Name": "Add", "PluginName": "MathSkill", "Description": "Add two numbers", "Metadata": { "Name": "Add", "PluginName": "MathSkill", "Description": "Add two numbers", "Parameters": [ { "Name": "number1", "Description": "The first number to add", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, { "Name": "number2", "Description": "The second number to add", "DefaultValue": null, "IsRequired": true, "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", 
"Schema": { "RootElement": { "ValueKind": 1 } } } ], "ReturnParameter": { "Description": "", "ParameterType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "Schema": { "RootElement": { "ValueKind": 1 } } }, "AdditionalProperties": {} }, "ExecutionSettings": null }, "Metadata": null, "Culture": "(Default)", "ValueType": "System.Double, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e", "RenderedPrompt": null, "Value": "2" } } ] }, "error": false }
NOTE: Exposing the kernel Context and ChatHistory should only ever be done in development. This data contains a lot of information on the inner workings of the kernel, which may open attack vectors for your application.
Another benefit of using Semantic Kernel is that it integrates seamlessly with OpenTelemetry. As stated in the linked post, capturing the traces and metrics produced by Semantic Kernel is as simple as adding “Microsoft.SemanticKernel*” as both a meter and a trace source in your application’s OpenTelemetry configuration. Serilog is a great option for exporting any logs the application produces to the destination set in the OpenTelemetry configuration. We were able to extend observability into the plugins themselves by using dependency injection to provide a logger, an activity source, and a meter.
For example, when adding OpenTelemetry to a DI container’s service collection, Semantic Kernel traces and metrics can be configured with:

```csharp
services.AddOpenTelemetry()
    .ConfigureResource(builder => builder.AddService(OtelServiceName, OtelServiceVersion))
    .WithTracing(builder => builder.AddSource("Microsoft.SemanticKernel*"))
    .WithMetrics(builder => builder.AddMeter("Microsoft.SemanticKernel*"));
```
One challenge we faced was ensuring users could only use the plugins applicable to their role and that they were authorized to access. If we had used a single, fixed kernel with all our possible plugins loaded into it, it would have been impossible to ensure isolated groups of plugins. To solve this, we came up with the idea of “personas”, each of which has access to a defined list of plugins for its role. These plugins are loaded into an ephemeral, per-request kernel based on the user’s role. In this way, we could safeguard against the model choosing a plugin that the user shouldn’t have access to. Using this persona model, it would also be possible to implement an authentication flow for when a user attempts to assume a persona. Additionally, we wanted our solution to be LLM-agnostic, so we developed “KernelBuilders” for the various models to be supported. For example, when using Azure OpenAI, that kernel builder adds a ChatCompletion service to our DI container so that it is accessible to the kernel.
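As a rough sketch of the persona approach (the `Persona` type, its `AllowedPlugins` list, and the `deploymentName`/`endpoint`/`apiKey` values are illustrative placeholders, not our exact types):

```csharp
// Build an ephemeral, per-request kernel containing only the plugins
// the user's persona is allowed to use.
public Kernel BuildKernelForPersona(Persona persona)
{
    var builder = Kernel.CreateBuilder();

    // In our solution a model-specific "KernelBuilder" registered the
    // chat completion service; Azure OpenAI is shown here as an example.
    builder.AddAzureOpenAIChatCompletion(deploymentName, endpoint, apiKey);

    var kernel = builder.Build();
    foreach (KernelPlugin plugin in persona.AllowedPlugins)
    {
        kernel.Plugins.Add(plugin);
    }

    return kernel;
}
```

Because the kernel is rebuilt per request, a plugin that is not in the persona’s allow-list simply does not exist from the model’s point of view.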
At the start of our customer engagement, we focused on the Semantic Kernel implementations for both function calling and handlebars planners based on this post from the SK team in December of 2023. After adapting our application to accommodate either implementation, new guidance was released this past summer of 2024 to use “vanilla” function calling in lieu of planners (for now 🙂). Semantic Kernel can be configured to automatically call functions by setting:
```csharp
public OpenAIPromptExecutionSettings OpenAIPromptExecutionSettings { get; set; } =
    new OpenAIPromptExecutionSettings
    {
        ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions,
    };
```
As of v1.20.0 of Semantic Kernel, more flexible function choice behaviors were released.
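For illustration, the newer API can express the same auto-invocation behavior roughly as follows (a sketch based on the current SDK surface):

```csharp
var settings = new OpenAIPromptExecutionSettings
{
    // Lets the model choose among all registered functions and auto-invoke them.
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
};
```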
Other Important Findings
Even with prompt engineering and thorough testing, it is still possible for the incorrect plugin/function to be called or for the model to hallucinate. In addition to the requirements and best practices outlined by the Microsoft Responsible AI Standard v2, some other suggested ways to safeguard your applications and systems include:
- If possible, only implement functions with “read” access to non-confidential data.
- Treat the LLM as an external user and validate any requests and actions triggered via function calling.
- Implement a feedback mechanism for identifying gaps.
- Present “suggested prompts” to the user that are known to consistently work.
- Consider a re-try mechanism to have the LLM check for a different plugin.
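To illustrate the last point, a retry wrapper might look something like this (a hypothetical sketch; `IsAcceptable` stands in for whatever validation your application applies, such as an analyzer score threshold):

```csharp
async Task<string> InvokeWithRetryAsync(Kernel kernel, string prompt, int maxAttempts = 3)
{
    for (var attempt = 1; attempt <= maxAttempts; attempt++)
    {
        FunctionResult result = await kernel.InvokePromptAsync(prompt);
        var text = result.ToString();

        if (IsAcceptable(text)) // placeholder validation hook
        {
            return text;
        }
    }

    throw new InvalidOperationException("No acceptable response after retries.");
}
```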
Consider using or developing your own “analyzer” for prompt engineering and plugin development. While Semantic Kernel proved to be an excellent framework for building an AI application, our team found that we needed additional tooling for plugin development and response testing. It was very important to understand how our entire application (with many plugins, personas, and logic) would respond to a request, not just how a single plugin operates. To meet our specific needs, we developed a lightweight “SK + LLM Response Analyzer Toolkit”. This enabled us to run repeated, scripted user prompts through an SK-driven conversation and track the results over time for offline analysis. We created a JSON schema for a standard analysis input that the `Analyzer` could parse and execute. Here is an example of this input JSON:

input.json
{ "settings": { "persona": "LightSwitchAgent", "analyzerDefaults": [ { "type": "LLM", "options": { "systemMessage": "You are a linguistics analyzer comparing two pieces of text for semantic similarity. For each pair of texts, output a score from 0 (no similarity) to 100 (identical) of semantic similarity. Output just a number, and take extra time if needed.", "prompt": "The two texts are:\n- {expectedResponseText}\n- {responseText}" } }, { "type": "CosineSimilarity", "options": { "k": 3 } } ] }, "prompts": [ { "prompt": "Is the light on?", "expectedResponse": "The light is off.", "analyzers": [ { "type": "LLM" }, { "type": "CosineSimilarity" }, { "type": "Levenshtein" }, { "type": "Keyword", "options": { "keywords": ["off"] } }, { "type": "FunctionInvocation", "options": { "functions": [ { "pluginName": "LightSwitchSkill", "functionName": "GetLightState", "result": "off", "Parameters": {} } ] } } ] } ] }
In this file, you will notice we specify a list of analyzers. There are many methods for analyzing responses; the right method is use-case dependent, and often multiple analysis methods should be used together. Some of the “analyzers” we developed included:
- String Similarity Analyzers
- StringEqualityAnalyzer: Determines if the expected and actual responses are equivalent.
- CosineAnalyzer: Provides the cosine similarity for the expected response compared to the actual response.
- LevenshteinAnalyzer: Provides the Levenshtein similarity for the expected response compared to the actual response.
- KeywordAnalyzer: For a given list of expected keywords, this will return how many are found in the response.
- Semantic Kernel Analyzers
- FunctionAnalyzer: Determines if the expected plugins were used to process the prompt.
- AI Analyzers
- TokenUsageAnalyzer: Compares expected token usage with the actual number of tokens used.
- LLMAnalyzer: Utilizes a user-defined system prompt to evaluate the response to a provided prompt.
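As a flavor of how simple some of these analyzers can be, a keyword analyzer along these lines might look like the following (a sketch, not our exact toolkit code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class KeywordAnalyzer
{
    // Returns the fraction of expected keywords found in the response
    // (1.0 means every expected keyword was present).
    public static double Score(string response, IReadOnlyList<string> keywords)
    {
        if (keywords.Count == 0)
        {
            return 1.0;
        }

        var found = keywords.Count(
            k => response.Contains(k, StringComparison.OrdinalIgnoreCase));
        return (double)found / keywords.Count;
    }
}
```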
An example output given our above input:
output.json
[ { "prompt": "Is the light on?", "responseText": "The light is currently off.", "type": "LLM", "result": { "systemMessage": "You are a linguistics analyzer comparing two pieces of text for semantic similarity. For each pair of texts, output a score from 0 (no similarity) to 100 (identical) of semantic similarity. Output just a number, and take extra time if needed.", "prompt": "The two texts are:\n- The light is off.\n- The light is currently off.", "response": "98" } }, { "prompt": "Is the light on?", "responseText": "The light is currently off.", "type": "CosineSimilarity", "result": { "expected": "The light is off.", "actual": "The light is currently off.", "options": { "k": 3 }, "result": 0.7229568912920512 } }, { "prompt": "Is the light on?", "responseText": "The light is currently off.", "type": "Levenshtein", "result": { "expected": "The light is off.", "actual": "The light is currently off.", "result": 0.6296296296296297 } }, { "prompt": "Is the light on?", "responseText": "The light is currently off.", "type": "Keyword", "result": { "actual": "The light is currently off.", "keywordsExpected": ["off"], "keywordsFound": ["off"], "result": 1 } }, { "prompt": "Is the light on?", "responseText": "The light is currently off.", "type": "FunctionInvocation", "result": { "expected": [ { "pluginName": "LightSwitchSkill", "functionName": "GetLightState", "result": "off", "parameters": {} } ], "actual": [ { "pluginName": "LightSwitchSkill", "functionName": "GetLightState", "result": "off", "parameters": {} } ], "found": [ { "pluginName": "LightSwitchSkill", "functionName": "GetLightState", "result": "off", "parameters": {} } ], "result": 1 } } ]
Continuously be on the lookout for updates to Semantic Kernel, OpenAI, and to other models in the industry! We all know AI is rapidly evolving and there are always new things to learn and more ways to empower our customers!
Summary
Implementing AI solutions, whether simple or complex, can be a daunting endeavor. Choosing the appropriate tooling can make a large impact on ease of development of the solution, as well as the success of the final result. We found Semantic Kernel to be an effective tool in approaching a complex problem. Additionally, continuous updates to the Semantic Kernel SDK are encouraging signs that this tool will be able to keep up with the evolving AI ecosystem. Finally, as with traditional engineering best practices, AI implementations also require proper validation, governance, and maintenance.
Thumbnail generated using DALL-E 3.