Introducing v1.0.0 Beta6 for the .NET Semantic Kernel SDK

Matthew Bolanos

Semantic Kernel v1.0 has shipped, and the contents of this blog entry are now out of date.

The Semantic Kernel team continues to make improvements to our beta release of the .NET library. Our sixth Beta release is the most jam-packed one so far, so we wanted to take the opportunity to share everything new inside it.

As a reminder, if you want visibility into our full plans for v1 of Semantic Kernel, please check out this blog post. Additionally, if you need to update from pre-Beta, we recommend our first Beta blog post, where we outline some of the initial breaking changes.

Semantic functions are getting way more powerful.

With this release, we’ve made two substantial improvements to how semantic functions work. We now have Handlebars support out-of-the-box, and you can start authoring semantic functions that take full control of the chat completion API.

Initial Handlebars support (with extensibility for more!)

With Semantic Kernel, we already provided an out-of-the-box template engine. We heard from the community, however, that this template engine was too limiting, so with Beta6 we’re now adding support for Handlebars.

var templateFactory = new HandlebarsPromptTemplateFactory();
var handlebarsPrompt = "Hello AI, my name is {{name}}. What is the origin of my name?";

// Create the semantic function with Handlebars
var skfunction = kernel.CreateSemanticFunction(
    promptTemplate: handlebarsPrompt,
    functionName: "MyFunction",
    promptTemplateConfig: new PromptTemplateConfig()
    {
        // "handlebars" is the template format handled by HandlebarsPromptTemplateFactory
        TemplateFormat = "handlebars"
    },
    promptTemplateFactory: templateFactory
);

// Create your variables
var variables = new ContextVariables()
{
    { "name", "Bob" }
};

// Run your semantic function
var result = await kernel.RunAsync(skfunction, variables);

If you have a different templating language that you favor (e.g., Jinja2 or Liquid), we also used this release to make template engines pluggable. You can follow the pattern we set while creating our new HandlebarsPromptTemplateFactory class to add support for any template language. Afterwards, you can use it to create new semantic functions.
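For illustration, here is a minimal skeleton of a custom factory for a hypothetical Liquid engine. The LiquidPromptTemplateFactory and LiquidPromptTemplate names are ours, and the member shapes are assumptions modeled on HandlebarsPromptTemplateFactory; check its source for the exact signatures.

public class LiquidPromptTemplateFactory : IPromptTemplateFactory
{
    public IPromptTemplate Create(string templateString, PromptTemplateConfig promptTemplateConfig)
    {
        // Only claim templates that declare the "liquid" format so the kernel
        // can route other formats to other factories
        if (promptTemplateConfig.TemplateFormat == "liquid")
        {
            // LiquidPromptTemplate is a hypothetical IPromptTemplate whose
            // RenderAsync implementation runs the prompt through a Liquid engine
            return new LiquidPromptTemplate(templateString, promptTemplateConfig);
        }

        throw new SKException($"Prompt template format '{promptTemplateConfig.TemplateFormat}' is not supported.");
    }
}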

Lastly, our approach to adding template engines also allows you to use prompts written for multiple different template engines, all in the same kernel! No need to refactor prompts that you’ve collected from other prompt repositories to bring them into your AI application, as the sketch below shows. To see a fuller example of this in action, check out example 64 in our kernel syntax examples.
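As a small, hypothetical sketch (the prompts and function names are ours), a single kernel can host one function written in the default template format alongside one written in Handlebars:

// Default SK template syntax uses {{$variable}}
var defaultFunction = kernel.CreateSemanticFunction(
    "Write a haiku about {{$topic}}.",
    functionName: "DefaultHaiku");

// Handlebars syntax uses {{variable}} and needs the Handlebars factory
var handlebarsFunction = kernel.CreateSemanticFunction(
    promptTemplate: "Write a haiku about {{topic}}.",
    functionName: "HandlebarsHaiku",
    promptTemplateConfig: new PromptTemplateConfig() { TemplateFormat = "handlebars" },
    promptTemplateFactory: new HandlebarsPromptTemplateFactory());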

Chat role support in semantic functions

One of the biggest limiting factors of semantic functions was that you couldn’t leverage different messages or roles while using chat completion APIs. This changes with the introduction of a new prompt syntax that works across all model types.

We’re starting today with just the new <message /> tag. With it, you’ll be able to create prompts like the following:

<message role="user">Can you tell me about Seattle?</message>
<message role="system">Respond to the user request in JSON</message>

This is powerful because it means you can quickly and easily author prompts with system messages at the top and bottom of your prompt to better control the behavior of the AI. Behind the scenes, we’ll parse these tags and turn them into a chat history object for you.
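As a quick sketch (the function name, variable, and prompt are ours, not from the samples), a chat-style prompt is authored and run like any other semantic function:

// The <message> tags are parsed into a chat history (one system message and
// one user message) before the request is sent to the chat completion API
var chatPrompt =
    "<message role=\"system\">Respond to the user request in JSON</message>" +
    "<message role=\"user\">Can you tell me about {{$city}}?</message>";

var cityFunction = kernel.CreateSemanticFunction(chatPrompt, functionName: "CityInfoJson");

var result = await kernel.RunAsync(cityFunction, new ContextVariables()
{
    { "city", "Seattle" }
});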

In the near future, we’ll also use these tags to support multi-modal applications. For example, if you wanted to add an image to a message, you’d be able to use the <img/> tag straight from HTML! The same will be true for other file types like audio and video.

<message role="user">Can you describe this image?<img src="./cat.png"/></message>

We now have full OpenAI function calling support.

Semantic Kernel has had function calling support with OpenAI for a while, but it wasn’t possible to add the results of a function call back into the chat history to facilitate long-running conversations. With Beta6, that is now possible with the new AddFunctionMessage method on ChatHistory.

To see function calling working end-to-end, we recommend checking out example 59 in our kernel syntax examples. In it, we highlight the following (a condensed sketch follows the list):

    1. Use a kernel to automatically add functions to the OpenAI function calling request.
    2. Take the response from OpenAI to invoke a function with the kernel.
    3. Use the results of the function to create a new function message in the chat history.
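Here is a rough, condensed sketch of those three steps. Treat the details as illustrative rather than authoritative: the method names (GetOpenAIFunctionResponse, ToOpenAIFunction, and so on) follow the Beta6 samples, and function arguments are omitted for brevity.

// 1. Advertise the kernel's functions in the OpenAI function calling request
var chatCompletion = kernel.GetService<IChatCompletion>();
var chatHistory = chatCompletion.CreateNewChat("You are a helpful assistant.");
chatHistory.AddUserMessage("What is the weather in Seattle?");

var requestSettings = new OpenAIRequestSettings
{
    // Let the model decide whether to call one of the advertised functions
    FunctionCall = OpenAIRequestSettings.FunctionCallAuto,
    Functions = kernel.Functions.GetFunctionViews()
        .Select(view => view.ToOpenAIFunction()).ToList()
};

// 2. If the model responds with a function call, invoke it with the kernel
var chatResult = (await chatCompletion.GetChatCompletionsAsync(chatHistory, requestSettings))[0];
var functionCall = chatResult.GetOpenAIFunctionResponse();
if (functionCall is not null)
{
    var function = kernel.Functions.GetFunction(functionCall.PluginName, functionCall.FunctionName);
    var functionResult = (await kernel.RunAsync(function)).GetValue<string>();

    // 3. Feed the result back into the conversation with AddFunctionMessage
    chatHistory.AddFunctionMessage(functionResult!, functionCall.FullyQualifiedName);
}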

To make function calling even easier, we’re currently updating our implementation of the stepwise planner to leverage it. Keep an eye out for it in our next release!

Monitor your functions with hooks.

For enterprises, it’s extremely important to know what your AI applications are doing at every step of the process. To provide this visibility, we’ve introduced pre- and post-hooks that run for every SKFunction, which allow you to do the following things:

  • Generate telemetry for your functions.
  • Provide updates to users on the progress of long-running tasks.
  • Add logic that enforces responsible AI.

For example, if you want to inspect rendered prompts before they are sent to the AI to check that users don’t accidentally send passwords, you could write something like the following:

// credScanModule is a placeholder for your own credential scanning utility
kernel.FunctionInvoking += (object? sender, FunctionInvokingEventArgs e) =>
{
    if (e.TryGetRenderedPrompt(out var prompt) && credScanModule.ScanCredentials(prompt!).Any())
    {
        // Redact the detected credentials before the prompt reaches the LLM
        var redactedPrompt = credScanModule.RemoveCredentials(prompt!);
        e.TryUpdateRenderedPrompt(redactedPrompt);

        Console.WriteLine("Credentials detected and redacted from the prompt.");
        Console.WriteLine($"Updated prompt: {redactedPrompt}");
    }
};

This code adds a hook that runs whenever a function is invoked so that the rendered prompt can be inspected. If a credential is detected, the prompt can be altered so that the password is no longer sent to the LLM. You can also cancel the function invocation entirely if you want.

To see additional scenarios and examples on how to use both pre- and post-hooks, please refer to example 57 of the kernel syntax examples.

What’s next?

As part of Ignite this week, we’ll be merging in several other updates that improve the quality of Semantic Kernel. In particular, keep your eyes peeled for planner improvements that leverage our new Handlebars and function calling support.
