Release Candidate 1 for the Semantic Kernel .NET SDK is now live.

Matthew Bolanos

Semantic Kernel v1.0 has shipped, and the contents of this blog entry are now out of date.

Since the interface is getting extremely close to its final v1.0.0 structure, we’re excited to release v1.0.0 RC1 of the .NET Semantic Kernel SDK. Over the next two weeks we’ll focus on bug fixes and minor adjustments to finalize the shape of the API.

In this blog we’ll share…

  1. Just how much easier it is to get started
  2. Improvements to the kernel
  3. Making function invocation easier
  4. Creating and sharing prompts with YAML

Automated function calling makes getting started easy.

With the latest round of updates, we took great care to make the SDK as simple as possible for both new and existing users. This included renaming many of our classes and interfaces to better align with the rest of the industry and replacing custom classes with standard .NET implementations.

To highlight just how much easier Semantic Kernel has gotten, I want to share what I’m most proud of: our work simplifying function calling with OpenAI. With function calling, the model can tell the program which function should be called next to satisfy a user’s need, but setting up OpenAI function calling has required multiple steps. You had to…

  1. Describe your functions
  2. Call the model
  3. Review the results to see if a function call request was being made
  4. Parse the data necessary to make the call
  5. Perform the operation
  6. Add the results back to the chat history
  7. And then start the operation over again…

With Semantic Kernel, however, we have all the information needed to completely automate this entire process, so we’ve done just that. Take, for example, a simple app that allows a user to turn a light bulb on and off with an AI assistant.

In V1.0.0 RC1, you’ll start by creating your plugins with the [KernelFunction] attribute.

public class LightPlugin
{
    public bool IsOn { get; set; }

    [KernelFunction, Description("Gets the state of the light.")]
    public string GetState() => IsOn ? "on" : "off";

    [KernelFunction, Description("Changes the state of the light.'")]
    public string ChangeState(bool newState)
    {
        IsOn = newState;
        var state = GetState();

        // Print the state to the console
        Console.ForegroundColor = ConsoleColor.DarkBlue;
        Console.WriteLine($"[Light is now {state}]");
        Console.ResetColor();

        return state;
    }
}

You can then easily add your services and plugins to a single kernel.

var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion("gpt-35-turbo", "gpt-3.5-turbo", endpoint, apiKey);
builder.Plugins.AddFromType<LightPlugin>();
Kernel kernel = builder.Build();

Finally, you can invoke a prompt that uses the new plugin you just authored. This is where the updates to Semantic Kernel really start to shine! Since all the necessary information is stored in the kernel, you can automatically call the registered functions using the AutoInvokeKernelFunctions option.

// Enable auto invocation of kernel functions
OpenAIPromptExecutionSettings settings = new()
{
    FunctionCallBehavior = FunctionCallBehavior.AutoInvokeKernelFunctions
};

// Start a chat session
while (true)
{
    // Get the user's message
    Console.Write("User > ");
    var userMessage = Console.ReadLine()!;

    // Invoke the kernel
    var results = await kernel.InvokePromptAsync(userMessage, new(settings));

    // Print the results
    Console.WriteLine($"Assistant > {results}”);
}

When you run the program, you can now ask the agent to turn the lights on and off.


We could make our agent even more efficient, though. To “toggle” the lights, the AI must currently make two function calls: 1) get the current state and then 2) change the state. This essentially doubles the number of tokens and time required to fulfill the request.


With Semantic Kernel templates, you can instead pre-emptively provide the LLM with the current state. Notice how we can tell the AI about the current state with a system message below.

// Invoke the kernel
var results = await kernel.InvokePromptAsync(@$"
        <message role=""system"">The current state of the light is ""{{{{LightPlugin.GetState}}}}""</message>
        <message role=""user"">{userMessage}.</message>",
    new(settings)
);

Now, when we ask the AI to toggle the lights, it only needs to make a single function call!


We believe the updates we’ve made to function calling in Semantic Kernel make it much easier to use than before. We also believe it’s much easier than the other popular open-source SDKs available today.

This was just a quick overview of what it will look like to build an AI application with Semantic Kernel. To learn more about the changes we made as a team, please continue reading.

 

Sergey Menshykh

The kernel is now at the center of everything.

In our original blog post for v1.0.0, we shared how we wanted to increase the value of the kernel and make it easier to use. We believe we’ve done just that by making it the property bag for your entire AI application.

  • Have multiple AI services?
  • Have multiple plugins?
  • Have other services like loggers and HTTP handlers?

All these elements can be added to a kernel so that all components of Semantic Kernel can leverage them to perform AI requests. In the function calling example, you already saw how we could use these components together to automate much of the work necessary to build AI apps.
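
As a rough sketch of what this looks like in code (the second chat model and the serviceId values are illustrative assumptions, not requirements), a single kernel can hold multiple AI services alongside your plugins and standard .NET services:

var builder = Kernel.CreateBuilder();

// Two chat completion services, distinguished by (assumed) service IDs
builder.Services.AddOpenAIChatCompletion("gpt-4", apiKey, serviceId: "smart");
builder.Services.AddOpenAIChatCompletion("gpt-3.5-turbo", apiKey, serviceId: "fast");

// Plugins and standard .NET services live in the same kernel
builder.Plugins.AddFromType<LightPlugin>();
builder.Services.AddLogging(c => c.AddConsole());

Kernel kernel = builder.Build();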

Use dependency injection to create your kernel.

This new approach also makes it much easier to use dependency injection with Semantic Kernel. In the following example, we demonstrate how you can create a transient kernel with services .NET developers are already familiar with (e.g., logging and HTTP clients).

// "services" is your application's IServiceCollection
services
    .AddTransient<Kernel>(sp =>
    {
        var builder = Kernel.CreateBuilder();
        builder.Services.AddLogging(c => c.AddConsole().SetMinimumLevel(LogLevel.Information));
        builder.Services.ConfigureHttpClientDefaults(c =>
        {
            // Use a standard resiliency policy
            c.AddStandardResilienceHandler().Configure(o =>
            {
                o.Retry.ShouldHandle = args => ValueTask.FromResult(args.Outcome.Result?.StatusCode is HttpStatusCode.Unauthorized);
            });
        });
        builder.Services.AddOpenAIChatCompletion("gpt-4", apiKey);
        builder.Plugins.AddFromType<LightPlugin>();
        return builder.Build();
    });

You’ll notice that we treat plugins similarly. With the Plugins property on the KernelBuilder, you can easily register the plugins you have in your application with your kernel.

While the services are immutable, you can still add plugins to your kernel later in your code. If you register your kernel as transient, you can mutate it without impacting other parts of your code. If you register it as a singleton, you can use the new Clone() method so you don’t manipulate the kernel used elsewhere in your application.

public MyService(Kernel kernel)
{
    this._kernel = kernel.Clone();
    this._kernel.Plugins.AddFromType<MathPlugin>();
}

The kernel is now passed everywhere.

Once you’ve created your kernel, V1.0.0 RC1 will use the kernel nearly everywhere to ensure Semantic Kernel operations have all the services they need. This includes function invocation, prompt rendering, service selection, and connector requests. As we continue to improve Semantic Kernel, we will continue leveraging this pattern because we believe it’s the best way to consolidate all the runtime configuration for your AI applications.
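
You can see this pattern from inside your own code as well: a native function can declare a Kernel parameter, and the kernel that invoked it is passed in automatically, letting the function make its own AI requests. The SummarizePlugin below is a hypothetical sketch to illustrate this, not part of the SDK.

public class SummarizePlugin
{
    [KernelFunction, Description("Summarizes the provided text.")]
    public async Task<string> SummarizeAsync(Kernel kernel, string input)
    {
        // The kernel that invoked this function is injected automatically,
        // so the function can make its own AI requests through it.
        var result = await kernel.InvokePromptAsync(
            $"Summarize the following text in one sentence:\n{input}");

        return result.ToString();
    }
}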

Try these features yourself!

If you want to see dependency injection in action, check out the console chat starter app for Semantic Kernel.

 

Roger Barreto

Getting responses from AI has never been easier.

To further improve the kernel, we wanted to make sure you could invoke any of your logic directly from it. You could already invoke a function from it, but you couldn’t 1) stream from the kernel, 2) easily run an initial prompt, or 3) use non-string arguments. With V1.0.0 RC1, we’ve made enhancements to support all three.

Just getting started? Use the simple InvokePromptAsync methods.

For new users of the kernel, we wanted to make it as simple as possible to get started. Previously, you had to 1) create a semantic function, 2) wrap it in a plugin, and 3) register it in a kernel before finally 4) invoking it.

So we collapsed all these steps into a single method.

Console.WriteLine(await kernel.InvokePromptAsync("Tell me a joke"));

This should return a result like the following. This is much easier.

Sure, here's a classic one for you:
Why don't scientists trust atoms?
Because they make up everything!

Invoke functions directly from the kernel with kernel arguments.

We also wanted to allow users to send more than just strings as arguments to the kernel. With the introduction of KernelArguments, you can now pass non-strings into any of your functions. For example, you can now send an entire ChatHistory object to a prompt function.
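
As a quick sketch (the messages themselves are purely illustrative), the chat history passed in below might be built like this:

// Build the chat history that will be passed as the "messages" argument
var chatMessages = new ChatHistory();
chatMessages.AddUserMessage("Can you turn on the lights?");
chatMessages.AddAssistantMessage("The lights are now on. Anything else?");
chatMessages.AddUserMessage("No, thank you!");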

var result = await kernel.InvokeAsync(
    promptFunction,
    arguments: new() {
        { "messages", chatMessages }
    });

If you use a template engine like Handlebars, you can then write a prompt that loops over all the messages before sending them to the model.

<message role="system">You are a helpful assistant.</message>
{{#each messages}}
  <message role="{{Role}}">{{~Content~}}</message>
{{/each}}

Easily stream directly from the kernel.

Finally, we wanted to bring streaming to the kernel. With streaming, you can improve perceived latency and build experiences like ChatGPT (it also just looks cool).

To stream a response, simply use the InvokeStreamingAsync() method and loop over the chunks.

// Print the chat completions
await foreach (var chunk in kernel.InvokeStreamingAsync<StreamingChatMessageContent>(function))
{
    Console.Write(chunk);
}

Try these features yourself!

If you want to see these features in action, check out the updated hello world starter for Semantic Kernel.

 

Mark Wallace

Creating templates has never been so easy or powerful.

At the heart of Semantic Kernel are prompts. Without them, you cannot make the requests that give your applications AI. With V1.0.0, we’ve aligned with Azure AI’s prompt serialization format to make it easier to create prompt assets with YAML.

With YAML files, you can now easily share prompts.

Instead of juggling separate prompt files and configuration files, you can now use a single YAML file to describe everything necessary for a prompt function (previously called semantic functions).

For example, below you can see how we’ve defined a prompt function called GenerateStory that has two inputs: the topic and length.

name: GenerateStory
template: |
  Tell a story about {{$topic}} that is {{$length}} sentences long.
template_format: semantic-kernel
description: A function that generates a story about a topic.
input_variables:
  - name: topic
    description: The topic of the story.
    is_required: true
  - name: length
    description: The number of sentences in the story.
    is_required: true
output_variable:
  description: The generated story.

We can load this function and run it with the following code. For this sample, I’ll ask for a story about a dog that is three sentences long.

// Load prompt from resource
using StreamReader reader = new(Assembly.GetExecutingAssembly().GetManifestResourceStream("prompts.GenerateStory.yaml")!);
var function = kernel.CreateFunctionFromPromptYaml(await reader.ReadToEndAsync());

Console.WriteLine(await kernel.InvokeAsync(function, arguments: new()
{
    { "topic", "Dog" },
    { "length", 3 }
}));

This should output something like the following:

Once upon a time, there was a dog named Max. He was a loyal companion to his owner, always by their side. Together, they embarked on countless adventures, creating memories that would last a lifetime.

Use Handlebars in your prompt templates for even more power.

If you want even more power (i.e., loops and conditionals), you can also leverage Handlebars. Handlebars makes a great addition for supporting any type of input variable. For example, you can now loop over chat history messages.

name: Chat
template: |
  <message role="system">You are a helpful assistant.</message>
  {{#each messages}}
    <message role="{{Role}}">{{~Content~}}</message>
  {{/each}}
template_format: handlebars
description: A function that uses the chat history to respond to the user.
input_variables:
  - name: messages
    description: The history of the chat.
    is_required: true

To use the new Handlebars template, you’ll need to add the Handlebars package and pass the HandlebarsPromptTemplateFactory when you create your prompt function.

using StreamReader reader = new(Assembly.GetExecutingAssembly().GetManifestResourceStream("prompts.Chat.yaml")!);
KernelFunction prompt = kernel.CreateFunctionFromPromptYaml(
    reader.ReadToEnd(),
    promptTemplateFactory: new HandlebarsPromptTemplateFactory()
);

var result = await kernel.InvokeAsync(
    prompt,
    arguments: new() {
        { "messages", chatMessages }
    });

Other important changes

In addition to the features above, we’ve also cleaned up our interface as part of V1.0.0 RC1.

Aligning names with the rest of the industry.

We updated our naming patterns to align with the rest of the industry and to avoid potential collisions in .NET. This list is not exhaustive, but it does cover the major changes that occurred:

  • The SK prefix was renamed to Kernel; for example:
    • SKFunction has become KernelFunction
    • SKFunctionMetadata has become KernelFunctionAttribute
    • SKJsonSchema has become KernelJsonSchema
    • SKParameterMetadata has become KernelParameterMetadata
    • SKPluginCollection has become KernelPluginCollection
    • SKReturnParameterMetadata has become KernelReturnParameterMetadata
  • The connector interfaces have been updated to match their model type in Azure AI and Hugging Face
    • ITextCompletionService has become ITextGenerationService
    • IImageGenerationService has become ITextToImageService
  • RequestSettings has been renamed to PromptExecutionSettings
  • Semantic functions have been renamed to prompt functions

Custom implementations have been replaced with .NET standard implementations.

Previously, we had classes and interfaces like IAIServiceProvider, HttpHandlerFactory, and retry handlers. With our move to align with dependency injection, these implementations are no longer necessary because developers can use the standard approaches already available in the .NET ecosystem.

SKContext and ContextVariables have been replaced.

As we developed V1.0.0, we noticed that SKContext shared many similarities with the kernel, so SKContext has been replaced by your Kernel instance in all of the method signatures that previously required it as an input parameter.

As part of this move, we also replaced ContextVariables with KernelArguments. KernelArguments is very similar to ContextVariables, except that it can store non-string values and also carries the PromptExecutionSettings (previously known as RequestSettings) for a request.
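
As a rough sketch (reusing the story-generating function from the YAML example above; the settings values are illustrative), a single KernelArguments instance can carry both non-string values and the execution settings for a request:

// Execution settings now travel with the arguments
OpenAIPromptExecutionSettings settings = new() { Temperature = 0.7 };

var arguments = new KernelArguments(settings)
{
    { "topic", "Dog" },
    { "length", 3 } // non-string values are supported
};

var result = await kernel.InvokeAsync(function, arguments);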

Getting started

If you’ve gotten this far and want to try out v1.0.0, please check out our two updated starters.

  • Hello world – quickly get started using prompt YAML files and streaming
  • Console chat – see how to use Semantic Kernel with .NET dependency injection

Join the hackathon and let us know what you think

As of today, the V1.0.0 RC hackathon has started. Give the starters a try, build something cool, and give us feedback on what the experience was like on our discussion boards. We’ll use this information to polish the SDK before going live with V1.0.0 at the end of the year.

4 comments


  • Bruce Lin

    Great! Looking forward to chat-copilot synchronized updates.

  • José Luis Latorre Millás

    Amazing! Super excited to try this on ASAP!! Thanks for the great effort!!!

  • Sławek Rosiek

    How does AutoInvokeKernelFunctions work? How does the AI know what plugins are available? I tried to use it and it doesn’t work. I use AzureOpenAIChatCompletionWithData.

    • Matthew Bolanos (Microsoft employee)

      Behind the scenes it uses OpenAI function calling, which you can read more about here (OpenAI docs) and here (Azure OpenAI docs). To make it work, we take all of the plugin functions you’ve imported into a kernel and serialize them so that the function calling feature in the OpenAI models is aware of your functions.

      For function calling to work, you first need to make sure you use a model that supports function calling. This requires a GPT-3.5-turbo or GPT-4 model that is at least version 0613. You also mentioned that you’re using Azure OpenAI with your own data. I’ll need to follow up with the “using your own data” team to see if they currently support function calling.
