{"id":1664,"date":"2023-12-05T15:14:33","date_gmt":"2023-12-05T23:14:33","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/semantic-kernel\/?p=1664"},"modified":"2024-01-10T15:12:55","modified_gmt":"2024-01-10T23:12:55","slug":"release-candidate-1-for-the-semantic-kernel-net-sdk-is-now-live","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/agent-framework\/release-candidate-1-for-the-semantic-kernel-net-sdk-is-now-live\/","title":{"rendered":"Release Candidate 1 for the Semantic Kernel .NET SDK is now live."},"content":{"rendered":"<blockquote><p>Semantic Kernel <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/releases\/tag\/dotnet-1.0.1\">v1.0<\/a> has shipped and the contents of this blog entry are now out of date.<\/p><\/blockquote>\n<p>Since the interface is getting extremely close to its final v1.0.0 structure, we\u2019re excited to release v1.0.0 RC1 of the .NET Semantic Kernel SDK. During the next two weeks we\u2019ll be focused on bug fixes and making <em>minor<\/em> adjustments to finish the shape of the API.<\/p>\n<p>In this blog we\u2019ll share\u2026<\/p>\n<ol>\n<li><a href=\"#automated-function-calling-makes-getting-started-easy\"><em>Just<\/em> how much easier it is to get started.<\/a><\/li>\n<li><a href=\"#the-kernel-is-now-at-the-center-of-everything\">Improvements to the kernel.<\/a><\/li>\n<li><a href=\"#getting-responses-from-ai-has-never-been-easier\">Making function invocation easier.<\/a><\/li>\n<li><a href=\"#creating-templates-has-never-been-so-easy-or-powerful\">Creating and sharing prompts with YAML.<\/a><\/li>\n<\/ol>\n<h2 style=\"margin-top: 2rem;\">Automated function calling makes getting started easy.<\/h2>\n<p>With the latest round of updates, we took great care to make the SDK as simple as possible for new and existing users.
This included renaming many of our classes and interfaces to better align with the rest of the industry and upgrading custom classes to existing .NET implementations.<\/p>\n<p>To highlight just how much easier Semantic Kernel has gotten, I want to share what I\u2019m most proud of: our work simplifying function calling with OpenAI. With function calling, the model can tell the program which function should be called next to satisfy a user\u2019s need, but setting up OpenAI function calling has historically required multiple steps. You had to\u2026<\/p>\n<ol>\n<li>Describe your functions<\/li>\n<li>Call the model<\/li>\n<li>Review the results to see if a function call was requested<\/li>\n<li>Parse the data necessary to make the call<\/li>\n<li>Perform the operation<\/li>\n<li>Add the results back to the chat history<\/li>\n<li>And then start the process over again\u2026<\/li>\n<\/ol>\n<p>With Semantic Kernel, however, we have <em>all<\/em> the information needed to completely automate this process, so we\u2019ve done just that. <strong>Take, for example, a simple app that allows a user to turn a light bulb on and off with an AI assistant.<\/strong><\/p>\n<p>In V1.0.0 RC1, you\u2019ll start by creating your plugins with the <code>[KernelFunction]<\/code> attribute.<\/p>\n<pre>public class LightPlugin\r\n{\r\n    public bool IsOn { get; set; }\r\n\r\n    [KernelFunction, Description(\"Gets the state of the light.\")]\r\n    public string GetState() =&gt; IsOn ? \"on\" : \"off\";\r\n\r\n    [KernelFunction, Description(\"Changes the state of the light.\")]\r\n    public string ChangeState(bool newState)\r\n    {\r\n        IsOn = newState;\r\n        var state = GetState();\r\n\r\n        \/\/ Print the state to the console\r\n        Console.ForegroundColor = ConsoleColor.DarkBlue;\r\n        Console.WriteLine($\"[Light is now {state}]\");\r\n        Console.ResetColor();\r\n\r\n        return state;\r\n    }\r\n}<\/pre>\n<p>You can then easily add your services and plugins to a single kernel.<\/p>\n<pre>var builder = Kernel.CreateBuilder();\r\nbuilder.Services.AddAzureOpenAIChatCompletion(\"gpt-35-turbo\", \"gpt-3.5-turbo\", endpoint, apiKey);\r\nbuilder.Plugins.AddFromType&lt;LightPlugin&gt;();\r\nKernel kernel = builder.Build();<\/pre>\n<p>Finally, you can invoke a prompt that uses the new plugin you just authored. This is where the updates to Semantic Kernel really start to shine!
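<\/p>\n<p>For comparison, here\u2019s a rough sketch of the manual loop that the seven steps above describe. (This is illustrative only; <code>CallModelAsync<\/code>, <code>ParseArguments<\/code>, and <code>InvokeLocalFunctionAsync<\/code> are hypothetical helpers, not Semantic Kernel or OpenAI APIs.)<\/p>\n<pre>\/\/ Hedged sketch of manual function calling (hypothetical helpers)\r\nwhile (true)\r\n{\r\n    \/\/ 1-2) Describe your functions and call the model\r\n    var response = await CallModelAsync(chatHistory, functionDescriptions);\r\n\r\n    \/\/ 3) Review the results to see if a function call was requested\r\n    if (response.FunctionCall is null) { break; }\r\n\r\n    \/\/ 4) Parse the data necessary to make the call\r\n    var arguments = ParseArguments(response.FunctionCall.ArgumentsJson);\r\n\r\n    \/\/ 5) Perform the operation\r\n    var result = await InvokeLocalFunctionAsync(response.FunctionCall.Name, arguments);\r\n\r\n    \/\/ 6) Add the results back to the chat history, then 7) start over\r\n    chatHistory.AddFunctionResult(response.FunctionCall.Name, result);\r\n}<\/pre>\n<p>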
Since all the necessary information is stored in the kernel, you can automatically call the registered functions using the <code>AutoInvokeKernelFunctions<\/code> option.<\/p>\n<pre>\/\/ Enable auto invocation of kernel functions\r\nOpenAIPromptExecutionSettings settings = new()\r\n{\r\n    FunctionCallBehavior = FunctionCallBehavior.AutoInvokeKernelFunctions\r\n};\r\n\r\n\/\/ Start a chat session\r\nwhile (true)\r\n{\r\n    \/\/ Get the user's message\r\n    Console.Write(\"User &gt; \");\r\n    var userMessage = Console.ReadLine()!;\r\n\r\n    \/\/ Invoke the kernel\r\n    var results = await kernel.InvokePromptAsync(userMessage, new(settings));\r\n\r\n    \/\/ Print the results\r\n    Console.WriteLine($\"Assistant &gt; {results}\");\r\n}<\/pre>\n<p>When you run the program, you can now ask the agent to turn the lights on and off.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.13.07\u202fPM.png\"><img decoding=\"async\" class=\"aligncenter size-large wp-image-1681\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.13.07\u202fPM-1024x134.png\" alt=\"Image Screenshot 2023 12 04 at 7 13 07 PM\" width=\"640\" height=\"84\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.13.07\u202fPM-1024x134.png 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.13.07\u202fPM-300x39.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.13.07\u202fPM-768x100.png 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.13.07\u202fPM.png
1316w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p>We could make our agent even more efficient, though. To \u201ctoggle\u201d the lights, the AI must currently make two function calls: 1) get the current state and then 2) change the state. This essentially doubles the number of tokens and time required to fulfill the request.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-05-at-3.48.08\u202fPM.png\"><img decoding=\"async\" class=\"aligncenter size-large wp-image-1682\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-05-at-3.48.08\u202fPM-1024x132.png\" alt=\"Image Screenshot 2023 12 05 at 3 48 08 PM\" width=\"640\" height=\"83\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-05-at-3.48.08\u202fPM-1024x132.png 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-05-at-3.48.08\u202fPM-300x39.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-05-at-3.48.08\u202fPM-768x99.png 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-05-at-3.48.08\u202fPM-1536x198.png 1536w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-05-at-3.48.08\u202fPM-2048x264.png 2048w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p>With Semantic Kernel templates, you can instead pre-emptively provide the LLM with the current state. 
Notice how we can tell the AI about the current state with a system message below.<\/p>\n<pre>\/\/ Invoke the kernel\r\nvar results = await kernel.InvokePromptAsync(@$\"\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 &lt;message role=\"\"system\"\"&gt;The current state of the light is \"\"{{{{LightPlugin.GetState}}}}\"\"&lt;\/message&gt;\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 &lt;message role=\"\"user\"\"&gt;{userMessage}.&lt;\/message&gt;\",\r\n \u00a0\u00a0 new(settings)\r\n);<\/pre>\n<p>Now, when we ask the AI to toggle the lights, it only needs to make a single function call!<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.22.55\u202fPM.png\"><img decoding=\"async\" class=\"aligncenter size-large wp-image-1683\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.22.55\u202fPM-1024x49.png\" alt=\"Image Screenshot 2023 12 04 at 7 22 55 PM\" width=\"640\" height=\"31\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.22.55\u202fPM-1024x49.png 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.22.55\u202fPM-300x14.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.22.55\u202fPM-768x37.png 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.22.55\u202fPM-1536x74.png 1536w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/Screenshot-2023-12-04-at-7.22.55\u202fPM-2048x98.png 2048w\" sizes=\"(max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p>We believe the updates we\u2019ve made to function calling in Semantic Kernel make it <em>much<\/em> easier than what we 
had before. We also believe it\u2019s much easier than the other popular open-source SDKs available today.<\/p>\n<p>This was just a quick overview of what it will look like to build an AI application with Semantic Kernel. To learn more about the changes we made as a team, please continue reading.<\/p>\n<div class=\"table-responsive\" style=\"text-align: center;\">\n<table style=\"background: none; border: none; display: inline-block;\">\n<tbody>\n<tr style=\"background: none;\">\n<td style=\"text-align: center; background: none; border: 0px solid white;\" width=\"156\">&nbsp;<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/9e589fc0-853d-43d4-9ebd-7de4e5642d0a.jpeg\"><img decoding=\"async\" class=\"lazyloaded avatar aligncenter wp-image-1676 size-thumbnail\" style=\"width: 58px; height: 58px;\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/9e589fc0-853d-43d4-9ebd-7de4e5642d0a-150x150.jpeg\" sizes=\"(max-width: 150px) 100vw, 150px\" alt=\"Image 9e589fc0 853d 43d4 9ebd 7de4e5642d0a\" width=\"150\" height=\"150\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/9e589fc0-853d-43d4-9ebd-7de4e5642d0a-150x150.jpeg 150w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/9e589fc0-853d-43d4-9ebd-7de4e5642d0a-24x24.jpeg 24w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/9e589fc0-853d-43d4-9ebd-7de4e5642d0a-48x48.jpeg 48w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/9e589fc0-853d-43d4-9ebd-7de4e5642d0a-96x96.jpeg 96w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/9e589fc0-853d-43d4-9ebd-7de4e5642d0a.jpeg 286w\" \/><\/a>\nSergey Menshykh<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<h2>The kernel is now at the center of 
<em>everything<\/em>.<\/h2>\n<p>In our <a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/introducing-the-v1-0-0-beta1-for-the-net-semantic-kernel-sdk\/\">original blog post for v1.0.0<\/a>, we shared how we wanted to increase the value of the kernel <em>and<\/em> make it easier to use. We believe we\u2019ve done just that by making it <em>the<\/em> property bag for your entire AI application.<\/p>\n<ul>\n<li>Have multiple AI services?<\/li>\n<li>Have multiple plugins?<\/li>\n<li>Have other services like loggers and HTTP handlers?<\/li>\n<\/ul>\n<p>All these elements can be added to a kernel so that all components of Semantic Kernel can leverage them to perform AI requests. In the function calling example, you already saw how we could use these components together to automate much of the work necessary to build AI apps.<\/p>\n<h3 style=\"margin-top: 1rem;\">Use dependency injection to create your kernel.<\/h3>\n<p>This new approach also makes it much easier to use dependency injection with Semantic Kernel. 
In the following example, we demonstrate how you can create a transient kernel with services .NET developers are already familiar with (e.g., logging and HTTP clients).<\/p>\n<pre>Services\r\n    .AddTransient&lt;Kernel&gt;(sp =&gt;\r\n    {\r\n        var builder = Kernel.CreateBuilder();\r\n        builder.Services.AddLogging(c =&gt; c.AddConsole().SetMinimumLevel(LogLevel.Information));\r\n        builder.Services.ConfigureHttpClientDefaults(c =&gt;\r\n        {\r\n            \/\/ Use a standard resiliency policy\r\n            c.AddStandardResilienceHandler().Configure(o =&gt;\r\n            {\r\n                o.Retry.ShouldHandle = args =&gt; ValueTask.FromResult(args.Outcome.Result?.StatusCode is HttpStatusCode.Unauthorized);\r\n            });\r\n        });\r\n        builder.Services.AddOpenAIChatCompletion(\"gpt-4\", apiKey);\r\n        builder.Plugins.AddFromType&lt;LightPlugin&gt;();\r\n        return builder.Build();\r\n    });<\/pre>\n<p>You\u2019ll notice that we treat plugins similarly. With the <code>Plugins<\/code> property on the <code>KernelBuilder<\/code>, you can easily register the plugins you have in your application with your kernel.<\/p>\n<p>While the services are immutable, later in your code you can inject additional plugins into your kernel.
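<\/p>\n<p>For example, if your kernel is registered as a transient (as above), a consuming service can add the plugins it needs directly to the instance it receives. (A minimal sketch; <code>MathPlugin<\/code> stands in for any plugin class defined in your application.)<\/p>\n<pre>public class MyTransientService\r\n{\r\n    private readonly Kernel _kernel;\r\n\r\n    public MyTransientService(Kernel kernel)\r\n    {\r\n        \/\/ The kernel is transient, so mutating it here does not\r\n        \/\/ affect kernel instances resolved elsewhere\r\n        this._kernel = kernel;\r\n        this._kernel.Plugins.AddFromType&lt;MathPlugin&gt;();\r\n    }\r\n}<\/pre>\n<p>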
If you register your kernel as a transient service, you can mutate it without impacting other parts of your code; if you register it as a singleton, use the new <code>Clone()<\/code> method to get a copy you can safely modify without manipulating the kernel used elsewhere.<\/p>\n<pre>public MyService(Kernel kernel)\r\n{\r\n    this._kernel = kernel.Clone();\r\n    this._kernel.Plugins.AddFromType&lt;MathPlugin&gt;();\r\n}<\/pre>\n<h3 style=\"margin-top: 1rem;\">The kernel is now passed <em>everywhere<\/em>.<\/h3>\n<p>Once you\u2019ve created your kernel, V1.0.0 RC1 will use the kernel nearly everywhere to ensure Semantic Kernel operations have all the services they need. This includes function invocation, prompt rendering, service selection, and connector requests. As we continue to improve Semantic Kernel, we will keep leveraging this pattern because we believe it\u2019s the best way to consolidate all the runtime configuration for your AI applications.<\/p>\n<h3 style=\"margin-top: 1rem;\">Try these features yourself!<\/h3>\n<p>If you want to see dependency injection in action, check out the <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel-starters\/tree\/main\/sk-csharp-console-chat\">console chat starter app<\/a> for Semantic Kernel.<\/p>\n<div class=\"table-responsive\" style=\"text-align: center;\">\n<table style=\"background: none; border: none; display: inline-block;\">\n<tbody>\n<tr style=\"background: none;\">\n<td style=\"text-align: center; background: none; border: 0px solid white;\" width=\"156\">&nbsp;<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/bfea630e-f10f-4587-a0a2-c8447f187620.jpeg\"><img decoding=\"async\" class=\"lazyloaded avatar aligncenter wp-image-1678 size-thumbnail\" style=\"width: 58px; height: 58px;\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/bfea630e-f10f-4587-a0a2-c8447f187620-150x150.jpeg\"
sizes=\"(max-width: 150px) 100vw, 150px\" alt=\"Image bfea630e f10f 4587 a0a2 c8447f187620\" width=\"150\" height=\"150\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/bfea630e-f10f-4587-a0a2-c8447f187620-150x150.jpeg 150w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/bfea630e-f10f-4587-a0a2-c8447f187620-24x24.jpeg 24w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/bfea630e-f10f-4587-a0a2-c8447f187620-48x48.jpeg 48w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/bfea630e-f10f-4587-a0a2-c8447f187620-96x96.jpeg 96w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/bfea630e-f10f-4587-a0a2-c8447f187620.jpeg 220w\" \/><\/a>\nRoger Barreto<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<h2>Getting responses from AI has never been easier.<\/h2>\n<p>To further improve the kernel, we wanted to make sure you could invoke <em>any<\/em> of your logic directly from it. You could already invoke a function from it, but you couldn\u2019t 1) stream from the kernel, 2) easily run an initial prompt, or 3) use non-string arguments. With V1.0.0 RC1, we\u2019ve made enhancements to support all three.<\/p>\n<h3 style=\"margin-top: 1rem;\">Just getting started? Use the simple InvokePromptAsync methods.<\/h3>\n<p>For new users of the kernel, we wanted to make it as simple as possible to get started. Previously, you had to 1) create a semantic function, 2) wrap it in a plugin, and 3) register it with a kernel before finally 4) invoking it.<\/p>\n<p>So we collapsed <em>all<\/em> these steps into a single method.<\/p>\n<pre>Console.WriteLine(await kernel.InvokePromptAsync(\"Tell me a joke\"));<\/pre>\n<p>This should return a result like the following.
This is <em>much<\/em> easier.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">Sure, here's a classic one for you:\r\nWhy don't scientists trust atoms?\r\nBecause they make up everything!<\/code><\/pre>\n<h3 style=\"margin-top: 1rem;\">Invoke functions directly from the kernel with kernel arguments.<\/h3>\n<p>We also wanted to allow users to send more than just strings as arguments to the kernel. With the introduction of <code>KernelArguments<\/code>, you can now pass non-strings into any of your functions. For example, you can now send an entire <code>ChatHistory<\/code> object to a prompt function.<\/p>\n<pre>var result = await kernel.InvokeAsync(\r\n    promptFunction,\r\n    arguments: new() {\r\n        { \"messages\", chatMessages }\r\n    });\r\n<\/pre>\n<p>If you then use a template engine like <a href=\"#use-handlebars-in-your-prompt-templates-for-even-more-power\">Handlebars<\/a>, you can write a prompt that loops over all the messages before sending them to the model.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">&lt;message role=\"system\"&gt;You are a helpful assistant.&lt;\/message&gt;\r\n{{#each messages}}\r\n  &lt;message role=\"{{Role}}\"&gt;{{~Content~}}&lt;\/message&gt;\r\n{{\/each}}<\/code><\/pre>\n<h3 style=\"margin-top: 1rem;\">Easily stream directly from the kernel.<\/h3>\n<p>Finally, we wanted to bring streaming to the kernel.
With streaming, you can improve perceived latency and build experiences like ChatGPT (it also just looks cool).<\/p>\n<p>To stream a response, simply use the <code>InvokeStreamingAsync()<\/code> method and loop over the chunks.<\/p>\n<pre>\/\/ Print the chat completions\r\nawait foreach (var chunk in kernel.InvokeStreamingAsync&lt;StreamingChatMessageContent&gt;(function))\r\n{\r\n    Console.Write(chunk);\r\n}<\/pre>\n<h3 style=\"margin-top: 1rem;\">Try these features yourself!<\/h3>\n<p>If you want to see these features in action, check out the <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel-starters\/tree\/main\/sk-csharp-hello-world\">updated hello world starter<\/a> for Semantic Kernel.<\/p>\n<div class=\"table-responsive\" style=\"text-align: center;\">\n<table style=\"background: none; border: none; display: inline-block;\">\n<tbody>\n<tr style=\"background: none;\">\n<td style=\"text-align: center; background: none; border: 0px solid white;\" width=\"156\">&nbsp;<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/3fc4453f-6cd5-40f1-a3c4-19fdbd03afee.jpeg\"><img decoding=\"async\" class=\"lazyloaded avatar aligncenter wp-image-1677 size-thumbnail\" style=\"width: 58px; height: 58px;\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/12\/3fc4453f-6cd5-40f1-a3c4-19fdbd03afee-150x150.jpeg\" sizes=\"(max-width: 150px) 100vw, 150px\" alt=\"Image 3fc4453f 6cd5 40f1 a3c4 19fdbd03afee\" width=\"150\" height=\"150\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/3fc4453f-6cd5-40f1-a3c4-19fdbd03afee-150x150.jpeg 150w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/3fc4453f-6cd5-40f1-a3c4-19fdbd03afee-300x300.jpeg 300w,
https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/3fc4453f-6cd5-40f1-a3c4-19fdbd03afee-24x24.jpeg 24w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/3fc4453f-6cd5-40f1-a3c4-19fdbd03afee-48x48.jpeg 48w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/3fc4453f-6cd5-40f1-a3c4-19fdbd03afee-96x96.jpeg 96w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/12\/3fc4453f-6cd5-40f1-a3c4-19fdbd03afee.jpeg 360w\" \/><\/a>\nMark Wallace<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<h2>Creating templates has never been so easy or powerful.<\/h2>\n<p>At the heart of Semantic Kernel are prompts. Without them, you cannot make the requests that give your applications AI. With V1.0.0, we\u2019ve aligned with Azure AI\u2019s prompt serialization format to make it easier to create prompt assets with YAML.<\/p>\n<h3 style=\"margin-top: 1rem;\">With YAML files, you can now easily share prompts.<\/h3>\n<p>Instead of juggling separate prompt files and configuration files, you can now use a single YAML file to describe everything necessary for a prompt function (previously called a semantic function).<\/p>\n<p>For example, below you can see how we\u2019ve defined a prompt function called <code>GenerateStory<\/code> that has two inputs: the topic and length.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">name: GenerateStory\r\ntemplate: |\r\n  Tell a story about {{$topic}} that is {{$length}} sentences long.\r\ntemplate_format: semantic-kernel\r\ndescription: A function that generates a story about a topic.\r\ninput_variables:\r\n  - name: topic\r\n    description: The topic of the story.\r\n    is_required: true\r\n  - name: length\r\n    description: The number of sentences in the story.\r\n    is_required: true\r\noutput_variable:\r\n  description: The generated story.<\/code><\/pre>\n<p>We can load this function and run it with the following code. For this sample, I\u2019ll ask for a story about a dog that is three sentences long.<\/p>\n<pre>\/\/ Load prompt from resource\r\nusing StreamReader reader = new(Assembly.GetExecutingAssembly().GetManifestResourceStream(\"prompts.GenerateStory.yaml\")!);\r\nvar function = kernel.CreateFunctionFromPromptYaml(await reader.ReadToEndAsync());\r\n\r\nConsole.WriteLine(await kernel.InvokeAsync(function, arguments: new()\r\n{\r\n    { \"topic\", \"Dog\" },\r\n    { \"length\", 3 }\r\n}));<\/pre>\n<p>This should output something like the following:<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">Once upon a time, there was a dog named Max. He was a loyal companion to his owner, always by their side. Together, they embarked on countless adventures, creating memories that would last a lifetime.<\/code><\/pre>\n<h3 style=\"margin-top: 1rem;\">Use Handlebars in your prompt templates for even more power.<\/h3>\n<p>If you want even more power (i.e., loops and conditionals), you can also leverage Handlebars. Handlebars makes a great addition to support any type of input variable.
For example, you can now loop over chat history messages.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">name: Chat\r\ntemplate: |\r\n  &lt;message role=\"system\"&gt;You are a helpful assistant.&lt;\/message&gt;\r\n  {{#each messages}}\r\n    &lt;message role=\"{{Role}}\"&gt;{{~Content~}}&lt;\/message&gt;\r\n  {{\/each}}\r\ntemplate_format: handlebars\r\ndescription: A function that uses the chat history to respond to the user.\r\ninput_variables:\r\n  - name: messages\r\n    description: The history of the chat.\r\n    is_required: true<\/code><\/pre>\n<p>To use the new Handlebars template, you\u2019ll need to include the Handlebars package and pass in a <code>HandlebarsPromptTemplateFactory<\/code> when you create your prompt.<\/p>\n<pre>using StreamReader reader = new(Assembly.GetExecutingAssembly().GetManifestResourceStream(\"prompts.Chat.yaml\")!);\r\nKernelFunction prompt = kernel.CreateFunctionFromPromptYaml(\r\n    reader.ReadToEnd(),\r\n    promptTemplateFactory: new HandlebarsPromptTemplateFactory()\r\n);\r\n\r\nvar result = await kernel.InvokeAsync(\r\n    prompt,\r\n    arguments: new() {\r\n        { \"messages\", chatMessages }\r\n    });<\/pre>\n<h2 style=\"margin-top: 2rem;\">Other important changes<\/h2>\n<p>In addition to the features above, we\u2019ve also cleaned up our interface as part of V1.0.0 RC1.<\/p>\n<h3 style=\"margin-top: 1rem;\">Aligning names with the rest of the industry.<\/h3>\n<p>We updated our naming patterns to align with the rest of the industry and to avoid potential collisions in .NET.
This list is not exhaustive, but it does cover the major changes that occurred:<\/p>\n<ul>\n<li>The <code>SK<\/code> prefix was renamed to <code>Kernel<\/code>; for example:\n<ul>\n<li><code>SKFunction<\/code> has become <code>KernelFunction<\/code><\/li>\n<li><code>SKFunctionMetadata<\/code> has become <code>KernelFunctionMetadata<\/code><\/li>\n<li><code>SKJsonSchema<\/code> has become <code>KernelJsonSchema<\/code><\/li>\n<li><code>SKParameterMetadata<\/code> has become <code>KernelParameterMetadata<\/code><\/li>\n<li><code>SKPluginCollection<\/code> has become <code>KernelPluginCollection<\/code><\/li>\n<li><code>SKReturnParameterMetadata<\/code> has become <code>KernelReturnParameterMetadata<\/code><\/li>\n<\/ul>\n<\/li>\n<li>The connector interfaces have been updated to match their model types in Azure AI and Hugging Face:\n<ul>\n<li><code>ITextCompletionService<\/code> has become <code>ITextGenerationService<\/code><\/li>\n<li><code>IImageGenerationService<\/code> has become <code>ITextToImageService<\/code><\/li>\n<\/ul>\n<\/li>\n<li><code>RequestSettings<\/code> has been renamed to <code>PromptExecutionSettings<\/code><\/li>\n<li>Semantic functions have been renamed to prompt functions<\/li>\n<\/ul>\n<h3 style=\"margin-top: 1rem;\">Custom implementations have been replaced with .NET standard implementations.<\/h3>\n<p>Previously, we had classes and interfaces like <code>IAIServiceProvider<\/code>, <code>HttpHandlerFactory<\/code>, and retry handlers.
With our move to align with dependency injection, these implementations are no longer necessary because developers can use existing standard approaches that are already in use within the .NET ecosystem.<\/p>\n<h3 style=\"margin-top: 1rem;\"><code>SKContext<\/code> and <code>ContextVariables<\/code> have been replaced.<\/h3>\n<p>As we developed V1.0.0, we noticed that <code>SKContext<\/code> shared many similarities with the kernel, so <code>SKContext<\/code> has been replaced with your <code>Kernel<\/code> instance in all of the method signatures that previously required it as an input parameter.<\/p>\n<p>As part of this move, we also replaced <code>ContextVariables<\/code> with <code>KernelArguments<\/code>. <code>KernelArguments<\/code> is very similar to <code>ContextVariables<\/code>, except that it can store non-string variables and also includes <code>PromptExecutionSettings<\/code> (previously known as <code>RequestSettings<\/code>).<\/p>\n<h2 style=\"margin-top: 2rem;\">Getting started<\/h2>\n<p>If you\u2019ve gotten this far and want to try out v1.0.0, please check out our two updated starters.<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/microsoft\/semantic-kernel-starters\/tree\/main\/sk-csharp-hello-world\"><strong>Hello world<\/strong><\/a> \u2013 quickly get started using prompt YAML files and streaming<\/li>\n<li><a href=\"https:\/\/github.com\/microsoft\/semantic-kernel-starters\/tree\/main\/sk-csharp-console-chat\"><strong>Console chat<\/strong><\/a> \u2013 see how to use Semantic Kernel with .NET dependency injection<\/li>\n<\/ul>\n<h2 style=\"margin-top: 2rem;\">Join the hackathon and let us know what you think<\/h2>\n<p>As of today, the <a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/come-hack-with-us-on-semantic-kernel-v1-0\/\">V1.0.0 RC hackathon has started<\/a>.
Give the starters a try, build something cool, and give us feedback on what the experience was like on <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/discussions\/3358\">our discussion boards<\/a>. We&#8217;ll use this information to polish the SDK before going live with V1.0.0 at the end of the year.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Semantic Kernel v1.0 has shipped and the contents of this blog entry is now out of date. Since the interface is getting extremely close to its final v1.0.0 structure, we\u2019re excited to release v1.0.0 RC1 of the .NET Semantic Kernel SDK. During the next two weeks we\u2019ll be focused on bug fixes and making minor [&hellip;]<\/p>\n","protected":false},"author":121401,"featured_media":1702,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1664","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-semantic-kernel"],"acf":[],"blog_post_summary":"<p>Semantic Kernel v1.0 has shipped and the contents of this blog entry is now out of date. Since the interface is getting extremely close to its final v1.0.0 structure, we\u2019re excited to release v1.0.0 RC1 of the .NET Semantic Kernel SDK. 
During the next two weeks we\u2019ll be focused on bug fixes and making minor [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/1664","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/users\/121401"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/comments?post=1664"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/1664\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media\/1702"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media?parent=1664"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/categories?post=1664"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/tags?post=1664"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}