{"id":1396,"date":"2023-11-01T06:06:07","date_gmt":"2023-11-01T13:06:07","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/semantic-kernel\/?p=1396"},"modified":"2024-01-10T15:11:06","modified_gmt":"2024-01-10T23:11:06","slug":"what-to-expect-from-v1-and-beyond-for-semantic-kernel","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/agent-framework\/what-to-expect-from-v1-and-beyond-for-semantic-kernel\/","title":{"rendered":"What to expect from v1 and beyond for Semantic Kernel."},"content":{"rendered":"<blockquote><p>Semantic Kernel <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/releases\/tag\/dotnet-1.0.1\">v1.0<\/a> has shipped and the contents of this blog entry is now out of date.<\/p><\/blockquote>\n<h1 style=\"padding-bottom: 2rem;\"><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/03\/skpatternlarge.png\"><img decoding=\"async\" class=\"aligncenter size-full wp-image-89\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2023\/03\/skpatternlarge.png\" alt=\"Image skpatternlarge\" width=\"1638\" height=\"136\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/03\/skpatternlarge.png 1638w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/03\/skpatternlarge-300x25.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/03\/skpatternlarge-1024x85.png 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/03\/skpatternlarge-768x64.png 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2023\/03\/skpatternlarge-1536x128.png 1536w\" sizes=\"(max-width: 1638px) 100vw, 1638px\" \/><\/a><\/h2>\n<p>In <a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/introducing-the-v1-0-0-beta1-for-the-net-semantic-kernel-sdk\/\">a previous 
article<\/a>, we announced the beta launch of Semantic Kernel v1. In that article, we shared the initial breaking changes we made for v1: 1) renaming skills to plugins, 2) making Semantic Kernel AI service agnostic while maintaining first-class support for Azure OpenAI and OpenAI, and 3) consolidating our implementation of planners.<\/p>\n<p>These are by no means the <em>only<\/em> changes we have planned for v1, though. Several other changes are necessary before we can confidently say that we have a simple-to-use API that can provide a reliable foundation for current and future applications.<\/p>\n<p>As we make changes, we are using an obsolescence strategy with guidance on how to move to the new API. This doesn&#8217;t work well, however, for all scenarios. We&#8217;re learning that the community has been using Semantic Kernel in novel and exciting ways, so we need your help to let us know if our v1 proposal accidentally breaks any existing scenarios.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">Because of this, we\u2019re excited to share our overall proposal for the v1 interface so we can begin to collect feedback from you, our community.<\/strong>\u00a0 In this blog post, we\u2019ll share the changes we\u2019re making as well as provide samples that demonstrate what it will be like to build an AI-powered app with Semantic Kernel in the future. 
Naturally, both the proposal and samples will change as we collect feedback from you.<\/p>\n<p>To share everything we want to cover, the following blog post is broken into three sections:<\/p>\n<ol>\n<li>The changes we\u2019re planning to make (and why).<\/li>\n<li>Where to find <a href=\"https:\/\/github.com\/matthewbolanos\/sk-v1-proposal\/tree\/main\/dotnet\/samples\">samples<\/a> using the proposed v1 API.<\/li>\n<li>And most importantly, how you can <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/discussions\/3358\">give us feedback<\/a>.<\/li>\n<\/ol>\n<p>As you read through our proposal, remember that it\u2019s just that: a proposal. It can and likely will change, but we want to share it with you <em>now<\/em> so you can let us know if we\u2019re doing anything that will negatively impact your scenarios.<\/p>\n<p>Also note that this list is <em>long<\/em>. <u style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">Not everything here will land in v1.0.0<\/u>, but at the very least, we\u2019ll try to set up our interfaces so we can support these features as non-breaking changes in the future.<\/p>\n<h2 style=\"clear: both; padding-top: 2rem;\">The proposed changes coming to v1.<\/h2>\n<p>The Semantic Kernel team had four goals for our v1 release of the SDK:<\/p>\n<ol>\n<li>Simplify the core of Semantic Kernel.<\/li>\n<li>Expose the full power of LLMs through semantic functions.<\/li>\n<li>Improve the effectiveness of Semantic Kernel planners.<\/li>\n<li>Provide a compelling reason to use the kernel.<\/li>\n<\/ol>\n<p>For each goal, we describe the community challenges we wanted to address and how we propose to fix them.<\/p>\n<h3 style=\"clear: both; padding-top: 1rem;\">01. Simplifying the core of Semantic Kernel.<\/h3>\n<p>As Semantic Kernel has matured, it has become increasingly complex. This has caused confusion for new and existing users alike. 
Much of this is because of the <em>many<\/em> concepts we\u2019ve added to Semantic Kernel. What\u2019s an SKContext? How is it different from ContextVariables? When should I use a function, plugin, or memory connector?<\/p>\n<p>These many concepts made getting started difficult and often artificially constrained the power of Semantic Kernel. For example, today\u2019s ContextVariables can only hold strings, whereas a Dictionary&lt;string, object&gt; would be simpler to understand <em>and<\/em> allow developers to use any datatype they desire.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">01.01. ContextVariables will become a Dictionary&lt;string, object&gt;<\/strong> \u2013 You will no longer be limited to storing variables as strings. With native object support in the kernel, you\u2019ll be able to input and output complex objects from any of your functions.<\/p>\n<p>For example, when calling Kernel.RunAsync(), you will be able to pass in an arbitrary dictionary with complex objects (like all your chat messages).<\/p>\n<pre>\/\/ Start the chat\r\nChatHistory chatHistory = gpt35Turbo.CreateNewChat();\r\nwhile(true)\r\n{\r\n    Console.Write(\"User &gt; \");\r\n    chatHistory.AddUserMessage(Console.ReadLine()!);\r\n\r\n    \/\/ Run the simple chat\r\n    var result = await kernel.RunAsync(\r\n        chatFunction,\r\n        variables: new() {{ \"messages\", chatHistory }}\r\n    );\r\n\r\n    Console.WriteLine(\"Assistant &gt; \" + result);\r\n    chatHistory.AddAssistantMessage(result.GetValue&lt;string&gt;()!);\r\n}<\/pre>\n<p>Elsewhere, you can define native functions that can consume and return complex objects. 
The following example shows how you could return an array of search results instead of a string representation of it.<\/p>\n<pre>[SKFunction, Description(\"Searches Bing for the given query\")]\r\npublic async Task&lt;List&lt;string&gt;&gt; SearchAsync(\r\n    [Description(\"The search query\"), SKName(\"query\")] string query\r\n)\r\n{\r\n    var results = await this._bingConnector.SearchAsync(query, 10);\r\n\r\n    return results.ToList();\r\n}<\/pre>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">01.02. SKContext will be replaced with IKernel and the variables dictionary<\/strong> \u2013 Most of the information available in SKContext can also be found in IKernel. To simplify the API\u2013and to give developers more power\u2013we will provide an entire IKernel instance along with a variables dictionary wherever SKContext is used today.<\/p>\n<p>For example, invoking a function will now look like the following.<\/p>\n<pre>var results = await function.InvokeAsync(kernel, variables);<\/pre>\n<p>With the kernel instance, you as a developer can then access all the available AI services and functions from within your function.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">01.03. Memory will be modeled like any other plugin<\/strong> \u2013 We\u2019ve gotten feedback that the existing memory abstractions are too limiting because they don\u2019t offer the full power of each of their underlying services. 
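<\/p>\n<p>To make this concrete, here is a purely illustrative sketch of what wrapping a memory service as an ordinary plugin might look like. The IVectorStoreClient interface and the method names below are hypothetical placeholders, not a confirmed API:<\/p>\n<pre>public class MemoryPlugin\r\n{\r\n    \/\/ Hypothetical client for an underlying vector store\r\n    private readonly IVectorStoreClient _store;\r\n\r\n    public MemoryPlugin(IVectorStoreClient store)\r\n    {\r\n        this._store = store;\r\n    }\r\n\r\n    [SKFunction, Description(\"Saves a memory so it can be recalled later\")]\r\n    public async Task SaveMemoryAsync(\r\n        [Description(\"The text to remember\"), SKName(\"text\")] string text\r\n    )\r\n    {\r\n        await this._store.UpsertAsync(text);\r\n    }\r\n\r\n    [SKFunction, Description(\"Recalls the memories most relevant to a query\")]\r\n    public async Task&lt;List&lt;string&gt;&gt; RecallMemoryAsync(\r\n        [Description(\"The search query\"), SKName(\"query\")] string query\r\n    )\r\n    {\r\n        var results = await this._store.SearchAsync(query, limit: 3);\r\n        return results.ToList();\r\n    }\r\n}<\/pre>\n<p>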
Meanwhile, Semantic Kernel has taken a big bet on plugins, which allow developers to create <em>any<\/em> arbitrary API for LLMs.<\/p>\n<p>This means we will be removing the Memory property of IKernel and working with the contributors of the existing memory connectors to turn them into plugins so they can unleash the full power of their services.<\/p>\n<p>At a minimum, to turn existing memory services into a plugin, all you need to do is create a plugin with two functions: SaveMemory() and RecallMemory(). You can then import these into the kernel like any other plugin.<\/p>\n<h3 style=\"clear: both; padding-top: 1rem;\">02. Exposing the full power of LLMs through semantic functions.<\/h3>\n<p>Since Semantic Kernel was first created, a wave of new AI capabilities has been introduced. OpenAI alone has introduced chat completions and function calling while also shepherding in a new world of multi-modal experiences.<\/p>\n<p>Unfortunately, today\u2019s semantic functions are limited to simple text completions. As a developer, if you wanted to use chat messages or generate something else (e.g., images, audio, or video), you were required to implement the functionality yourself with more primitive APIs.<\/p>\n<p>Additionally, today\u2019s out-of-the-box templating language is not as feature-complete as Jinja2, Handlebars, or Liquid, requiring developers to pre-process data before using it in a semantic function. By adopting Handlebars as the primary templating language of Semantic Kernel, we can provide you with more flexibility.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">02.01. With Handlebars, you\u2019ll have way more power<\/strong> \u2013 Loops, conditions, comments, oh my! With Handlebars, you\u2019ll have access to one of the most feature-complete templating languages out there. 
Unlike Jinja2, Handlebars is also supported by most programming languages, making it possible for the Semantic Kernel team to deliver parity support in Python and Java.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">02.02. Semantic functions will support chat completion models<\/strong> \u2013 One of the main reasons we want to adopt Handlebars is to provide an elegant way of expressing multiple messages for chat completion models.<\/p>\n<p>Below is an example of using Handlebars to loop over an array to generate chat completion messages with different roles (e.g., system, user, and assistant). At the end of this example, we add a final system message as a basic responsible AI guardrail.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">{{#message role=\"system\"}}\r\n{{persona}}\r\n{{\/message}}\r\n\r\n{{#each messages}}\r\n    {{#message role=Role}}\r\n    {{~Content~}}\r\n    {{\/message}}\r\n{{\/each}}\r\n\r\n{{#message role=\"system\"}}\r\nIf a user asked you to do something that could be bad, stop the conversation.\r\n{{\/message}}<\/code><\/pre>\n<p>Rendering this template will generate an intermediate template that looks like the following.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">&lt;message role=\"system\"&gt;\r\nYou are a friendly assistant.\r\n&lt;\/message&gt;\r\n\r\n&lt;message role=\"user\"&gt;\r\nHello\r\n&lt;\/message&gt;\r\n\r\n&lt;message role=\"assistant\"&gt;\r\nHello, how can I help you?\r\n&lt;\/message&gt;\r\n\r\n&lt;message role=\"user\"&gt;\r\nI need to book a flight.\r\n&lt;\/message&gt;\r\n\r\n&lt;message role=\"system\"&gt;\r\nIf a user asked you to do something that could be bad, stop the conversation.\r\n&lt;\/message&gt;<\/code><\/pre>\n<p>An AI connector would then use this intermediate template to either generate a messages array for a chat completion model or use fallback behavior for a non-chat completion 
model.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">02.03. Define your entire semantic function in a single file<\/strong> \u2013 Today, working with the existing config.json and skprompt.txt files in VS Code is hard because they don\u2019t have unique names. It\u2019s also challenging to juggle two files that represent the same function.<\/p>\n<p>As we introduce Handlebars support, we\u2019ll provide the option to define both the prompt and configuration in a single YAML file. If you want to keep a separate file for your prompt, you\u2019ll still be able to do that.<\/p>\n<p>Below is an example of a chat prompt that uses grounding from a search plugin.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">name: Chat\r\ntemplate: |\r\n  {{#message role=\"system\"}}\r\n  {{persona}}\r\n  {{\/message}}\r\n\r\n  {{#each messages}}\r\n    {{#message role=Role}}\r\n       {{~Content~}}\r\n    {{\/message}}\r\n  {{\/each}}\r\n\r\n  {{#message role=\"system\"}}\r\n  {{Search_Search query=(Search_GetSearchQuery messages=messages)}}\r\n  {{\/message}}\r\n\r\ntemplate_format: handlebars\r\ndescription: A function that uses the chat history to respond to the user.\r\ninput_variables:\r\n- name: persona\r\n  type: string\r\n  description: The persona of the assistant.\r\n  default_value: You are a helpful assistant.\r\n  is_required: false\r\n- name: messages\r\n  type: ChatHistory\r\n  description: The history of the chat.\r\n  is_required: true\r\noutput_variable:\r\n  type: string\r\n  description: The response from the assistant.<\/code><\/pre>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">02.04. You can configure execution settings for multiple models<\/strong> \u2013 We\u2019ve heard from several customers that they want to define default configuration for multiple models. 
This allows them to easily switch between models based on custom logic. With the new execution settings object, you can do just that. With this information, Semantic Kernel will choose the best model at invocation time based on the available services in the kernel.<\/p>\n<p>Below is an example of what the execution settings object looks like in the new prompt YAML file. Here, we define different temperatures for gpt-4 and gpt-3.5-turbo. Because gpt-4 is listed first, Semantic Kernel will try to use it first if it\u2019s available in the kernel.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">request:\r\n- model_id: gpt-4\r\n  temperature: 1.0\r\n- model_id: gpt-3.5-turbo\r\n  temperature: 0.7<\/code><\/pre>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">02.05. In the future, we\u2019ll also support function calling and other modalities from within semantic functions<\/strong> \u2013 We know that developers will want to use semantic functions to send and return other message types like functions, images, videos, and audio. We\u2019ve designed the prompt template syntax in a way to support these features in the future.<\/p>\n<p>For example, in the future you could include a message with a video inside of it so a model could describe it to the user.<\/p>\n<pre class=\"prettyprint language-html\"><code class=\"language-html\">{{#message role=\"system\"}}\r\nYou are a helpful assistant that describes videos.\r\n{{\/message}}\r\n\r\n{{#message role=\"user\"}}\r\nCan you describe this video for me?\r\n{{\/message}}\r\n\r\n{{#message role=\"user\"}}\r\n{{video title=\"Video title\" description=\"Video description\" url=\"https:\/\/www.example.com\/video.mp4\"}}\r\n{{\/message}}<\/code><\/pre>\n<h3 style=\"clear: both; padding-top: 1rem;\">03. 
Improving the effectiveness of Semantic Kernel planners.<\/h3>\n<p>Today\u2019s planners (Action, Sequential, and Stepwise) only have access to a limited set of information about functions (i.e., name, description, and input parameters). Based on research performed by the Semantic Kernel team, planners can perform much better if they\u2019re also given the expected output, examples (both good and bad), and descriptions of complex types.<\/p>\n<p>As part of this effort, we\u2019d also like to incorporate other learnings from Microsoft research into both new and existing planners.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">03.01. There will be additional ways to semantically describe functions<\/strong> \u2013 With new attributes, you\u2019ll be able to describe the output of a function as well as provide good and bad examples.<\/p>\n<pre>[SKFunction]\r\n[Description(\"Adds two numbers.\")]\r\n[SKOutputDescription(\"The summation of the numbers.\")]\r\n[SKGoodSample(\r\n    inputs: \"{\\\"number1\\\":1, \\\"number2\\\":2}\",\r\n    output: \"3\"\r\n)]\r\n[SKBadSample(\r\n    inputs: \"{\\\"number1\\\":\\\"one\\\", \\\"number2\\\":\\\"two\\\"}\",\r\n    error: \"The value \\\"one\\\" is not a valid number.\"\r\n)]\r\npublic static double Add(\r\n    [Description(\"The first number to add\")] double number1,\r\n    [Description(\"The second number to add\")] double number2\r\n)\r\n{\r\n    return number1 + number2;\r\n}<\/pre>\n<p>We will also use reflection to get the structure of any complex types you may use. If you already use System.Text.Json attributes, we\u2019ll use those to better describe the objects to planners.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">03.02. OpenAI functions will help power our planners for increased accuracy<\/strong> \u2013 Most of the planners in Semantic Kernel were built before OpenAI introduced function calling. 
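<\/p>\n<p>For context, function calling works by sending the model machine-readable definitions of the available functions and letting it reply with a structured call instead of free text. For the Add function shown earlier, the definition given to the model would look roughly like the following (an illustrative sketch of the OpenAI function schema; the Math_Add name is only an assumption about how a plugin function might be qualified):<\/p>\n<pre>{\r\n  \"name\": \"Math_Add\",\r\n  \"description\": \"Adds two numbers.\",\r\n  \"parameters\": {\r\n    \"type\": \"object\",\r\n    \"properties\": {\r\n      \"number1\": { \"type\": \"number\", \"description\": \"The first number to add\" },\r\n      \"number2\": { \"type\": \"number\", \"description\": \"The second number to add\" }\r\n    },\r\n    \"required\": [\"number1\", \"number2\"]\r\n  }\r\n}<\/pre>\n<p>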
We believe that leveraging function calling in our existing planners will help them yield better results.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">03.03. Other research will be baked into our planners<\/strong> \u2013 Within Microsoft, we have other initiatives identifying the best strategies to create planners that are fast, reliable, and cheap (i.e., use fewer tokens on cheaper models). The results of this research will also be included in our planners.<\/p>\n<h3 style=\"clear: both; padding-top: 1rem;\">04. Providing a compelling reason to use the kernel.<\/h3>\n<p>Lastly, we wanted to make sure the namesake of Semantic Kernel, IKernel, actually aided the developer experience instead of detracting from it. Today, creating and managing a kernel is too onerous, so many users opt to simply invoke functions without the kernel.<\/p>\n<p>With the changes below, we believe we can both increase the value of the kernel <em>and<\/em> make it easier to use.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">04.01. Use a function with multiple kernels<\/strong> \u2013 Today, semantic functions are tied 1-to-1 with a kernel. This means that whenever you create a new kernel, you need to reinitialize all your functions as well. As part of v1, we will break this relationship. This will allow you to instantiate your functions <em>once<\/em> as singletons and import them into multiple kernels.<\/p>\n<p>Not only will this create cleaner code, but it will also make your applications more performant because fewer resources will need to be recreated during kernel instantiation.<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">04.02. Introducing plugins to the kernel<\/strong> \u2013 Today, the kernel only has a collection of functions. 
This means the kernel is unable to store information at the plugin level (e.g., the plugin description). This is helpful contextual information that can be used by planners.<\/p>\n<p>To create a plugin, you\u2019ll just need to provide its name and a list of its functions. You can optionally provide other information like the plugin description, logo, and learn-more URLs.<\/p>\n<pre>\/\/ Create math plugin with both semantic and native functions\r\nList&lt;ISKFunction&gt; mathFunctions = NativeFunction.GetFunctionsFromObject(new Math());\r\nmathFunctions.Add(SemanticFunction.GetFunctionFromYaml(currentDirectory + \"\/Plugins\/MathPlugin\/GenerateMathProblem.prompt.yaml\"));\r\n\r\nPlugin mathPlugin = new(\r\n    \"Math\",\r\n    functions: mathFunctions\r\n);<\/pre>\n<p>Afterwards, you can add the plugin to the kernel using the new kernel constructor (next section).<\/p>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">04.03. Simplifying the creation of a kernel<\/strong> \u2013 Most users today use the KernelBuilder to create new kernels, but this often requires a lot of code and makes it difficult to use dependency injection. For v1, the primary way of creating a kernel will be through the kernel constructor.<\/p>\n<p>In the example below, we demonstrate just how easy it will be to pass a list of AI services and plugins into a kernel.<\/p>\n<pre>\/\/ Create new kernel\r\nIKernel kernel = new Kernel(\r\n    aiServices: new () { gpt35Turbo },\r\n    plugins: new () { intentPlugin, mathPlugin }\r\n);<\/pre>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">04.04. Stream functions from the kernel<\/strong> \u2013 Perhaps the main reason customers cannot use RunAsync() on the kernel today is the lack of streaming support. 
This will be available with v1.<\/p>\n<pre>var result = await kernel.RunAsync(\r\n    chatFunction,\r\n    variables: new() {{ \"messages\", chatHistory.TakeLast(20) }},\r\n    streaming: true\r\n);<\/pre>\n<p><strong style=\"font-family: 'Segoe UI Bold','Segoe UI',Tahoma,Geneva,Verdana,sans-serif;\">04.05. Evaluate your AI by running the same scenario across different kernels<\/strong> \u2013 Stacked together, these changes make it possible for you to easily set up multiple kernels with different configurations. When used with a product like Prompt flow, this allows you to pick the best setup by running batch evaluations and A\/B tests against different kernels.<\/p>\n<p>For even more control, we will also allow users to manually override request settings when instantiating the kernel and when using the RunAsync() and InvokeAsync() methods.<\/p>\n<pre>\/\/ Create a new kernel with overrides\r\nIKernel kernel = new Kernel(\r\n    aiServices: new () { gpt35Turbo, gpt4 },\r\n    requestSettings: new () {\r\n        {\"SimpleChat\", new () { ModelId = \"gpt-4\" }}\r\n    }\r\n);<\/pre>\n<pre>\/\/ Send overrides with the RunAsync() method\r\nvar result = await kernel.RunAsync(\r\n    chatFunction,\r\n    variables: new() {{ \"messages\", chatHistory }},\r\n    requestSettings: new () {\r\n        {\"SimpleChat\", new () { ModelId = \"gpt-3.5-turbo\" }}\r\n    }\r\n);<\/pre>\n<h2 style=\"padding-top: 2rem;\">Get a sneak peek of using v1 of the SDK<\/h2>\n<p>To validate our design decisions, the Semantic Kernel team has created a repo with samples demonstrating what coding with v1 will look like. 
You can find it by navigating to the <a href=\"https:\/\/github.com\/matthewbolanos\/sk-v1-proposal\">sk-v1-proposal repo<\/a> on GitHub and going to the <a href=\"https:\/\/github.com\/matthewbolanos\/sk-v1-proposal\/tree\/main\/dotnet\/samples\">\/dotnet\/samples folder<\/a>.<\/p>\n<p>We currently have four scenarios that capture the most common apps built by customers:<\/p>\n<ul>\n<li>Simple chat<\/li>\n<li>Persona chat (i.e., with meta prompt)<\/li>\n<li>Simple RAG (i.e., with grounding)<\/li>\n<li>Dynamic RAG (i.e., with planner-based grounding)<\/li>\n<\/ul>\n<p>To get the samples to work, several <span style=\"text-decoration: line-through;\">hacks<\/span> extensions were built in the dotnet\/src\/extensions folder. The goal for v1 is to get Semantic Kernel to the point where <em>no<\/em> extensions are required to run the samples in the \/dotnet\/samples folder. The way the extensions are written is <u>not<\/u> indicative of how they will be written for v1.<\/p>\n<p>We will also add samples in the Python and Java flavors of Semantic Kernel to get additional feedback on those languages for v1.<\/p>\n<h2 style=\"clear: both; padding-top: 2rem;\">Tell us what you think!<\/h2>\n<p>We\u2019re sharing the proposal for v1 now so we can course correct if necessary. This content will also be used by the contributors of the Python and Java flavors of Semantic Kernel as they go on a similar v1 journey.<\/p>\n<p>To centralize feedback on our v1 proposal, please connect with us on our discussion board on GitHub. There, we\u2019ve created a <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/discussions\/3358\">dedicated discussion<\/a> where you can provide us with feedback.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Semantic Kernel v1.0 has shipped and the contents of this blog entry are now out of date. In a previous article, we announced the beta launch of Semantic Kernel v1. 
In that article, we shared the initial breaking changes we made for v1: 1) renaming skills to plugins, 2) making Semantic Kernel AI service agnostic [&hellip;]<\/p>\n","protected":false},"author":121401,"featured_media":1424,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1396","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-semantic-kernel"],"acf":[],"blog_post_summary":"<p>Semantic Kernel v1.0 has shipped and the contents of this blog entry are now out of date. In a previous article, we announced the beta launch of Semantic Kernel v1. In that article, we shared the initial breaking changes we made for v1: 1) renaming skills to plugins, 2) making Semantic Kernel AI service agnostic [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/1396","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/users\/121401"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/comments?post=1396"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/1396\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media\/1424"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media?parent=1396"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/catego
ries?post=1396"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/tags?post=1396"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}