{"id":1711,"date":"2023-12-09T06:23:18","date_gmt":"2023-12-09T14:23:18","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/semantic-kernel\/?p=1711"},"modified":"2024-02-29T14:08:40","modified_gmt":"2024-02-29T22:08:40","slug":"migrating-from-the-sequential-and-stepwise-planners-to-the-new-handlebars-and-stepwise-planner","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/agent-framework\/migrating-from-the-sequential-and-stepwise-planners-to-the-new-handlebars-and-stepwise-planner\/","title":{"rendered":"Migrating from the Sequential and Stepwise planners to the new Handlebars and Stepwise planner"},"content":{"rendered":"<p>As part of our <a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/semantic-kernels-ignite-release-beta8-for-the-net-sdk\/\">Ignite release<\/a>, we shipped the <a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/semantic-kernels-ignite-release-beta8-for-the-net-sdk\/#introducing-our-gen-4-and-gen-5-planners\">new gen-4 and gen-5 planners<\/a> developed by the Semantic Kernel team. These new planners can handle many more functions (2-3X more) and leverage more complex logic (e.g., loops, conditionals, and complex objects), all while using fewer tokens.<\/p>\n<p>Because of <span style=\"text-decoration: underline;\"><em>just<\/em><\/span> how much better these new planners are, the Semantic Kernel team has decided that the existing Action, Sequential, and V1 Stepwise planners will not move to the v1.0.0 version of Semantic Kernel. While they were revolutionary at the time, there are now much better tools for developers to use. 
By removing them from v1.0.0, we can make sure that new developers don\u2019t accidentally use old techniques when better solutions exist.<\/p>\n<p>To use the new planners, you will want to install the following packages:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.SemanticKernel.Planners.Handlebars\/1.0.0-rc3\">NuGet Gallery | Microsoft.SemanticKernel.Planners.Handlebars 1.0.0-rc3<\/a><\/li>\n<li><a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.SemanticKernel.Planners.OpenAI\/1.0.0-rc3\">NuGet Gallery | Microsoft.SemanticKernel.Planners.OpenAI 1.0.0-rc3<\/a><\/li>\n<\/ul>\n<p>To help existing developers with the migration to the Handlebars and Function Calling Stepwise planners, we\u2019ve created the following migration guide so you can use all the same functionality you had before.<\/p>\n<h2>Action planner <strong>\u2192<\/strong> function calling<\/h2>\n<p>The Action planner was our very first planner. Given a list of functions, which one should be called next? This planner was great at identifying an intent from a user. 
Since the Action planner was first created, however, OpenAI delivered <a href=\"https:\/\/platform.openai.com\/docs\/guides\/function-calling\">function calling<\/a>, which bakes this functionality <em>directly<\/em> into the model.<\/p>\n<p>If you used the Action planner before, you likely had code like the following:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">\/\/ Create an instance of ActionPlanner.\r\nvar planner = new ActionPlanner(kernel, config: config);\r\n\r\n\/\/ We're going to ask the planner to find a function to achieve this goal.\r\nvar goal = \"Write a joke about Cleopatra in the style of Hulk Hogan.\";\r\n\r\n\/\/ The planner returns a plan, consisting of a single function\r\nvar plan = await planner.CreatePlanAsync(goal);\r\n\r\n\/\/ Execute the full plan (which is a single function)\r\nvar result = await plan.InvokeAsync(kernel);<\/code><\/pre>\n<p>To leverage function calling, you can either use <a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/release-candidate-1-for-the-semantic-kernel-net-sdk-is-now-live\/\">automated function calling<\/a> or use manual function calling with the code below:<\/p>\n<div>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">\/\/ Enable manual function calling\r\nOpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()\r\n{\r\n    ToolCallBehavior = ToolCallBehavior.EnableKernelFunctions\r\n};\r\n\r\n\/\/ Get the chat completions\r\nvar result = await chatCompletionService.GetChatMessageContentAsync(\r\n    new ChatHistory(\"Turn on the lights\"),\r\n    executionSettings: openAIPromptExecutionSettings,\r\n    kernel: kernel);\r\n\r\n\/\/ If a valid function was found, execute it\r\nvar functionCall = ((OpenAIChatMessageContent)result).ToolCalls.OfType&lt;ChatCompletionsFunctionToolCall&gt;().FirstOrDefault();\r\nif (functionCall != null)\r\n{\r\n    kernel.Plugins.TryGetFunctionAndArguments(functionCall, out 
KernelFunction? pluginFunction, out KernelArguments? arguments);\r\n    await pluginFunction!.InvokeAsync(kernel, arguments);\r\n}<\/code><\/pre>\n<\/div>\n<p>To use function calling, you must use an OpenAI model that supports it. This includes GPT-3.5-turbo and GPT-4 models versioned 0613 or newer.<\/p>\n<h2>Sequential planner <strong>\u2192<\/strong> Handlebars planner<\/h2>\n<blockquote><p>The new Handlebars planner has been marked as experimental to make it clear that its interfaces may change in a future release after v1.0.0. The Semantic Kernel team <em>fully<\/em> intends to support the Handlebars planner, so it is safe to migrate to from the previous planners. To suppress the experimental warnings, you can refer to the instructions in <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/blob\/main\/dotnet\/docs\/EXPERIMENTS.md\">this doc<\/a>.<\/p><\/blockquote>\n<p>With the Sequential planner, we introduced the ability for a planner to generate an entire plan using a single LLM call. This has several benefits:<\/p>\n<ol>\n<li>You could inspect the entire plan before executing it.<\/li>\n<li>You could save the plan so it could be reused later.<\/li>\n<li>It used fewer tokens than the Stepwise planner and function calling.<\/li>\n<\/ol>\n<p>These features are still valuable today, and they have yet to be provided out-of-the-box by foundational models because they require a client-side runtime to execute a plan.<\/p>\n<p>The challenge with the Sequential planner was that it required showing the AI (with one-shot prompting) how to <em>create<\/em> a plan using a custom XML syntax. This was challenging for the AI to handle, so the XML would often be invalid, and the AI would hallucinate after adding only a few functions.<\/p>\n<p>Thanks to Microsoft Research, however, we discovered that models perform <em>much<\/em> better if they are asked to \u201ccode\u201d in a language they\u2019ve already been trained on. 
Based on this research, we introduced a planner that uses <a href=\"https:\/\/handlebarsjs.com\/guide\/\">Handlebars syntax<\/a> to generate a plan. Not only is it more accurate, but it also allows the model to leverage native features like <a href=\"https:\/\/handlebarsjs.com\/guide\/block-helpers.html#simple-iterators\">loops<\/a> and <a href=\"https:\/\/handlebarsjs.com\/guide\/block-helpers.html#conditionals\">conditions<\/a> without additional prompting.<\/p>\n<p>If you used the Sequential planner before, you likely had code like the following:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">\/\/ Import the plugins for the planner\r\nkernel.ImportFunctions(new HttpPlugin(), \"HttpPlugin\");\r\n\r\n\/\/ Create the plan\r\nvar planner = new SequentialPlanner(kernel);\r\nvar plan = await planner.CreatePlanAsync(ask);\r\n\r\n\/\/ Print the plan to the console\r\nConsole.WriteLine(\"Plan:\\n\");\r\nConsole.WriteLine(JsonSerializer.Serialize(plan, new JsonSerializerOptions { WriteIndented = true }));\r\n\r\n\/\/ Execute the plan\r\nvar result = await kernel.RunAsync(plan);\r\n\r\n\/\/ Print the result to the console\r\nConsole.WriteLine($\"Results: {result}\");<\/code><\/pre>\n<p>In V1.0.0, you\u2019ll want to switch it to the following:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">using Microsoft.SemanticKernel.Planning.Handlebars;\r\n\r\n\/\/ Create the kernel\r\nvar kernel = Kernel.CreateBuilder()\r\n                   .AddAzureOpenAIChatCompletion(\/* Add your configuration *\/)\r\n                   .Build();\r\n\r\nkernel.ImportPluginFromType&lt;HttpPlugin&gt;();\r\n\r\n\/\/ Create the plan\r\nvar planner = new HandlebarsPlanner(new HandlebarsPlannerOptions() { AllowLoops = true });\r\nvar plan = await planner.CreatePlanAsync(kernel, goalFromUser);\r\n\r\n\/\/ Print the plan to the console\r\nConsole.WriteLine($\"Plan: {plan}\");\r\n\r\n\/\/ 
Execute the plan\r\nvar result = await plan.InvokeAsync(kernel);\r\n\r\n\/\/ Print the result to the console\r\nConsole.WriteLine($\"Results: {result}\");<\/code><\/pre>\n<h3>Saving and loading Handlebars plans<\/h3>\n<p>With the Sequential planner, you could take a plan and save it so you could run it again without asking a model to recreate it. This is possible with Handlebars as well, but instead of saving XML, you simply save a Handlebars template.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">var serializedPlan = plan.ToString();<\/code><\/pre>\n<p>You can then load the template back into a new HandlebarsPlan object so you can run it again.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">HandlebarsPlan reloadedPlan = new HandlebarsPlan(serializedPlan);<\/code><\/pre>\n<p>Afterwards, you can invoke the reloaded plan.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">\/\/ Execute the reloaded plan\r\nvar result = await reloadedPlan.InvokeAsync(kernel);\r\n\r\n\/\/ Print the result to the console\r\nConsole.WriteLine($\"Results: {result}\");<\/code><\/pre>\n<h2>Stepwise planner <strong>\u2192<\/strong> Function calling stepwise planner<\/h2>\n<blockquote><p>The new Stepwise planner has been marked as experimental to make it clear that its interfaces may change in a future release after v1.0.0. The Semantic Kernel team <em>fully<\/em> intends to support the Function Calling Stepwise planner, so it is safe to migrate to from the previous planners. To suppress the experimental warnings, you can refer to the instructions in <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/blob\/main\/dotnet\/docs\/EXPERIMENTS.md\">this doc<\/a>.<\/p><\/blockquote>\n<p>One of the most reliable ways of having an LLM complete a request is to use the ReAct methodology. 
With ReAct, you give the AI the ability to make a function call, reason over it, and then make another function call.<\/p>\n<p>With the introduction of function calling support from OpenAI, this functionality is now baked directly into the model, so you no longer need to waste tokens teaching the model how to perform ReAct.<\/p>\n<p>If vanilla function calling is sufficient, you can refer to the sample we provided in our RC1 blog post to automatically call functions to complete a task. If you run into challenges, however, you can also use the FunctionCallingStepwisePlanner. This planner adds some additional reasoning at the beginning of the plan generation to improve the reliability of function calling.<\/p>\n<p>If this is the code you previously had\u2026<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">using Microsoft.SemanticKernel.Planning.OpenAI;\r\n\r\nvar kernelSettings = KernelSettings.LoadSettings();\r\nvar kernel = new KernelBuilder()\r\n    .WithAzureOpenAIChatCompletion(\/* Add your configuration *\/)\r\n    .Build();\r\n\r\n\/\/ Import the plugins for the planner\r\nkernel.ImportFunctions(new HttpPlugin(), \"HttpPlugin\");\r\n\r\n\/\/ Create the plan\r\nStepwisePlanner planner = new(kernel);\r\nvar plan = planner.CreatePlan(ask);\r\n\r\n\/\/ Execute the plan\r\nvar result = plan.Invoke(kernel, []).Trim();\r\n\r\n\/\/ Print the result to the console\r\nConsole.WriteLine($\"Results: {result}\");<\/code><\/pre>\n<p>This is how you\u2019d update it to use the new Function Calling Stepwise planner:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">var kernel = Kernel.CreateBuilder()\r\n                   .AddAzureOpenAIChatCompletion(\/* Add your configuration *\/)\r\n                   .Build();\r\n\r\nkernel.ImportPluginFromType&lt;HttpPlugin&gt;();\r\n\r\n\/\/ Create 
and execute the plan\r\nvar planner = new FunctionCallingStepwisePlanner();\r\nvar result = await planner.ExecuteAsync(kernel, question);\r\n\r\n\/\/ Print the result to the console\r\nConsole.WriteLine($\"Results: {result}\");<\/code><\/pre>\n<p>To use function calling, you must use an OpenAI model that supports it. This includes GPT-3.5-turbo and GPT-4 models versioned 0613 or newer.<\/p>\n<h2>Filtering plugins for your planners<\/h2>\n<p>Because the previous planners did not perform well once there were more than a half-dozen functions, we added the ability to automatically filter plugins using vector similarity search <em>directly<\/em> into the planners.<\/p>\n<p>Unfortunately, there were a few challenges with this approach:<\/p>\n<ol>\n<li>It locked users into a single way of filtering functions.<\/li>\n<li>Embeddings had to be re-generated during every plan creation to rehydrate the vector DB.<\/li>\n<li>In many scenarios, it didn\u2019t work as well as it should have.<\/li>\n<\/ol>\n<p>Take, for example, the following request from a user:<\/p>\n<blockquote><p>I&#8217;m going to go to the hardware store so I can pick up some paint so I can paint both the ceiling and the floor the exact same color. Before I go though, I need to figure out how much paint I need to order. Can you tell me how much paint I should order for both the ceiling and floor if each can covers 30 sqft and the room is 20 feet long and 30 feet wide?<\/p><\/blockquote>\n<p>Because of the way vector similarity search works, this ask is unlikely to retrieve the relevant functions (i.e., multiplication and division) since they aren\u2019t mentioned. 
Instead, irrelevant functions about home improvement or the Ceiling and Floor math operations would be returned.<\/p>\n<p>Additionally, we found that filtering is needed less often, since the new planners can handle many more functions (2-3x) before they start hallucinating.<\/p>\n<p>For these reasons, we\u2019ve chosen not to support this functionality natively within the new planners. Instead, we recommend customers develop their own filtering logic for their planners (e.g., detect the intent of the user with function calling and dynamically load the right plugins for that task).<\/p>\n<p>If, however, you already use vector similarity search and need a replacement, you can use the following code during your migration. In this code, we demonstrate how you can create a filtered set of functions that are used within a planner. You could even update this code to use your own <em>custom<\/em> filtering logic (e.g., filter on a keyword or tag) to return an even better list of available functions.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">\/\/ Create the plugins you want to search on\r\nkernel.ImportPluginFromType&lt;HttpPlugin&gt;();\r\n\r\n\/\/ Create memory to store the functions\r\nvar memoryStorage = new VolatileMemoryStore();\r\nvar textEmbeddingGenerator = new AzureOpenAITextEmbeddingGenerationService(\/* Add your configuration *\/);\r\nvar memory = new SemanticTextMemory(memoryStorage, textEmbeddingGenerator);\r\n\r\n\/\/ Save functions to memory\r\nforeach (KernelFunction function in kernel.Plugins[\"HttpPlugin\"])\r\n{\r\n    var fullyQualifiedName = nameof(HttpPlugin) + \"-\" + function.Name;\r\n    await memory.SaveInformationAsync(\r\n        \"functions\",\r\n        fullyQualifiedName + \": \" + function.Description,\r\n        fullyQualifiedName,\r\n        additionalMetadata: function.Name\r\n        );\r\n}\r\n\r\n\/\/ Retrieve the \"relevant\" functions\r\nvar 
relevantRememberedFunctions = memory.SearchAsync(\"functions\", ask, 30, minRelevanceScore: 0.75);\r\nvar relevantFoundFunctions = new List&lt;KernelFunction&gt;();\r\n\/\/ Populate a plugin with the filtered results\r\nawait foreach (MemoryQueryResult relevantFunction in relevantRememberedFunctions)\r\n{\r\n    if (kernel.Plugins[\"HttpPlugin\"].TryGetFunction(relevantFunction.Metadata.AdditionalMetadata, out var function))\r\n    {\r\n        relevantFoundFunctions.Add(function);\r\n    }\r\n}\r\nKernelPlugin relevantFunctionsPlugin = KernelPluginFactory.CreateFromFunctions(\"Http\", relevantFoundFunctions);\r\n\r\nvar kernelWithRelevantFunctions = Kernel.CreateBuilder()\r\n                                        .AddAzureOpenAIChatCompletion(\/* Add your configuration *\/)\r\n                                  .Build();\r\n\r\nkernelWithRelevantFunctions.Plugins.Add(relevantFunctionsPlugin);<\/code><\/pre>\n<p>Afterwards, you can create a plan using the new kernel with the filtered functions!<\/p>\n<h2>Please give us feedback!<\/h2>\n<p>If during your migration you find that there is still functionality in the previous planners that you depended on, please let us know. We can either work with you to find a suitable replacement or identify how to close the gaps in our new gen-4 and gen-5 planners.<\/p>\n<p>To give us feedback, please create an issue on our GitHub repo.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As part of our Ignite release, we shipped the new gen-4 and gen-5 planners developed by the Semantic Kernel team. These new planners can handle many more functions (2-3X more), leverage more complex logic (e.g., loops, conditionals, and complex objects), all while using fewer tokens. 
Because of just how much better these new planners are, [&hellip;]<\/p>\n","protected":false},"author":121401,"featured_media":1721,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1711","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-semantic-kernel"],"acf":[],"blog_post_summary":"<p>As part of our Ignite release, we shipped the new gen-4 and gen-5 planners developed by the Semantic Kernel team. These new planners can handle many more functions (2-3X more), leverage more complex logic (e.g., loops, conditionals, and complex objects), all while using fewer tokens. Because of just how much better these new planners are, [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/1711","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/users\/121401"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/comments?post=1711"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/1711\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media\/1721"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media?parent=1711"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/categories?post=1711"},{"taxonomy":"post_tag","embeddable":true,"href":
"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/tags?post=1711"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}