Migrating from the Sequential and Stepwise planners to the new Handlebars and Function Calling Stepwise planners

Matthew Bolanos

As part of our Ignite release, we shipped the new gen-4 and gen-5 planners developed by the Semantic Kernel team. These new planners can handle many more functions (2-3x more) and leverage more complex logic (e.g., loops, conditionals, and complex objects), all while using fewer tokens.

Because of just how much better these new planners are, the Semantic Kernel team has decided that the existing Action, Sequential, and V1 Stepwise planners will not move to the v1.0.0 version of Semantic Kernel. While they were revolutionary at the time, developers now have much better tools at their disposal. By removing the old planners from v1.0.0, we can make sure that new developers don’t accidentally use old techniques when better solutions exist.

To use the new planners, you will want to install the following packages:
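
At the time of writing, these were the prerelease planner packages (the exact names may have changed since):

dotnet add package Microsoft.SemanticKernel.Planners.Handlebars --prerelease
dotnet add package Microsoft.SemanticKernel.Planners.OpenAI --prerelease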

To help existing developers with the migration to the Handlebars and Function Calling Stepwise planners, we’ve created the following migration guide so you can use all the same functionality you had before.

Action planner → function calling

The Action planner was our very first planner. Given a list of functions, it would decide which one should be called next, which made it great at identifying a user’s intent. Since the Action planner was first created, however, OpenAI has delivered function calling, which bakes this functionality directly into the model.

If you used Action planner before, you likely had code like the following:

// Create an instance of ActionPlanner.
var planner = new ActionPlanner(kernel, config: config);

// We're going to ask the planner to find a function to achieve this goal.
var goal = "Write a joke about Cleopatra in the style of Hulk Hogan.";

// The planner returns a plan, consisting of a single function
var plan = await planner.CreatePlanAsync(goal);

// Execute the full plan (which is a single function)
var result = await plan.InvokeAsync(kernel);

To leverage function calling, you can either use automated function calling or manual function calling, as shown in the code below:

// Enable manual function calling
OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
{
    ToolCallBehavior = ToolCallBehavior.EnableKernelFunctions
};

// Build a chat history with the user's request
var chatHistory = new ChatHistory();
chatHistory.AddUserMessage("Turn on the lights");

// Get the chat completions
var result = await chatCompletionService.GetChatMessageContentAsync(
    chatHistory,
    executionSettings: openAIPromptExecutionSettings,
    kernel: kernel);

// If a valid function was found, execute it
var functionCall = ((OpenAIChatMessageContent)result).ToolCalls.OfType<ChatCompletionsFunctionToolCall>().FirstOrDefault();
if (functionCall != null &&
    kernel.Plugins.TryGetFunctionAndArguments(functionCall, out KernelFunction? pluginFunction, out KernelArguments? arguments))
{
    await pluginFunction.InvokeAsync(kernel, arguments);
}
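
If you’d rather let Semantic Kernel invoke the functions for you, here’s a minimal sketch of the automated path (assuming the same kernel and chatCompletionService as above):

// Enable automatic function calling; the kernel invokes requested functions for you
OpenAIPromptExecutionSettings autoInvokeSettings = new()
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var autoChatHistory = new ChatHistory();
autoChatHistory.AddUserMessage("Turn on the lights");

// The returned message already reflects the results of any executed functions
var autoResult = await chatCompletionService.GetChatMessageContentAsync(
    autoChatHistory,
    executionSettings: autoInvokeSettings,
    kernel: kernel);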

To use function calling, you must use an OpenAI model that supports it. This includes GPT-3.5-turbo and GPT-4 models versioned 0613 or newer.

Sequential planner → Handlebars planner

The new Handlebars planner has been marked as experimental to make it clear that its interfaces may change in a future release after v1.0.0. The Semantic Kernel team fully intends to support the Handlebars planner, however, so it is safe to migrate to from the previous planners. To suppress the experimental warnings, you can refer to the instructions in this doc.
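
For example, assuming the planner warnings use the SKEXP0060 diagnostic ID (as they did at v1.0.0), you can suppress them locally with a pragma:

#pragma warning disable SKEXP0060 // The Handlebars planner is experimental
var planner = new HandlebarsPlanner();
#pragma warning restore SKEXP0060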

With the Sequential planner, we introduced the ability for a planner to generate an entire plan with a single LLM call. This had several benefits:

  1. You could inspect the entire plan before executing it.
  2. You could save the plan so it could be reused later.
  3. It used fewer tokens than the Stepwise planner and function calling.

These features are still valuable today, and they have yet to be provided out-of-the-box by foundation models because they require a client-side runtime to execute a plan.

The challenge with the Sequential planner was that it required showing the AI (with one-shot prompting) how to create a plan using a custom XML syntax. This was challenging for the AI to handle: the XML was often invalid, and the AI would frequently start hallucinating after adding only a few functions.

Thanks to Microsoft Research, however, we discovered that models perform much better when they are asked to “code” in a language they’ve already been trained on. Based on this research, we introduced a planner that uses Handlebars syntax to generate a plan. Not only is it more accurate, but it also allows the model to leverage native features like loops and conditionals without additional prompting.
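
To give a sense of the format, a generated plan is just a Handlebars template. A hypothetical plan for a simple math task might look something like the following (the MathPlugin functions and argument names here are illustrative, not a real plan the planner produced):

{{!-- Step 1: Compute the area of the room --}}
{{set "area" (MathPlugin-Multiply number1=20 number2=30)}}
{{!-- Step 2: Divide by the coverage of a single can of paint --}}
{{set "cans" (MathPlugin-Divide number1=area number2=30)}}
{{json cans}}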

If you used the Sequential planner before, you likely had code like the following:

// Import the plugins for the planner
kernel.ImportFunctions(new HttpPlugin(), "HttpPlugin");

// Create the plan
var planner = new SequentialPlanner(kernel);
var plan = await planner.CreatePlanAsync(ask);

// Print the plan to the console
Console.WriteLine("Plan:\n");
Console.WriteLine(JsonSerializer.Serialize(plan, new JsonSerializerOptions { WriteIndented = true }));

// Execute the plan
var result = await kernel.RunAsync(plan);

// Print the result to the console
Console.WriteLine($"Results: {result}");

In v1.0.0, you’ll want to switch it to the following:

using Microsoft.SemanticKernel.Planning.Handlebars;

// Create the kernel
var kernel = Kernel.CreateBuilder()
                   .AddAzureOpenAIChatCompletion(/* Add your configuration */)
                   .Build();

kernel.ImportPluginFromType<HttpPlugin>();

// Create the plan
var planner = new HandlebarsPlanner(new HandlebarsPlannerOptions() { AllowLoops = true });
var plan = await planner.CreatePlanAsync(kernel, goalFromUser);

// Print the plan to the console
Console.WriteLine($"Plan: {plan}");

// Execute the plan
var result = await plan.InvokeAsync(kernel);

// Print the result to the console
Console.WriteLine($"Results: {result}");

Saving and loading Handlebars plans

With the Sequential planner, you could take a plan and save it so you could run it again without asking a model to recreate it. This is possible with the Handlebars planner as well, but instead of saving XML, you simply save a Handlebars template.

var serializedPlan = plan.ToString();

You can then load the template back into a new HandlebarsPlan object so you can run it again.

HandlebarsPlan reloadedPlan = new HandlebarsPlan(serializedPlan);

Afterwards, you can invoke the reloaded plan.

// Execute the reloaded plan
var result = await reloadedPlan.InvokeAsync(kernel);

// Print the result to the console
Console.WriteLine($"Results: {result}");

Stepwise planner → Function Calling Stepwise planner

The new Function Calling Stepwise planner has been marked as experimental to make it clear that its interfaces may change in a future release after v1.0.0. The Semantic Kernel team fully intends to support it, however, so it is safe to migrate to from the previous planners. To suppress the experimental warnings, you can refer to the instructions in this doc.

One of the most reliable ways of having an LLM complete a request is to use the ReAct methodology. With ReAct, you give the AI the ability to make a function call, reason over its result, and then make another function call.

With the introduction of function calling support from OpenAI, this behavior is now baked directly into the model, so you no longer need to waste tokens teaching it how to perform ReAct.

If vanilla function calling is sufficient, you can refer to the sample we provided in our RC1 blog post to automatically call functions to complete a task. If you run into challenges, however, you can also use the FunctionCallingStepwisePlanner. This planner adds some additional reasoning at the beginning of plan generation to improve the reliability of function calling.

If this is the code you previously had…

using Microsoft.SemanticKernel.Planning.OpenAI;

var kernelSettings = KernelSettings.LoadSettings();
var kernel = new KernelBuilder()
    .WithAzureOpenAIChatCompletion(/* Add your configuration */)
    .Build();

// Import the plugins for the planner
kernel.ImportFunctions(new HttpPlugin(), "HttpPlugin");

// Create the plan
StepwisePlanner planner = new(kernel);
var plan = planner.CreatePlan(ask);

// Execute the plan
var result = await kernel.RunAsync(plan);

// Print the result to the console
Console.WriteLine($"Results: {result}");

This is how you’d update it to use the new Function Calling Stepwise planner:

using Microsoft.SemanticKernel.Planning;

// Create the kernel
var kernel = Kernel.CreateBuilder()
                   .AddAzureOpenAIChatCompletion(/* Add your configuration */)
                   .Build();

kernel.ImportPluginFromType<HttpPlugin>();

// Create and execute the plan
var planner = new FunctionCallingStepwisePlanner();
var result = await planner.ExecuteAsync(kernel, question);

// Print the result to the console
Console.WriteLine($"Results: {result.FinalAnswer}");

To use function calling, you must use an OpenAI model that supports it. This includes GPT-3.5-turbo and GPT-4 models versioned 0613 or newer.

Filtering plugins for your planners

Because the previous planners did not perform well once there were more than a half-dozen functions, we added the ability to automatically filter plugins using vector similarity search directly into the planners.

Unfortunately, there were a few challenges with this approach:

  1. It locked users into a single way of filtering functions.
  2. Embeddings had to be re-generated during every plan creation to rehydrate the vector DB.
  3. And in many scenarios, it didn’t work as well as it should have.

Take for example, the following request from a user:

I’m going to go to the hardware store so I can pick up some paint so I can paint both the ceiling and the floor the exact same color. Before I go though, I need to figure out how much paint I need to order. Can you tell me how much paint I should order for both the ceiling and floor if each can covers 30 sqft and the room is 20 feet long and 30 feet wide?
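
(For reference, answering this correctly only takes multiplication and division: each surface is 20 × 30 = 600 sqft, so 1,200 sqft total, and 1,200 ÷ 30 = 40 cans.)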

Because of the way vector similarity search works, this ask is unlikely to retrieve the relevant functions (i.e., multiplication and division) since they aren’t mentioned explicitly. Instead, irrelevant functions about home improvement, or the Ceiling and Floor math operations, would be returned.

Additionally, we found that filtering is needed less often with the new planners, since they can handle many more functions (2-3x) before they start hallucinating.

For these reasons, we’ve chosen not to support this functionality natively within the new planners. Instead, we recommend that customers develop their own filtering logic for their planners (e.g., detect the intent of the user with function calling and dynamically load the right plugins for that task).
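
As a rough sketch of that idea (simplified here to use a classification prompt instead of full function calling; the intent labels and the MathPlugin mapping are hypothetical):

// Hypothetical intent router: classify the request, then load only the matching plugins
var intent = await kernel.InvokePromptAsync(
    "Classify the user's request as one of: math, http, other. Respond with a single word.\n\nRequest: {{$request}}",
    new KernelArguments { ["request"] = ask });

switch (intent.ToString().Trim().ToLowerInvariant())
{
    case "math":
        kernel.ImportPluginFromType<MathPlugin>();
        break;
    case "http":
        kernel.ImportPluginFromType<HttpPlugin>();
        break;
}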

If, however, you already use vector similarity search and need a replacement, you can use the following code during your migration. It demonstrates how to create a filtered set of functions that are then used by a planner. You could even update this code with your own custom filtering logic (e.g., filtering by a keyword or tag) to return an even better list of available functions.

// Create the plugins you want to search on
kernel.ImportPluginFromType<HttpPlugin>();

// Create memory to store the functions
var memoryStorage = new VolatileMemoryStore();
var textEmbeddingGenerator = new AzureOpenAITextEmbeddingGenerationService(/* Add your configuration */);
var memory = new SemanticTextMemory(memoryStorage, textEmbeddingGenerator);

// Save functions to memory
foreach (KernelFunction function in kernel.Plugins["HttpPlugin"])
{
    var fullyQualifiedName = nameof(HttpPlugin) + "-" + function.Name;
    await memory.SaveInformationAsync(
        "functions",
        fullyQualifiedName + ": " + function.Description,
        fullyQualifiedName,
        additionalMetadata: function.Name
        );
}

// Retrieve the "relevant" functions
var relevantRememberedFunctions = memory.SearchAsync("functions", ask, 30, minRelevanceScore: 0.75);
var relevantFoundFunctions = new List<KernelFunction>();
// Populate a plugin with the filtered results
await foreach (MemoryQueryResult relevantFunction in relevantRememberedFunctions)
{
    if (kernel.Plugins["HttpPlugin"].TryGetFunction(relevantFunction.Metadata.AdditionalMetadata, out var function))
    {
        relevantFoundFunctions.Add(function);
    }
}
KernelPlugin relevantFunctionsPlugin = KernelPluginFactory.CreateFromFunctions("Http", relevantFoundFunctions);

var kernelWithRelevantFunctions = Kernel.CreateBuilder()
                                        .AddAzureOpenAIChatCompletion(/* Add your configuration */)
                                        .Build();

kernelWithRelevantFunctions.Plugins.Add(relevantFunctionsPlugin);

Afterwards, you can create a plan using the new kernel with the filtered functions!
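
For example, reusing the HandlebarsPlanner from earlier in this post:

// Plan against only the filtered functions
var planner = new HandlebarsPlanner();
var plan = await planner.CreatePlanAsync(kernelWithRelevantFunctions, ask);
var result = await plan.InvokeAsync(kernelWithRelevantFunctions);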

Please give us feedback!

If, during your migration, you find functionality in the previous planners that you depended on, please let us know. We can either work with you to find a suitable replacement or identify how to close the gaps in our new gen-4 and gen-5 planners.

To give us feedback, please create an issue on our GitHub repo.
