Semantic Kernel (SK) is a lightweight SDK that lets you mix conventional programming languages, like C# and Python, with the latest in Large Language Model (LLM) AI prompts, offering prompt templating, chaining, and planning capabilities. Its Planner Skill allows users to create and execute plans based on semantic queries. Recently, the addition of embeddings, exposed via SemanticTextMemory, has made the Planner Skill even more versatile.
In this blog post, we will discuss how the Planner Skill uses embeddings to find related functions, and how this integration enhances its usability and functionality.
What It Does
When you create a Plan with the Planner Skill today, it uses all registered functions to compose a natural language manual. However, as the number of functions and skills grows, an oversized manual can degrade the quality of the generated results. With semantic filtering enabled, plan creation uses Semantic Memory to store embeddings for the registered functions. It then searches the Semantic Memory store with the plan goal to filter which functions are used when generating the natural language manual.
Additionally, users can limit the number of functions returned from semantic filtering, specify skills and functions to exclude, and specify functions to include regardless of their semantic relevance.
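To build intuition for what the relevance threshold does, here is a minimal, self-contained sketch of the underlying idea: score each function's embedding against the goal's embedding with cosine similarity and keep only those at or above the threshold. This is illustrative only (the vectors and the CosineSimilarity helper are made up for the example, not SK's actual implementation):

```csharp
using System;

// Hypothetical stand-in for the comparison the memory store performs internally.
static double CosineSimilarity(double[] a, double[] b)
{
    double dot = 0, magA = 0, magB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
}

// Toy embeddings: a function is kept when its score meets the RelevancyThreshold.
double[] goal = { 0.9, 0.1, 0.2 };
double[] writerFunction = { 0.8, 0.2, 0.1 };  // semantically close to the goal
double[] emailFunction = { 0.1, 0.9, 0.4 };   // unrelated skill

const double relevancyThreshold = 0.78;
Console.WriteLine(CosineSimilarity(goal, writerFunction) >= relevancyThreshold); // True  -> kept
Console.WriteLine(CosineSimilarity(goal, emailFunction) >= relevancyThreshold);  // False -> filtered out
```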
Enable Semantic Filtering with Planner Skill
First, to support Semantic Filtering with the Planner Skill, you need to register an embeddings backend and include an IMemoryStore implementation with the kernel used for planning. For example:
var kernel = new KernelBuilder()
    .Configure(config =>
    {
        config.AddAzureOpenAITextCompletion(...);
        config.AddAzureOpenAIEmbeddingGeneration(...);
    })
    .WithMemoryStorage(new VolatileMemoryStore())
    .Build();
Then, to enable Semantic Filtering when using the Planner Skill, you simply need to supply a ContextVariables entry that sets a relevance threshold when creating a plan. When you do this, the IMemoryStore implementation associated with the kernel instance will be used to search for relevant functions during plan creation.
var planner = kernel.ImportSkill(new PlannerSkill(kernel), "planning");
// import any other skills or functions ....
var context = new ContextVariables("Create a book with 3 chapters about a group of kids in a club called 'The Thinking Caps.'");
// Set the RelevancyThreshold to enable semantic filtering
context.Set(PlannerSkill.Parameters.RelevancyThreshold, "0.78");
var results = await kernel.RunAsync(context, planner["CreatePlan"]);
Lastly, the other configuration values can be set like this:
// To limit the number of relevant functions to include
context.Set(PlannerSkill.Parameters.MaxRelevantFunctions, "5");
// To exclude certain functions or entire Skills from being included in plan creation requests
context.Set(PlannerSkill.Parameters.ExcludedFunctions, "NovelChapter");
context.Set(PlannerSkill.Parameters.ExcludedSkills, "email");
// Or to ensure specific functions are included in plan creation requests regardless of semantic filtering
context.Set(PlannerSkill.Parameters.IncludedFunctions, "DadJoke");
So, there you have it! With the integration of embeddings and Semantic Memory, the Planner Skill in the Semantic Kernel just got even cooler. It’s like giving your conversational agent a superpower – the ability to understand context and generate more accurate plans. Follow along for a closer look at how the Planner does this using Semantic Memory.
How It Works
First, all registered functions need to be saved to memory. When CreatePlan is called, we use the SKContext to retrieve a pointer to the Memory instance and call SaveInformationAsync for each registered function in the SkillCollection on the SKContext.
// Loop through available functions and save them to memory.
foreach (var function in availableFunctions)
{
    var functionName = function.ToFullyQualifiedName();
    var key = string.IsNullOrEmpty(function.Description) ? functionName : function.Description;
    await context.Memory.SaveInformationAsync(PlannerMemoryCollectionName, key, functionName,
        function.ToManualString(), context.CancellationToken);
}
Then, we can use the goal text as a semantic query against the Memory in the SKContext.
// Search for functions that match the semantic query.
var memories = context.Memory.SearchAsync(PlannerMemoryCollectionName, semanticQuery,
    config.MaxRelevantFunctions, config.RelevancyThreshold.Value, context.CancellationToken);
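The search results can then be mapped back to functions for the manual. A hedged sketch of that step (not verbatim SK code; it assumes, per the SaveInformationAsync call above, that each memory record's id carries the fully qualified function name, and that availableFunctions is the same collection looped over earlier):

```csharp
// Sketch: collect the ids of the matching memories, then keep only those functions.
// Assumes each memory's id is the fully qualified function name saved earlier.
var relevantNames = new HashSet<string>();
await foreach (var memory in memories)
{
    relevantNames.Add(memory.Metadata.Id);
}

var relevantFunctions = availableFunctions
    .Where(f => relevantNames.Contains(f.ToFullyQualifiedName()));
```

Only these relevant functions are then rendered into the natural language manual used for plan creation.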
This is just one basic example of how embeddings and semantic searching can bring LLM capabilities into applications using the Planner. As the Semantic Kernel evolves in its Alpha stage, we’ll prioritize other methods of using embeddings for plan creation that will be even more powerful.
Next steps
Read the documentation about the Planner
Join the community and let us know what you think: https://aka.ms/sk/discord