Functions are a key component of Semantic Kernel. As an AI orchestrator, Semantic Kernel coordinates function execution together with Large Language Model (LLM) inference to allow the model to return better responses or take action. Semantic Kernel groups related functions as plugins and provides capabilities to generate plugins from native code (e.g., C# classes or functions), REST API endpoints defined using the Open API specification, or gRPC endpoints.
Consider the following scenario: if I ask an LLM to tell me about the current weather in Dublin, it won't be able to provide that information. The reason the LLM cannot respond is that this information is not in its training data set, so it just doesn't know. The LLM will therefore state that it cannot answer the question, or might even hallucinate and return an invalid response. One approach to solving this type of problem is to define a function which can return the weather information for any city and then use Function Calling to enable the LLM to call the function to get the current weather in Dublin.
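As a sketch of that approach, a weather plugin might look like the code below. The WeatherPlugin class, its GetWeather method, and the returned values are all illustrative, not part of any real service; a real implementation would call a weather API.

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// A hypothetical plugin the LLM could call to answer weather questions.
public class WeatherPlugin
{
    [KernelFunction]
    [Description("Returns the current weather for the specified city.")]
    public string GetWeather([Description("Name of the city.")] string city)
    {
        // A real implementation would call a weather service here.
        return $"The weather in {city} is 14\u00b0C and cloudy.";
    }
}
```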
Developers will typically have existing functions that they want the LLM to call. These functions can come in the form of existing C#, Python, or Java code, or REST API endpoints. The good news is that Semantic Kernel makes it very easy to integrate these functions. For C#, Python, or Java code, developers can add [KernelFunction] attributes to the functions they want to use and then import them based on the class or an instance of a class. If you have an Open API schema for the REST API endpoints, this can be used to import those endpoints as new functions. Developers also have the option to define new functions dynamically.
There are some issues when it comes to reusing existing functions. These functions were created for use by developers and not LLMs, so there are a number of challenges developers may face:
- Functions may take parameters which the LLM cannot infer or possibly should not have access to. For example, if the function should act on behalf of the current user, this is information that the host application will have but the LLM will not.
- Function names may make sense to developers but not to the LLM. The LLM will try to select the correct function based on the user ask so the recommendation is to name functions intuitively, with detailed descriptions.
- Parameter names may make sense to developers but not to the LLM. The LLM has to extract parameter values from the user ask and the recommendation is to name function parameters intuitively, with detailed descriptions. It also helps to use enumerations for function arguments when possible.
To solve these problems, it is possible to transform a function from its current form into a new form which better suits use with an LLM. Before covering how to transform a function, let's start by looking at what a KernelFunction is and the options that exist to import one.
What is a KernelFunction?
A KernelFunction is a platform-agnostic representation of a function which can be presented to an LLM and, when requested, invoked by the Semantic Kernel. It has the following properties:
- Name – The name of the function. This should be clear and intuitive for the LLM to reason over. Existing function names were created by developers for developers and may not be suitable for use with an LLM. For example, a function name may include acronyms that could be misunderstood by an LLM.
- Description – The description of the function. Existing descriptions may focus on what a function does. The description for an LLM should focus on when to call this function and what can be achieved.
- Parameters – A list of parameters the function takes. Each parameter has itself the following properties:
- Name – The name of the parameter. This should be clear and intuitive for the LLM to reason over.
- Description – The description of the parameter. The LLM may need additional information such as a maximum length for a string or the format for a date string.
- Type – The type of the parameter. The LLM needs this information in JSON schema format.
- Default Value – The default value for the parameter. The LLM can use this value if no value can be extracted from the user ask.
- Required Flag – Flag indicating whether or not the parameter is required. The LLM can use this to omit optional parameters as needed.
- Return Parameter – Parameter describing the return value of the function.
- Additional Properties – Dictionary of additional properties. This will contain information that is useful to developers e.g., the security requirements to call an Open API endpoint.
Each KernelFunction can be registered with the Semantic Kernel and subsequently invoked (either synchronously or asynchronously). KernelFunctions are invoked by the Semantic Kernel in the following scenarios:
- Prompt Rendering – Prompt templates can include references to KernelFunctions and these will be invoked when the prompt is rendered. Please refer to the Prompts section of the Semantic Kernel Learn Site.
- Function Calling – Functions can be presented to the LLM for it to call. This allows the LLM to retrieve relevant information related to the current user ask or to perform actions on behalf of the user. This technique is called Function Calling and is described in detail here.
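As an example of the prompt-rendering scenario, a prompt template can reference a function using the {{PluginName.FunctionName}} template syntax; the function is invoked when the prompt is rendered and its result is inserted into the prompt. The sketch below assumes a plugin registered under the name TimeInformation with a GetCurrentUtcTime function, as shown later in this post.

```csharp
using Microsoft.SemanticKernel;

// The template references a registered function; it is invoked at render time
// and its return value is substituted into the prompt text.
KernelFunction promptFunction = KernelFunctionFactory.CreateFromPrompt(
    "The current UTC time is {{TimeInformation.GetCurrentUtcTime}}. " +
    "Use it to answer the following question: {{$input}}");
```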
How to create a KernelFunction
Semantic Kernel allows developers to create new functions or reuse existing ones. It provides a series of factories which allow KernelFunctions and KernelPlugins to be created from existing functions. Two commonly used approaches to create new KernelPlugins are reusing existing native functions and creating them from an Open API specification.
Creating KernelFunctions
Existing native functions or prompts can be used to create a KernelFunction. The sample code below shows how to do this for a C# delegate and also for a prompt.
// Create a function that returns the current time
KernelFunction getCurrentUtcTime = KernelFunctionFactory.CreateFromMethod(() => DateTime.UtcNow.ToString("R"), "GetCurrentUtcTime");
// Create a semantic function which identifies action items in a conversation transcript
KernelFunction identifyActionItems = KernelFunctionFactory.CreateFromPrompt("Given a section of a conversation transcript, identify action items.");
The first example creates a function using the delegate and the specified name. The function description, parameter and return parameter definitions can also be provided as optional parameters when creating the function.
The second example creates a function from a prompt. When invoking a function created from a prompt the associated Kernel must have a chat completion service.
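The optional metadata mentioned for the first example can be supplied via the optional parameters of CreateFromMethod. The description text below is illustrative:

```csharp
using System;
using Microsoft.SemanticKernel;

// Supplying a description alongside the delegate helps the LLM decide
// when to call the function.
KernelFunction getCurrentUtcTime = KernelFunctionFactory.CreateFromMethod(
    () => DateTime.UtcNow.ToString("R"),
    functionName: "GetCurrentUtcTime",
    description: "Retrieves the current time in UTC in RFC 1123 format.");
```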
The next sections show how to reuse existing methods and prompts.
Reusing Existing Methods and Prompts
Existing native C#, Python, and Java functions can be reused for prompt rendering or function calling by creating a KernelPlugin from them. The sample code below shows how to do this for an existing C# class. Note that the method has a [KernelFunction] attribute; this is used to identify the class methods which should be converted to KernelFunctions. The method also has a [Description] attribute which provides the function description; parameters can also be described using this attribute.
/// <summary>
/// A class with a method that returns the current time.
/// </summary>
public class TimeInformation
{
    [KernelFunction]
    [Description("Retrieves the current time in UTC.")]
    public string GetCurrentUtcTime() => DateTime.UtcNow.ToString("R");
}
// Create a plugin from a class that contains kernel functions
KernelPlugin timeInformationPlugin = KernelPluginFactory.CreateFromType<TimeInformation>("TimeInformation");

// Create a plugin from an object that contains kernel functions
var timeInformation = new TimeInformation();
KernelPlugin timeInformationPluginFromInstance = KernelPluginFactory.CreateFromObject(timeInformation, "TimeInformation");
The first example creates a KernelPlugin using the class type. This will use reflection to find class methods that have the [KernelFunction] attribute and then use the method information to generate a KernelFunction which will invoke that method. A new instance of the TimeInformation class will be created automatically. By default, the method and parameter names are used as the KernelFunction name and its parameter names.
The second example creates a KernelPlugin using the provided class instance. The class instance can be created with any dependencies before being used to create a KernelPlugin.
There are lots more ways to create a KernelFunction; take a look here.
Reusing Open API Operations
Another common scenario is reusing existing Open API operations.
Loading Open API Specifications
Your Open API specification may be a resource file in your application, hosted on a remote server, or loaded from disk. We recommend loading it once and creating a KernelPlugin from it which can be reused with multiple Kernel instances. This is preferable to creating a new KernelPlugin for each Kernel instance you create, as it reduces the time and compute spent parsing the specification, which can be large. The sample code below shows how to load an Open API specification from a file and use it to create a new KernelPlugin.
// Read the Open API specification from a file
var stream = System.IO.File.OpenRead("Plugins/OpenApi/repair-service.json");
// Create a plugin from the Open API specification
KernelPlugin repairService = await OpenApiKernelPluginFactory.CreateFromOpenApiAsync("RepairService", stream);
The KernelPlugin that is created contains functions for each operation in the Open API specification.
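A specification hosted on a remote server can be loaded in a similar way using the Uri overload of the same factory method. The URL below is a placeholder for illustration:

```csharp
using System;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Plugins.OpenApi;

// Create a plugin directly from a remotely hosted Open API specification.
KernelPlugin repairService = await OpenApiKernelPluginFactory.CreateFromOpenApiAsync(
    "RepairService",
    new Uri("https://example.com/openapi/repair-service.json"));
```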
There are lots more ways to create a KernelPlugin from an Open API specification; take a look here.
To learn more about KernelPlugins, take a look here.
Why transform a KernelFunction?
Now that we know how to create KernelFunctions and KernelPlugins, let's consider the situations where we would want to transform them.
- Change function and/or parameter names and/or descriptions – If the function (or parameter) names or descriptions are not clear and intuitive then you may want to change them to better help the LLM reason over them.
  - A function name may contain an acronym which you could replace with a term the LLM will recognise.
  - A function description may describe what the function does; it will work better for function calling if it describes when the function should be called.
  - A parameter name might be short and unintuitive, e.g., consider replacing abbreviations with the full term.
  - A parameter description may be lacking some formatting information, e.g., that date strings must be formatted in ISO 8601 format.
- Simplify function parameters – It is important to reduce the possibility of a function failing due to an invalid parameter being passed. For example, if we only expect the LLM to provide certain values for a parameter, it may be better to replace it with an enumeration containing just those values.
- Remove function parameters – A function may require parameters which the LLM will not know, e.g., the current user's id or email. The calling application should already have this information and it will be more reliable for it to provide the value. For this case you may want to remove the parameter so the LLM does not need to provide it and then resolve the parameter value when the function is invoked.
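As a sketch of the parameter-simplification point above, a free-form string parameter can be replaced with an enumeration so the LLM can only supply known values. The ThermostatPlugin and TemperatureUnit types here are illustrative, not part of Semantic Kernel:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Restricting a parameter to an enumeration reduces the chance of the LLM
// passing an invalid value.
public enum TemperatureUnit { Celsius, Fahrenheit }

public class ThermostatPlugin
{
    [KernelFunction]
    [Description("Sets the target temperature for the thermostat.")]
    public string SetTemperature(
        [Description("Target temperature value.")] double value,
        [Description("Unit for the temperature value.")] TemperatureUnit unit)
        => $"Temperature set to {value} {unit}";
}
```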
For tips and best practices for function calling take a look here.
How to transform a KernelFunction
Consider the following scenario:
- Our application has two functions:
- GetFavoriteColor – This takes an email address as a parameter and returns that user's favorite color, e.g., Bob's favorite color is Green.
- GetFavoriteAnimal – This takes an email address and an animal type as parameters and returns that user's favorite animal of the specified type, e.g., Bob's favorite mammal is a Dog.
- Users can ask the LLM questions and it should be able to use the user's favorite color and animal when responding, e.g., if the user asked "What is my favorite creepy crawly?" the LLM should call the function to get the user's favorite invertebrate animal.
Transform Plugin Sample
The full sample is available in the Semantic Kernel repository, see TransformPlugin.cs. For this basic sample the functions just know about one user, Bob, whose email address is bob@contoso.com. So the functions look like this:
/// <summary>
/// A plugin that returns favorite information for a user.
/// </summary>
public class UserFavorites
{
    [KernelFunction]
    [Description("Returns the favorite color for the user.")]
    public string GetFavoriteColor([Description("Email address of the user.")] string email)
    {
        return email.Equals("bob@contoso.com", StringComparison.OrdinalIgnoreCase) ? "Green" : "Blue";
    }

    [KernelFunction]
    [Description("Returns the favorite animal of the specified type for the user.")]
    public string GetFavoriteAnimal([Description("Email address of the user.")] string email, [Description("Type of animal.")] AnimalType animalType)
    {
        if (email.Equals("bob@contoso.com", StringComparison.OrdinalIgnoreCase))
        {
            return GetBobsFavoriteAnimal(animalType);
        }
        return GetDefaultFavoriteAnimal(animalType);
    }

    private string GetBobsFavoriteAnimal(AnimalType animalType) => animalType switch
    {
        AnimalType.Mammals => "Dog",
        AnimalType.Birds => "Sparrow",
        AnimalType.Reptiles => "Lizard",
        AnimalType.Amphibians => "Salamander",
        AnimalType.Fish => "Tuna",
        AnimalType.Invertebrates => "Spider",
        _ => throw new ArgumentOutOfRangeException(nameof(animalType), $"Unexpected animal type: {animalType}"),
    };

    private string GetDefaultFavoriteAnimal(AnimalType animalType) => animalType switch
    {
        AnimalType.Mammals => "Horse",
        AnimalType.Birds => "Eagle",
        AnimalType.Reptiles => "Snake",
        AnimalType.Amphibians => "Frog",
        AnimalType.Fish => "Shark",
        AnimalType.Invertebrates => "Ant",
        _ => throw new ArgumentOutOfRangeException(nameof(animalType), $"Unexpected animal type: {animalType}"),
    };
}
Looking at the function implementations, we can see that Bob's favorite invertebrate is a spider and his favorite color is green.
Now let's try to use our functions so the LLM can provide relevant responses to user questions. The sample code below adds the UserFavorites plugin to the Kernel and enables automatic function calling.
// Create a kernel with OpenAI chat completion
IKernelBuilder kernelBuilder = Kernel.CreateBuilder();
kernelBuilder.AddOpenAIChatCompletion(
    modelId: "OpenAI Model", // sample responses were generated with gpt-4o
    apiKey: "OpenAI API Key");
kernelBuilder.Plugins.AddFromType<UserFavorites>();
Kernel kernel = kernelBuilder.Build();
// Invoke the kernel with a prompt and allow the AI to automatically invoke functions
OpenAIPromptExecutionSettings settings = new() { FunctionChoiceBehavior = FunctionChoiceBehavior.Auto() };
Console.WriteLine(await kernel.InvokePromptAsync("What color should I paint the fence?", new(settings)));
Console.WriteLine(await kernel.InvokePromptAsync("I am going diving what animals would I like to see?", new(settings)));
The responses will look something like this:
- If you would like a suggestion based on your preferences, I can find out your favorite color if you provide your email address.
- To help you with that, I would need to know your favorite type of aquatic animals. If you provide your email, I can check your preferences, if available, for your favorite type of fish or other marine creatures.
Without the current user's email address the LLM cannot call the functions. This is just one example of why the LLM cannot call a function, but we can generalise this problem as one where the application has context required to call a function that the LLM doesn't need to have.
Let’s transform the plugin into a form which is usable by the LLM. The following sample code is just one approach that can be used.
// Create a new plugin which hides parameters that require PII
var plugin = KernelPluginFactory.CreateFromType<UserFavorites>();
var transformedPlugin = CreatePluginWithParameters(
    plugin,
    (KernelParameterMetadata parameter) => parameter.Name != "email",
    (KernelFunctionMetadata function, KernelArguments arguments) => arguments.Add("email", "bob@contoso.com"));
The sample code introduces a method called CreatePluginWithParameters which transforms the specified plugin by removing parameters; for each parameter that is removed, a delegate is provided which adds the missing parameter to the arguments when the corresponding function is invoked.
The following sample code performs these operations and works as follows:
- The IncludeKernelParameter delegate will be called for each parameter of each function in the plugin. Returning false will cause the parameter to be omitted from the transformed plugin. Our sample provides an implementation which skips all parameters named "email".
- The UpdateKernelArguments delegate will be called to add additional arguments when a function is being called. The delegate gets passed the KernelFunctionMetadata and KernelArguments so it can decide what arguments to add (if any) based on the function being called. Our sample always adds Bob's email address, but it could be optimized to check which function is being called.
public delegate bool IncludeKernelParameter(KernelParameterMetadata parameter);
public delegate void UpdateKernelArguments(KernelFunctionMetadata function, KernelArguments arguments);

/// <summary>
/// Create a <see cref="KernelPlugin"/> instance from the provided instance where each function only includes
/// permitted parameters. The <see cref="IncludeKernelParameter"/> delegate is called to determine whether or not
/// a parameter will be included. The <see cref="UpdateKernelArguments"/> delegate is called to update the arguments
/// and allow additional values to be included.
/// </summary>
public static KernelPlugin CreatePluginWithParameters(KernelPlugin plugin, IncludeKernelParameter includeKernelParameter, UpdateKernelArguments updateKernelArguments)
{
    List<KernelFunction> functions = new();
    foreach (KernelFunction function in plugin)
    {
        functions.Add(CreateFunctionWithParameters(function, includeKernelParameter, updateKernelArguments));
    }
    return KernelPluginFactory.CreateFromFunctions(plugin.Name, plugin.Description, functions);
}

/// <summary>
/// Create a <see cref="KernelFunction"/> instance from the provided instance which only includes permitted parameters.
/// The function method will add additional argument values before calling the original function.
/// </summary>
private static KernelFunction CreateFunctionWithParameters(KernelFunction function, IncludeKernelParameter includeKernelParameter, UpdateKernelArguments updateKernelArguments)
{
    var method = (Kernel kernel, KernelFunction currentFunction, KernelArguments arguments, CancellationToken cancellationToken) =>
    {
        updateKernelArguments(currentFunction.Metadata, arguments);
        return function.InvokeAsync(kernel, arguments, cancellationToken);
    };
    var options = new KernelFunctionFromMethodOptions()
    {
        FunctionName = function.Name,
        Description = function.Description,
        Parameters = CreateParameterMetadataWithParameters(function.Metadata.Parameters, includeKernelParameter),
        ReturnParameter = function.Metadata.ReturnParameter,
    };
    return KernelFunctionFactory.CreateFromMethod(method, options);
}

/// <summary>
/// Create a list of KernelParameterMetadata instances from the provided instances which only includes permitted parameters.
/// </summary>
private static List<KernelParameterMetadata> CreateParameterMetadataWithParameters(IReadOnlyList<KernelParameterMetadata> parameters, IncludeKernelParameter includeKernelParameter)
{
    List<KernelParameterMetadata> parametersToInclude = new();
    foreach (var parameter in parameters)
    {
        if (includeKernelParameter(parameter))
        {
            parametersToInclude.Add(parameter);
        }
    }
    return parametersToInclude;
}
For this sample we are only removing the email parameters, but you could change parameters and their metadata, or do the same for entire functions.
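For example, the same pattern can be extended to filter out whole functions rather than individual parameters. The helper below is an illustrative sketch, not part of Semantic Kernel; it relies on KernelPlugin being enumerable as a sequence of KernelFunctions:

```csharp
using System;
using System.Linq;
using Microsoft.SemanticKernel;

// Only functions passing the predicate are copied into the new plugin.
public static KernelPlugin CreatePluginWithFunctions(
    KernelPlugin plugin, Func<KernelFunction, bool> includeFunction)
{
    var functions = plugin.Where(includeFunction).ToList();
    return KernelPluginFactory.CreateFromFunctions(plugin.Name, plugin.Description, functions);
}
```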
If you run the sample code with the transformed plugin and ask the same questions, this time the LLM will respond with something like this:
- You might consider painting the fence green, as it’s your favorite color!
- You would likely enjoy seeing Tuna while diving! They are fascinating fish and can often be spotted in various diving locations. Enjoy your dive!
This post has provided some sample code to help you transform functions so they are more suitable for use with an LLM.
Please reach out if you have any questions or feedback through our Semantic Kernel GitHub Discussion Channel. We look forward to hearing from you!
We would also love your support — if you’ve enjoyed using Semantic Kernel, give us a star on GitHub.