Hello to all Python enthusiasts working with Semantic Kernel! We are very happy to share some of our latest work: not one but two new options for templates, Handlebars and Jinja2. We also made some updates to the default 'semantic-kernel' template format. This is all available as of package version 0.9.3b1.
Templating
Templating is a core feature of Semantic Kernel because it lets you, the app developer, define exactly what gets sent to the models; it is therefore one of the most important ways you program against large language models. Templates serve two purposes. First, they let you inject runtime variables into your prompt. Second, they let you orchestrate "static" function calls (as opposed to using a planner for dynamic function calls); for instance, a function that does intent recognition on the user input, one that changes the casing of text (if for some reason you want to make the user TALK LIKE THIS), or one that extends the context in other ways, through search, queries, lookups, etc. Templates also let you decide which parts are always sent to a model and which change over time. For instance, when creating an application that supports chat interactions, you can decide that the system message is always fixed (or at least tied to application releases), followed by the chat conversation so far, then a repeated instruction to the model, and finally the latest user message. This could result in a template like the following (using 'semantic-kernel' templates):
{{$system_message}}{{$chat_history}}{{$repeat_instruction}}{{$user_input}}
and when calling the kernel to do the completion, the call looks like this:

```python
await kernel.invoke(
    chat_function,
    system_message=system_message,
    repeat_instruction=repeat_instruction,
    chat_history=chat_history,
    user_input=user_input,
)
```
Here the last two parameters are updated as the application is used, while the first two are fixed strings. One note on this example: there are a number of ways to achieve the same thing, like this template:
you are an intelligent assistant, you are designed to ... {{$chat_history}} remember to respond with ... {{$user_input}}
and you can even do this: {{$chat_history}}
where you use code to update the chat_history in the right way!
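To make the variable-block idea concrete, here is a minimal substitution sketch in plain Python. This is not the SDK's actual parser (which also handles function calls, literal values, and the XML intermediate form discussed below); it only illustrates how `{{$name}}` blocks map to values supplied at invocation time:

```python
import re

def render_sk_variables(template: str, arguments: dict) -> str:
    """Replace {{$name}} blocks with values from `arguments`.

    Simplified model of semantic-kernel variable blocks; the real
    template engine also supports function calls and literal values.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        # Missing arguments render as an empty string.
        return str(arguments.get(name, ""))

    return re.sub(r"\{\{\s*\$(\w+)\s*\}\}", substitute, template)

prompt = render_sk_variables(
    "{{$system_message}}{{$chat_history}}{{$user_input}}",
    {
        "system_message": "You are a helpful assistant.",
        "chat_history": "<chat_history />",
        "user_input": "Hi!",
    },
)
print(prompt)
# You are a helpful assistant.<chat_history />Hi!
```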
The reason all of these templates can be used to create the same thing is that, under the covers, the Python SDK parses them into an intermediate structure that is a mix of plain text and XML, which looks something like this:
system prompt<chat_history><message role="user">Hi, my name is Eduard</message></chat_history>I want to know more about Semantic Kernel!
If the service used is a text completion service, this gets sent to the model directly; for a chat model, it is parsed back into a single ChatHistory object, which is then converted to the dictionary format the model expects (that format differs between, for instance, Google models and OpenAI models). Knowing about this structure means you can leverage it yourself by manually putting the same XML tags into your prompt. Additional message fields, especially for OpenAI, are represented in this format as well, as attributes on the message tag; a tool call response message, for example, would look like this: <message role="tool" tool_call_id="call_123876">tool response</message>.
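To illustrate the round trip, here is a simplified sketch (not the SDK's actual parser) that pulls `<message>` tags out of the intermediate text and turns them into the role/content dictionaries a chat model expects, treating leading plain text as a system message:

```python
import re

def parse_intermediate(prompt: str) -> list:
    """Turn the mixed text/XML intermediate form into chat messages.

    Illustrative only: the real SDK parses this into a ChatHistory
    object and handles extra attributes such as tool_call_id, plus
    trailing plain text after the chat_history block.
    """
    messages = []
    # Plain text before the first tag becomes the system message.
    first_tag = prompt.find("<")
    if first_tag > 0:
        messages.append({"role": "system", "content": prompt[:first_tag]})
    for role, content in re.findall(
        r'<message role="(\w+)">(.*?)</message>', prompt, re.DOTALL
    ):
        messages.append({"role": role, "content": content})
    return messages

text = ('system prompt<chat_history><message role="user">'
        "Hi, my name is Eduard</message></chat_history>"
        "I want to know more about Semantic Kernel!")
print(parse_intermediate(text))
```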
On to what’s new!
Handlebars Templating
The first new template format is Handlebars. Handlebars was created as a lightweight templating language "on steroids" (their words), and it is already available in the .NET SDK. We wanted to include new templating approaches because these languages support some more advanced, but well-known, constructs out of the box, like loops and conditionals. For instance, with a chat_history object (which is iterable) we can now define a template like this:
`{{#each chat_history}}{{#message role=role}}{{~content~}}{{/message}} {{/each}}`
This renders each message to an XML element with the role and content in it. The way this works is that a number of helpers are built into the package, some are added by us for Semantic Kernel, and all functions registered in the kernel are also exposed as helpers. One difference between semantic-kernel templates and Handlebars is that Handlebars makes no distinction between variables and values (in semantic-kernel a variable is denoted with a $-prefix, while a value is marked with single quotes). In Handlebars, everything between the double curly braces is looked up against the helpers and the parameters passed in through KernelArguments: if it matches a helper, that function is executed; if it is a variable name in the arguments, that value is used; and if it is neither, the text itself is returned. So the template {{input}} results in either the text 'input' or the value of the argument called 'input' passed in KernelArguments. If you add arguments inside a handlebar, like {{add 1 2}}, it will look for a function called 'add' that it can send these parameters to. To use this template language, set the parameter 'template_format' to 'handlebars' in your PromptTemplateConfig or in the kernel.create_function_from_prompt method. A full list of all helpers and their availability in the different template languages is shown below.
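The lookup order described above (helper first, then argument, then the literal text) can be sketched in plain Python. This is a simplified illustration of the resolution rule, not the actual Handlebars engine the SDK uses:

```python
def resolve(token: str, helpers: dict, arguments: dict):
    """Resolve a {{token}} as described: try a helper first,
    then a KernelArguments value, else return the literal text."""
    parts = token.split()
    name, args = parts[0], parts[1:]
    if name in helpers:
        # Call the helper with any inline arguments, e.g. {{add 1 2}}.
        return helpers[name](*[int(a) if a.isdigit() else a for a in args])
    if name in arguments:
        return arguments[name]
    return token

helpers = {"add": lambda *nums: sum(nums)}
print(resolve("add 1 2", helpers, {}))                # 3 (helper call)
print(resolve("input", helpers, {"input": "hello"}))  # hello (argument lookup)
print(resolve("input", helpers, {}))                  # input (literal fallback)
```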
Jinja2 Templating
Next up is Jinja2, this is a fast and powerful templating language for Python, designed to generate dynamic content efficiently. Its templating system allows developers to use variables, loops, and conditional statements, enabling the creation of complex and dynamic prompts. Its extensibility facilitates the integration of custom filters and functions, allowing for precise data preprocessing and formatting to meet specific needs. We’ve enriched the environment with numerous helper functions, including those for interacting with chat history objects (such as message_to_prompt and message), alongside utilities for arrays, ranges, concatenation, and logical operations. Additionally, Jinja2 underscores the importance of security with automatic variable escaping, effectively guarding against injection attacks and ensuring the safety of dynamically generated text. It’s important to note that, in leveraging Python’s capabilities, Jinja2 adapts to Python’s syntax conventions. Therefore, the usual fully qualified names for Semantic Kernel plugins and functions, typically expressed with hyphens (plugin-function), should be adapted to Python’s naming requirements by using underscores (e.g., plugin_function). This adjustment ensures seamless integration and access to registered plugins and their functionalities, broadening the scope of what can be achieved with your SDK.
Let's look at an example of how one can craft a Jinja2 prompt that operates on the application's chat history. For brevity, some of the code is omitted; you can find the full example in our kernel samples, titled "azure_chat_gpt_api_jinja2.py."
```python
chat_function = kernel.create_function_from_prompt(
    prompt="""{{system_message}}{% for item in chat_history %}{{ message(item) }}{% endfor %}""",
    function_name="chat",
    plugin_name="chat",
    template_format="jinja2",
    prompt_execution_settings=req_settings,
)

chat_history = ChatHistory(system_message="You are a helpful chatbot.")
chat_history.add_user_message("User message")
chat_history.add_assistant_message("Assistant message")
```
After rendering the prompt, we have the following text:
You are a helpful chatbot.<message role="user">User message</message><message role="assistant">Assistant message</message>
As mentioned above, this prompt then gets converted back into the dictionary format the model expects. Even this simple example shows the usefulness of a prompt templating engine like Jinja2: it gives us the ability to loop over objects and call functions to format data in the desired way.
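What the `message` helper does for each item can be approximated in plain Python. The sketch below is an illustration of the rendering step, not the SDK's actual helper, and it reproduces the rendered text shown above:

```python
def message(item: dict) -> str:
    """Render one chat message to the XML form shown above (simplified)."""
    return f'<message role="{item["role"]}">{item["content"]}</message>'

def render_chat(system_message: str, chat_history: list) -> str:
    # Mirrors the Jinja2 template: system message, then each message in turn.
    return system_message + "".join(message(item) for item in chat_history)

history = [
    {"role": "user", "content": "User message"},
    {"role": "assistant", "content": "Assistant message"},
]
print(render_chat("You are a helpful chatbot.", history))
```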
Semantic-Kernel Templating
The semantic-kernel templating also has a new capability, related to the ChatHistory class that was recently introduced in the Python SDK. If you add a variable block to your semantic-kernel formatted template, like system message{{$chat_history}}, the rendering engine will try to cast whatever it finds to a string, and the string form of a ChatHistory is the XML structure shown above: <chat_history><message role="user">user text</message></chat_history> (or <chat_history /> when it contains no messages). This gives you the ability to natively use a ChatHistory, with all its detail, in all three template formats.
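The string-cast behaviour can be pictured with a tiny stand-in class. This is illustrative only; the real ChatHistory carries far more detail (names, tool call IDs, and other attributes):

```python
class MiniChatHistory:
    """Toy stand-in for ChatHistory to show the XML string cast."""

    def __init__(self):
        self.messages = []

    def add_user_message(self, content: str):
        self.messages.append(("user", content))

    def __str__(self) -> str:
        # An empty history renders as a self-closing tag.
        if not self.messages:
            return "<chat_history />"
        body = "".join(
            f'<message role="{role}">{content}</message>'
            for role, content in self.messages
        )
        return f"<chat_history>{body}</chat_history>"

history = MiniChatHistory()
print(str(history))  # <chat_history />
history.add_user_message("user text")
print(str(history))  # <chat_history><message role="user">user text</message></chat_history>
```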
Conclusion
In conclusion, we feel these additional capabilities give developers even more tools to do great things with Semantic Kernel, in ways many of you will recognize from other frameworks. If you have comments, or would like to see additional helpers, feel free to create an issue on the repo, or post in the Semantic Kernel Discord!
Available helpers
| Name | Example (Handlebars) | Description | Handlebars | Jinja2 |
|---|---|---|---|---|
| set | `{{set name='arg' value='test'}}` | Sets a value at runtime; used in combination with get. | ✅ | ✅ |
| get | `{{get 'arg'}}` | Gets a value at runtime; used in combination with set. | ✅ | ✅ |
| array | `{{array 'test1' 'test2' 'test3'}}` == ['test1', 'test2', 'test3'] | Creates a list out of the supplied values. | ✅ | ✅ |
| range | `{{range 0 5 1}}` == [0, 1, 2, 3, 4] | Gets a range of numbers, following the Python range parameters. | ✅ | ✅ |
| concat | `{{concat 'test1' 'test2'}}` == 'test1test2' | Combines the strings together. | ✅ | ✅ |
| or | `{{or true false}}` == true | Boolean or. | ✅ | |
| add | `{{add 1 2}}` == 3 | Adds two or more values together. | ✅ | |
| subtract | `{{subtract 3 2 1}}` == 0 | Subtracts values from each other. | ✅ | |
| equals | `{{equals 1 1}}` == true | Checks equality. | ✅ | |
| less_than | `{{less_than 1 2}}` == true | Checks less than between two numbers. | ✅ | |
| greater_than | `{{greater_than 1 2}}` == false | Checks greater than between two numbers. | ✅ | |
| less_than_or_equal | `{{less_than_or_equal 1 1}}` == true | Checks less than or equal between two numbers. | ✅ | |
| greater_than_or_equal | `{{greater_than_or_equal 1 1}}` == true | Checks greater than or equal between two numbers. | ✅ | |
| json | `{{json {"key": "value"}}}` == '{"key": "value"}' | Does json.dumps on the supplied object. | ✅ | ✅ |
| camel_case | `{{camel_case 'test_string'}}` == TestString | Changes a string to CamelCase. | ✅ | ✅ |
| snake_case | `{{snake_case 'TestString'}}` == test_string | Changes a string to snake_case. | ✅ | ✅ |
| message | `{{#each chat_history}}{{#message role=role}}{{~content~}}{{/message}}{{/each}}` | Takes a message as input, allowing you to get keys from the object and render them. | ✅ | ✅ |
| message_to_prompt (applied to semantic-kernel templates implicitly) | `{{#each chat_history}}{{message_to_prompt}}{{/each}}` | Uses the same XML representation of a message that is used to represent a full ChatHistory. | ✅ | ✅ |
| **Built-in helpers** | | | | |
| if | `{{#if bar}}bar{{else}}no bar{{/if}}` | Using boolean values, or one of the equivalence functions above, renders one value or the other. | ✅ | ✅ |
| each | `{{#each range 0 5}}{{this}}{{/each}}` == 01234 | Loops through an array; use `{{this}}` to refer to the item itself. | ✅ | |
| for | `{% for i in range(0, 5) %}{{ i }}{% endfor %}` | Loops over each item in a sequence. | | ✅ |
| unless | `{{#unless test}}{{test2}}{{/unless}}` | Equivalent to `if not test: test2`. | ✅ | |
| with | `{{#with test}}{{key}}{{/with}}` | Steps into an item; calling this template with test={"key": "value"} returns 'value'. | ✅ | ✅ |
| lookup | `{{lookup test 'key'}}` | Similar to with, but looks up a single key called 'key'. | ✅ | |
Note that this list is not exhaustive. Jinja2 comes packed with built-in filters, tests, and functions. For more information about what is available in the package, please visit Jinja2's documentation: built-in filters, built-in tests, built-in functions.
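To get a feel for what some of these helpers do, here are plain-Python equivalents of a few of them. These are sketches matching the table's examples, not the SDK's registered helper implementations:

```python
import re

def concat(*values: str) -> str:
    # {{ concat 'test1' 'test2' }} == 'test1test2'
    return "".join(values)

def camel_case(value: str) -> str:
    # {{ camel_case 'test_string' }} == TestString
    return "".join(part.title() for part in value.split("_"))

def snake_case(value: str) -> str:
    # {{ snake_case 'TestString' }} == test_string
    return re.sub(r"(?<!^)(?=[A-Z])", "_", value).lower()

print(concat("test1", "test2"))    # test1test2
print(camel_case("test_string"))   # TestString
print(snake_case("TestString"))    # test_string
```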
Summary
Please reach out if you have any questions or feedback through our Semantic Kernel GitHub Discussion Channel. We look forward to hearing from you! We would also love your support, if you’ve enjoyed using Semantic Kernel, give us a star on GitHub.