Spring 2024 roadmap for Semantic Kernel

Matthew Bolanos

Now that it’s February, we wanted to share what we have planned for Semantic Kernel between now and Microsoft Build. Most of our immediate investments fall into one of three buckets: V1.0 parity across all our languages, additional connectors, and last but not least, agents. If you want a deep dive into our plans, watch our Spring 2024 roadmap video! Otherwise, read on for a quick summary.


V1.0 Parity across Python and Java

With the V1.0 release of the .NET library, we committed to not introducing any more breaking changes to non-experimental features. This has given customers additional confidence to build production AI applications on top of Semantic Kernel. By March of this year, we plan to release either beta or release-candidate versions of both our Python and Java libraries. By Microsoft Build, we will finish the parity work and launch V1.0 for Python and Java.

As part of V1.0, Python and Java will get many of the improvements that made the .NET version easier and more powerful to use. These include automatic function calling, events, YAML prompt files, and Handlebars templates. With YAML prompt files, you’ll be able to create prompt and agent assets in Python and then share them with .NET and Java developers.
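To give a feel for what a shareable prompt asset looks like, here is an illustrative sketch of a YAML prompt file. The field names follow Semantic Kernel's documented prompt schema, but the specific prompt, variables, and settings below are hypothetical examples, not shipped assets:

```yaml
# Hypothetical prompt asset for illustration only. The schema fields
# (name, template, template_format, input_variables, execution_settings)
# follow Semantic Kernel's YAML prompt format; the content is made up.
name: SummarizeText
description: Summarizes input text to a requested length.
template: |
  Summarize the following text in {{$length}} sentences:
  {{$input}}
template_format: semantic-kernel
input_variables:
  - name: input
    description: The text to summarize.
    is_required: true
  - name: length
    description: Number of sentences in the summary.
    is_required: false
execution_settings:
  default:
    temperature: 0.2
    max_tokens: 256
```

Because the asset is plain YAML rather than code, the same file can be loaded by the .NET, Python, or Java kernel without translation.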

If you’re interested in learning more, check out our full backlog on GitHub for Python and Java.


More connectors!

Since Semantic Kernel was first introduced, many new models have been released. We plan to work with the community to build connectors for the most popular models and their deployment types. These include Gemini, Llama-2, Phi-2, Mistral, and Claude, whether deployed on Hugging Face, Azure AI, Google AI, or Amazon Bedrock, or run locally.

We’ve also gotten great feedback on our experimental memory connectors. Over the next few months, we’ll be updating the abstractions for these connectors so that they are less opinionated. This will make them easier to use and allow us to support even more scenarios.

Lastly, we know that multi-modal experiences are the next frontier for AI applications. We’ll make it easier to support these experiences by providing additional connectors to models that support audio, images, video, documents, and more!


First-class agent support

Lastly, we want to ensure that Semantic Kernel customers are able to develop autonomous agents that can complete tasks on behalf of users. We already have an experimental implementation that uses the OpenAI Assistants API (check out John Maeda’s SK basics samples), but as part of our final push, we want to fully abstract our agent interface to support agents built with any model.

To achieve this, we’re leveraging the research provided by the AutoGen team to create an abstraction that can support any number of experiences, including those where agents work together as a team.
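To make the idea of a model-agnostic agent abstraction concrete, here is a minimal Python sketch. This is purely illustrative: the `Agent` interface, `EchoAgent`, and `run_team` names are hypothetical stand-ins, not Semantic Kernel or AutoGen APIs.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Hypothetical model-agnostic agent interface (not an SK API).

    Any backing model or service could implement this contract, which
    is the point of abstracting the agent interface."""

    @abstractmethod
    def invoke(self, message: str) -> str:
        """Process a message and return the agent's reply."""

class EchoAgent(Agent):
    """Toy agent used here only to exercise the interface."""

    def __init__(self, name: str):
        self.name = name

    def invoke(self, message: str) -> str:
        return f"{self.name}: {message}"

def run_team(agents: list[Agent], task: str, rounds: int = 1) -> str:
    """Pass the task through each agent in turn, so agents build on
    each other's output -- a minimal 'agents as a team' loop."""
    message = task
    for _ in range(rounds):
        for agent in agents:
            message = agent.invoke(message)
    return message

team = [EchoAgent("planner"), EchoAgent("writer")]
print(run_team(team, "draft a roadmap"))  # writer: planner: draft a roadmap
```

Because orchestration only depends on the `Agent` contract, the same team loop would work whether an agent is backed by the OpenAI Assistants API or any other model.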


Feedback is always welcome!

As an open source project, everything we do (including planning) is done out in the open. We do this so that you as a community can give us feedback every step of the way. If you have recommendations for any of the features we have planned for Spring 2024 (or even recommendations for things that aren’t on our radar), let us know by filing an issue on GitHub, starting a discussion on GitHub, or starting a conversation on Discord.



  • Louis-Guillaume ‘LG’ MORAND (Microsoft employee)

    Great news, but there were breaking changes after V1.0 of the .NET version. For instance, some demo repos no longer work with SK 1.3 and are abandoned (https://github.com/microsoft/chat-copilot/issues/790)

    My feedback: we need demo apps using SK, and we need them kept up to date with the SDK, because even with a great SDK, we still need to learn how to use it 🙂

    • Matthew Bolanos (Microsoft employee)

      Thanks for the feedback!

      Since releasing V1.0.1, we’ve made sure to avoid any breaking changes so that all samples (and production apps) built on V1.0.1 and above continue to work no matter what new features we add. One of the best samples showing how to use Semantic Kernel on V1.0.1+ is the Semantic Kernel AI-in-a-box sample provided by the Azure team. I highly recommend checking it out to see how you can build an AI application that can be deployed using Bot Framework. All of the samples for our learn site have also been updated to V1.0.1 and will continue to work moving forward without any breaking changes.

      We’re not done building new samples, though. We’ve gotten great feedback that developers are also eager to see how best to use Semantic Kernel with Dependency Injection. Because of this, we’re hard at work pulling together an E2E sample that demonstrates how to build a production-ready .NET application using the best engineering patterns with Semantic Kernel. Expect that to land in the next week or so!

      As you called out, Chat Copilot is currently on a pre-V1.0.1 version of Semantic Kernel. Because of the advancements in AI (multi-modal and multi-agent) and because samples like AI-in-a-box are now available from the Azure team, we’ve chosen not to upgrade Chat Copilot. Additionally, we got feedback that Chat Copilot was a bit too complex to reference and learn from. Moving forward, we want to build E2E samples that are much easier to understand. The good news is that we have a new person joining the team next week to help us deliver on these samples (and other learning content). So keep an eye out for additional samples that demonstrate the full power of Semantic Kernel over the next several weeks!

      If there is a particular sample you’re interested in seeing, let us know by creating an issue on the Semantic Kernel repo, and I’ll work with the new team member to prioritize it.
