Now that it’s February, we wanted to share what we have planned for Semantic Kernel between now and Microsoft Build. Most of our immediate investments fall into one of three buckets: V1.0 parity across all our languages, additional connectors, and last but not least, agents. If you want a deep dive into our plans, watch our Spring 2024 roadmap video! Otherwise, read on for a quick summary.
V1.0 Parity across Python and Java
With the V1.0 release of .NET, we committed to not introducing any more breaking changes to non-experimental features. This has given customers additional confidence to build production AI applications on top of Semantic Kernel. By March of this year, we plan to release Beta or Release Candidate versions of both our Python and Java libraries. By Microsoft Build, we will finish parity work and launch V1.0 for Python and Java.
As part of V1.0, Python and Java will get many of the improvements that made the .NET version much easier and more powerful to use. These include automatic function calling, events, YAML prompt files, and Handlebars templates. With YAML prompt files, you’ll be able to create prompt and agent assets in Python and then share them with .NET and Java developers.
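To make that concrete, here’s roughly what a YAML prompt file looks like in .NET today. Treat the exact fields as illustrative, since the schema may still evolve as we bring it to Python and Java:

```yaml
# A story-generation prompt, defined entirely in YAML so it can be
# checked into a repo and shared across languages.
name: GenerateStory
description: A function that generates a story about a topic.
template: |
  Tell a story about {{topic}} that is {{length}} sentences long.
template_format: semantic-kernel
input_variables:
  - name: topic
    description: The topic of the story.
    is_required: true
  - name: length
    description: The number of sentences in the story.
    is_required: true
execution_settings:
  default:
    temperature: 0.6
```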
If you’re interested in learning more, check out our full backlog on GitHub for Python and Java.
More connectors!
Since Semantic Kernel was first released, many new models have arrived. We plan to work with the community to introduce connectors for the most popular models and their deployment types. These include Gemini, Llama-2, Phi-2, Mistral, and Claude deployed on Hugging Face, Azure AI, Google AI, Amazon Bedrock, and locally.
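Python hasn’t reached V1.0 yet, so consider this a sketch against the current preview API (class and method names may shift before release) of how connectors plug into the kernel; a new Gemini or Mistral connector would slot in the same way:

```python
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import KernelArguments

async def main() -> None:
    kernel = Kernel()
    # A model provider is just another service registered on the kernel.
    # The OpenAI connector reads OPENAI_API_KEY from the environment here.
    kernel.add_service(OpenAIChatCompletion(service_id="chat", ai_model_id="gpt-4"))

    result = await kernel.invoke_prompt(
        function_name="summarize",
        plugin_name="demo",
        prompt="Summarize this in one sentence: {{$input}}",
        arguments=KernelArguments(input="Semantic Kernel is adding more model connectors."),
    )
    print(result)

asyncio.run(main())
```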
We’ve also gotten great feedback on our experimental memory connectors. Over the next few months, we’ll be updating the abstractions for these connectors so that they are less opinionated. This will make them easier to use and allow us to support even more scenarios with them.
Lastly, we know that multi-modal experiences are the next frontier for AI applications. We’ll make it easier to support these experiences by providing additional connectors to models that support audio, images, video, documents, and more!
First-class agent support
Finally, we want to ensure that Semantic Kernel customers can develop autonomous agents that complete tasks on behalf of users. We already have an experimental implementation that uses the OpenAI Assistants API (check out John Maeda’s SK basics samples), but as part of our final push, we want to fully abstract our agent interface so it supports agents built with any model.
To achieve this, we’re leveraging the research from the AutoGen team to create an abstraction that can support any number of experiences, including those where agents work together as a team.
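To give a sense of the direction (none of the names below are final or shipped APIs; this is a hypothetical sketch), the abstraction might look something like this:

```python
# Hypothetical sketch only: these classes illustrate the planned agent
# abstraction and are NOT shipped Semantic Kernel APIs.
from abc import ABC, abstractmethod

class Agent(ABC):
    """A task-completing agent, independent of the model or service behind it."""

    @abstractmethod
    async def invoke(self, message: str) -> str:
        """Send a message to the agent and return its response."""

class AssistantAgent(Agent):
    """Backed by the OpenAI Assistants API (today's experimental path)."""
    async def invoke(self, message: str) -> str:
        ...  # call the Assistants API

class ChatAgent(Agent):
    """Backed by any chat-completion connector (Gemini, Mistral, local, ...)."""
    async def invoke(self, message: str) -> str:
        ...  # call the kernel's registered chat service

# Because both agents share one interface, a "team" orchestrator can route
# messages between them without caring which model powers each one.
```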
Feedback is always welcome!
As an open-source project, everything we do (including planning) happens out in the open. We do this so that you, as a community, can give us feedback every step of the way. If you have recommendations for any of the features we have planned for Spring 2024 (or for things that aren’t yet on our radar), let us know by filing an issue on GitHub, starting a discussion on GitHub, or starting a conversation on Discord.
Great news, but there were breaking changes after V1.0 of the .NET version. For instance, some demo repos no longer work with SK 1.3 and have been abandoned (https://github.com/microsoft/chat-copilot/issues/790).
My feedback: we need demo apps that use SK, and we need them kept up to date with the SDK, because even a great SDK only helps if we can learn how to use it 🙂
Thanks for the feedback!
Since releasing V1.0.1, we've made sure to avoid any breaking changes so that all samples (and production apps) built on V1.0.1 and above continue to work no matter what new features we add. One of the best samples that shows how to use Semantic Kernel on V1.0.1+ is the Semantic Kernel AI-in-a-box sample provided by the Azure team. I highly recommend checking it out to see how you can build an...