July 30th, 2024

What’s coming next? Summer / Fall roadmap for Semantic Kernel

It feels like yesterday when we went live with v1.0+ of all our SDKs (Python, Java, and C#) at Microsoft Build. Since then, the Semantic Kernel team has been hard at work making Semantic Kernel even better. Now that we’ve made some progress, we’d like to share what we have planned over the next few months leading up to Microsoft Ignite.

To see everything we have planned, check out the video below, where I cover the four main areas we plan to invest in. You can also keep reading for a quick recap of our planned enhancements.

Meeting enterprise requirements

We hear time and time again that the main value customers get from Semantic Kernel is its enterprise features, like filters and OpenTelemetry metrics. We want to continue making Semantic Kernel the best enterprise-grade AI SDK, so we want to start helping you leverage these capabilities to make your AI apps more performant and reliable.

Take, for example, slow AI apps. With our enterprise investments, we want to make it easier for you to diagnose why your LLM calls are slow, while also providing techniques like semantic caching to speed up responses further.
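Filters are one place you can hook that diagnosis in today. As a minimal sketch (the TimingFilter class is our own illustration, not a built-in), here’s a C# IFunctionInvocationFilter that times every function invocation, including the LLM calls behind it:

```csharp
using System.Diagnostics;
using Microsoft.SemanticKernel;

// Illustrative filter: measures how long each function invocation takes,
// including the LLM call behind a prompt function.
public sealed class TimingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        var stopwatch = Stopwatch.StartNew();
        await next(context); // run the function (and any LLM call it makes)
        stopwatch.Stop();
        Console.WriteLine(
            $"{context.Function.Name} took {stopwatch.ElapsedMilliseconds} ms");
    }
}

// Attach the filter to an existing kernel:
// kernel.FunctionInvocationFilters.Add(new TimingFilter());
```

A semantic cache can be built the same way: a filter that checks for a cached answer to a semantically similar prompt before calling the model, and stores the response afterward.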

Small model support

In addition to helping you improve performance, we also want to help you control costs. Not every task requires the power of a model like GPT-4o; often, a local model or small language model (SLM) will do. To help developers offload tasks to smaller, cheaper models, we’ll be providing connectors for both ONNX Runtime and Azure’s Model as a Service.

With these two connectors, you can start saving money by using only the models you need to complete a user’s request.
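As a rough sketch of what this could look like, here’s a snippet that routes a simple task to a local Phi-3 model through the ONNX connector. Since the connector is still on the roadmap, the AddOnnxRuntimeGenAIChatCompletion call and the model path here are our assumptions about its shape, not a final API:

```csharp
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Assumed preview API: load a local SLM through the ONNX Runtime connector.
builder.AddOnnxRuntimeGenAIChatCompletion(
    modelId: "phi-3",
    modelPath: @"C:\models\phi-3");

var kernel = builder.Build();

// Simple tasks like summarization can run on the cheaper local model,
// reserving larger hosted models for requests that actually need them.
var result = await kernel.InvokePromptAsync(
    "Summarize: Semantic Kernel v1.0 shipped at Microsoft Build.");
Console.WriteLine(result);
```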

Improved memory connectors

The memory connectors in Semantic Kernel haven’t seen a meaningful update since they were introduced, but that will soon change. With the new memory abstractions, you as a developer will be able to bring your own custom data model to read and write data to your vector DB.

Take, for example, a store that needs to save information about its products so it can perform semantic search over them. In other SDKs, a product search simply returns an untyped record. With our new memory connectors, however, you’ll receive back the same type that you put in (in this case, a product). This makes it easier to enforce type safety and to retrieve data from a vector DB.
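As an illustration, here’s what such a typed record could look like in C#. The attribute names follow our understanding of the preview memory abstractions and should be read as assumptions, not a final API:

```csharp
using Microsoft.Extensions.VectorData;

// Illustrative typed record: attributes mark the key, the searchable data,
// and the embedding vector (names assume the preview abstractions).
public sealed class Product
{
    [VectorStoreRecordKey]
    public string Id { get; set; } = string.Empty;

    [VectorStoreRecordData]
    public string Description { get; set; } = string.Empty;

    [VectorStoreRecordVector(1536)]
    public ReadOnlyMemory<float> DescriptionEmbedding { get; set; }
}

// The collection is then typed end to end, so reads return Product instances:
// var collection = vectorStore.GetCollection<string, Product>("products");
// Product? product = await collection.GetAsync("product-123");
```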

Automation with agents

Finally, we’ll be coordinating with the rest of Microsoft to enable you to orchestrate multiple agents together to complete a business process. Just like humans, AI agents have been shown to perform better when they’re given prescribed steps for completing a task. With Semantic Kernel, we’ll make it easy to define such a process across all three of our SDKs.
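To make that concrete, here’s a sketch of two agents handing off steps of a simple process, built on the experimental Agents package in the C# SDK (you may need to suppress the SKEXP diagnostics it carries). The agent names, instructions, and model choice are invented for illustration:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.ChatCompletion;

// A kernel with any chat completion service; GPT-4o is just an example.
Kernel kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        "gpt-4o",
        Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build();

// Two agents with prescribed roles in the process.
ChatCompletionAgent writer = new()
{
    Name = "Writer",
    Instructions = "Draft a short product announcement.",
    Kernel = kernel,
};

ChatCompletionAgent reviewer = new()
{
    Name = "Reviewer",
    Instructions = "Review the draft and suggest one improvement.",
    Kernel = kernel,
};

// Experimental API: the agents take turns on the shared chat.
AgentGroupChat chat = new(writer, reviewer);
chat.AddChatMessage(
    new ChatMessageContent(AuthorRole.User, "Announce the fall roadmap."));

await foreach (ChatMessageContent message in chat.InvokeAsync())
{
    Console.WriteLine($"{message.AuthorName}: {message.Content}");
}
```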

Watch our progress live!

If you’re interested in seeing when each of these features will go live (and in the other items we’re working on), you can always check out our public backlog on GitHub. Simply go to our bucketized view to see everything the team has in progress.

2 comments


  • Cecil Phillip

    Nice. Any comments on Ollama support?

  • José Luis Latorre Millás

    Outstanding, seems like a solid statement and clear goals for a beautiful V. 2.0 of Semantic Kernel.