Last week, Semantic Kernel hit a huge milestone at Microsoft Build: both its Python and Java libraries reached V1.0 status, which guarantees no breaking changes to non-experimental features going forward. This is a big deal for our customers, who can now confidently build applications on top of our libraries without worrying about future compatibility issues.
More importantly, all three SDKs (Python, C#, and Java) now share the same core functionality, so customers can easily move between languages. This matters most for customers whose teams work in different languages: they can now share prompts and plugins across teams without compatibility concerns.
For most customers, development starts with AI researchers and data scientists building proofs of concept in Python. When it's time to move to production, enterprise app developers take over. This is where V1.0 really shines: app developers can take the prompts and plugins built by AI researchers and deploy them straight to production with an enterprise-grade AI SDK.
To demonstrate just how easy it is to bridge the gap between AI researchers and app developers, we recorded the live demo we did at Microsoft Build.
In this demo, we show how an AI researcher can build an AI agent with a set of OpenAPI-defined plugins in Python, then hand those exact same plugins to Java and .NET developers so they can recreate the same agent in a production environment, bridging the chasm between AI researchers and app developers.
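The hand-off works because an OpenAPI plugin's contract lives in the OpenAPI document itself, not in any one language's code. As a rough, hypothetical sketch (this is illustrative, not the Semantic Kernel API, and the spec below is made up), here is how a Python prototype might enumerate the operations an OpenAPI document exposes — the same document a C# or Java SDK would turn into identical plugin functions:

```python
import json

# A minimal, made-up OpenAPI document standing in for a real service spec.
OPENAPI_DOC = json.loads("""
{
  "openapi": "3.0.0",
  "info": {"title": "Todo API", "version": "1.0"},
  "paths": {
    "/todos": {
      "get": {"operationId": "list_todos", "summary": "List all todos"},
      "post": {"operationId": "add_todo", "summary": "Add a todo"}
    }
  }
}
""")

def extract_operations(doc: dict) -> list[dict]:
    """Flatten an OpenAPI document into (method, path, operationId) records.

    Each record is what an SDK would expose as a callable plugin function,
    regardless of whether the consuming app is Python, C#, or Java.
    """
    ops = []
    for path, methods in doc.get("paths", {}).items():
        for method, spec in methods.items():
            ops.append({
                "method": method.upper(),
                "path": path,
                "name": spec.get("operationId"),
                "summary": spec.get("summary", ""),
            })
    return ops

for op in extract_operations(OPENAPI_DOC):
    print(f"{op['name']}: {op['method']} {op['path']} - {op['summary']}")
```

Because the plugin definition is just this document, "transferring" a plugin between teams means copying a file, not porting code.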
Semantic Kernel Community Video
Thank you again to our amazing Semantic Kernel community for sending in your videos! If you missed the video at Microsoft Build, watch it here.
But that’s not all…
In addition to launching V1.0 of our Python and Java libraries, we also launched a host of additional features that make it even easier to build enterprise-grade AI applications. These include:
- Logic Apps as plugins: Easily let your AI agents interact with your organization's existing systems and services through no-code Logic Apps.
- OpenTelemetry semantic convention support: Integrate telemetry from your AI agents with your existing observability tools like .NET Aspire.
- Hooks and filters: Take control of your AI agents' lifecycle with hooks and filters so you can implement approvals, semantic caching, and more.
- Azure Container Apps Dynamic Sessions: Recreate the Code Interpreter experience from the Assistants API in your own AI agents with plugins that run their own isolated Python sessions.
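To make the hooks-and-filters idea concrete, here is a minimal, framework-agnostic sketch (not Semantic Kernel's actual filter API — the names and shapes here are assumptions for illustration) of a filter chain around function invocation, with one filter adding caching and another gating calls on approval:

```python
from typing import Any, Callable

class FunctionContext:
    """Carries the function name, arguments, and (eventually) the result."""
    def __init__(self, name: str, args: dict):
        self.name = name
        self.args = args
        self.result: Any = None

# A filter wraps the "next" step, so it can run logic before or after the
# call, short-circuit it (caching), or block it entirely (approvals).
Filter = Callable[[FunctionContext, Callable[[FunctionContext], None]], None]

def caching_filter(cache: dict) -> Filter:
    def _filter(ctx: FunctionContext, nxt: Callable) -> None:
        key = (ctx.name, tuple(sorted(ctx.args.items())))
        if key in cache:
            ctx.result = cache[key]  # cache hit: skip the real invocation
            return
        nxt(ctx)
        cache[key] = ctx.result
    return _filter

def approval_filter(is_approved: Callable[[str], bool]) -> Filter:
    def _filter(ctx: FunctionContext, nxt: Callable) -> None:
        if not is_approved(ctx.name):
            raise PermissionError(f"{ctx.name} was not approved")
        nxt(ctx)
    return _filter

def invoke(func: Callable[..., Any], name: str, args: dict,
           filters: list[Filter]) -> Any:
    """Run func through the filter chain, outermost filter first."""
    def terminal(ctx: FunctionContext) -> None:
        ctx.result = func(**ctx.args)
    pipeline = terminal
    for f in reversed(filters):
        pipeline = (lambda f, inner: lambda ctx: f(ctx, inner))(f, pipeline)
    ctx = FunctionContext(name, args)
    pipeline(ctx)
    return ctx.result

cache: dict = {}
filters = [approval_filter(lambda n: n != "delete_data"),
           caching_filter(cache)]
print(invoke(lambda x: x * 2, "double", {"x": 21}, filters))  # computed: 42
print(invoke(lambda x: x * 2, "double", {"x": 21}, filters))  # served from cache
```

The same wrap-the-next-step pattern supports the scenarios listed above: an approval filter that raises before the call, a semantic cache that short-circuits it, or telemetry that times it.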
On-Demand Microsoft Build Sessions
We recommend checking out these sessions featuring Semantic Kernel to learn more about getting started and building production AI:
- Scott and Mark learn AI – in this session, Scott Hanselman (VP, Developer Community) and Mark Russinovich (Azure CTO) dive into AI using Semantic Kernel for .NET
- Infusing your .NET Apps with AI: Practical Tools and Techniques – in this session, Stephen Toub (Partner Software Engineer), Luis Quintanilla (Senior Product Manager), and Vin Kamat (Principal Architect at H&R Block) walk you through getting started with Semantic Kernel for .NET, and Vin shares best practices for getting AI to production
- Building RAG at scale with Semantic Kernel and DataStax Astra DB – in this session, Greg Stachnick (Product Manager at DataStax) walks you through using Semantic Kernel with Astra DB for RAG at scale
- Generative AI application stack – providing long-term memory to LLMs – in this session, Prakul Agarwal (Senior Product Manager at MongoDB) shares how to use Semantic Kernel and MongoDB together for long-term memory solutions
- Bridge the chasm between your ML and app devs with Semantic Kernel – in this session, we hear from Matthew Bolanos and Evan Chaki of the Semantic Kernel team, along with two customers: Hiro Kobashi (Research Director at Fujitsu) and, from Accenture, Adam Tybor (Chief AI and Data Architect) and Dan Schocke (Director), who share how they are using Semantic Kernel at scale
We’re excited to see what you build with these new features. If you have any questions or need help getting started, feel free to reach out to us on our GitHub discussions page.
Thank you for the opportunity to participate in the Community video!!! 🥰