Semantic Kernel at Microsoft BUILD 2023: Highlights from the Q&A Session
We’ve got some exciting news to share about the AI solutions we unveiled at Microsoft BUILD 2023. In addition to our main keynote session, Building AI solutions with Semantic Kernel, we also held a Q&A session where attendees had the opportunity to ask us their top questions about using these new AI solutions in their businesses. Check out what we discussed and learn how you can get started with Semantic Kernel today.
During the Q&A session at BUILD, we received several stimulating questions from attendees. Here are some of the highlights:
Creating new documents based on historical examples
Question: “I have a use case to fill in sections of new drafts of documents based on historical documents for our business. Can I use AI for this?”
Answer: This is a common use case that we hear from many customers.
To get started with this you will need to:
- Select a vector memory storage solution – this allows the AI to find your documents and draw on them
- If the documents are large, you will likely need to select a chunking strategy – this determines how the documents are broken apart before they are sent to the vector memory store
- Think about what UI you want to use for your end users
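As a rough illustration of the chunking step above, here is a minimal sketch of a fixed-size splitter with overlap; the chunk size and overlap values are arbitrary assumptions, not recommendations:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split a document into overlapping chunks before embedding.

    The overlap preserves context that would otherwise be cut off
    at a chunk boundary.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

In practice you would often split on paragraph or sentence boundaries rather than raw character counts, but the trade-off is the same: chunks must be small enough to embed well, yet large enough to carry meaning.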
Semantic Kernel supports several vector memory providers out of the box.
You can use the Copilot Chat starter app to see this solution in practice. Your end users can upload a file, and it will be stored in the vector store configured in the config file.
Allowing employees to talk to their enterprise data
Question: “How do I securely allow my employees to talk to their data which is in SQL and do it in a trusted manner so the users can’t do prompt injection?”
Answer: This is the other top use case that we hear from many customers.
You will want to start by having your users auth into your app, so you know who they are. Pass that authorization through to your SQL database or other enterprise data store. This ensures each user can access only the data they were already granted, so you do not get data leakage.
Using views and stored procedures is a great way to increase your security posture. Rather than having the LLM generate SQL statements to execute, you can keep it on track using these methods.
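One way to picture the "keep the LLM on track" idea is a whitelist of named, parameterized queries: the model may only choose a query name and supply parameters, never write raw SQL. This is an illustrative sketch using sqlite3; the table and query names are hypothetical:

```python
import sqlite3

# Hypothetical whitelist: the LLM picks a query by name and supplies
# parameters; it never authors SQL text itself.
ALLOWED_QUERIES = {
    "orders_for_customer": "SELECT id, total FROM orders WHERE customer_id = ?",
}

def run_named_query(conn, name: str, params: tuple):
    sql = ALLOWED_QUERIES.get(name)
    if sql is None:
        raise ValueError(f"Query '{name}' is not permitted")
    # Parameterized execution: input is bound, never string-interpolated,
    # which closes the door on prompt-injected SQL.
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL, customer_id INTEGER)")
conn.execute("INSERT INTO orders VALUES (1, 9.99, 42)")
```

In a real deployment the whitelist would map to views or stored procedures in your database, so the security boundary lives server-side rather than in application code.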
Adding consistency with LLMs
Question: “Are there any best practices for creating these new AI solutions so they are consistent?”
Answer: One way to add consistency for your end users is to create static plans. You can create plans in our VS Code Extension (https://marketplace.visualstudio.com/items?itemName=ms-semantic-kernel.semantic-kernel) and then use those static plans to run the same steps each time users ask for the same thing.
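Conceptually, a static plan is just a fixed, ordered list of steps that runs the same way on every request, instead of letting a planner improvise a new sequence each time. A minimal sketch, with hypothetical placeholder step functions:

```python
# Each step is a function that transforms the running result; the names
# and behaviors below are illustrative placeholders, not SK APIs.
def retrieve_examples(request: str) -> str:
    return f"examples for: {request}"

def draft_section(context: str) -> str:
    return f"draft based on {context}"

def apply_style_guide(draft: str) -> str:
    return draft.upper()

# The plan is static: the same steps, in the same order, every time.
STATIC_PLAN = [retrieve_examples, draft_section, apply_style_guide]

def run_plan(plan, user_input: str) -> str:
    result = user_input
    for step in plan:
        result = step(result)
    return result
```

Because the sequence is fixed ahead of time, two users asking for the same thing exercise the same pipeline, which makes outputs easier to test and debug.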
Multi-tenant solutions with LLM
Question: “How should I think about multi-tenant solutions using AI?”
Answer: With multi-tenant solutions, the same rules apply as for keeping SQL secure. You will want to segment users by tenant by having them auth into your solution. LLMs don’t hold onto or cache any information on their own; any data cross-talk that happens in a multi-tenant AI solution comes from permissions and/or data systems not being configured correctly.
Multi-user chat solutions
Question: “How can I allow users to invite other employees into a chat and how would data sharing work in that use case?”
Answer: Our Copilot chat starter app is a good reference for seeing how this can work. It allows a user to invite others into a chat with them and the LLM bot. Just as sharing a Microsoft Word document lets another user see everything in it, anyone invited into the chat can see its contents.
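The sharing model above can be sketched as a per-chat participant list: only participants can read the history or invite others. This is a generic illustration, not the starter app’s actual implementation:

```python
class ChatSession:
    """Toy chat with document-style sharing: participants see everything."""

    def __init__(self, owner: str):
        self.participants = {owner}
        self.messages: list[str] = []

    def invite(self, inviter: str, invitee: str) -> None:
        if inviter not in self.participants:
            raise PermissionError("Only participants can invite others")
        self.participants.add(invitee)

    def read(self, user: str) -> list[str]:
        if user not in self.participants:
            raise PermissionError("User is not in this chat")
        return self.messages
```

As with the Word analogy, joining the chat grants visibility into the whole conversation, so the invite action is the point where your access-control decision happens.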
Stay tuned for more updates on Semantic Kernel as we continue to innovate. In the meantime, make sure to watch our BUILD 2023 Semantic Kernel keynote video (https://aka.ms/sk-build23) and visit our learn site (https://aka.ms/sk/learn) to get started.