November 13th, 2024

Customer Case Study: Suntory and Reliability in AI with Semantic Kernel


Suntory Global Spirits is the spirits division of Suntory Holdings, a multinational beverage and food company based in Japan. Suntory is renowned for its high-quality alcoholic beverages, including premium Japanese whisky, gin, vodka, and other spirits. We’ll turn it over to Urko Benito from Suntory to dive into their AI journey with Semantic Kernel. 

The Pursuit of Precision and Reliability in AI

From late 2023 through the first half of 2024, our team was deeply engaged in a phase of intensive learning and exploration. With a broad list of potential use cases ranging from chatbots to complex data analysis processes, our goal was to identify the tools and services best suited to our specific needs. One of our initial challenges was developing chatbots capable of integrating with traditional systems like SAP and Salesforce, where the use of natural language in multiple languages was crucial to improving accessibility and efficiency, making interactions more intuitive for users.

We were not just looking for a functional solution for isolated cases; we needed an AI platform that could support the full development cycle, from initial testing through prototyping to final product deployment in production. In evaluating the options, our priority was to build a robust solution based on Python and supported by a microservices architecture. Moreover, it had to meet the scalability and reliability standards required in an enterprise environment—something “Enterprise-ready” that could evolve and adapt, rather than just an experimental prototype or lab concept. This focus on a comprehensive and scalable solution was essential for achieving our long-term goals and meeting our organization’s specific requirements.

The Need for Control and Reliability in AI

One of the biggest challenges in working with AI is minimizing errors or “hallucinations” in system responses. Our objective was to grant the bot some freedom to leverage external plugins and respond autonomously while ensuring its interactions were always useful and precise. While prompting allowed us to partially adjust the bot’s behavior, the use of plugins with semantic descriptions added another layer of capability: it enabled Semantic Kernel to understand what it could do without a strictly defined process, allowing for flexibility without sacrificing clarity. To guide the system without restricting its adaptability, we implemented a directional planning approach: instead of defining rigid workflows with predefined steps for each task, we let the Kernel expand its capabilities through plugins and decide autonomously how to perform each task.
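
To give a flavor of this, here is a minimal sketch of a semantically described plugin in the Python SDK, with automatic function choice standing in for the directional planning described above. The plugin, its names, and the stubbed ERP call are illustrative, not our production code.

```python
from typing import Annotated

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.function_choice_behavior import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    AzureChatCompletion,
    AzureChatPromptExecutionSettings,
)
from semantic_kernel.functions import kernel_function


class ErpPlugin:
    """The semantic description tells the Kernel *what* it can do, not *how*."""

    @kernel_function(description="Get the current status of a sales order by its number.")
    def get_order_status(
        self, order_number: Annotated[str, "The sales order number."]
    ) -> str:
        # In production this call goes to an ERP-facing microservice.
        return f"Order {order_number}: shipped."


kernel = Kernel()
kernel.add_service(AzureChatCompletion(service_id="chat"))  # endpoint/key from env
kernel.add_plugin(ErpPlugin(), plugin_name="erp")

# No rigid workflow: the model decides which plugin functions to call and when.
settings = AzureChatPromptExecutionSettings(
    service_id="chat",
    function_choice_behavior=FunctionChoiceBehavior.Auto(),
)
```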

Finally, we achieved consistent accuracy with Semantic Kernel. To further ensure precision, we implemented a monitoring mechanism to verify the system had returned the correct data. We achieved this through several strategies: in some cases, with parallel calls to compare results; in others, using a coherence-check plugin; and occasionally by analyzing metadata to confirm the reliability of the response. With this approach, we managed to balance Semantic Kernel’s autonomy with the certainty required for critical environments, delivering AI responses that are logical, accurate, and reliable.
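
As an illustration of the parallel-call strategy, the following minimal sketch issues the same query several times and accepts only agreeing answers; `ask` is a hypothetical wrapper around the kernel invocation, not a Semantic Kernel API.

```python
import asyncio
from collections.abc import Awaitable, Callable


async def verified_answer(
    ask: Callable[[str], Awaitable[str]], query: str, attempts: int = 2
) -> str | None:
    """Issue the same query in parallel and accept only agreeing results."""
    results = await asyncio.gather(*(ask(query) for _ in range(attempts)))
    if len({r.strip().lower() for r in results}) == 1:
        return results[0]  # All calls agree: treat the answer as reliable.
    return None  # Disagreement: hand off to a coherence check or retry.
```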

Finding the Best Fit with Semantic Kernel

As early adopters from the 0.x releases, we steadily gained confidence in Semantic Kernel. We then integrated more advanced elements such as the Azure Bot Framework, and finally orchestrated all components of our system on AKS with microservices. It was a true odyssey, but the effort paid off when we saw the results.

The 1.x release, with the introduction of Agents, marked a significant milestone in Semantic Kernel’s evolution, adding stability and adaptability. This progress opened the possibility of a future where Semantic Kernel could create a comprehensive and dynamic solution.

The Future with the Process Framework

The introduction of the Process Framework in Semantic Kernel marks a major advancement for our future strategy. This new capability promises to help us manage guided processes more efficiently, reducing exclusive reliance on prompting to direct each interaction and minimizing the risk of inconsistent responses. Moving forward, the Process Framework could provide an additional structure to ensure response coherence without requiring us to delegate all responsibility to the bot. While we aim to maintain a certain level of freedom in interactions, we will explore how to incorporate this tool to achieve an optimal balance between control and autonomy, especially in critical tasks where clear guidance is essential.
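
Although we have not taken this into production yet, a rough sketch of what such a guided flow could look like with the Python SDK’s ProcessBuilder follows; the steps, event names, and wiring are illustrative assumptions, not a finished design.

```python
from semantic_kernel.functions import kernel_function
from semantic_kernel.processes.kernel_process.kernel_process_step import KernelProcessStep
from semantic_kernel.processes.process_builder import ProcessBuilder


class ValidateQueryStep(KernelProcessStep):
    @kernel_function
    async def validate(self, query: str) -> str:
        # A structured guard rail that runs before the bot answers freely.
        return query


class AnswerStep(KernelProcessStep):
    @kernel_function
    async def answer(self, query: str) -> None:
        ...  # Invoke the kernel/agent with the validated query.


# A guided flow (input event -> validation -> answer) instead of pure prompting.
process = ProcessBuilder(name="GuidedOrderQuery")
validate_step = process.add_step(ValidateQueryStep)
answer_step = process.add_step(AnswerStep)
process.on_input_event(event_id="user_query").send_event_to(target=validate_step)
validate_step.on_function_result(function_name="validate").send_event_to(target=answer_step)
```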

Use Case Example: Precise Searches and Information Access in the Corporate ERP  


One key use case where Semantic Kernel demonstrated its value was the creation of a chatbot that accesses our ERP from the corporate intranet. The goal was to streamline access to information and expedite responses, allowing users to query specific data, such as order statuses, without the usual complications. Achieving this, however, required responses that were precise and fast, and a system that understood each user’s context.

To meet these requirements, we incorporated the OpenPlugin function, designing a set of microservices as modular blocks to enhance the chatbot’s capabilities. This enabled the bot not only to perform order queries but also to access invoice information, identify specific user context (name, time zone, etc.), and tailor responses accordingly.
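
For illustration, here is a minimal sketch of importing one of those microservices as an OpenAPI-described plugin in the Python SDK; the gateway URL and plugin name are placeholders, not our real endpoints.

```python
from semantic_kernel import Kernel
from semantic_kernel.connectors.openapi_plugin import OpenAPIFunctionExecutionParameters

kernel = Kernel()

# Each microservice publishes an OpenAPI document; the Kernel imports it as a
# modular block of callable functions.
kernel.add_plugin_from_openapi(
    plugin_name="orders",
    openapi_document_path="https://erp-gateway.example.internal/orders/openapi.json",
    execution_settings=OpenAPIFunctionExecutionParameters(
        enable_payload_namespacing=True,
    ),
)
```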

We tested various methods, from simple FindByNumber functions to overloaded methods handling multiple tasks at once. We quickly discovered that complex methods complicated Semantic Kernel’s planning, leading to execution errors, so we simplified each method to a single parameter carrying the relevant context. This simplification proved effective and made the Kernel’s planning reliable for each query. User testing yielded positive results: processes that previously took a day were completed in just 18 seconds, according to our statistics.
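
To make the contrast concrete, the sketch below shows the simplified shape we converged on: one narrow method, one parameter, one clearly described job. The names and the stubbed lookup are illustrative, not our actual ERP methods.

```python
from typing import Annotated

from semantic_kernel.functions import kernel_function


class OrdersPlugin:
    # Before: one overloaded method juggling several tasks confused planning.
    # After: one narrow method, one parameter, one clearly described job.
    @kernel_function(description="Find a sales order by its order number.")
    def find_by_number(
        self, order_number: Annotated[str, "The order number to look up."]
    ) -> str:
        # The ERP lookup itself lives behind a microservice; stubbed here.
        return f"Order {order_number}: status pending."
```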


We began testing in a single department within a controlled environment, verifying that each result was accurate. Initially, we found minor interpretation errors with certain characters, like commas, dashes, and periods, but these issues were quickly resolved. The success led to scaling: our initial test group of 10 users grew to 50, then 100, and now over 500 employees use the chatbot to check order statuses.

This growth was supported by a performance-enhancing strategy: we integrated ChatHistory and a distributed cache system to optimize frequent queries and speed up access. Finally, we connected the usage statistics to a Power BI dashboard, where we analyze conversation topics, common queries, and usage patterns. This visibility allows us to adjust the system to users’ needs, maximizing efficiency and satisfaction in accessing corporate information.
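
The ChatHistory-plus-cache combination mentioned above can be outlined as follows; the in-memory dictionary is a stand-in for our distributed cache, and `ask` is again a hypothetical wrapper around the chat-completion call.

```python
from collections.abc import Awaitable, Callable

from semantic_kernel.contents import ChatHistory

cache: dict[str, str] = {}  # Stand-in for a distributed cache such as Redis.

history = ChatHistory()
history.add_system_message("You answer ERP order queries concisely.")


async def answer(query: str, ask: Callable[[ChatHistory], Awaitable[str]]) -> str:
    """Serve frequent queries from the cache; otherwise go through the model."""
    if query in cache:
        return cache[query]
    history.add_user_message(query)
    reply = await ask(history)
    history.add_assistant_message(reply)
    cache[query] = reply
    return reply
```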

Conclusion: A Promising Future with Semantic Kernel

Betting on Semantic Kernel in its infancy, during its pre-1.0 releases, was undoubtedly a calculated risk. Microsoft’s backing and commitment to making Semantic Kernel the core of its AI services gave us the credibility we needed to trust in its potential and future. And the investment has paid off: today, we can deploy chatbots in just a few hours, equip them with specific capabilities through OpenPlugins, and meet the demanding standards of an Enterprise environment.

Semantic Kernel has not only significantly improved our system’s reliability but also laid a solid foundation for future growth. For those looking to take the precision and control of their AI projects to the next level, Microsoft’s Semantic Kernel is an option worth considering. Its ability to solve problems accurately, supported by a flexible and reliable structure, offers a competitive advantage to any team seeking to maximize the impact of their AI solutions, balancing innovation and stability in every implementation.
