Posts by this author

Dec 10, 2024

ONNX GenAI Connector for Python (Experimental) 

With the latest update we added support for running models locally with onnxruntime-genai. The onnxruntime-genai package is powered by ONNX Runtime in the background, but first let’s clarify what ONNX, ONNX Runtime, and ONNX Runtime GenAI are. ONNX is an open-source format for AI model...

Semantic Kernel
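
As a rough illustration of what running a model locally with onnxruntime-genai looks like, here is a minimal Python sketch. The model path and generation settings are placeholders, and the exact method names have shifted between package versions, so treat this as an assumption-laden outline rather than the connector's actual implementation.

```python
# Rough local-generation sketch with onnxruntime-genai (placeholder model path;
# API details may vary by package version).
import onnxruntime_genai as og

model = og.Model("./models/phi-3-mini-4k-instruct-onnx")  # placeholder path
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

prompt = "<|user|>\nWhat is ONNX Runtime?<|end|>\n<|assistant|>\n"
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(input_tokens)

# Stream tokens until the generator reports completion.
while not generator.is_done():
    generator.generate_next_token()
    new_token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(new_token), end="", flush=True)
```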
Dec 6, 2024

Customer Case Study: How to use Elasticsearch Vector Store Connector for Microsoft Semantic Kernel for AI Agent development

Today we're excited to feature the Elastic team to share more about their Elasticsearch Vector Store connector for Microsoft Semantic Kernel. Read the entire announcement here. I'll turn it over to Srikanth Manvi and Florian Bernd...

Semantic Kernel, Announcements, Customer Story
Dec 3, 2024

Tracing your AI apps with Azure AI Foundry

We previously introduced observability within the Semantic Kernel. For further insights, please refer to our previous blog post, and you can also explore our learn site for additional details. To summarize, observability is an essential aspect of your application stack, particularly in today's landscape where AI plays a significant role in numer...

Semantic Kernel
Dec 2, 2024

Start Your AI Journey with Semantic Kernel

Semantic Kernel (SK) is an open-source development kit from Microsoft designed to help developers and enterprises use the latest AI technology to build smarter, more sophisticated AI-driven solutions, such as retrieval-augmented generation (RAG) and agents. Core concepts: Kernel. You can think of the SK kernel as a container that holds all of the AI-related components you need, such as prompt templates, AI services, and plugins. If you provide all of your services and plugins to the kernel, the AI will use them automatically as needed. SK also offers a range of enterprise-grade features that let you ensure your AI meets security requirements and continuously monitor its behavior after deployment. Prompt engineering. Prompts are critical when working with large language models, and a well-crafted prompt can significantly improve the user experience. SK uses prompt templates to bridge natural language and...

Semantic Kernel
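
To make the "kernel as a container" idea from that post concrete, here is a minimal sketch assuming the Semantic Kernel Python package; the model id, API key, and exact constructor arguments are placeholders and may differ between releases.

```python
# Minimal Semantic Kernel sketch: create a kernel, register an AI service,
# then invoke a prompt. Plugins and prompt templates are registered similarly
# and used automatically when needed.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

kernel = Kernel()
kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4o-mini", api_key="YOUR_KEY"))

async def main() -> None:
    result = await kernel.invoke_prompt(
        prompt="In one sentence, what does the kernel do in Semantic Kernel?"
    )
    print(result)

asyncio.run(main())
```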
Nov 25, 2024

Microsoft Semantic Kernel Office Hours Update

Over the upcoming holiday period there are several office hours sessions that we'll be cancelling. Below is an updated view of our upcoming office hours availability. We will resume regularly scheduled Office Hours on January 8th for our weekly and monthly sessions. During this time, feel free...

Semantic Kernel
Nov 21, 2024

Customer Case Study: Announcing the Microsoft Semantic Kernel Elasticsearch Connector

Today we're excited to feature the Elastic team to share more about their Semantic Kernel Elasticsearch connector. Read the entire announcement here. I'll turn it over to Srikanth Manvi to dive into it. In collaboration with the Microsoft Semantic Kernel team, we are announcing the availability of Semantic Kernel Elasticsearch Ve...

Customer Story, Announcements, Announcement
Nov 18, 2024

Customer Case Study: Zipp by Sticos – Powered by Semantic Kernel

Today we're featuring Sticos on our Semantic Kernel blog to highlight their customer journey with AI and Semantic Kernel. Who is Sticos? Our Vision: Sticos is determined to become the most customer-driven company in the world. Together we make incomprehensible legislation easy and practical! What We Do: We combine legal ...

Semantic Kernel, Customer Story
Nov 18, 2024

Customer Case Study: Fujitsu Kozuchi AI Agent Powered by Semantic Kernel

Japanese multinational Fujitsu, a pioneer of information and communications technology, has been transforming industries since 1935. With a workforce of 124,000 dedicated professionals across 50 countries, Fujitsu’s mission is to “make the world more sustainable by building t...

Semantic Kernel
Nov 13, 2024

Customer Case Study: Suntory and Reliability in AI with Semantic Kernel

Suntory Global Spirits is the spirits division of Suntory Holdings, a multinational beverage and food company based in Japan. Suntory is renowned for its high-quality alcoholic beverages, including premium Japanese whisky, gin, vodka, and other spirits. We'll turn it over to Urko Benito from Suntory to dive into their AI journey with Semantic K...

Semantic Kernel, Customer Story
Nov 4, 2024

Managing Chat History for Large Language Models (LLMs)

Large Language Models (LLMs) operate with a defined limit on the number of tokens they can process at once, referred to as the context window. Exceeding this limit can have significant cost and performance implications. Therefore, it is essential to manage the size of the input sent to the LLM, particularly when using chat completion models. This i...

Semantic Kernel, Announcement
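
To make the chat-history idea concrete, here is a small, hypothetical Python sketch of one common strategy: keep the system message and drop the oldest turns until the remaining history fits a token budget. The count_tokens helper and message format are illustrative assumptions, not the approach described in the post; the Semantic Kernel libraries ship their own history reducers.

```python
# Hypothetical chat-history reducer: keeps the system message plus the most
# recent turns that fit within a token budget.
from typing import Dict, List

def count_tokens(text: str) -> int:
    # Crude stand-in: roughly 4 characters per token. Replace with a real tokenizer.
    return max(1, len(text) // 4)

def reduce_chat_history(messages: List[Dict[str, str]], budget: int) -> List[Dict[str, str]]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    used = sum(count_tokens(m["content"]) for m in system)
    kept: List[Dict[str, str]] = []

    # Walk backwards from the newest message so the most recent context survives.
    for m in reversed(rest):
        cost = count_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost

    return system + list(reversed(kept))
```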