{"id":3477,"date":"2024-10-03T14:19:12","date_gmt":"2024-10-03T21:19:12","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/semantic-kernel\/?p=3477"},"modified":"2025-02-10T10:18:28","modified_gmt":"2025-02-10T18:18:28","slug":"microsoft-hackathon-project-micronaire-using-semantic-kernel","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/agent-framework\/microsoft-hackathon-project-micronaire-using-semantic-kernel\/","title":{"rendered":"Microsoft Hackathon: Project Micronaire using Semantic Kernel"},"content":{"rendered":"<p>During our internal Microsoft Hackathon, there were a number of projects using Semantic Kernel. Today we&#8217;d like to highlight one of the projects focused on using Semantic Kernel and the new Vector Store abstraction. I&#8217;m going to turn it over to the Hackathon team to share more about their project and work using Semantic Kernel.<\/p>\n<h4>Summary<\/h4>\n<p>In Microsoft&#8217;s 2024 Global Hackathon, we built the prototype for Micronaire as well as a demo project that showcased the capabilities of the library. We built a Romeo and Juliet chat bot that used RAG (Retrieval Augmented Generation) to recall paragraphs of the original Romeo and Juliet text. We utilized the new Vector Store abstraction from Semantic Kernel with their implementation for the Qdrant Database in order to store embeddings of these paragraphs. Once we had a working chat bot, we used Micronaire to evaluate its performance and generate graphs of the data.<\/p>\n<p>RAG pipelines enable developers to augment their chat experiences with informational documents that their agents can leverage to provide better answers. Evaluating these pipelines is a new area of study, with solutions like\u00a0<a href=\"https:\/\/github.com\/amazon-science\/RAGChecker\">RAGChecker<\/a>\u00a0and\u00a0<a href=\"https:\/\/docs.ragas.io\/en\/latest\/getstarted\/index.html\">RAGAS<\/a>\u00a0being state of the art frameworks that are implemented in Python. 
This project aims to take these ideas and bring them to .NET through\u00a0<a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\">Semantic Kernel<\/a>.<\/p>\n<p>Micronaire brings actionable metrics to RAG pipeline evaluation: it takes a set of ground truth questions and answers together with a RAG pipeline, evaluates the pipeline against the ground truth using the metrics described below, and produces an evaluation report.<\/p>\n<p>Micronaire has been released as open-source code here: <a href=\"https:\/\/github.com\/microsoft\/micronaire\">microsoft\/micronaire: A RAG evaluation pipeline for Semantic Kernel (github.com)<\/a><\/p>\n<p>Here&#8217;s a link to the Hackathon Project video to learn more: <a href=\"https:\/\/learn-video.azurefd.net\/vod\/player?id=60a35fb0-3c05-454a-9307-afec267c7854\">https:\/\/learn-video.azurefd.net\/vod\/player?id=60a35fb0-3c05-454a-9307-afec267c7854<\/a><\/p>\n<h4>Why the name?<\/h4>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Units_of_textile_measurement#Micronaire\">Micronaire<\/a>\u00a0is a measure of the air permeability of cotton fiber and is an indication of fineness and maturity. This plays into the evaluation of RAG (as in rag textiles).<\/p>\n<h4>Semantic Kernel Usage<\/h4>\n<p>Micronaire uses Semantic Kernel in two ways. First, it uses Semantic Kernel to target RAG pipelines that users of the library have written against Semantic Kernel\u2019s C# implementation. Second, it uses Semantic Kernel as an AI orchestrator to run its evaluation and call text completion APIs.<\/p>\n<h4>Using Micronaire<\/h4>\n<p>After bringing in the Micronaire library, users need to implement Micronaire\u2019s expected RAG interface for their Semantic Kernel chat bot experience. 
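Conceptually, the contract Micronaire needs from a pipeline is a way to pose a question and get back both the generated answer and the context that was retrieved to produce it. A rough sketch of such a contract is below; every name in it is an illustrative assumption, not Micronaire's actual API, which appears in the screenshot that follows.

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Illustrative sketch only; type and member names are assumptions,
// not Micronaire's actual interface (see the screenshot below).
public interface IRagPipeline
{
    // Answer a question and also surface the context chunks that were
    // retrieved to ground that answer, so the evaluator can inspect both.
    Task<(string Answer, IReadOnlyList<string> RetrievedContext)> AskAsync(
        string question,
        CancellationToken cancellationToken = default);
}
```

Exposing the retrieved context alongside the answer is what lets the evaluator score retrieval and generation separately.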
The interface is shown below:<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/image-1.png\"><img decoding=\"async\" class=\"alignnone wp-image-3484 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/image-1.png\" alt=\"Image image 1\" width=\"1675\" height=\"658\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-1.png 1675w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-1-300x118.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-1-1024x402.png 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-1-768x302.png 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-1-1536x603.png 1536w\" sizes=\"(max-width: 1675px) 100vw, 1675px\" \/><\/a><\/p>\n<p>Next, users must generate a set of ground truth questions and answers for their evaluation in a JSON list. 
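For instance, drawing on the Romeo and Juliet demo, such a file could look like the snippet below. The property names here are assumptions; the format Micronaire actually expects appears in the screenshot that follows.

```json
[
  {
    "question": "In which city is Romeo and Juliet set?",
    "answer": "Verona"
  },
  {
    "question": "Who speaks the line 'A plague o' both your houses'?",
    "answer": "Mercutio"
  }
]
```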
A simple example of this is shown below:<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/image-2.png\"><img decoding=\"async\" class=\"alignnone wp-image-3485 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/image-2.png\" alt=\"Image image 2\" width=\"1678\" height=\"379\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-2.png 1678w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-2-300x68.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-2-1024x231.png 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-2-768x173.png 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-2-1536x347.png 1536w\" sizes=\"(max-width: 1678px) 100vw, 1678px\" \/><\/a><\/p>\n<p>Users must register Micronaire with dependency injection to allow for Micronaire objects to be created:<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/image-3.png\"><img decoding=\"async\" class=\"alignnone wp-image-3486 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/image-3.png\" alt=\"Image image 3\" width=\"1669\" height=\"112\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-3.png 1669w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-3-300x20.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-3-1024x69.png 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-3-768x52.png 
768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-3-1536x103.png 1536w\" sizes=\"(max-width: 1669px) 100vw, 1669px\" \/><\/a><\/p>\n<p>Finally, users can grab the Micronaire evaluator from the container and call the evaluation function, passing in their RAG pipeline as well as the path to their ground truth data:<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/image-4.png\"><img decoding=\"async\" class=\"alignnone wp-image-3487 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/image-4.png\" alt=\"Image image 4\" width=\"1677\" height=\"82\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-4.png 1677w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-4-300x15.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-4-1024x50.png 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-4-768x38.png 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/image-4-1536x75.png 1536w\" sizes=\"(max-width: 1677px) 100vw, 1677px\" \/><\/a><\/p>\n<h4>Metrics<\/h4>\n<p>Micronaire generates a set of metrics across different categories, one category per evaluation approach implemented in the library. Most of these approaches use claim evaluation (see below) to pull out fine-grained claims made by the pipeline under evaluation.<\/p>\n<h4>LLM Evaluation<\/h4>\n<p>LLM evaluation uses an LLM to judge the performance of the RAG pipeline. Because it relies on direct LLM calls, the quality of the results depends on the evaluating LLM, so using the strongest model available is best. 
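As a rough illustration of this direct-call style, a groundedness check can be phrased as a single Semantic Kernel prompt invocation. The prompt wording, the 1-5 scale, and the class name below are all assumptions for the sketch, not Micronaire's actual prompts.

```csharp
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Illustrative sketch: scoring groundedness with a direct LLM call
// through Semantic Kernel. Prompt text and scale are assumptions.
public static class GroundednessScorer
{
    public static async Task<int> ScoreAsync(Kernel kernel, string context, string answer)
    {
        // {{$context}} / {{$answer}} are Semantic Kernel template variables,
        // filled in from the KernelArguments passed alongside the prompt.
        var result = await kernel.InvokePromptAsync(
            """
            On a scale of 1 (not grounded) to 5 (fully grounded), rate how well
            the ANSWER is supported by the CONTEXT. Respond with the number only.
            CONTEXT: {{$context}}
            ANSWER: {{$answer}}
            """,
            new KernelArguments { ["context"] = context, ["answer"] = answer });

        return int.Parse(result.ToString().Trim());
    }
}
```

Asking the model to respond with the number only keeps the score easy to parse, though production code would want to handle malformed replies.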
The following are the metrics that are generated, which are inspired by work done here: <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-studio\/concepts\/evaluation-metrics-built-in?tabs=warning#ai-assisted-coherence\">Evaluation and monitoring metrics for generative AI &#8211; Azure AI Studio | Microsoft Learn<\/a>.<\/p>\n<h4>Groundedness<\/h4>\n<p>Measures how well the model&#8217;s generated answers align with information from the source data (user-defined context). Higher groundedness is better.<\/p>\n<h4>Relevance<\/h4>\n<p>Relevance measures the extent to which the model&#8217;s generated responses are pertinent and directly related to the queries given. Higher relevance is better.<\/p>\n<h4>Coherence<\/h4>\n<p>Measures how well the language model can produce output that flows smoothly, reads naturally, and resembles human-like language. A higher coherence is better.<\/p>\n<h4>Fluency<\/h4>\n<p>Measures the grammatical proficiency of a generative AI&#8217;s predicted answer. Higher fluency is better.<\/p>\n<h4>Retrieval Score<\/h4>\n<p>Measures the extent to which the model&#8217;s retrieved documents are pertinent and directly related to the queries given. Higher retrieval scores are better.<\/p>\n<h4>Similarity<\/h4>\n<p>Measures the similarity between a source data (ground truth) sentence and the generated response by an AI model. A higher similarity is better.<\/p>\n<h3>Overall Claim Evaluation<\/h3>\n<p>Overall claim evaluation uses extracted claims to evaluate the overall performance of the RAG pipeline. It uses the following metrics:<\/p>\n<h4>Precision<\/h4>\n<p>Precision tracks the proportion of correct claims to the total number of claims from the generated answer.<\/p>\n<h4>Recall<\/h4>\n<p>Recall tracks the proportion of correct claims to the total number of ground truth claims.<\/p>\n<h4>F1 Score<\/h4>\n<p>The F1 Score is the harmonic mean of the precision and recall scores. 
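Writing C_resp for the claims extracted from the generated answer, C_gt for the ground truth claims, and C_correct for the response claims entailed by the ground truth, the three metrics can be written as:

```latex
\mathrm{Precision} = \frac{|C_{\mathrm{correct}}|}{|C_{\mathrm{resp}}|},
\qquad
\mathrm{Recall} = \frac{|C_{\mathrm{correct}}|}{|C_{\mathrm{gt}}|},
\qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```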
It can be used as an overall performance metric for the RAG pipeline.<\/p>\n<h3>Retrieval Claim Evaluation<\/h3>\n<p>Retrieval claim evaluation uses extracted claims to evaluate just the retrieval (the R in RAG) part of the RAG pipeline. It uses the following metrics:<\/p>\n<h4>Claim Recall<\/h4>\n<p>This is the proportion of claims in the ground-truth answer that are entailed in the retrieved context by the retriever. We say a ground-truth answer claim is covered if it is entailed in the retrieved context claims.<\/p>\n<h3><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/claimsrecall.png\"><img decoding=\"async\" class=\"alignnone wp-image-3488 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/claimsrecall.png\" alt=\"Image claimsrecall\" width=\"658\" height=\"88\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/claimsrecall.png 658w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/claimsrecall-300x40.png 300w\" sizes=\"(max-width: 658px) 100vw, 658px\" \/><\/a><\/h3>\n<h4>Context Precision<\/h4>\n<p>This is the proportion of claims in the retrieved context that are entailed in the ground-truth answer. 
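In symbols, following the definition just given:

```latex
\text{Context Precision} =
\frac{\lvert \{\text{context claims entailed in the ground-truth answer}\} \rvert}
     {\lvert \{\text{context claims}\} \rvert}
```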
A context claim is relevant if it is entailed in the ground-truth answer claims.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/claimspercision.png\"><img decoding=\"async\" class=\"alignnone wp-image-3489 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/claimspercision.png\" alt=\"Image claimspercision\" width=\"663\" height=\"82\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/claimspercision.png 663w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/claimspercision-300x37.png 300w\" sizes=\"(max-width: 663px) 100vw, 663px\" \/><\/a><\/p>\n<h3><a name=\"_Toc178581359\"><\/a>Generation Claim Evaluation<\/h3>\n<p>Generation claim evaluation uses extracted claims to evaluate just the generation (the G in RAG) part of the RAG pipeline. It uses the following metrics defined in <a href=\"https:\/\/github.com\/amazon-science\/RAGChecker\">RAGChecker<\/a>:<\/p>\n<h4>Faithfulness<\/h4>\n<p>This is a measure of how faithfully the generator uses the retrieved context to generate the final response. 
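Equivalently, as a formula over extracted claims:

```latex
\text{Faithfulness} =
\frac{\lvert \{\text{response claims entailed in the retrieved context}\} \rvert}
     {\lvert \{\text{response claims}\} \rvert}
```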
A higher faithfulness means a better pipeline, since it means the generator is actually using the retrieved context.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/faithfulness.png\"><img decoding=\"async\" class=\"alignnone wp-image-3483 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/faithfulness.png\" alt=\"Image faithfulness\" width=\"690\" height=\"76\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/faithfulness.png 690w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/faithfulness-300x33.png 300w\" sizes=\"(max-width: 690px) 100vw, 690px\" \/><\/a><\/p>\n<h4>Relevant Noise Sensitivity<\/h4>\n<p>This is a measure of how sensitive the generator is to noise mixed with useful information in a relevant retrieved context. A lower noise sensitivity is better.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/relevant.png\"><img decoding=\"async\" class=\"alignnone wp-image-3482 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/relevant.png\" alt=\"Image relevant\" width=\"853\" height=\"78\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/relevant.png 853w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/relevant-300x27.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/relevant-768x70.png 768w\" sizes=\"(max-width: 853px) 100vw, 853px\" \/><\/a><\/p>\n<h4>Irrelevant Noise Sensitivity<\/h4>\n<p>This is a measure of how sensitive the generator is to noise in an irrelevant retrieved context. 
A lower noise sensitivity is better.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/Irrelevant.png\"><img decoding=\"async\" class=\"alignnone wp-image-3481 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/Irrelevant.png\" alt=\"Image Irrelevant\" width=\"888\" height=\"73\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/Irrelevant.png 888w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/Irrelevant-300x25.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/Irrelevant-768x63.png 768w\" sizes=\"(max-width: 888px) 100vw, 888px\" \/><\/a><\/p>\n<h4>Hallucination<\/h4>\n<p>This is a measure of the propensity of the generator to make up incorrect claims with no backing in the retrieved context. A lower hallucination rate is better.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/Hallucination.png\"><img decoding=\"async\" class=\"alignnone wp-image-3480 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/Hallucination.png\" alt=\"Image Hallucination\" width=\"789\" height=\"73\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/Hallucination.png 789w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/Hallucination-300x28.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/Hallucination-768x71.png 768w\" sizes=\"(max-width: 789px) 100vw, 789px\" \/><\/a><\/p>\n<h4>Self-Knowledge Score<\/h4>\n<p>This is a measure of the generator\u2019s ability to produce correct claims that are not backed by the retrieved context. 
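Both hallucination and self-knowledge count response claims that are not entailed in the retrieved context, split by whether the claim is incorrect or correct:

```latex
\text{Hallucination} =
\frac{\lvert \{\text{incorrect response claims not entailed in the retrieved context}\} \rvert}
     {\lvert \{\text{response claims}\} \rvert},
\qquad
\text{Self-Knowledge} =
\frac{\lvert \{\text{correct response claims not entailed in the retrieved context}\} \rvert}
     {\lvert \{\text{response claims}\} \rvert}
```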
A higher Self-Knowledge score is better.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/self-knowledge.png\"><img decoding=\"async\" class=\"alignnone wp-image-3479 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/self-knowledge.png\" alt=\"Image self knowledge\" width=\"795\" height=\"82\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/self-knowledge.png 795w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/self-knowledge-300x31.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/self-knowledge-768x79.png 768w\" sizes=\"(max-width: 795px) 100vw, 795px\" \/><\/a><\/p>\n<h4>Context Utilization<\/h4>\n<p>This is a measure of how effectively the generator uses retrieved context to generate the ground truth answer.<\/p>\n<h4 aria-level=\"2\"><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/final-.png\"><img decoding=\"async\" class=\"alignnone wp-image-3478 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/10\/final-.png\" alt=\"Image final\" width=\"802\" height=\"99\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/final-.png 802w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/final--300x37.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/10\/final--768x95.png 768w\" sizes=\"(max-width: 802px) 100vw, 802px\" \/><\/a><\/h4>\n<h4 aria-level=\"2\"><span data-contrast=\"none\">Claim Extraction<\/span><span 
><\/span><\/h4>\n<p>Claim extraction is the process of pulling a set of fine-grained (potentially false) claims out of a text. Micronaire achieves this through a chain-of-thought prompt that first extracts (subject, relationship, object) triples and then combines each triple into a simple sentence representing a single claim.<\/p>\n<h4><strong>Conclusion<\/strong><\/h4>\n<p>We want to thank the Project Micronaire hackathon team for joining the blog today and sharing their work.\u00a0Please reach out if you have any questions or feedback through our\u00a0<a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/discussions\/categories\/general\" target=\"_blank\" rel=\"noopener\">Semantic Kernel GitHub Discussion Channel<\/a>. We look forward to hearing from you!\u00a0We would also love your support: if you\u2019ve enjoyed using Semantic Kernel, give us a star on\u00a0<a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\" target=\"_blank\" rel=\"noopener\">GitHub<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>During our internal Microsoft Hackathon, there were a number of projects using Semantic Kernel. Today we&#8217;d like to highlight one of the projects focused on using Semantic Kernel and the new Vector Store abstraction. I&#8217;m going to turn it over to the Hackathon team to share more about their project and work using Semantic Kernel. 
[&hellip;]<\/p>\n","protected":false},"author":149071,"featured_media":2302,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[48,63,9],"class_list":["post-3477","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-semantic-kernel","tag-ai","tag-microsoft-semantic-kernel","tag-semantic-kernel"],"acf":[],"blog_post_summary":"<p>During our internal Microsoft Hackathon, there were a number of projects using Semantic Kernel. Today we&#8217;d like to highlight one of the projects focused on using Semantic Kernel and the new Vector Store abstraction. I&#8217;m going to turn it over to the Hackathon team to share more about their project and work using Semantic Kernel. [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/3477","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/users\/149071"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/comments?post=3477"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/3477\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media\/2302"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media?parent=3477"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/categories?post=3477"},{"taxonomy":"post_tag","embeddabl
e":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/tags?post=3477"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}