{"id":4184,"date":"2025-02-18T15:56:48","date_gmt":"2025-02-18T23:56:48","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/semantic-kernel\/?p=4184"},"modified":"2025-02-19T20:00:51","modified_gmt":"2025-02-20T04:00:51","slug":"using-openais-o3-mini-reasoning-model-in-semantic-kernel","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/agent-framework\/using-openais-o3-mini-reasoning-model-in-semantic-kernel\/","title":{"rendered":"Using OpenAI\u2019s o3-mini Reasoning Model in Semantic Kernel"},"content":{"rendered":"<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2025\/02\/o3-mini-blog.jpg\"><img decoding=\"async\" class=\"wp-image-4186 aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2025\/02\/o3-mini-blog-1024x585.jpg\" alt=\"Image o3 mini blog\" width=\"928\" height=\"530\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2025\/02\/o3-mini-blog-1024x585.jpg 1024w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2025\/02\/o3-mini-blog-300x171.jpg 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2025\/02\/o3-mini-blog-768x439.jpg 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2025\/02\/o3-mini-blog-1536x878.jpg 1536w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2025\/02\/o3-mini-blog.jpg 1792w\" sizes=\"(max-width: 928px) 100vw, 928px\" \/><\/a><\/p>\n<p><a href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=We%E2%80%99re%20releasing%20OpenAI%20o3,mini\">OpenAI\u2019s\u00a0<strong>o3-mini<\/strong>\u00a0is a newly released\u00a0<strong>small reasoning model<\/strong><\/a>\u00a0(launched January 2025) that delivers advanced problem-solving capabilities at a fraction of the cost of previous models. 
It excels in STEM domains (science, math, coding) while maintaining\u00a0<strong>low latency and cost<\/strong> similar to the earlier o1-mini model.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"4\">This model is also available in\u00a0<a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-the-availability-of-the-o3-mini-reasoning-model-in-microsoft-azure-openai-service\" data-href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-the-availability-of-the-o3-mini-reasoning-model-in-microsoft-azure-openai-service\">Azure OpenAI Service<\/a>, emphasizing its\u00a0<strong>efficiency gains and new features<\/strong>\u00a0like reasoning effort control and tool use.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"6\">Throughout this post, we&#8217;ll explore how to use\u00a0<code>o3-mini<\/code>\u00a0and other reasoning models with Semantic Kernel in both C# and Python.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"8\"><a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-the-availability-of-the-o3-mini-reasoning-model-in-microsoft-azure-openai-service\" data-href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-the-availability-of-the-o3-mini-reasoning-model-in-microsoft-azure-openai-service\"><strong>Key Features of OpenAI o3-mini:<\/strong><\/a><\/p>\n<ul class=\"code-line\" dir=\"auto\" data-line=\"10\">\n<li class=\"code-line\" dir=\"auto\" data-line=\"10\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"10\"><strong>Reasoning Effort Control:<\/strong>\u00a0Adjust the model\u2019s \u201cthinking\u201d level (low, medium, high) to balance response\u00a0<strong>speed vs. depth<\/strong>. 
This parameter lets the model spend more time on complex queries when set to high,\u00a0<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-services\/openai\/how-to\/reasoning#:~:text=the%20message%20response%20content%20but,reasoning_tokens\" data-href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-services\/openai\/how-to\/reasoning#:~:text=the%20message%20response%20content%20but,reasoning_tokens\">using additional hidden reasoning tokens<\/a>\u00a0for a more thorough answer.<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"11\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"11\"><strong>Structured Outputs:<\/strong>\u00a0Supports JSON Schema-based output constraints, enabling the model to produce well-defined JSON or other structured formats for downstream automation.<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"12\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"12\"><strong>Function and Tool Integration:<\/strong>\u00a0Natively calls functions and external tools (similar to previous OpenAI models), making it easier to build AI agents that perform actions or calculations as part of their responses.<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"13\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"13\"><strong>Developer Messages:<\/strong>\u00a0Introduces a new\u00a0<code>\"developer\"<\/code>\u00a0role (replacing the old system role) for instructions, allowing more flexible and explicit system prompts. 
(Azure OpenAI ensures backward compatibility by mapping legacy system messages to this new role.)<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"14\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"14\"><strong>Enhanced STEM Performance:<\/strong>\u00a0Improved abilities in coding, mathematics, and scientific reasoning, outperforming earlier models on many technical benchmarks.<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"16\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"16\"><strong>Performance &amp; Efficiency:<\/strong>\u00a0Early evaluations show that o3-mini provides\u00a0<strong>more accurate reasoning and faster responses<\/strong>\u00a0than its predecessors. OpenAI\u2019s internal testing reported\u00a0<em>39% fewer major errors<\/em>\u00a0on challenging questions compared to the older o1-mini, while also delivering answers about\u00a0<em>24% faster<\/em>. In fact, with medium effort, o3-mini matches the larger o1 model\u2019s performance on tough math and science problems, and at\u00a0<strong>high effort it can even outperform the full o1 model<\/strong>\u00a0on certain tasks (<a href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=Mathematics%3A%20With%20low%20reasoning%20effort%2C,consensus%29%20with%2064%20samples\" data-href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=Mathematics%3A%20With%20low%20reasoning%20effort%2C,consensus%29%20with%2064%20samples\">OpenAI o3-mini | OpenAI<\/a>). 
These gains come with substantial cost savings: o3-mini is roughly\u00a0<em>63% cheaper to use than<\/em>\u00a0o1-mini, thanks to optimizations that dramatically reduce the per-token pricing.<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"18\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"18\"><strong>Pricing:<\/strong>\u00a0One of o3-mini\u2019s biggest appeals is its\u00a0<a href=\"https:\/\/openai.com\/index\/openai-o3-mini\/\" data-href=\"https:\/\/openai.com\/index\/openai-o3-mini\/\"><strong>cost-effectiveness<\/strong><\/a>. According to OpenAI\u2019s pricing, o3-mini usage is billed at about\u00a0<strong>$1.10 per million input tokens<\/strong>\u00a0and\u00a0<strong>$4.40 per million output tokens<\/strong>.<\/p>\n<ul class=\"code-line\" dir=\"auto\" data-line=\"20\">\n<li class=\"code-line\" dir=\"auto\" data-line=\"20\"><a href=\"https:\/\/platform.openai.com\/docs\/pricing\" data-href=\"https:\/\/platform.openai.com\/docs\/pricing\">Open AI Pricing<\/a><\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"21\"><a href=\"https:\/\/azure.microsoft.com\/en-us\/pricing\/details\/cognitive-services\/openai-service\/\" data-href=\"https:\/\/azure.microsoft.com\/en-us\/pricing\/details\/cognitive-services\/openai-service\/\">Azure OpenAI Service Pricing<\/a><\/li>\n<\/ul>\n<p class=\"code-line\" dir=\"auto\" data-line=\"23\"><em>You can also get a 50% discount for cached or batched tokens, further lowering effective costs in certain scenarios<\/em>.<\/p>\n<\/li>\n<\/ul>\n<p class=\"code-line\" dir=\"auto\" data-line=\"25\">Now, let\u2019s see\u00a0<strong>how to use o3-mini in Semantic Kernel<\/strong>. Because o3-mini follows the same OpenAI Chat Completion API format, we can plug it into SK using the existing OpenAI connector.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"27\">We\u2019ll demonstrate minimal code in C# and Python to send a prompt to the o3-mini model and get a response. 
We\u2019ll also show how to configure the important\u00a0<code>reasoning_effort<\/code>\u00a0setting to get the best results from the model.<\/p>\n<h4 id=\"in-net-c\" class=\"code-line\" dir=\"auto\" data-line=\"29\">In .NET (C#)<\/h4>\n<p class=\"code-line\" dir=\"auto\" data-line=\"31\">For a C# project using Semantic Kernel, you can add o3-mini as an OpenAI chat completion service. Make sure you have your OpenAI API key (or Azure OpenAI endpoint and key if using Azure). Using the SK connectors, we create a chat completion service pointing to the o3-mini model and (optionally) specify a high reasoning effort for more complex queries:<\/p>\n<pre><code class=\"code-line language-csharp\" dir=\"auto\" data-line=\"33\"><span class=\"hljs-keyword\">using<\/span> Microsoft.SemanticKernel.ChatCompletion;\r\n<span class=\"hljs-keyword\">using<\/span> Microsoft.SemanticKernel.Connectors.OpenAI;\r\n\r\n<span class=\"hljs-meta\">#<span class=\"hljs-keyword\">pragma<\/span> <span class=\"hljs-keyword\">warning<\/span> disable SKEXP0010 \/\/ Reasoning effort is still in preview for OpenAI SDK.<\/span>\r\n\r\n <span class=\"hljs-comment\">\/\/ Initialize the OpenAI chat completion service with the o3-mini model.<\/span>\r\n <span class=\"hljs-keyword\">var<\/span> chatService = <span class=\"hljs-keyword\">new<\/span> OpenAIChatCompletionService(\r\n     modelId: <span class=\"hljs-string\">\"o3-mini\"<\/span>,  <span class=\"hljs-comment\">\/\/ The reasoning model ID<\/span>\r\n     apiKey: <span class=\"hljs-string\">\"YOUR_OPENAI_API_KEY\"<\/span>  <span class=\"hljs-comment\">\/\/ Your OpenAI API key<\/span>\r\n );\r\n\r\n <span class=\"hljs-comment\">\/\/ (If using Azure OpenAI Service, use AzureOpenAIChatCompletionService<\/span>\r\n <span class=\"hljs-comment\">\/\/ with your endpoint URI, API key, and deployed model name instead.)<\/span>\r\n\r\n <span class=\"hljs-comment\">\/\/ Create a new chat history and add a user message to prompt the model.<\/span>\r\n ChatHistory 
chatHistory = [];\r\n chatHistory.AddUserMessage(<span class=\"hljs-string\">\"Why is the sky blue in one sentence?\"<\/span>);\r\n\r\n <span class=\"hljs-comment\">\/\/ Configure reasoning effort for the chat completion request.<\/span>\r\n <span class=\"hljs-keyword\">var<\/span> settings = <span class=\"hljs-keyword\">new<\/span> OpenAIPromptExecutionSettings { ReasoningEffort = <span class=\"hljs-string\">\"high\"<\/span> };\r\n\r\n <span class=\"hljs-comment\">\/\/ Send the chat completion request to o3-mini<\/span>\r\n <span class=\"hljs-keyword\">var<\/span> reply = <span class=\"hljs-keyword\">await<\/span> chatService.GetChatMessageContentAsync(chatHistory, settings);\r\n Console.WriteLine(<span class=\"hljs-string\">\"o3-mini reply: \"<\/span> + reply);\r\n<\/code><\/pre>\n<p class=\"code-line\" dir=\"auto\" data-line=\"60\"><em>Note:<\/em>\u00a0If you\u2019re using\u00a0<strong>Azure OpenAI<\/strong>, the setup is very similar \u2013 you would use\u00a0<code>AzureOpenAIChatCompletionService<\/code>\u00a0instead, providing your\u00a0<code>&lt;deployment-name&gt;<\/code>,\u00a0<code>&lt;https:\/\/your-endpoint&gt;<\/code>, and\u00a0<code>&lt;api-key&gt;<\/code>\u00a0values. 
The\u00a0<code>reasoning_effort<\/code>\u00a0parameter is\u00a0<a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-the-availability-of-the-o3-mini-reasoning-model-in-microsoft-azure-openai-service\/#:~:text=,%E2%80%9Cdeveloper%E2%80%9D%20attribute%20replaces%20the%20system\" data-href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-the-availability-of-the-o3-mini-reasoning-model-in-microsoft-azure-openai-service\/#:~:text=,%E2%80%9Cdeveloper%E2%80%9D%20attribute%20replaces%20the%20system\">supported in the Azure OpenAI Chat Completion API<\/a>\u00a0as well, but make sure your Azure OpenAI SDK targets a REST API version (2024-12-01-preview or later) that includes this parameter.<\/p>\n<h4 id=\"python\" class=\"code-line\" dir=\"auto\" data-line=\"62\">Python<\/h4>\n<p class=\"code-line\" dir=\"auto\" data-line=\"64\">Using o3-mini in Python with Semantic Kernel is just as straightforward. We can utilize the SK OpenAI connector classes to call the model. 
Below is an example of how to use Semantic Kernel targeting o3-mini and enabling high reasoning effort:<\/p>\n<pre><code>import asyncio\r\n\r\nfrom semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion, OpenAIChatPromptExecutionSettings\r\nfrom semantic_kernel.contents import ChatHistory\r\n\r\n\r\nasync def main():\r\n    # Initialize the OpenAI chat completion service with the o3-mini model\r\n    chat_service = OpenAIChatCompletion(ai_model_id=\"o3-mini\", instruction_role=\"developer\")\r\n\r\n    # Start a chat history and add a user prompt\r\n    chat_history = ChatHistory()\r\n    chat_history.add_developer_message(\"You are a helpful assistant.\")\r\n    chat_history.add_user_message(\"Why is the sky blue in one sentence?\")\r\n\r\n    # Create settings asking o3-mini to use high reasoning effort\r\n    # for a more thorough response\r\n    settings = OpenAIChatPromptExecutionSettings(reasoning_effort=\"high\")\r\n\r\n    # Get the model's response\r\n    response = await chat_service.get_chat_message_content(chat_history, settings)\r\n    print(\"o3-mini reply:\", response)\r\n\r\n\r\n# Run the async main function\r\nif __name__ == \"__main__\":\r\n    asyncio.run(main())\r\n<\/code><\/pre>\n<p>Just like that, with only a few lines of code in C# or Python, you can\u00a0<strong>start leveraging o3-mini\u2019s reasoning capabilities<\/strong>\u00a0within your Semantic Kernel applications. Whether you\u2019re building an AI agent that needs rigorous problem-solving or a chat assistant that can handle complex queries, o3-mini provides a powerful yet cost-efficient option. 
The\u00a0<code>reasoning_effort<\/code>\u00a0knob gives you fine control \u2013 for example, use high effort for difficult questions where accuracy matters most, and medium or low effort for casual or time-sensitive interactions (<a href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=window%29%20platform,when%20latency%20is%20a%20concern\" data-href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=window%29%20platform,when%20latency%20is%20a%20concern\">OpenAI o3-mini | OpenAI<\/a>).<\/p>\n<p>We have also created a Python sample in Semantic Kernel that uses reasoning models: <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/blob\/main\/python\/samples\/concepts\/reasoning\/simple_reasoning.py\">Simple reasoning<\/a>.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"100\">We encourage you to experiment with o3-mini in your SK workflows. Its combination of\u00a0<strong>advanced reasoning skills, developer-friendly features, and low operational cost<\/strong>\u00a0makes it an exciting addition to the toolkit. With Semantic Kernel abstracting away much of the integration hassle, swapping in o3-mini is seamless. Give it a try and see how it\u00a0<strong>elevates your AI-driven applications<\/strong>\u00a0\u2013 whether you\u2019re generating code, solving math problems, or orchestrating complex multi-step AI tasks. 
Happy building!<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"102\"><strong>References:<\/strong><\/p>\n<ul class=\"code-line\" dir=\"auto\" data-line=\"104\">\n<li style=\"list-style-type: none;\">\n<ul class=\"code-line\" dir=\"auto\" data-line=\"104\">\n<li class=\"code-line\" dir=\"auto\" data-line=\"104\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"104\">OpenAI Blog \u2013\u00a0<em>\u201cOpenAI o3-mini: Pushing the frontier of cost-effective reasoning.\u201d<\/em>\u00a0(Jan 31, 2025) (<a href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=We%E2%80%99re%20releasing%20OpenAI%20o3,mini\" data-href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=We%E2%80%99re%20releasing%20OpenAI%20o3,mini\">OpenAI o3-mini | OpenAI<\/a>) (<a href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=window%29%20platform,when%20latency%20is%20a%20concern\" data-href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=window%29%20platform,when%20latency%20is%20a%20concern\">OpenAI o3-mini | OpenAI<\/a>)<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"105\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"105\">Microsoft Azure AI Blog \u2013\u00a0<em>\u201cAnnouncing the availability of the o3-mini reasoning model in Azure OpenAI Service.\u201d<\/em>\u00a0(<a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-the-availability-of-the-o3-mini-reasoning-model-in-microsoft-azure-openai-service\" data-href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/announcing-the-availability-of-the-o3-mini-reasoning-model-in-microsoft-azure-openai-service\">Announcing the availability of the o3-mini reasoning model in Microsoft Azure OpenAI Service | Microsoft Azure Blog<\/a>)<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"107\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"107\">OpenAI API Documentation \u2013\u00a0<em>Reasoning models and\u00a0<code>reasoning_effort<\/code>\u00a0parameter<\/em>\u00a0(<a 
href=\"https:\/\/platform.openai.com\/docs\/api-reference\/chat\/create#chat-create-reasoning_effort\" data-href=\"https:\/\/platform.openai.com\/docs\/api-reference\/chat\/create#chat-create-reasoning_effort\">Open AI | API Reference<\/a>)<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"109\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"109\">Performance Insights \u2013 (<a href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=Mathematics%3A%20With%20low%20reasoning%20effort%2C,consensus%29%20with%2064%20samples\" data-href=\"https:\/\/openai.com\/index\/openai-o3-mini\/#:~:text=Mathematics%3A%20With%20low%20reasoning%20effort%2C,consensus%29%20with%2064%20samples\">OpenAI o3-mini | OpenAI<\/a>)<\/p>\n<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"111\">\n<p class=\"code-line\" dir=\"auto\" data-line=\"111\">Pricing Details \u2013 (<a href=\"https:\/\/platform.openai.com\/docs\/pricing\" data-href=\"https:\/\/platform.openai.com\/docs\/pricing\">Open AI Pricing<\/a>), (<a href=\"https:\/\/azure.microsoft.com\/en-us\/pricing\/details\/cognitive-services\/\" data-href=\"https:\/\/azure.microsoft.com\/en-us\/pricing\/details\/cognitive-services\/\">Azure OpenAI Service Pricing<\/a>)<\/p>\n<\/li>\n<li dir=\"auto\" data-line=\"109\">Additional Semantic Kernel Python Reasoning Code Sample with Function Calling located <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/blob\/main\/python\/samples\/concepts\/reasoning\/simple_reasoning_function_calling.py\">here<\/a>.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI\u2019s\u00a0o3-mini\u00a0is a newly released\u00a0small reasoning model\u00a0(launched January 2025) that delivers advanced problem-solving capabilities at a fraction of the cost of previous models. It excels in STEM domains (science, math, coding) while maintaining\u00a0low latency and cost similar to the earlier o1-mini model. 
This model is also available as\u00a0Azure OpenAI Service, emphasizing its\u00a0efficiency gains and new features\u00a0like [&hellip;]<\/p>\n","protected":false},"author":63983,"featured_media":2364,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[47,17,1],"tags":[48,82,63,13,9],"class_list":["post-4184","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-announcement","category-announcements","category-semantic-kernel","tag-ai","tag-announcement","tag-microsoft-semantic-kernel","tag-openai","tag-semantic-kernel"],"acf":[],"blog_post_summary":"<p>OpenAI\u2019s\u00a0o3-mini\u00a0is a newly released\u00a0small reasoning model\u00a0(launched January 2025) that delivers advanced problem-solving capabilities at a fraction of the cost of previous models. It excels in STEM domains (science, math, coding) while maintaining\u00a0low latency and cost similar to the earlier o1-mini model. 
This model is also available as\u00a0Azure OpenAI Service, emphasizing its\u00a0efficiency gains and new features\u00a0like [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/4184","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/users\/63983"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/comments?post=4184"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/4184\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media\/2364"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media?parent=4184"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/categories?post=4184"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/tags?post=4184"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}