{"id":1196,"date":"2025-09-03T08:00:50","date_gmt":"2025-09-03T15:00:50","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/foundry\/?p=1196"},"modified":"2025-09-04T11:14:43","modified_gmt":"2025-09-04T18:14:43","slug":"whats-new-in-azure-ai-foundry-august-2025","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/foundry\/whats-new-in-azure-ai-foundry-august-2025\/","title":{"rendered":"What&#8217;s new in Azure AI Foundry | August 2025"},"content":{"rendered":"<h2>TL;DR<\/h2>\n<p><strong>Models<\/strong>: GPT\u20115 family now in Azure AI Foundry (gpt\u20115 requires registration; launch regions: East US 2, Sweden Central). Also new: Sora API updates (image\u2192video, inpainting; Global Standard in East US 2 and Sweden Central), Mistral Document AI (OCR), Black Forest Labs FLUX.1 Kontext [pro] and FLUX1.1 [pro], OpenAI gpt\u2011oss (with Foundry Local support), and VibeVoice long\u2011form TTS (coming soon).<\/p>\n<p><strong>Agents<\/strong>: Browser Automation tool (public preview) and expanded Agent Service regional availability (Brazil South, Germany West Central, Italy North, South Central US).<\/p>\n<p><strong>Tools<\/strong>: Browser Automation integrates with Microsoft Playwright Testing Workspaces; refreshed MCP samples; updated Deep Research guidance and samples.<\/p>\n<p><strong>Platform<\/strong>: Model Router adds GPT\u20115 support (limited access); Responses API is GA. August updates across Python, .NET, Java, and JavaScript\/TypeScript; Agent Service Java SDK enters public preview. Doc updates include new status dashboard (Preview), API lifecycle v1 guidance, updated quotas\/limits (incl. 
GPT\u20115 and Model Router), tracing\/observability, CMK clarifications, enterprise chat web app tutorial, and region\/model availability matrices.<\/p>\n<h2>\ud83d\udcec Subscribe to &#8220;What\u2019s New in Foundry&#8221; monthly<\/h2>\n<p>Prefer a nudge each month?<\/p>\n<ol>\n<li>Copy this RSS feed URL<\/li>\n<li>Use a preferred RSS Reader like Feedly<\/li>\n<li>Add this \ud83d\udc47\ud83c\udffb URL to follow<\/li>\n<\/ol>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-primary\" href=\"https:\/\/devblogs.microsoft.com\/foundry\/category\/whats-new\/feed\/\" target=\"_blank\" rel=\"noopener\">Subscribe\u00a0via\u00a0RSS<\/a><\/div>\n<h2>Models<\/h2>\n<h3>GPT\u20115 arrives: models, pricing, regions, and access<\/h3>\n<p><code>gpt-5<\/code>, <code>gpt-5-mini<\/code>, <code>gpt-5-nano<\/code>, and <code>gpt-5-chat<\/code> are now available in Azure AI Foundry. Registration is required for <code>gpt-5<\/code> access; mini, nano, and chat do not require registration.<\/p>\n<h4>Details<\/h4>\n<ul>\n<li>Roles and context windows\n<ul>\n<li><code>gpt-5<\/code>: next\u2011gen reasoning with long horizon tasks; up to ~272K tokens context.<\/li>\n<li><code>gpt-5-chat<\/code>: multimodal conversational model; ~128K tokens context.<\/li>\n<li><code>gpt-5-mini<\/code>: fast, tool\u2011calling\/real\u2011time friendly baseline.<\/li>\n<li><code>gpt-5-nano<\/code>: ultra\u2011low\u2011latency Q&amp;A and lightweight tasks.<\/li>\n<\/ul>\n<\/li>\n<li>Provisioned throughput: <code>gpt-5<\/code> supports provisioned throughput (PTUs); mini\/nano\/chat are available as standard deployments.<\/li>\n<li>Deployment options: Global and Data Zone (US, EU).<\/li>\n<li>API: Available via the v1 Responses API (recommended) and Chat Completions.<\/li>\n<\/ul>\n<h4>Freeform tool calling<\/h4>\n<p><iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/y43sgs-Y8-U\" allowfullscreen><\/iframe><\/p>\n<p>Freeform tool calling in GPT\u20115 enables the model to send 
raw text payloads like Python scripts, SQL queries, or configuration files directly to external tools without needing to wrap them in structured JSON\u2014eliminating rigid schemas and reducing integration overhead. In this episode, April shows how the model executes model\u2011generated SQL to compute results, routes the output to Python for formatting and visualization, and returns reproducible artifacts (CSVs, charts) from a natural\u2011language prompt.<\/p>\n<p>Try it yourself in Azure AI Foundry:<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/devblogs.microsoft.com\/foundry\/unlocking-gpt-5s-freeform-tool-calling-a-new-era-of-seamless-integration\/\" target=\"_blank\" rel=\"noopener\">Learn\u00a0more<\/a><\/div>\n<h4>Pricing (per 1M tokens)<\/h4>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>Input (G)<\/th>\n<th>Cached input (G)<\/th>\n<th>Output (G)<\/th>\n<th><\/th>\n<th>Input (DZ)<\/th>\n<th>Cached input (DZ)<\/th>\n<th>Output (DZ)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>gpt\u20115<\/td>\n<td>$1.25<\/td>\n<td>$0.125<\/td>\n<td>$10.00<\/td>\n<td><\/td>\n<td>$1.375<\/td>\n<td>$0.1375<\/td>\n<td>$11.00<\/td>\n<\/tr>\n<tr>\n<td>gpt\u20115\u2011mini<\/td>\n<td>$0.25<\/td>\n<td>$0.025<\/td>\n<td>$2.00<\/td>\n<td><\/td>\n<td>$0.275<\/td>\n<td>$0.0275<\/td>\n<td>$2.20<\/td>\n<\/tr>\n<tr>\n<td>gpt\u20115\u2011nano<\/td>\n<td>$0.05<\/td>\n<td>$0.005<\/td>\n<td>$0.40<\/td>\n<td><\/td>\n<td>$0.055<\/td>\n<td>$0.0055<\/td>\n<td>$0.44<\/td>\n<\/tr>\n<tr>\n<td>gpt\u20115\u2011chat<\/td>\n<td>$1.25<\/td>\n<td>$0.125<\/td>\n<td>$10.00<\/td>\n<td><\/td>\n<td>\u2014<\/td>\n<td>\u2014<\/td>\n<td>\u2014<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Legend: G = Global. 
DZ = Data Zone (US, EU).<\/p>\n<h4>Regions &amp; access<\/h4>\n<ul>\n<li>Regions (at launch): <code>eastus2<\/code> and <code>swedencentral<\/code><\/li>\n<li>Access: Registration is required for <code>gpt-5<\/code> (<a href=\"https:\/\/aka.ms\/oai\/gpt5access\">request access<\/a>). If you already have o3 access, no additional request is required.<\/li>\n<li><code>gpt\u20115\u2011mini<\/code>, <code>gpt\u20115\u2011nano<\/code>, and <code>gpt\u20115\u2011chat<\/code> do not require registration.<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-primary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/how-to\/reasoning\" target=\"_blank\" rel=\"noopener\">Quickstart: Reasoning models<\/a><\/div>\n<h3>Model router: GPT\u20115 series support<\/h3>\n<p>The model router now supports dynamic selection across the GPT\u20115 family for cost and quality optimization. Access is limited for the latest router version\u2014request via the GPT\u20115 access form (if you already have o3 access, no additional request is required). When the router selects a reasoning model, some parameters (for example, <code>temperature<\/code>\/<code>top_p<\/code>) may be ignored; <code>reasoning_effort<\/code> isn\u2019t settable through the router. 
Billing is based on the underlying model the router selects.<\/p>\n<h4>Quickstart: route with Chat Completions (Python)<\/h4>\n<pre><code class=\"language-python\">import os\r\nfrom openai import OpenAI\r\n\r\nclient = OpenAI(\r\n    api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"),\r\n    base_url=\"https:\/\/YOUR-RESOURCE-NAME.openai.azure.com\/openai\/v1\/\",  # use the v1 API path\r\n)\r\n\r\n# Replace with your model router deployment name\r\nresponse = client.chat.completions.create(\r\n    model=\"model-router\",\r\n    messages=[\r\n        {\"role\": \"user\", \"content\": \"Write a two-sentence product description for a smart thermostat.\"}\r\n    ],\r\n)\r\n\r\nprint(response.choices[0].message.content)<\/code><\/pre>\n<h4>Watch: Model Router demo<\/h4>\n<p><iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/2NL2XpigH0A\" allowfullscreen><\/iframe><\/p>\n<h3>Black Forest Labs FLUX image models<\/h3>\n<p>FLUX.1 Kontext [pro] and FLUX1.1 [pro] are now available \u201cDirect from Azure\u201d in Foundry Models. Kontext is multimodal\u2014text-to-image plus in\u2011context image editing\u2014supporting edit instructions, character\/style\/object reference without fine\u2011tuning, and robust multi\u2011step refinements with minimal drift (up to ~8\u00d7 faster at 1024\u00d71024). 
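<\/p>\n<p>Once a deployment exists, you can exercise it over REST with the endpoint and key from the deployment page. The sketch below is illustrative only: the route, payload field names, and environment variable names are placeholders rather than the documented contract, so copy the exact request shape from your deployment\u2019s details page or the Image Playground\u2019s code view.<\/p>\n<pre><code class=\"language-python\">import os, requests\r\n\r\n# Placeholder request shape: copy the real route and payload schema from your\r\n# deployment's details page in Azure AI Foundry; these field names are illustrative.\r\ndef build_edit_request(prompt: str, size: str = \"1024x1024\") -&gt; dict:\r\n    \"\"\"Assemble a minimal image payload for a FLUX deployment (hypothetical shape).\"\"\"\r\n    return {\"prompt\": prompt, \"size\": size, \"n\": 1}\r\n\r\n# Only call the service when an endpoint is configured.\r\nif os.environ.get(\"FLUX_ENDPOINT\"):\r\n    headers = {\"Authorization\": f\"Bearer {os.environ['FLUX_KEY']}\"}\r\n    payload = build_edit_request(\"Replace the sky with a sunset, keep the subject unchanged\")\r\n    print(requests.post(os.environ[\"FLUX_ENDPOINT\"], json=payload, headers=headers).json())<\/code><\/pre>\n<p>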
FLUX1.1 [pro] focuses on text\u2011to\u2011image generation only, achieves top-tier Elo on Artificial Analysis (evaluated as \u201cblueberry\u201d), and is significantly faster than earlier FLUX releases, with an Ultra mode up to 4 MP.<\/p>\n<h4>Model details<\/h4>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>Performance notes<\/th>\n<th>Resolution \/ modes<\/th>\n<th>Regions<\/th>\n<th>Pricing (per 1K images)<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>FLUX.1 Kontext [pro]<\/td>\n<td>Up to 8\u00d7 faster at 1024\u00d71024; robust multi\u2011edit consistency<\/td>\n<td>1024\u00d71024 (default)<\/td>\n<td><code>swedencentral<\/code>, <code>eastus2<\/code><\/td>\n<td>$40<\/td>\n<\/tr>\n<tr>\n<td>FLUX1.1 [pro]<\/td>\n<td>High Elo on Artificial Analysis; ~10s for 4 MP; up to 6\u00d7 faster than FLUX 1\u2011pro*<\/td>\n<td>Up to 4 MP (Ultra mode)<\/td>\n<td><code>swedencentral<\/code>, <code>eastus2<\/code><\/td>\n<td>$40<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>*Performance depends on configuration.<\/p>\n<h4>Get started<\/h4>\n<ul>\n<li>Find them under Foundry Models \u2192 \u201cDirect from Azure\u201d; deploy to get an endpoint\/key and try them in the Image Playground.<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/black-forest-labs-flux-1-kontext-pro-and-flux1-1-pro-now-available-in-azure-ai-f\/4434659\" target=\"_blank\" rel=\"noopener\">Read\u00a0the\u00a0announcement<\/a><\/div>\n<h3>Mistral Document AI (OCR) \u2014 serverless in Foundry<\/h3>\n<p>Unlock high\u2011fidelity, layout\u2011aware document understanding with structured outputs. Mistral Document AI combines vision + language to preserve tables, figures, and headings; returns structured JSON and markdown\u2011like tables; and supports multilingual scans, PDFs, and complex forms. 
It\u2019s sold \u201cDirect from Azure\u201d and deploys in one click as a serverless endpoint in Azure AI Foundry.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/09\/mistral-doc-structured-output.png\" alt=\"Mistral Document AI converting a scientific PDF into structured JSON with preserved tables and headings\" \/>\nSource: @MistralAI post <a href=\"https:\/\/x.com\/MistralAI\/status\/1957516008043729243\">2025-08-18<\/a><\/p>\n<h4>Watch: Mistral Document AI demo<\/h4>\n<p>See how Mistral Document AI parses complex PDFs, preserves layout, and returns structured JSON \u2014 including tables, headings, and figures. The demo walks through serverless inference in Azure AI Foundry and exporting results for downstream RAG and automation workflows.<\/p>\n<p><iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/MUu9o8tDwi0\" allowfullscreen><\/iframe><\/p>\n<h4>Details<\/h4>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>Global<\/th>\n<th>Data Zone<\/th>\n<th>Regions<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>mistral-document-ai-2505<\/td>\n<td>$3.00<\/td>\n<td>$3.30<\/td>\n<td><code>eastus2<\/code>, <code>swedencentral<\/code><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h4>Quickstart (Python, serverless REST)<\/h4>\n<pre><code class=\"language-python\">import os, base64, requests\r\n\r\nendpoint = os.environ[\"MISTRAL_DOC_AI_ENDPOINT\"]  # e.g., https:\/\/{project}.eastus2.models.ai.azure.com\/inference\r\napi_key = os.environ[\"MISTRAL_DOC_AI_KEY\"]\r\n\r\nheaders = {\r\n    \"Content-Type\": \"application\/json\",\r\n    \"Authorization\": f\"Bearer {api_key}\",\r\n}\r\n\r\nwith open(\"sample.pdf\", \"rb\") as f:\r\n    encoded = base64.b64encode(f.read()).decode(\"utf-8\")\r\n\r\npayload = {\r\n    \"model\": \"mistral-document-ai-2505\",\r\n    \"document\": {\r\n        \"type\": \"document_url\",\r\n        \"document_url\": f\"data:application\/pdf;base64,{encoded}\",\r\n   
 },\r\n}\r\n\r\nresponse = requests.post(endpoint, json=payload, headers=headers)\r\nprint(response.json()[\"pages\"][0][\"markdown\"])  # first page as markdown<\/code><\/pre>\n<p>Get started with these <a href=\"https:\/\/github.com\/azure-ai-foundry\/foundry-samples\/tree\/main\/samples\/mistral\/python\">mistral\/python<\/a> Foundry samples in our GitHub repo.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/unlocking-document-intelligence-mistral-ocr-now-available-in-azure-ai-foundry\/4401836\" target=\"_blank\" rel=\"noopener\">Read\u00a0the\u00a0announcement<\/a><\/div>\n<h3>Sora API \u2014 new updates (Preview)<\/h3>\n<p>We\u2019ve rolled out powerful new capabilities in Sora that unlock even more creative potential:<\/p>\n<h4>What\u2019s new<\/h4>\n<ul>\n<li>Image\u2011to\u2011Video support via API, including frame indexing and region\u2011specific inpainting.<\/li>\n<li>Global Standard expansion: Sora is now available in East US 2 and Sweden Central under the Global Standard SKU.<\/li>\n<\/ul>\n<h4>Get started<\/h4>\n<ul>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/concepts\/video-generation\">Concepts<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/video-generation-quickstart?tabs=windows%2Ckeyless\">Quickstart<\/a><\/li>\n<\/ul>\n<h3>VibeVoice \u2014 long conversational TTS (Coming soon to Foundry Models)<\/h3>\n<p>An open-source, frontier text-to-speech framework for expressive, long-form, multi-speaker audio (think podcasts and panel shows).<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/09\/VibeVoice.png\" alt=\"VibeVoice hero image showing long\u2011form, multi\u2011speaker TTS in Azure AI Foundry\" \/>\nSource: <a href=\"https:\/\/microsoft.github.io\/VibeVoice\/\">VibeVoice GitHub 
Page<\/a><\/p>\n<h4>Overview<\/h4>\n<p>VibeVoice synthesizes expressive, long\u2011form, multi\u2011speaker audio\u2014up to ~90 minutes per session with natural turn\u2011taking across as many as four distinct voices. It brings context\u2011aware expression (including spontaneous emotion and singing), cross\u2011lingual transfer, and occasional background\u2011music moments together with efficient continuous acoustic\/semantic tokenizers operating at 7.5 Hz and a next\u2011token diffusion head guided by an LLM for dialogue flow. Status: Coming soon to Foundry Models; in the meantime you can try the open weights in a live demo playground made with Gradio.<\/p>\n<h4>Models (open weights)<\/h4>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>Parameters<\/th>\n<th>Context length<\/th>\n<th>Generation length<\/th>\n<th>Weight<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>VibeVoice-1.5B<\/td>\n<td>1.5 billion<\/td>\n<td>64K tokens<\/td>\n<td>~90 min<\/td>\n<td><a href=\"https:\/\/huggingface.co\/microsoft\/VibeVoice-1.5B\">HF link<\/a><\/td>\n<\/tr>\n<tr>\n<td>VibeVoice-Large<\/td>\n<td>7 billion<\/td>\n<td>32K tokens<\/td>\n<td>~45 min<\/td>\n<td><a href=\"https:\/\/huggingface.co\/microsoft\/VibeVoice-Large\">HF link<\/a><\/td>\n<\/tr>\n<tr>\n<td>VibeVoice-0.5B-Streaming<\/td>\n<td>0.5 billion<\/td>\n<td>TBD<\/td>\n<td>TBD<\/td>\n<td>On the way<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h4>Get started<\/h4>\n<ul>\n<li><a href=\"https:\/\/aka.ms\/VibeVoice-Demo\">Live demo (Gradio)<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/microsoft\/VibeVoice\">GitHub<\/a><\/li>\n<li><a href=\"https:\/\/huggingface.co\/collections\/microsoft\/vibevoice-68a2ef24a875c44be47b034f\">Hugging Face collection<\/a><\/li>\n<\/ul>\n<h3>OpenAI gpt\u2011oss<\/h3>\n<p>OpenAI\u2019s first open\u2011weight models since GPT\u20112\u2014<code>gpt\u2011oss\u2011120b<\/code> and <code>gpt\u2011oss\u201120b<\/code>\u2014are available in Azure AI Foundry. 
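<\/p>\n<p>For cloud use, a serverless <code>gpt\u2011oss\u2011120b<\/code> deployment accepts standard chat\u2011completions requests. This is a minimal sketch under stated assumptions: the route and the environment variable names are placeholders to replace with the values shown on your deployment\u2019s page.<\/p>\n<pre><code class=\"language-python\">import json, os, urllib.request\r\n\r\ndef build_chat_request(prompt: str, model: str = \"gpt-oss-120b\") -&gt; dict:\r\n    \"\"\"Minimal chat-completions payload for a serverless Foundry deployment.\"\"\"\r\n    return {\"model\": model, \"messages\": [{\"role\": \"user\", \"content\": prompt}]}\r\n\r\n# Only call the service when an endpoint is configured; the route below is a\r\n# placeholder, so confirm the exact path and api-version on your deployment page.\r\nif os.environ.get(\"GPT_OSS_ENDPOINT\"):\r\n    req = urllib.request.Request(\r\n        os.environ[\"GPT_OSS_ENDPOINT\"] + \"\/chat\/completions\",\r\n        data=json.dumps(build_chat_request(\"Say hello from gpt-oss in the cloud.\")).encode(\"utf-8\"),\r\n        headers={\"Content-Type\": \"application\/json\", \"Authorization\": f\"Bearer {os.environ['GPT_OSS_KEY']}\"},\r\n    )\r\n    with urllib.request.urlopen(req) as resp:\r\n        print(json.load(resp)[\"choices\"][0][\"message\"][\"content\"])<\/code><\/pre>\n<p>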
<code>gpt\u2011oss\u201120b<\/code> also runs locally via Foundry Local and Windows AI Foundry.<\/p>\n<blockquote><p><strong>NOTE<\/strong>: <code>gpt\u2011oss\u201120b<\/code> is only available for <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/quickstarts\/get-started-code?tabs=azure-ai-foundry&amp;pivots=hub-project\">Hub-based projects<\/a> with <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/deploy-models-managed\">managed compute<\/a>\u2015<code>gpt\u2011oss\u2011120b<\/code> can be deployed from either Hub-based projects with managed compute or serverless with <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/quickstarts\/get-started-code?tabs=azure-ai-foundry&amp;pivots=fdp-project\">Foundry projects<\/a>. Managed compute deployments in Foundry projects are coming soon!<\/p><\/blockquote>\n<h4>Models<\/h4>\n<ul>\n<li><code>gpt\u2011oss\u2011120b<\/code>: reasoning\u2011focused, data\u2011center class GPU (single\u2011GPU capable in cloud deployments).<\/li>\n<li><code>gpt\u2011oss\u201120b<\/code>: lightweight, tool\u2011savvy; optimized for local\/edge including modern Windows PCs with 16GB+ VRAM.<\/li>\n<li>API compatibility with the Responses API is coming\u2014swap into existing apps with minimal changes.<\/li>\n<\/ul>\n<h4>Run locally with Foundry Local<\/h4>\n<ul>\n<li>Install Foundry Local (preview). 
On Windows: winget install Microsoft.FoundryLocal<\/li>\n<li>Requirements for <code>gpt\u2011oss\u201120b<\/code>: Foundry Local 0.6.87+ and NVIDIA GPU with 16GB+ VRAM.<\/li>\n<li>Quick CLI: foundry model run gpt-oss-20b<\/li>\n<\/ul>\n<h4>Hello world (Python, Foundry Local)<\/h4>\n<pre><code class=\"language-python\">import openai\r\nfrom foundry_local import FoundryLocalManager\r\n\r\n# Use the GPT-OSS 20B model locally.\r\nalias = \"gpt-oss-20b\"  # Requires Foundry Local &gt;= 0.6.87 and ~16GB+ VRAM GPU\r\n\r\n# Start Foundry Local (if not running) and load the model\r\nmanager = FoundryLocalManager(alias)\r\n\r\n# Point OpenAI SDK to the local Foundry endpoint (no real key needed locally)\r\nclient = openai.OpenAI(\r\n    base_url=manager.endpoint,\r\n    api_key=manager.api_key\r\n)\r\n\r\n# Minimal chat request\r\nresp = client.chat.completions.create(\r\n    model=manager.get_model_info(alias).id,\r\n    messages=[{\"role\": \"user\", \"content\": \"Say hello from gpt-oss locally.\"}]\r\n)\r\n\r\nprint(resp.choices[0].message.content)<\/code><\/pre>\n<h4>Learn more<\/h4>\n<ul>\n<li><a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/openais-open%E2%80%91source-model-gpt%E2%80%91oss-on-azure-ai-foundry-and-windows-ai-foundry\/\">Azure blog<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/foundry-local\/get-started\">Quickstart<\/a><\/li>\n<\/ul>\n<h2>Agents<\/h2>\n<h3>Foundry Agent Service expands to four new regions<\/h3>\n<p>Agent Service is now available in <code>brazilsouth<\/code>, <code>germanywestcentral<\/code>, <code>italynorth<\/code>, and <code>southcentralus<\/code> bringing the region support total to <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/concepts\/model-region-support?tabs=global-standard#available-models\">17 Azure regions<\/a>.<\/p>\n<blockquote><p><strong>NOTE<\/strong>: The <code>file_search_tool<\/code> is currently unavailable in the following regions 
<code>italynorth<\/code> and <code>brazilsouth<\/code>.<\/p><\/blockquote>\n<h2>Tools<\/h2>\n<h3>Browser Automation tool (Public Preview)<\/h3>\n<p>Ship agents that drive a real browser\u2014search, navigate, fill forms, and even book appointments\u2014using natural language. Browser Automation runs inside your Azure subscription using your Microsoft Playwright Testing Workspace, so you don\u2019t manage VMs or standalone browsers. Because it reasons over the page\u2019s DOM (roles\/labels) instead of pixels, it\u2019s much more resilient than click\u2011by\u2011coordinates flows. Learn how it works in the Browser Automation docs and what a Playwright Workspace is here: <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/browser-automation\">Browser Automation<\/a> \u00b7 <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/playwright-testing\/overview-what-is-microsoft-playwright-testing\">Playwright Workspaces<\/a>.<\/p>\n<p><iframe width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/FBQRc-M18ws\" allowfullscreen><\/iframe><\/p>\n<p>Where it shines: automating multi\u2011step bookings and calendars, product discovery with criteria\u2011based navigation and summaries, and robust web form interactions (submissions, profile updates, document uploads). Multi\u2011turn conversations mean you can correct and iterate without restarting the flow.<\/p>\n<p>Setup is straightforward: create a Playwright Workspace and token, add a Serverless connection to its wss:\/\/ regional endpoint in your Foundry project (Management center \u2192 Connected resources), and grant the project identity the Contributor role on the Workspace. 
Then attach the tool to your agent by referencing the connection ID.<\/p>\n<p>Heads\u2011up: review the transparency notes and warnings before pointing the tool at production sites; prefer low\u2011privilege, isolated environments.<\/p>\n<blockquote><p><strong>NOTE<\/strong>: Before running the code sample, ensure you have installed the latest pre-release of azure-ai-agents.<\/p><\/blockquote>\n<pre class=\"prettyprint language-default\"><code class=\"language-default\">pip install --pre \"azure-ai-agents&gt;=1.2.0b2\"<\/code><\/pre>\n<p>Add the tool to an agent (Agents REST):<\/p>\n<pre><code class=\"language-json\">{\r\n    \"model\": \"MODEL_DEPLOYMENT_NAME\",\r\n    \"instructions\": \"You can browse and fill forms on the specified site to accomplish the user's goal.\",\r\n    \"tools\": [\r\n        {\r\n            \"type\": \"browser_automation\",\r\n            \"connection_id\": \"AZURE_PLAYWRIGHT_CONNECTION_NAME\"\r\n        }\r\n    ]\r\n}<\/code><\/pre>\n<p>Quickstart (Python SDK):<\/p>\n<pre><code class=\"language-python\">import os\r\nfrom azure.identity import DefaultAzureCredential\r\nfrom azure.ai.projects import AIProjectClient\r\nfrom azure.ai.agents.models import MessageRole, BrowserAutomationTool\r\n\r\nproject_client = AIProjectClient(\r\n    endpoint=os.environ[\"PROJECT_ENDPOINT\"],\r\n    credential=DefaultAzureCredential()\r\n)\r\n\r\nconnection_id = os.environ[\"AZURE_PLAYWRIGHT_CONNECTION_ID\"]  # from Connected resources (wss:\/\/...)\r\nmodel = os.environ[\"MODEL_DEPLOYMENT_NAME\"]\r\n\r\nbrowser_tool = BrowserAutomationTool(connection_id=connection_id)\r\n\r\nwith project_client:\r\n    agent = project_client.agents.create_agent(\r\n        model=model,\r\n        name=\"browser-agent\",\r\n        instructions=\"Use the browser tool to complete the task.\",\r\n        tools=browser_tool.definitions,\r\n    )\r\n\r\n    thread = project_client.agents.threads.create()\r\n    project_client.agents.messages.create(\r\n        
thread_id=thread.id,\r\n        role=MessageRole.USER,\r\n        content=(\r\n            \"Go to https:\/\/finance.yahoo.com, search for MSFT, switch the chart to YTD, \"\r\n            \"and report the percent change.\"\r\n        ),\r\n    )\r\n\r\n    run = project_client.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)\r\n    print(\"Run status:\", run.status)<\/code><\/pre>\n<p>Learn more in the docs and the announcement post: <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/browser-automation\">Docs<\/a> \u00b7 <a href=\"https:\/\/devblogs.microsoft.com\/foundry\/announcing-the-browser-automation-tool-preview-in-azure-ai-foundry-agent-service\/\">Blog<\/a><\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/browser-automation-samples\" target=\"_blank\" rel=\"noopener\">Run\u00a0the\u00a0samples<\/a><\/div>\n<h2>Platform (API, SDK, UI, and more)<\/h2>\n<h3>Responses API \u2014 Generally Available (GA)<\/h3>\n<p>Build intelligent, tool\u2011using agents with stateful, multi\u2011turn conversations in a single API call. Now GA, the Responses API automatically maintains conversation state, stitches multiple tool calls with model reasoning and outputs in one flow, supports popular Azure OpenAI models\u2014including the GPT\u20115 series and fine\u2011tuned variants\u2014for predictable, structured outputs, and scales on Azure\u2019s enterprise\u2011grade identity, security, and compliance.<\/p>\n<h4>Tools (built\u2011in and current limits)<\/h4>\n<ul>\n<li>Built\u2011in: File Search, Function Calling, Code Interpreter (Python), Computer Use, Image Generation, and Remote MCP Server.<\/li>\n<li>Not supported: Web search tool. 
Use Grounding with Bing Search instead.<\/li>\n<li>Coming soon: Image generation multi\u2011turn editing\/streaming and image uploads referenced from prompts.<\/li>\n<\/ul>\n<h4>API support<\/h4>\n<ul>\n<li>v1 API is required for access to the latest features. Learn more: <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/api-version-lifecycle#api-evolution\">API version lifecycle<\/a><\/li>\n<\/ul>\n<h4>Regions<\/h4>\n<ul>\n<li>Available in: <code>australiaeast<\/code>, <code>eastus<\/code>, <code>eastus2<\/code>, <code>francecentral<\/code>, <code>japaneast<\/code>, <code>norwayeast<\/code>, <code>polandcentral<\/code>, <code>southindia<\/code>, <code>swedencentral<\/code>, <code>switzerlandnorth<\/code>, <code>uaenorth<\/code>, <code>uksouth<\/code>, <code>westus<\/code>, <code>westus3<\/code>.<\/li>\n<li>Note: Not every model is available in every region\u2014see <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/concepts\/models\">model region availability<\/a>.<\/li>\n<\/ul>\n<h4>Model support<\/h4>\n<ul>\n<li><code>GPT\u20115<\/code>, <code>GPT\u20114o<\/code>, <code>GPT\u20114.1<\/code>, and <code>o\u2011series<\/code> model families, plus <code>gpt\u2011image\u20111<\/code> and <code>computer\u2011use\u2011preview<\/code>.<\/li>\n<\/ul>\n<h4>Notes &amp; known limits<\/h4>\n<ul>\n<li>PDFs as input are supported, but file upload purpose <code>user_data<\/code> isn\u2019t currently supported. 
Background mode with streaming may show performance issues (fix coming).<\/li>\n<\/ul>\n<h4>Quickstart (Python, API key)<\/h4>\n<pre><code class=\"language-python\">import os\r\nfrom openai import OpenAI\r\n\r\nclient = OpenAI(\r\n    api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"),\r\n    base_url=\"https:\/\/YOUR-RESOURCE-NAME.openai.azure.com\/openai\/v1\/\",\r\n)\r\n\r\nresponse = client.responses.create(\r\n    model=\"MODEL_DEPLOYMENT_NAME\",  # e.g., gpt-4.1-nano or your deployment name\r\n    input=\"This is a test.\",\r\n)\r\n\r\nprint(response.model_dump_json(indent=2))<\/code><\/pre>\n<h4>Get started<\/h4>\n<ul>\n<li><a href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-services-blog\/the-responses-api-in-azure-ai-foundry-is-now-generally-available\/4446567\">Blog<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/latest#create-response\">API reference (latest)<\/a><\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/how-to\/responses?tabs=python-key\" target=\"_blank\" rel=\"noopener\">Build\u00a0with\u00a0Responses\u00a0API<\/a><\/div>\n<h3>Python SDK release highlights<\/h3>\n<ul>\n<li>Agents: 1.1.0 stable is based on 1.0.2 (excludes beta features); 1.2.0b1 beta continues the experimental line and adds tool_resources support for async runs, with multiple fixes and public type promotions.<\/li>\n<li>AI Evaluation: 1.10.0 adds <code>evaluate_query<\/code> for RAI evaluators (default False) and delivers significant performance\/variance improvements plus new grader and threshold controls.<\/li>\n<li>AI Projects: 1.0.0 stable removes preview features and renames classes\/APIs; 1.1.0b1 adds Evaluations cancel\/delete; 1.1.0b2 fixes a Red\u2011Team regression.<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" 
href=\"https:\/\/azure.github.io\/azure-sdk\/releases\/2025-08\/python.html\" target=\"_blank\" rel=\"noopener\">View\u00a0Python\u00a0SDK\u00a0Release\u00a0Notes<\/a><\/div>\n<h3>.NET SDK release highlights<\/h3>\n<ul>\n<li>AI Agents Persistent 1.1.0 adds tracing for Agents, an include parameter to CreateRunStreaming\/CreateRunStreamingAsync, and a tool_resources parameter to CreateRun\/CreateRunAsync.<\/li>\n<li>AI Agents Persistent 1.2.0-beta.1 fixes an issue where the after parameter was ignored when retrieving pageable lists.<\/li>\n<li>OpenAI Inference 2.3.0-beta.1 carries forward a substantial number of features from the OpenAI library (see OpenAI 2.3.0 release notes for details).<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/azure.github.io\/azure-sdk\/releases\/2025-08\/dotnet.html\" target=\"_blank\" rel=\"noopener\">View\u00a0.NET\u00a0SDK\u00a0Release\u00a0Notes<\/a><\/div>\n<h3>Java SDK Updates<\/h3>\n<h4>Agent Service Java SDK (Public Preview)<\/h4>\n<p>Public preview of the Agent Service Java SDK with Quickstart and tool samples. 
<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/quickstart?pivots=programming-language-java\">Get started<\/a>.<\/p>\n<h5>Tool samples (Java)<\/h5>\n<ul>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/azure-ai-search-samples?pivots=java\">Azure AI Search<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/azure-functions-samples?pivots=java\">Azure Functions<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/code-interpreter-samples?pivots=java\">Code Interpreter<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/file-search-upload-files?pivots=java\">File Search<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/bing-code-samples?pivots=java\">Bing Search<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/openapi-spec-samples?pivots=java\">OpenAPI tools<\/a><\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/azure.github.io\/azure-sdk\/releases\/2025-08\/java.html\" target=\"_blank\" rel=\"noopener\">View\u00a0Java\u00a0SDK\u00a0Release\u00a0Notes<\/a><\/div>\n<h3>JavaScript\/TypeScript release highlights<\/h3>\n<ul>\n<li>AI Agents 1.1.0-beta.1 adds MCP tool, Deep Research tool and sample, and brings back SharepointGroundingTool, BingCustomSearchTool, MicrosoftFabricTool, and SharepointTool; includes a breaking change to DeepResearchDetails field names.<\/li>\n<li>AI Agents 1.1.0-beta.2 fixes message image upload type error and stream event deserialization in runs.create.<\/li>\n<li>AI Agents 1.1.0-beta.3 fixes missing required parameter json_schema in runs.createAndPoll.<\/li>\n<li>AI Agents 1.1.0 (stable) removes preview-only tools (MCP, Deep Research, Sharepoint, 
BingCustomSearch, MicrosoftFabric) from the stable line.<\/li>\n<li>AI Projects 1.0.0 includes breaking changes removing redTeams\/evaluations and legacy inference helpers, plus renames for telemetry and Azure OpenAI client access.<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/azure.github.io\/azure-sdk\/releases\/2025-08\/js.html\" target=\"_blank\" rel=\"noopener\">View\u00a0JavaScript\/TypeScript\u00a0SDK\u00a0Release\u00a0Notes<\/a><\/div>\n<h3>Documentation Updates<\/h3>\n<ul>\n<li>[New] Capability hosts \u2014 Concept: package tools\/resources for agents; guidance on hosting, security, and deployment (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/concepts\/capability-hosts\">link<\/a>)<\/li>\n<li>[New] Cost management for fine\u2011tuning \u2014 Track\/limit spend, clean up artifacts, and optimize job configurations (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/how-to\/fine-tuning-cost-management\">link<\/a>)<\/li>\n<li>[Updated] Deep Research tool \u2014 How to enable and use the Deep Research tool in Agent Service (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/deep-research\">link<\/a>)<\/li>\n<li>[Updated] Deep Research samples \u2014 End\u2011to\u2011end examples for Deep Research flows (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/deep-research-samples\">link<\/a>)<\/li>\n<li>[New] Evaluations storage account setup \u2014 Configure storage for evaluations in projects (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/evaluations-storage-account\">link<\/a>)<\/li>\n<li>[New] Migrate from hubs to Foundry projects \u2014 Step\u2011by\u2011step migration guidance and considerations (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/migrate-project\">link<\/a>)<\/li>\n<li>[New] Serverless API inference for Foundry 
Models \u2014 Code patterns for calling serverless \u201cDirect from Azure\u201d models (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/concepts\/models-inference-examples\">link<\/a>)<\/li>\n<li>[New] Create Foundry resources with Terraform \u2014 IaC guide for consistent provisioning (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/create-resource-terraform\">link<\/a>)<\/li>\n<li>[Updated] Responses API \u2014 Latest how\u2011to for GA Responses API with v1 semantics (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/how-to\/responses\">link<\/a>)<\/li>\n<li>[Updated] MCP tool samples \u2014 Model Context Protocol samples and patterns (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/model-context-protocol-samples\">link<\/a>)<\/li>\n<li>[Updated] Customer\u2011managed keys \u2014 Concepts and setup for CMK across projects\/hubs (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/concepts\/encryption-keys-portal\">link<\/a>)<\/li>\n<li>[Updated] Evaluate apps \u2014 Portal guide for evaluating generative AI applications (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/evaluate-generative-ai-app\">link<\/a>)<\/li>\n<li>[Updated] Evaluate agents locally \u2014 SDK guide for local agent evaluation workflows (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/develop\/agent-evaluate-sdk\">link<\/a>)<\/li>\n<li>[Updated] Evaluate apps locally \u2014 SDK guide for app evaluation workflows (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/develop\/evaluate-sdk\">link<\/a>)<\/li>\n<li>[Updated] Foundry Models and capabilities \u2014 Catalog concepts, capabilities, and model families (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/foundry-models\/concepts\/models\">link<\/a>)<\/li>\n<li>[New] Evaluation simulators \u2014 Preview simulators to 
generate synthetic\/adversarial data with end\u2011to\u2011end samples, multi\u2011turn flows, regions, and JSONL helpers (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/develop\/simulator-interaction-data\">link<\/a>)<\/li>\n<li>[Updated] SharePoint tool samples \u2014 Samples for the SharePoint tool (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/sharepoint-samples\">link<\/a>)<\/li>\n<li>[Updated] View evaluation results in the portal \u2014 Portal views and interpretation tips (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/evaluate-results\">link<\/a>)<\/li>\n<li>[Updated] Managed network for hubs \u2014 Clarifies isolation modes (Internet outbound vs Approved outbound); notes that managed networks are hub\u2011based only and irreversible once enabled; flags Azure Firewall FQDN rules as an added cost; and adds the Enterprise Network Connection Approver role (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/configure-managed-network\">link<\/a>)<\/li>\n<li>[Updated] Role\u2011based access control \u2014 Expanded role definitions (Azure AI User, Project Manager, Account Owner) with JSON permission blocks and conditional delegation; Contributor can deploy models (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/concepts\/rbac-azure-ai-foundry\">link<\/a>)<\/li>\n<li>[Updated] Customer\u2011enabled disaster recovery \u2014 No automatic failover; guidance for hot\/hot, hot\/warm, and hot\/cold strategies; hub\u2011based only; paired regions and replication responsibilities table (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/disaster-recovery\">link<\/a>)<\/li>\n<li>[Updated] Deploy Azure OpenAI models \u2014 Refined portal flows from Catalog\/Project, updated inference guidance, and quota\/region pointers (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/deploy-models-openai\">link<\/a>)<\/li>\n<li>[New] Azure AI Foundry status 
dashboard (Preview) \u2014 Live status, incident timelines\/RCAs, and historical uptime; subscribe via email\/SMS\/webhook (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/azure-ai-foundry-status-dashboard-documentation\">link<\/a>)<\/li>\n<li>[Updated] Azure OpenAI API lifecycle (v1) \u2014 GA endpoints; preview opt\u2011in via headers or path; use the OpenAI client with base_url set to \/openai\/v1; APIM OpenAPI 3.1 caveat (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/api-version-lifecycle\">link<\/a>)<\/li>\n<li>[Updated] Quotas and limits \u2014 Adds GPT\u20115 TPM\/RPM tables, model\u2011router tier limits, and o\u2011series capacity unit RPM\/TPM ratios; capacity API to check regional availability (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/quotas-limits\">link<\/a>)<\/li>\n<li>[Updated] Tracing &amp; observability \u2014 How to instrument agents\/apps with OpenTelemetry, view traces in the portal, and export to Azure Monitor (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/develop\/trace-agents-sdk\">link<\/a>)<\/li>\n<li>[Updated] Deploy an enterprise chat web app \u2014 Deploy from the playground to App Service with Microsoft Entra authentication and key setup tips (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/tutorials\/deploy-chat-web-app\">link<\/a>)<\/li>\n<li>[Updated] Model availability &amp; regions \u2014 Canonical region support across Foundry features (<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/reference\/region-support\">link<\/a>) and model summary tables with per\u2011region availability 
(<a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/concepts\/models#model-summary-table-and-region-availability\">link<\/a>)<\/li>\n<\/ul>\n<hr \/>\n<p>Happy building\u2014let us know what you ship with #AzureAIFoundry over in <a href=\"https:\/\/aka.ms\/foundry\/discord\">Discord<\/a> or <a href=\"https:\/\/aka.ms\/foundry\/forum\">GitHub Discussions<\/a>!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>August 2025 highlights GPT\u20115 arrives in Foundry, Model Router adds GPT\u20115 support, Responses API is GA, Browser Automation enters public preview, plus Sora updates, Mistral Document AI, FLUX image models, OpenAI gpt\u2011oss with Foundry Local, and SDK\/documentation updates.<\/p>\n","protected":false},"author":185793,"featured_media":1200,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,27],"tags":[25,61,12,56,59,38,54,60,2,58,55,53,57],"class_list":["post-1196","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-microsoft-foundry","category-whats-new","tag-agents","tag-azure-sdks","tag-azure-openai","tag-browser-automation","tag-flux","tag-foundry-local","tag-gpt-5","tag-gpt-oss","tag-microsoft-foundry","tag-mistral-document-ai","tag-model-router","tag-responses-api","tag-sora"],"acf":[],"blog_post_summary":"<p>August 2025 highlights GPT\u20115 arrives in Foundry, Model Router adds GPT\u20115 support, Responses API is GA, Browser Automation enters public preview, plus Sora updates, Mistral Document AI, FLUX image models, OpenAI gpt\u2011oss with Foundry Local, and SDK\/documentation 
updates.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/1196","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/users\/185793"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/comments?post=1196"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/1196\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/media\/1200"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/media?parent=1196"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/categories?post=1196"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/tags?post=1196"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}