{"id":2014,"date":"2026-02-18T17:01:09","date_gmt":"2026-02-19T01:01:09","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/foundry\/?p=2014"},"modified":"2026-03-16T09:45:49","modified_gmt":"2026-03-16T16:45:49","slug":"whats-new-in-microsoft-foundry-dec-2025-jan-2026","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/foundry\/whats-new-in-microsoft-foundry-dec-2025-jan-2026\/","title":{"rendered":"What&#8217;s new in Microsoft Foundry | Dec 2025 &amp; Jan 2026"},"content":{"rendered":"<blockquote>\n<p><strong>Author&#8217;s note<\/strong>: So\u2026 it has been a bit. I have to level with you \u2014 I returned from paternity leave in January, and between Microsoft Ignite 2025 and the present day, things have changed <em>a lot<\/em>. Without further ado, here is your monthly (late) drop for all things new with Microsoft Foundry. Future editions will be back on schedule. Thanks for your patience!<\/p>\n<\/blockquote>\n<h2>TL;DR<\/h2>\n<p>December 2025 was one of the biggest months in Microsoft Foundry history. Here&#8217;s everything that shipped:<\/p>\n<ul>\n<li><strong>GPT\u20115.2 (GA):<\/strong> New enterprise reasoning standard \u2014 top benchmark scores across math, science, coding, and multimodal tasks; available as <code>gpt-5.2<\/code> and <code>gpt-5.2-chat-latest<\/code>.<\/li>\n<li><strong>GPT\u20115.1 Codex Max (GA):<\/strong> 77.9% on SWE-Bench, 400K context, 50+ languages \u2014 built for autonomous multi-agent coding pipelines, PR generation, and CI\/CD integration.<\/li>\n<li><strong>Mistral Large 3 (Public Preview):<\/strong> Apache 2.0, 41B active \/ 675B total parameters; $0.50 \/ $1.50 per million tokens. 
Strong instruction following and multimodal reasoning.<\/li>\n<li><strong>DeepSeek V3.2 + V3.2\u2011Speciale (Public Preview):<\/strong> 128K context, up to 3\u00d7 faster reasoning via Sparse Attention; Speciale drops tool calling entirely for maximum reasoning accuracy.<\/li>\n<li><strong>Kimi\u2011K2 Thinking (Public Preview):<\/strong> Moonshot AI&#8217;s deep reasoning model with a 256K context window, now Direct from Azure.<\/li>\n<li><strong>Cohere Rerank 4 (Fast + Pro):<\/strong> Cross-encoding reranker for RAG pipelines; 100+ languages, serverless pay-as-you-go.<\/li>\n<li><strong>GPT\u2011image\u20111.5 (GA):<\/strong> 4\u00d7 faster generation, ~20% lower cost vs. GPT\u2011image\u20111; adds inpainting and face preservation.<\/li>\n<li><strong>FLUX.2 [pro] (Public Preview):<\/strong> Black Forest Labs&#8217; next-gen image model with multi-reference support, improved text rendering, and enterprise SLAs on Azure.<\/li>\n<li><strong>Audio models (GA, Dec 15):<\/strong> Realtime Mini, ASR (<code>gpt-4o-mini-transcribe<\/code>), and TTS (<code>gpt-4o-mini-tts<\/code>) \u2014 all GA with significant accuracy and latency improvements.<\/li>\n<li><strong>Fine-tuning base models:<\/strong> Ministral 3B, Qwen3 32B, OSS-20B, and Llama 3.3 70B now available for serverless fine-tuning.<\/li>\n<li><strong>\u26a0\ufe0f AzureML SDK v1 EOL: June 30, 2026<\/strong> \u2014 migrate to SDK v2 now; CLI v1 already sunset September 2025.<\/li>\n<li><strong><code>azure-ai-projects<\/code> v2 beta:<\/strong> Agents, inference, evaluations, and memory are now unified in a single package \u2014 the <code>azure-ai-agents<\/code> dependency is gone; <code>2.0.0b3<\/code> shipped January 6, 2026.<\/li>\n<li><strong>Memory in Foundry Agent Service (Public Preview):<\/strong> Managed long-term memory store with automatic extraction, consolidation, and retrieval across agent sessions. 
Free during preview; pay only for the underlying model calls.<\/li>\n<li><strong>Agent-to-Agent (A2A) Tool (Preview):<\/strong> Let Foundry agents call any A2A-protocol endpoint with explicit auth and clean call\/response semantics \u2014 the structured evolution of Connected Agents.<\/li>\n<li><strong>Foundry MCP Server (Preview):<\/strong> Cloud-hosted MCP at <code>mcp.ai.azure.com<\/code>, live since December 3. Connect from VS Code, Visual Studio, or the Foundry portal \u2014 zero local process management, Entra auth included.<\/li>\n<li><strong>Microsoft Foundry for VS Code \u2014 January 2026:<\/strong> Multi-workflow visualizer, all prompt agents testable in Playground, and code samples for every agent type.<\/li>\n<\/ul>\n<hr \/>\n<h2>Models<\/h2>\n<h3>GPT\u20115.2 \u2014 The New Enterprise Reasoning Standard<\/h3>\n<p><strong>GPT\u20115.2<\/strong> is now generally available in Microsoft Foundry. Built for multi-step problem solving, long-context understanding, and agentic tool-calling, GPT\u20115.2 achieves top scores across math, science, coding, and multimodal benchmarks. 
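<\/p>\n<p>For orientation, here is a minimal, hedged sketch of calling the model through the OpenAI Responses API surface. The endpoint URL, API key, and deployment name below are placeholders, not values from this post:<\/p>\n<pre><code class=\"language-python\">from openai import OpenAI\n\n# Placeholder endpoint and key: substitute your own Foundry \/ Azure OpenAI values\nclient = OpenAI(\n    base_url=\"https:\/\/YOUR-RESOURCE.openai.azure.com\/openai\/v1\/\",\n    api_key=\"YOUR-API-KEY\",\n)\n\nresponse = client.responses.create(\n    model=\"gpt-5.2\",  # your deployment name\n    input=\"List the top three risks in this incident report: ...\",\n)\nprint(response.output_text)<\/code><\/pre>\n<p>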
Whether you&#8217;re orchestrating agents, reasoning over large document sets, or building production-grade pipelines, GPT\u20115.2 delivers more coherent, compliant, and shippable outputs than any prior generation.<\/p>\n<ul>\n<li><code>gpt-5.2<\/code> \u2014 primary reasoning model for complex enterprise tasks<\/li>\n<li><code>gpt-5.2-chat-latest<\/code> \u2014 optimized for conversational and everyday professional workflows<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-gpt-5-2-in-microsoft-foundry-the-new-standard-for-enterprise-ai\/\" target=\"_blank\">Read&nbsp;Announcement<\/a><\/div>\n<hr \/>\n<h3>GPT\u20115.1 Codex Max \u2014 AI for Autonomous Enterprise Coding<\/h3>\n<p><strong>GPT\u20115.1 Codex Max<\/strong> is now generally available in Microsoft Foundry, purpose-built for engineering-scale coding tasks. It achieves <strong>77.9% on SWE-Bench<\/strong>, supports a <strong>400K token context window<\/strong>, covers 50+ programming languages, and is designed end-to-end for multi-agent coding workflows \u2014 from refactoring legacy .NET and Java apps to automated pull requests, secure API generation, and CI\/CD pipeline integration. It can be triggered directly from the terminal, VS Code, or GitHub Actions runners.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/open-ai%E2%80%99s-gpt-5-1-codex-max-in-microsoft-foundry-igniting-a-new-era-for-enterpri\/4475274\" target=\"_blank\">Learn&nbsp;more<\/a><\/div>\n<hr \/>\n<h3>Mistral Large 3 \u2014 Open-Weight Enterprise Intelligence<\/h3>\n<p><strong>Mistral Large 3<\/strong> is now in public preview in Microsoft Foundry, released under the permissive <strong>Apache 2.0 license<\/strong> \u2014 free for commercial use. 
With 41B active parameters in a sparse mixture-of-experts architecture (675B total), it delivers strong instruction following, long-context comprehension, and multimodal reasoning, with a straightforward $0.50 \/ $1.50 per million input\/output token price point.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/introducing-mistral-large-3-in-microsoft-foundry-open-capable-and-ready-for-production-workloads\/\" target=\"_blank\">Get&nbsp;Started<\/a><\/div>\n<hr \/>\n<h3>DeepSeek V3.2 and V3.2\u2011Speciale<\/h3>\n<p><strong>DeepSeek V3.2<\/strong> and <strong>DeepSeek V3.2\u2011Speciale<\/strong> launched in public preview on December 15, 2025. Both feature a 128K context window and DeepSeek Sparse Attention for up to 3\u00d7 faster reasoning paths. The <strong>Speciale<\/strong> variant is purpose-tuned for maximum reasoning accuracy \u2014 it omits native function\/tool calling entirely to reserve all compute for pure reasoning, making it ideal for research labs, scientific workflows, and high-stakes evaluation pipelines.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/introducing-deepseek-v3-2-and-deepseek-v3-2-speciale-in-microsoft-foundry\/4477549\" target=\"_blank\">Learn&nbsp;more<\/a><\/div>\n<hr \/>\n<h3>Kimi\u2011K2 Thinking \u2014 Deep Reasoning from Moonshot AI<\/h3>\n<p><strong>Kimi\u2011K2 Thinking<\/strong> from Moonshot AI is now in public preview as a Direct from Azure model. 
With a <strong>256K context window<\/strong>, it excels at deep reasoning, tool orchestration, and complex multi-step problem solving \u2014 a strong addition to the growing catalog of non-OpenAI frontier reasoning models on Foundry.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/kimi-k2-thinking-now-in-microsoft-foundry\/4476116\" target=\"_blank\">Get&nbsp;Started<\/a><\/div>\n<hr \/>\n<h3>Cohere Rerank 4 \u2014 State-of-the-Art Retrieval for RAG<\/h3>\n<p><strong>Cohere Rerank v4.0<\/strong> \u2014 available in <strong>Fast<\/strong> and <strong>Pro<\/strong> variants \u2014 is now in the Microsoft Foundry model catalog, deployable via pay-as-you-go serverless endpoints. Designed to improve search relevance and reduce LLM hallucinations in RAG pipelines, Rerank 4 uses a cross-encoder to re-score and re-sort retrieved documents by semantic relevance to your query. It supports 100+ languages and drops into existing keyword or semantic retrieval stacks with minimal code changes.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/ai.azure.com\/catalog\/models\/Cohere-rerank-v4.0-fast\" target=\"_blank\">Explore&nbsp;in&nbsp;Catalog<\/a><\/div>\n<hr \/>\n<h2>Images<\/h2>\n<h3>GPT\u2011image\u20111.5 \u2014 Faster, Higher-Quality Image Generation<\/h3>\n<p><strong>GPT\u2011image\u20111.5<\/strong> is now generally available in Microsoft Foundry, delivering <strong>up to 4\u00d7 faster generation<\/strong> and approximately <strong>20% lower API costs<\/strong> compared to GPT\u2011image\u20111. Improvements span text-to-image generation, image-to-image transformation, inpainting, and face preservation \u2014 with output resolutions up to 1024\u00d71536. 
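<\/p>\n<p>A hedged sketch of generation through the standard Images API; the endpoint, key, and deployment name below are placeholders, not values from this post:<\/p>\n<pre><code class=\"language-python\">from openai import OpenAI\n\n# Placeholder endpoint and key: substitute your own Foundry \/ Azure OpenAI values\nclient = OpenAI(\n    base_url=\"https:\/\/YOUR-RESOURCE.openai.azure.com\/openai\/v1\/\",\n    api_key=\"YOUR-API-KEY\",\n)\n\nresult = client.images.generate(\n    model=\"gpt-image-1.5\",  # your deployment name\n    prompt=\"Isometric illustration of a data center at dusk\",\n    size=\"1024x1536\",\n)\nimage_b64 = result.data[0].b64_json  # base64-encoded image bytes<\/code><\/pre>\n<p>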
Access at launch is gated for enterprise customers (MCA-E and EA).<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/introducing-openai%E2%80%99s-gpt-image-1-5-in-microsoft-foundry\/4478139\" target=\"_blank\">Learn&nbsp;more<\/a><\/div>\n<hr \/>\n<h3>FLUX.2 [pro] from Black Forest Labs<\/h3>\n<p><strong>FLUX.2 [pro]<\/strong> from Black Forest Labs is now in public preview in Microsoft Foundry. Building on FLUX.1, it adds <strong>multi-reference support<\/strong> (up to 8 images), improved text rendering for infographics and UI mockups, and enhanced adherence to complex, multi-part prompts. Available with Microsoft-backed SLAs, Responsible AI controls, and global standard deployment across Azure regions.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/ai.azure.com\/catalog\/models\/FLUX.2-pro\" target=\"_blank\">Explore&nbsp;in&nbsp;Catalog<\/a><\/div>\n<hr \/>\n<h2>Audio<\/h2>\n<h3>Updated Audio Models: Realtime Mini, ASR, and TTS<\/h3>\n<p>Three new audio models reached general availability on <strong>December 15, 2025<\/strong>, raising the bar across the real-time voice stack:<\/p>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>What&#8217;s new<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>gpt-realtime-mini-2025-12-15<\/strong><\/td>\n<td>Feature parity with full gpt-realtime in instruction-following and function-calling; new voices Marin and Cedar; glitch-free audio<\/td>\n<\/tr>\n<tr>\n<td><strong>gpt-4o-mini-transcribe-2025-12-15<\/strong><\/td>\n<td>~50% lower WER on English benchmarks; better multilingual support; up to 4\u00d7 fewer silence hallucinations in noisy environments<\/td>\n<\/tr>\n<tr>\n<td><strong>gpt-4o-mini-tts-2025-12-15<\/strong><\/td>\n<td>More natural, human-like multilingual speech synthesis with reduced artifacts<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>All three are API-only deployments 
accessible through the Azure OpenAI endpoint in Microsoft Foundry.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/introducing-updated-gpt-voice-models-in-microsoft-foundry\/4478137\" target=\"_blank\">Learn&nbsp;more<\/a><\/div>\n<hr \/>\n<h2>Fine-Tuning<\/h2>\n<h3>New Open-Source Base Models for Fine-Tuning<\/h3>\n<p>Microsoft Foundry expanded its fine-tuning catalog with four new open-source base models on serverless infrastructure \u2014 pre-announced at Ignite 2025 and now live:<\/p>\n<table>\n<thead>\n<tr>\n<th>Model<\/th>\n<th>Best for<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Ministral 3B<\/strong><\/td>\n<td>Lightweight, cost-sensitive scenarios<\/td>\n<\/tr>\n<tr>\n<td><strong>Qwen3 32B<\/strong><\/td>\n<td>Multilingual applications<\/td>\n<\/tr>\n<tr>\n<td><strong>OSS-20B<\/strong><\/td>\n<td>Balanced enterprise workloads<\/td>\n<\/tr>\n<tr>\n<td><strong>Llama 3.3 70B<\/strong><\/td>\n<td>Complex reasoning at scale<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Fine-tuning is available on either serverless or managed compute, with Microsoft&#8217;s security, compliance, and Responsible AI guardrails applied uniformly across all models.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/concepts\/fine-tuning-overview\" target=\"_blank\">Get&nbsp;Started<\/a><\/div>\n<hr \/>\n<h2>Agents<\/h2>\n<h3>Memory in Foundry Agent Service (Public Preview)<\/h3>\n<p><!-- TODO: Verify this wasn't already covered in the Oct\/Nov What's New post \u2014 the devblog published November 25, 2025 at Ignite. If it was, remove this section. --><\/p>\n<p>Most agents today are stateless \u2014 every conversation starts from zero. 
<strong>Memory in Foundry Agent Service<\/strong> is a fully managed, long-term memory store natively integrated with the agent runtime. It extracts, consolidates, and retrieves user preferences and context across sessions and devices \u2014 no custom embedding database or retrieval pipeline required.<\/p>\n<p>The process runs in four phases: <strong>Extract<\/strong> (preferences, facts, and key context from each conversation turn), <strong>Consolidate<\/strong> (LLM merges duplicates and resolves conflicts), <strong>Retrieve<\/strong> (hybrid search surfaces relevant memories at conversation start, with core facts like allergies or preferences injected immediately), and <strong>Customize<\/strong> (the <code>user_profile_details<\/code> parameter focuses extraction on what matters for your specific use case).<\/p>\n<p>Enable it with a single click in the Foundry portal, or via SDK:<\/p>\n<pre><code class=\"language-python\">from azure.ai.projects.models import MemoryStoreDefaultDefinition, MemoryStoreDefaultOptions\n\ndefinition = MemoryStoreDefaultDefinition(\n    chat_model=\"gpt-5\",\n    embedding_model=\"text-embedding-3-small\",\n    options=MemoryStoreDefaultOptions(\n        user_profile_enabled=True,\n        user_profile_details=\"Food preferences for a meal planning agent\",\n        chat_summary_enabled=True,\n    ),\n)\n\n# project_client is an existing AIProjectClient for your Foundry project\nmemory_store = project_client.memory_stores.create(\n    name=\"my_memory_store\",\n    description=\"Example memory store for conversations\",\n    definition=definition,\n)<\/code><\/pre>\n<p>Free during preview \u2014 you pay only for the underlying chat and embedding model calls.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/devblogs.microsoft.com\/foundry\/introducing-memory-in-foundry-agent-service\/?utm_source=devblog&amp;utm_medium=blog&amp;utm_campaign=whats-new-dec-2025-jan-2026&amp;utm_content=memory-agent-service\" 
target=\"_blank\">Read&nbsp;the&nbsp;Deep&nbsp;Dive<\/a><\/div>\n<hr \/>\n<h3>Agent-to-Agent (A2A) Tool (Preview)<\/h3>\n<p><!-- TODO: Confirm exact public availability date for the A2A tool in Dec 2025 \/ Jan 2026. --><\/p>\n<p>The <strong>A2A tool<\/strong> adds inter-agent communication to Foundry agents \u2014 point it at any endpoint that implements the <a href=\"https:\/\/a2a-protocol.org\/latest\/\">A2A protocol<\/a>, and your agent can invoke it as a first-class tool. This is the structured evolution of &#8220;Connected Agents&#8221; in Foundry Classic, with cleaner semantics and explicit authentication options: key-based, OAuth2, or Entra Agent Identity.<\/p>\n<p>The distinction that matters for system design:<\/p>\n<ul>\n<li><strong>A2A tool:<\/strong> Agent A calls Agent B; B&#8217;s answer returns to A; A synthesizes the final user response. Agent A stays in control of the thread.<\/li>\n<li><strong>Multi-agent workflow:<\/strong> Agent B takes full ownership of the thread from the point of handoff \u2014 Agent A is out of the loop.<\/li>\n<\/ul>\n<p>Configure via the Foundry portal (Tools \u2192 Connect tool \u2192 Custom \u2192 Agent2Agent) or in code:<\/p>\n<pre><code class=\"language-python\">from azure.ai.projects.models import A2ATool, PromptAgentDefinition\n\na2a_conn = project_client.connections.get(os.environ[\"A2A_PROJECT_CONNECTION_NAME\"])\nagent = project_client.agents.create_version(\n    agent_name=\"my-agent\",\n    definition=PromptAgentDefinition(\n        model=os.environ[\"FOUNDRY_MODEL_DEPLOYMENT_NAME\"],\n        instructions=\"You are a helpful assistant.\",\n        tools=[A2ATool(project_connection_id=a2a_conn.id)],\n    ),\n)<\/code><\/pre>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/agent-to-agent?utm_source=devblog&amp;utm_medium=blog&amp;utm_campaign=whats-new-dec-2025-jan-2026&amp;utm_content=a2a-tool\" 
target=\"_blank\">A2A&nbsp;Tool&nbsp;Docs<\/a><\/div>\n<hr \/>\n<h2>Tools<\/h2>\n<h3>Computer Use (Preview)<\/h3>\n<p><strong>Computer Use<\/strong> lets Foundry agents visually interact with desktop and browser environments using the <code>computer-use-preview<\/code> model. Instead of calling structured APIs, the agent receives a screenshot and acts \u2014 click, type, scroll, navigate. Use it for UI testing automation, navigating legacy web apps that predate REST APIs, extracting data from visual-only interfaces, and RPA-style workflows where brittle CSS selectors previously dominated.<\/p>\n<p>The .NET SDK (<code>Azure.AI.Agents.Persistent 1.2.0-beta.8<\/code>, December 2025) added first-class Computer Use tool support. Python and TypeScript support is in active development \u2014 track the changelogs.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/agents\/how-to\/tools\/computer-use?utm_source=devblog&amp;utm_medium=blog&amp;utm_campaign=whats-new-dec-2025-jan-2026&amp;utm_content=computer-use\" target=\"_blank\">Computer&nbsp;Use&nbsp;Docs<\/a><\/div>\n<hr \/>\n<h2>Platform<\/h2>\n<h3>Foundry MCP Server (Preview)<\/h3>\n<p>The <strong>Foundry MCP Server<\/strong> is a cloud-hosted, fully managed MCP endpoint at <code>https:\/\/mcp.ai.azure.com<\/code> \u2014 the production successor to the experimental local MCP server shipped at Build 2025. It went live December 3, 2025. No local uptime to manage. 
Connect it from VS Code (<code>mcp.json<\/code>), Visual Studio 2026 Insiders, or add it as a tool connection in the Foundry portal with one click.<\/p>\n<p>Conversational workflows you can drive through it today:<\/p>\n<ul>\n<li><strong>Model operations:<\/strong> Browse the catalog, compare benchmarks, get upgrade recommendations based on capabilities and deprecation schedules, check quota headroom, deploy and deprecate deployments<\/li>\n<li><strong>Agent management:<\/strong> Create, update, and version agents without leaving your editor<\/li>\n<li><strong>Evaluation pipelines:<\/strong> Chain <code>evaluation_dataset_create<\/code> \u2192 <code>evaluation_create<\/code> \u2192 <code>evaluation_comparison_create<\/code> for automated quality loops in a single chat thread<\/li>\n<\/ul>\n<p>Security: Entra ID authentication end-to-end (OBO tokens scoped to <code>https:\/\/mcp.ai.azure.com<\/code>). Every operation runs under the signed-in user&#8217;s Azure RBAC permissions with full audit logging. 
Tenant admins control access via Azure Policy Conditional Access.<\/p>\n<pre><code class=\"language-json\">\/\/ .vscode\/mcp.json\n{\n  \"servers\": {\n    \"foundry-mcp\": { \"type\": \"http\", \"url\": \"https:\/\/mcp.ai.azure.com\" }\n  }\n}<\/code><\/pre>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/devblogs.microsoft.com\/foundry\/announcing-foundry-mcp-server-preview-speeding-up-ai-dev-with-microsoft-foundry\/?utm_source=devblog&amp;utm_medium=blog&amp;utm_campaign=whats-new-dec-2025-jan-2026&amp;utm_content=foundry-mcp-server\" target=\"_blank\">Read&nbsp;Announcement<\/a><\/div>\n<hr \/>\n<h3>Microsoft Foundry for VS Code \u2014 January 2026 Update<\/h3>\n<p>The <strong>Microsoft Foundry extension for VS Code<\/strong> shipped a focused update on January 20, 2026:<\/p>\n<ul>\n<li><strong>Multi-workflow Visualizer:<\/strong> View, navigate, and debug multiple interconnected workflows in a single project panel \u2014 previously limited to one at a time.<\/li>\n<li><strong>Prompt agents in Playground:<\/strong> All prompt agents in your project are now surfaced directly in the Playground for interactive testing. No context-switching.<\/li>\n<li><strong>Open code for any agent type:<\/strong> The extension generates and opens sample code for prompt agents, YAML-based workflows, hosted agents, and Foundry classic agents. 
Drop it straight into your existing project.<\/li>\n<li><strong>Separated v1\/v2 resource view:<\/strong> Classic Foundry resources and new-gen agents display in clearly distinct views, eliminating the common confusion about which generation a resource belongs to.<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/techcommunity.microsoft.com\/blog\/azuredevcommunityblog\/microsoft-foundry-for-vs-code-january-2026-update\/4486132?utm_source=devblog&amp;utm_medium=blog&amp;utm_campaign=whats-new-dec-2025-jan-2026&amp;utm_content=vscode-jan-2026\" target=\"_blank\">Read&nbsp;the&nbsp;Update<\/a><\/div>\n<hr \/>\n<h3>New Foundry Experience at ai.azure.com<\/h3>\n<p><!-- TODO: Verify specific Dec 2025 \/ Jan 2026 capabilities that landed in the new Foundry portal (the \"New Foundry\" toggle at ai.azure.com, and\/or ai.azure.com\/nextgen). Confirm when it became the default experience and what changed from Foundry Classic in this specific period. --><\/p>\n<p>The new unified Foundry portal experience \u2014 available at <a href=\"https:\/\/ai.azure.com\">ai.azure.com<\/a> via the &#8220;New Foundry&#8221; toggle \u2014 introduced a meaningfully different mental model from Foundry Classic:<\/p>\n<ul>\n<li>The <strong>Tools<\/strong> tab is the single entry point for discovering, connecting, and managing agentic integrations: MCP servers, A2A endpoints, Azure AI Search, SharePoint, Fabric, and more \u2014 across more than 1,400 business systems.<\/li>\n<li><strong>Multi-agent workflows<\/strong> are built visually in the portal, distinct from the single-agent flow of Foundry Classic.<\/li>\n<li>The <strong>separated v1\/v2 resource view<\/strong> ensures Classic and new-gen agents don&#8217;t share ambiguous panels.<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" 
href=\"https:\/\/ai.azure.com?utm_source=devblog&amp;utm_medium=blog&amp;utm_campaign=whats-new-dec-2025-jan-2026&amp;utm_content=new-foundry-portal\" target=\"_blank\">Open&nbsp;Microsoft&nbsp;Foundry<\/a><\/div>\n<hr \/>\n<h2>Deprecation Notice<\/h2>\n<h3>AzureML SDK v1 \u2014 End of Life June 30, 2026<\/h3>\n<p>The <strong>Azure Machine Learning SDK v1<\/strong> reaches end of support on <strong>June 30, 2026<\/strong>. After this date, existing workflows may face security risks and breaking changes without active Microsoft support. Note that the AzureML CLI v1 extension already reached end of support on <strong>September 30, 2025<\/strong>. If you&#8217;re still running v1-based training pipelines, the SDK v2 migration guide is the place to start \u2014 v2 brings a significantly improved authoring experience, YAML-first job definitions, and continued investment from the Azure ML team.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/how-to-migrate-from-v1\" target=\"_blank\">Migration&nbsp;Guide<\/a><\/div>\n<hr \/>\n<h2>SDK &amp; Language Changelog (Dec 2025 \u2013 Jan 2026)<\/h2>\n<p>All Microsoft Foundry SDK development is consolidating into a single <code>azure-ai-projects<\/code> package per language. Agents, inference, evaluations, and memory operations that previously lived in separate packages (<code>azure-ai-agents<\/code>, etc.) are unified under the <code>azure-ai-projects<\/code> v2 beta line. All active development happens on preview\/beta branches \u2014 pin accordingly.<\/p>\n<h3>Python<\/h3>\n<p><strong><code>azure-ai-projects<\/code> 2.0.0b3 (2026-01-06)<\/strong><\/p>\n<p>The v2 line is the new canonical SDK for everything Foundry: agents (now built on the OpenAI Responses protocol), evaluations, memory stores, and model inference. 
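<\/p>\n<p>In practice the consolidation means one client object for the whole surface. A sketch, assuming a Foundry project endpoint and Entra sign-in via <code>azure-identity<\/code>; the endpoint string is a placeholder:<\/p>\n<pre><code class=\"language-python\">from azure.identity import DefaultAzureCredential\nfrom azure.ai.projects import AIProjectClient\n\n# Placeholder endpoint: substitute your own Foundry project endpoint\nproject_client = AIProjectClient(\n    endpoint=\"https:\/\/YOUR-RESOURCE.services.ai.azure.com\/api\/projects\/YOUR-PROJECT\",\n    credential=DefaultAzureCredential(),\n)\n\n# Agents, memory stores, evaluators, and insights hang off this one client,\n# and get_openai_client() returns an openai.OpenAI wired to the project endpoint\nopenai_client = project_client.get_openai_client()<\/code><\/pre>\n<p>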
This release bundles <code>openai<\/code> and <code>azure-identity<\/code> as direct dependencies \u2014 no separate installs required.<\/p>\n<p>Key changes across the 2.0.0b1\u2013b3 window:<\/p>\n<ul>\n<li><strong>Agents on Responses protocol:<\/strong> <code>AIProjectClient<\/code> now handles agent ops directly; <code>azure-ai-agents<\/code> dependency dropped.<\/li>\n<li><strong><code>get_openai_client()<\/code><\/strong> now returns an <code>openai.OpenAI<\/code> client pre-configured for your Foundry project endpoint (Responses API).<\/li>\n<li><strong>Class renames:<\/strong> <code>AgentObject<\/code> \u2192 <code>AgentDetails<\/code>, <code>MemoryStoreObject<\/code> \u2192 <code>MemoryStoreDetails<\/code>, <code>AgentVersionObject<\/code> \u2192 <code>AgentVersionDetails<\/code>.<\/li>\n<li><strong>Tracing overhaul:<\/strong> span names, attribute keys, and operation names changed to align with OpenTelemetry <code>gen_ai.*<\/code> conventions (e.g. <code>gen_ai.provider.name<\/code> is now <code>\"microsoft.foundry\"<\/code>).<\/li>\n<li><strong>New operations:<\/strong> <code>.memory_stores<\/code>, <code>.evaluation_rules<\/code>, <code>.evaluators<\/code>, <code>.insights<\/code>, <code>.schedules<\/code> on <code>AIProjectClient<\/code>.<\/li>\n<\/ul>\n<p><strong>Action:<\/strong> Upgrade to <code>azure-ai-projects==2.0.0b3<\/code>. 
Remove any standalone <code>azure-ai-agents<\/code> pins \u2014 agent creation and runs are now first-class methods on <code>AIProjectClient<\/code>.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/github.com\/Azure\/azure-sdk-for-python\/blob\/azure-ai-projects_2.0.0b3\/sdk\/ai\/azure-ai-projects\/CHANGELOG.md\" target=\"_blank\">Python&nbsp;Changelog<\/a><\/div>\n<p><strong><code>azure-ai-evaluation<\/code> 1.14.0 (2026-01-05)<\/strong><\/p>\n<p>Evaluation still ships as a standalone package while consolidation completes \u2014 expect this to merge into <code>azure-ai-projects<\/code> in a future beta. 1.14.0 is primarily a bug-fix release: corrected binary scoring for <code>CodeVulnerability<\/code> and <code>UngroundedAttributes<\/code> evaluators in the RedTeam scanner, and fixed <code>GroundednessEvaluator<\/code> not honoring <code>is_reasoning_model<\/code> when the <code>query<\/code> parameter was supplied.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/github.com\/Azure\/azure-sdk-for-python\/blob\/azure-ai-evaluation_1.14.0\/sdk\/evaluation\/azure-ai-evaluation\/CHANGELOG.md\" target=\"_blank\">Evaluation&nbsp;Changelog<\/a><\/div>\n<h3>.NET<\/h3>\n<p><strong><code>Azure.AI.Agents.Persistent<\/code> 1.2.0-beta.8 (2025-12-01)<\/strong><\/p>\n<p>Added first-class <strong>Computer Use<\/strong> support for agents, letting you wire up <code>computer-use-preview<\/code> model runs directly from the persistent agents client. 
<code>PersistentAgentsChatClient<\/code> got improved error handling for incomplete-state streaming runs.<\/p>\n<p>Breaking: none in this release.\nAction: Pin <code>Azure.AI.Agents.Persistent<\/code> to <code>1.2.0-beta.8<\/code> to get Computer Use.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/github.com\/Azure\/azure-sdk-for-net\/blob\/Azure.AI.Agents.Persistent_1.2.0-beta.8\/sdk\/ai\/Azure.AI.Agents.Persistent\/CHANGELOG.md\" target=\"_blank\">Agents&nbsp;Changelog<\/a><\/div>\n<p><strong><code>Azure.AI.Projects<\/code> 1.2.0-beta.5 (2025-12-12)<\/strong><\/p>\n<p>Updated for transitive compatibility with <strong>OpenAI 2.8.0<\/strong>, including substantial changes to the <code>[Experimental]<\/code> Responses API surface. Also fixes file uploading for fine-tuning jobs. The <code>1.2.0-beta.1<\/code> entry (November) is also worth noting if you haven&#8217;t upgraded \u2014 it introduced the full Microsoft Foundry Agents Service feature set, memory, evaluations, red teaming, schedules, and insights on <code>AIProjectClient<\/code>.<\/p>\n<p>Breaking: Responses API surface changed with <code>OpenAI 2.8.0<\/code> compatibility update \u2014 review your <code>[Experimental]<\/code> Responses code paths.\nAction: Upgrade to <code>Azure.AI.Projects 1.2.0-beta.5<\/code>.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/github.com\/Azure\/azure-sdk-for-net\/blob\/Azure.AI.Projects_1.2.0-beta.5\/sdk\/ai\/Azure.AI.Projects\/CHANGELOG.md\" target=\"_blank\">Projects&nbsp;Changelog<\/a><\/div>\n<h3>JavaScript \/ TypeScript<\/h3>\n<p><strong><code>@azure\/ai-projects<\/code> 2.0.0-beta.2 \u2192 2.0.0-beta.4 (Dec 2025 \u2013 Jan 2026)<\/strong><\/p>\n<p>Three betas landed in quick succession \u2014 the highlights:<\/p>\n<ul>\n<li><strong>2.0.0-beta.2<\/strong> (2025-12-02): Re-added <code>project.telemetry<\/code> route to restore access to Application Insights connection string (removed in 
beta.1).<\/li>\n<li><strong>2.0.0-beta.3<\/strong> (2026-01-09): Fixed response JSON schema deserializer bug.<\/li>\n<li><strong>2.0.0-beta.4<\/strong> (2026-01-29): <strong>Major class renames<\/strong> to align with OpenAI naming conventions \u2014 GA tools now use a <code>Tool<\/code> suffix; preview tools use <code>PreviewTool<\/code>. Key renames: <code>AzureAISearchAgentTool<\/code> \u2192 <code>AzureAISearchTool<\/code>, <code>BrowserAutomationAgentTool<\/code> \u2192 <code>BrowserAutomationPreviewTool<\/code>, <code>A2ATool<\/code> \u2192 <code>A2APreviewTool<\/code>, <code>SharepointAgentTool<\/code> \u2192 <code>SharepointPreviewTool<\/code>, <code>MicrosoftFabricAgentTool<\/code> \u2192 <code>MicrosoftFabricPreviewTool<\/code>.<\/li>\n<\/ul>\n<p>Breaking: The <code>2.0.0-beta.4<\/code> class renames are breaking. If you reference any <code>*AgentTool<\/code> class, update to the new suffixed name.\nAction: Upgrade to <code>@azure\/ai-projects@2.0.0-beta.4<\/code> and search your codebase for the renamed classes. This mirrors the same rename convention coming to the Python <code>2.0.0b4<\/code> release.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/github.com\/Azure\/azure-sdk-for-js\/blob\/main\/sdk\/ai\/ai-projects\/CHANGELOG.md\" target=\"_blank\">JS\/TS&nbsp;Changelog<\/a><\/div>\n<hr \/>\n<h2>Stay Connected<\/h2>\n<p>Plenty more is in flight \u2014 the February edition will land on a much shorter timeline. 
In the meantime, explore any of these models directly in the <a href=\"https:\/\/ai.azure.com\/catalog\/models\">Microsoft Foundry model catalog<\/a> or join the developer community to share what you&#8217;re building.<\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-primary\" href=\"https:\/\/aka.ms\/foundrydevs\" target=\"_blank\">Join&nbsp;the&nbsp;Foundry&nbsp;Community<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Catch up on all the new models, audio updates, fine-tuning expansions, and SDK updates from Microsoft Foundry spanning December 2025 and January 2026 \u2014 including GPT-5.2, Codex Max, DeepSeek V3.2, FLUX.2, and the azure-ai-projects v2 beta consolidation.<\/p>\n","protected":false},"author":185793,"featured_media":2052,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,27],"tags":[25,105,102,71,106,9,2,103,104],"class_list":["post-2014","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-microsoft-foundry","category-whats-new","tag-agents","tag-audio","tag-azure-ai-foundry","tag-fine-tuning","tag-image-generation","tag-mcp","tag-microsoft-foundry","tag-models","tag-sdk"],"acf":[],"blog_post_summary":"<p>Catch up on all the new models, audio updates, fine-tuning expansions, and SDK updates from Microsoft Foundry spanning December 2025 and January 2026 \u2014 including GPT-5.2, Codex Max, DeepSeek V3.2, FLUX.2, and the azure-ai-projects v2 beta 
consolidation.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/2014","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/users\/185793"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/comments?post=2014"}],"version-history":[{"count":1,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/2014\/revisions"}],"predecessor-version":[{"id":2051,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/2014\/revisions\/2051"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/media\/2052"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/media?parent=2014"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/categories?post=2014"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/tags?post=2014"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}