{"id":2215,"date":"2026-05-12T00:13:15","date_gmt":"2026-05-12T07:13:15","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/foundry\/?p=2215"},"modified":"2026-05-12T00:13:15","modified_gmt":"2026-05-12T07:13:15","slug":"whats-new-in-microsoft-foundry-apr-2026","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/foundry\/whats-new-in-microsoft-foundry-apr-2026\/","title":{"rendered":"What&#8217;s new in Microsoft Foundry | April 2026"},"content":{"rendered":"<h2>TL;DR<\/h2>\n<ul>\n<li><strong>Foundry Local (generally available, GA):<\/strong> Local model inference is production-ready on Windows, macOS on Apple Silicon, and Linux x64.<\/li>\n<li><strong>GPT-5.5:<\/strong> The latest GPT-5 family model is available in Microsoft Foundry, with default quota for Tier 5 and Tier 6 subscriptions.<\/li>\n<li><strong>Microsoft Agent Framework tracing (Preview):<\/strong> Agent Framework agents can emit OpenTelemetry traces into Foundry for debugging and production observability.<\/li>\n<li><strong>Hosted-agent tracing (Preview):<\/strong> Hosted-agent sessions, tool calls, and run steps can now surface in Foundry traces.<\/li>\n<li><strong>CodeAct with Hyperlight (alpha):<\/strong> Agent Framework adds sandboxed Python code execution in Hyperlight micro-virtual machines for low-risk tool chains.<\/li>\n<li><strong>Continuous evaluation custom evaluators (Preview):<\/strong> Bring code-based or prompt-based evaluators into continuous evaluation.<\/li>\n<li><strong>Agent Monitoring Dashboard (Preview):<\/strong> Track operational metrics and evaluation results together, including token usage, latency, run success rate, and evaluator scores.<\/li>\n<li><strong>Agent inventory in Foundry Control Plane:<\/strong> Find supported agents across a subscription from the Operate view, including Foundry agents, Azure Site Reliability Engineering (SRE) Agent, Logic Apps agent loops, and registered custom agents.<\/li>\n<li><strong>SDK &amp; language 
updates:<\/strong> Python and JavaScript\/TypeScript add beta agents, skills, and toolboxes routes; .NET reaches the 2.0 GA line; Java fixes streaming behavior.<\/li>\n<li><strong>Microsoft Build:<\/strong> Register for Microsoft Build and save Microsoft Foundry sessions to watch online.<\/li>\n<\/ul>\n<div class=\"d-flex\"><a class=\"cta_button_link\" href=\"https:\/\/build.microsoft.com\/\" target=\"_blank\" rel=\"noopener\">Register for Microsoft Build<\/a><\/div>\n<p>Looking for Microsoft Foundry sessions to watch online? Start with these Microsoft Build breakout sessions. Times are shown in Pacific time; check the session page for the latest schedule.<\/p>\n<table>\n<thead>\n<tr>\n<th>Session<\/th>\n<th>Date\/time<\/th>\n<th>Speaker(s)<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK230\">Confident model selection and integration with Microsoft Foundry (BRK230)<\/a><\/td>\n<td>June 2, 12:30-1:15 PM PT<\/td>\n<td>Yina Arenas, Naomi Moneypenny<\/td>\n<td>Choose, integrate, and validate AI models in Microsoft Foundry, including benchmarking and integrated developer workflows.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK250\">Govern open-source AI agents, any framework, any scale (BRK250)<\/a><\/td>\n<td>June 2, 2:30-3:15 PM PT<\/td>\n<td>Sarah Bird, Mehrnoosh Sameki<\/td>\n<td>Learn governance patterns for Microsoft Agent Framework and open-source agent stacks, including evaluations and risk controls.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK241\">From prototype to production: build and run agents at scale (BRK241)<\/a><\/td>\n<td>June 2, 3:45-4:30 PM PT<\/td>\n<td>Tina Schuchman, Jeff Hollan<\/td>\n<td>Walk through the lifecycle for production-grade agents with Foundry Agent Service and Microsoft Agent Framework.<\/td>\n<\/tr>\n<tr>\n<td><a 
href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK252\">From observability to ROI for AI agents on any framework (BRK252)<\/a><\/td>\n<td>June 2, 3:45-4:30 PM PT<\/td>\n<td>Sebastian Kohlmeier, Filisha Shah<\/td>\n<td>Cover cross-framework tracing, evaluations, production observability, and ROI measurement for AI agents.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRKSP94\">Orchestrate special agents with Nemotron models on Microsoft AI Foundry (BRKSP94)<\/a><\/td>\n<td>June 2, 3:45-4:30 PM PT<\/td>\n<td>Stephen McCullough<\/td>\n<td>Route tasks across frontier models, NVIDIA Nemotron, and local models for tiered agentic AI architectures.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK231\">Deploy. Observe. Learn. Reinforcement learning for production agents (BRK231)<\/a><\/td>\n<td>June 2, 5:00-5:45 PM PT<\/td>\n<td>Alicia Frame, Omkar More<\/td>\n<td>Use fine-tuning and reinforcement learning on Microsoft Foundry to improve production agents with real usage signals.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK240\">Build context-aware agents at scale with Microsoft IQ (BRK240)<\/a><\/td>\n<td>June 2, 5:00-5:45 PM PT<\/td>\n<td>Marco Casalaina<\/td>\n<td>Learn how Foundry IQ, Fabric IQ, and Work IQ provide an enterprise intelligence layer for AI agents.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK246\">Context engineering for agents: connect agents with enterprise knowledge (BRK246)<\/a><\/td>\n<td>June 3, 9:00-9:45 AM PT<\/td>\n<td>Pablo Castro<\/td>\n<td>Explore Foundry IQ, Azure AI Search, knowledge sources, agentic retrieval-augmented generation (RAG), and enterprise security.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK235\">Local models, developer control, and the future of AI runtimes (BRK235)<\/a><\/td>\n<td>June 3, 10:15-11:00 AM 
PT<\/td>\n<td>Parth Sareen<\/td>\n<td>Learn how local and hybrid model execution can reshape developer workflows, privacy, and experimentation.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK243\">Claw and agent harness in Microsoft Foundry (BRK243)<\/a><\/td>\n<td>June 3, 11:30 AM-12:15 PM PT<\/td>\n<td>Glenn Condron, Amanda Foster, Shawn Henry<\/td>\n<td>Go deep on multi-agent systems, Claw agent patterns, hosted agents architecture, triggers, state management, and file access.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK251\">Build secure and enterprise-ready agents with Agent 365 (BRK251)<\/a><\/td>\n<td>June 3, 11:30 AM-12:15 PM PT<\/td>\n<td>Neta Haiby<\/td>\n<td>Build enterprise-ready agents with runtime visibility, identity-aware access, data protection, and policy-based governance.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRKSP92\">Build distributed agentic apps from edge to cloud (BRKSP92)<\/a><\/td>\n<td>June 3, 11:30 AM-12:15 PM PT<\/td>\n<td>Colin Helms, Eddy Rodriguez<\/td>\n<td>Design and run multi-agent applications across client, edge, and Azure environments.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK232\">Train and deploy custom OSS reasoning models with Foundry (BRK232)<\/a><\/td>\n<td>June 3, 2:45-3:30 PM PT<\/td>\n<td>Vijay Aski, Manoj Bableshwar, Chris Lauren<\/td>\n<td>Train and tune open-source reasoning models in Microsoft Foundry with code-first workflows and curated reinforcement learning environments.<\/td>\n<\/tr>\n<tr>\n<td><a href=\"https:\/\/build.microsoft.com\/en-US\/sessions\/BRK242\">Turn your agents into action: connect tools, APIs, and data (BRK242)<\/a><\/td>\n<td>June 3, 4:00-4:45 PM PT<\/td>\n<td>Ronak Chokshi, Joe Filcik, Maria Naggaga<\/td>\n<td>See how to connect agents with toolsets, application programming interfaces (APIs), and data without overloading 
context windows.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>Want the full online breakout catalogs? Browse <a href=\"https:\/\/build.microsoft.com\/en-US\/sessions?filter=topic%2FlogicalValue%3EAgents+%26+apps&amp;filter=deliveryTypes%2FlogicalValue%3EOnline&amp;filter=sessionType%2FlogicalValue%3EBreakout&amp;pageSize=96\">Agents &amp; apps<\/a>, <a href=\"https:\/\/build.microsoft.com\/en-US\/sessions?filter=deliveryTypes%2FlogicalValue%3EOnline&amp;filter=sessionType%2FlogicalValue%3EBreakout&amp;filter=topic%2FlogicalValue%3EResponsible+AI&amp;pageSize=96\">Responsible AI<\/a>, and <a href=\"https:\/\/build.microsoft.com\/en-US\/sessions?filter=deliveryTypes%2FlogicalValue%3EOnline&amp;filter=sessionType%2FlogicalValue%3EBreakout&amp;filter=topic%2FlogicalValue%3EWorking+with+models&amp;pageSize=96\">Working with models<\/a>.<\/p>\n<h2>Join the community<\/h2>\n<p>Connect with 50,000+ developers on <a href=\"https:\/\/aka.ms\/foundry\/discord\">Discord<\/a>, ask questions in <a href=\"https:\/\/aka.ms\/foundry\/forum\">GitHub Discussions<\/a>, or <a href=\"https:\/\/devblogs.microsoft.com\/foundry\/category\/whats-new\/feed\/\">subscribe via RSS<\/a> to get this digest monthly.<\/p>\n<hr \/>\n<h2>Models<\/h2>\n<h3>GPT-5.5<\/h3>\n<p><strong>GPT-5.5<\/strong> is now part of the Microsoft Foundry model lineup, but it is not broadly available by default. It has default quota only for Tier 5 and Tier 6 subscriptions. Tiers 1 through 4 currently show 0 requests per minute (RPM) and 0 tokens per minute (TPM) for GPT-5.5, so teams below Tier 5 should request quota before planning a deployment.<\/p>\n<p>We know that can be frustrating if you are ready to test today. We plan to make GPT-5.5 available to more tiers as soon as demand and capacity allow. Thanks for being patient with us while we expand access responsibly.<\/p>\n<p>Regional availability includes Global Standard deployments in East US 2, Sweden Central, South Central US, and Poland Central. 
Data Zone Standard deployments are available in those same regions.<\/p>\n<table>\n<thead>\n<tr>\n<th>Quota tier<\/th>\n<th>GPT-5.5 Data Zone Standard<\/th>\n<th>GPT-5.5 Global Standard<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Tier 5<\/td>\n<td>3,000 RPM \/ 3,000,000 TPM<\/td>\n<td>10,000 RPM \/ 10,000,000 TPM<\/td>\n<\/tr>\n<tr>\n<td>Tier 6<\/td>\n<td>4,000 RPM \/ 4,000,000 TPM<\/td>\n<td>15,000 RPM \/ 15,000,000 TPM<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>To check your subscription tier, use the <a href=\"https:\/\/learn.microsoft.com\/en-us\/rest\/api\/aifoundry\/accountmanagement\/quota-tiers\/get?view=rest-aifoundry-accountmanagement-2025-10-01-preview&amp;tabs=HTTP&amp;preserve-view=true\">Microsoft Cognitive Services quota tiers control plane API<\/a> and look for <code>properties.currentTierName<\/code> in the response. If you&#8217;re signed in with the Azure CLI, this command returns the current tier for a subscription:<\/p>\n<pre><code class=\"language-bash\">az rest --method get --url \"https:\/\/management.azure.com\/subscriptions\/&lt;your-subscription-id&gt;\/providers\/Microsoft.CognitiveServices\/quotaTiers?api-version=2025-10-01-preview\" --query \"value[0].properties.currentTierName\" --output tsv<\/code><\/pre>\n<p>Example output:<\/p>\n<pre><code class=\"language-text\">Tier 2<\/code><\/pre>\n<p>In this example, the subscription is below Tier 5, so GPT-5.5 would need a quota request before deployment.<\/p>\n<p>Developers can use GPT-5.5 through the Responses API or Chat Completions API, with support for structured outputs, text and image inputs, functions, tools, parallel tool calling, computer use, and reasoning.<\/p>\n<blockquote><p><strong>Action:<\/strong> Check your subscription tier, region, and quota before you move traffic to GPT-5.5. 
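If you script this check in a deployment pipeline, the tier rule above reduces to a tiny helper. This is a sketch under the assumption that the quota tiers API returns tier names shaped like "Tier 2", as in the example output above; the `needs_quota_request` helper is ours, not part of any SDK:

```python
def needs_quota_request(current_tier_name: str) -> bool:
    """Return True when a subscription needs a GPT-5.5 quota request.

    Tiers 1-4 currently show 0 RPM / 0 TPM default quota for GPT-5.5,
    so only Tier 5 and Tier 6 can deploy without a request.
    Assumes tier names shaped like "Tier 2" (hypothetical helper).
    """
    tier = int(current_tier_name.split()[-1])
    return tier < 5

print(needs_quota_request("Tier 2"))  # True: quota request required first
print(needs_quota_request("Tier 5"))  # False: default quota available
```

A pipeline could gate a GPT-5.5 deployment step on this check instead of failing later at deployment time.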
If your subscription is below Tier 5, submit a quota request first.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/azure\/foundry\/openai\/quotas-limits#quota-tiers\" target=\"_blank\" rel=\"noopener\">Check Quota Tiers<\/a><\/div>\n<hr \/>\n<h2>Agents<\/h2>\n<h3>Foundry Local (GA)<\/h3>\n<p><strong>Foundry Local<\/strong> is generally available for building AI features that run on-device, without a cloud dependency in the request path. It supports Windows, macOS on Apple Silicon, and Linux x64, with software development kits (SDKs) for Python, JavaScript, C#, and Rust.<\/p>\n<p>For agent builders, the value is simple: prototype locally, keep latency low, and ship offline-capable experiences where user data stays on the device.<\/p>\n<p>After installing <code>foundry-local-sdk<\/code> (<code>foundry-local-sdk-winml<\/code> on Windows for Windows ML acceleration), you can run a local chat completion with a small catalog model:<\/p>\n<pre><code class=\"language-python\">from foundry_local_sdk import Configuration, FoundryLocalManager\r\n\r\nconfig = Configuration(app_name=\"foundry_local_quickstart\")\r\nFoundryLocalManager.initialize(config)\r\nmanager = FoundryLocalManager.instance\r\n\r\nmodel = manager.catalog.get_model(\"qwen2.5-0.5b\")\r\nmodel.download()\r\nmodel.load()\r\n\r\ntry:\r\n    client = model.get_chat_client()\r\n    response = client.complete_chat(\r\n        [{\"role\": \"user\", \"content\": \"Write one sentence about local AI.\"}]\r\n    )\r\n    print(response.choices[0].message.content)\r\nfinally:\r\n    model.unload()<\/code><\/pre>\n<p>Example output:<\/p>\n<pre><code class=\"language-text\">Local AI refers to machine learning models and algorithms that can run on devices within the same physical location as they were trained.<\/code><\/pre>\n<blockquote><p><strong>Action:<\/strong> Try Foundry Local if your agent needs offline execution, local data handling, or a 
fast local development loop before moving workloads to Microsoft Foundry in the cloud.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/devblogs.microsoft.com\/foundry\/foundry-local-ga\/\" target=\"_blank\" rel=\"noopener\">Read the GA Announcement<\/a><\/div>\n<h3>Microsoft Agent Framework tracing (Preview)<\/h3>\n<p><strong>Microsoft Agent Framework tracing<\/strong> is now available in preview for Python agents. It emits OpenTelemetry (OTel) spans for the agent run, model calls, tool execution, token usage, and latency. Input\/output payloads are included only when you explicitly enable sensitive-data capture, which is intended for safe development environments.<\/p>\n<p>This gives developers a practical debugging loop for agent apps: run a scenario, copy the trace ID, and inspect which model answered, which tool ran, how long each step took, how many tokens were used, and what payload moved through the run.<\/p>\n<p>Install the Agent Framework Foundry package and Azure Monitor OpenTelemetry support:<\/p>\n<pre><code class=\"language-bash\">pip install agent-framework-foundry azure-identity azure-monitor-opentelemetry aiohttp pydantic<\/code><\/pre>\n<p>Then run a minimal weather agent. 
Set the tracing flags before importing Agent Framework, connect to your Foundry project, and let <code>FoundryChatClient.configure_azure_monitor()<\/code> send telemetry to the Application Insights resource connected to that project:<\/p>\n<pre><code class=\"language-python\">import asyncio\r\nimport os\r\nfrom typing import Annotated\r\n\r\n# Enable GenAI semantic tracing while the capability is experimental.\r\nos.environ.setdefault(\"AZURE_EXPERIMENTAL_ENABLE_GENAI_TRACING\", \"true\")\r\nos.environ.setdefault(\"ENABLE_INSTRUMENTATION\", \"true\")\r\nos.environ.setdefault(\"ENABLE_SENSITIVE_DATA\", \"true\")\r\nos.environ.setdefault(\"OTEL_SERVICE_NAME\", \"weather-agent-demo\")\r\n\r\nfrom agent_framework import Agent, tool\r\nfrom agent_framework.foundry import FoundryChatClient\r\nfrom agent_framework.observability import get_tracer\r\nfrom azure.identity import AzureCliCredential\r\nfrom opentelemetry.trace import SpanKind\r\nfrom opentelemetry.trace.span import format_trace_id\r\nfrom pydantic import Field\r\n\r\n@tool(approval_mode=\"never_require\")\r\nasync def get_weather(\r\n    location: Annotated[str, Field(description=\"The city or region to get weather for.\")],\r\n) -&gt; str:\r\n    \"\"\"Get the current weather for a location.\"\"\"\r\n    await asyncio.sleep(0.2)\r\n    return f\"The weather in {location} is sunny with a high of 22C.\"\r\n\r\nasync def main():\r\n    client = FoundryChatClient(\r\n        project_endpoint=os.environ[\"FOUNDRY_PROJECT_ENDPOINT\"],\r\n        model=os.environ[\"FOUNDRY_MODEL\"],\r\n        credential=AzureCliCredential(),\r\n    )\r\n    try:\r\n        await client.configure_azure_monitor(enable_sensitive_data=True)\r\n\r\n        agent = Agent(\r\n            client=client,\r\n            tools=[get_weather],\r\n            name=\"WeatherAgent\",\r\n            id=\"weather-agent\",\r\n            default_options={\r\n                \"tool_choice\": \"required\",\r\n                \"reasoning\": {\"effort\": 
\"low\", \"summary\": \"auto\"},\r\n            },\r\n            instructions=(\r\n                \"You are a weather assistant. For every weather question, call the \"\r\n                \"get_weather tool before answering. Do not guess or use memorized weather.\"\r\n            ),\r\n        )\r\n\r\n        with get_tracer().start_as_current_span(\"Weather Agent Chat\", kind=SpanKind.CLIENT) as span:\r\n            print(f\"Trace ID: {format_trace_id(span.get_span_context().trace_id)}\")\r\n            session = agent.create_session()\r\n            result = await agent.run(\"What's the weather in Amsterdam?\", session=session)\r\n            print(result)\r\n    finally:\r\n        await client.project_client.close()\r\n        await client.client.close()\r\n\r\nif __name__ == \"__main__\":\r\n    asyncio.run(main())<\/code><\/pre>\n<p>A run like this emits the expected Agent Framework span tree: <code>invoke_agent<\/code>, <code>chat<\/code>, and <code>execute_tool<\/code>, with token counts, durations, tool arguments, tool output, and final assistant response. 
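To make that span-tree shape concrete, here is a framework-free sketch of summarizing such a tree after export. The span names mirror the ones above, but the record layout, attribute keys, and the `summarize_trace` helper are illustrative assumptions, not the exact OTel export format:

```python
# Hypothetical exported spans from one agent run. The span names
# (invoke_agent, chat, execute_tool) match the tree described above;
# the attribute keys here are illustrative, not exact OTel keys.
spans = [
    {"id": 1, "parent": None, "name": "invoke_agent", "duration_ms": 910},
    {"id": 2, "parent": 1, "name": "chat", "duration_ms": 450,
     "attributes": {"gen_ai.usage.input_tokens": 180,
                    "gen_ai.usage.output_tokens": 42}},
    {"id": 3, "parent": 1, "name": "execute_tool", "duration_ms": 210,
     "attributes": {"tool.name": "get_weather"}},
]

def summarize_trace(spans):
    """Total token usage plus per-span durations for one trace."""
    tokens = sum(
        value for span in spans
        for key, value in span.get("attributes", {}).items()
        if key.startswith("gen_ai.usage.")
    )
    durations = {span["name"]: span["duration_ms"] for span in spans}
    return {"total_tokens": tokens, "durations_ms": durations}

print(summarize_trace(spans))
# {'total_tokens': 222, 'durations_ms': {'invoke_agent': 910, 'chat': 450, 'execute_tool': 210}}
```

The Foundry trace view does this aggregation for you; the point of the sketch is only what information each span carries.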
Keep <code>enable_sensitive_data=False<\/code> for routine observability, and turn it on only for temporary debugging with non-sensitive inputs.<\/p>\n<p>In the Foundry portal, the trace view makes it easy to scan the agent invocation, tool execution, model call, user input, and assistant output in one place.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2026\/05\/foundry-weatheragent-trace.webp\" alt=\"Microsoft Foundry Traces tab showing the WeatherAgent version 1 span tree with invoke_agent, execute_tool, and chat spans plus user input and assistant output\" \/><\/p>\n<blockquote><p><strong>Action:<\/strong> Connect Application Insights to your Foundry project, enable Agent Framework observability, and use the trace ID to debug model calls, tool calls, tokens, latency, and failures.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/agent-framework\/agents\/observability?pivots=programming-language-python\" target=\"_blank\" rel=\"noopener\">Get Started with Agent Framework Tracing<\/a><\/div>\n<h3>Hosted-agent tracing (Preview)<\/h3>\n<p><strong>Hosted-agent tracing<\/strong> brings server-side trace visibility to hosted agents when tracing is connected to your project. You can inspect recent runs, review conversation details, and see the ordered actions, run steps, and tool calls behind a response.<\/p>\n<p>Tracing is generally available for prompt agents; workflow, hosted, and custom agents are in preview. 
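Redaction is worth doing before telemetry ever leaves the process. A minimal, regex-based sketch (the `redact` helper and its two patterns are illustrative; production redaction needs a much fuller pattern set):

```python
import re

# Illustrative redaction pass; extend PATTERNS for your own data shapes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask known sensitive patterns before text reaches a span attribute."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}-redacted>", text)
    return text

print(redact("Contact alice@example.com about SSN 123-45-6789"))
# Contact <email-redacted> about SSN <ssn-redacted>
```

Running user input and tool output through a pass like this before span attributes are set keeps debugging useful without leaking identifiers into telemetry.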
Treat trace data like production telemetry: redact sensitive content before it reaches spans, prompts, tool arguments, or logs.<\/p>\n<blockquote><p><strong>Action:<\/strong> Connect Application Insights to your Foundry project, generate hosted-agent traffic, and use the Traces view to debug tool behavior and latency.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/observability\/how-to\/trace-agent-setup\" target=\"_blank\" rel=\"noopener\">Configure Agent Tracing<\/a><\/div>\n<h3>CodeAct with Hyperlight (alpha)<\/h3>\n<p><strong>CodeAct with Hyperlight<\/strong> is available as an alpha package for Microsoft Agent Framework. It lets an agent collapse multi-step tool plans into a single sandboxed Python code block, then run that generated code in an isolated Hyperlight micro-virtual machine.<\/p>\n<p>The best fit is read-heavy, chainable work: data lookups, light computation, report assembly, and tasks where several small tool calls can be composed safely. Keep side-effecting tools, such as sending email or writing to production systems, approval-gated as direct tools.<\/p>\n<blockquote><p><strong>Action:<\/strong> Use CodeAct for low-risk tool chains where reducing model round trips matters, and keep human approval around tools with side effects.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/devblogs.microsoft.com\/agent-framework\/codeact-with-hyperlight\/\" target=\"_blank\" rel=\"noopener\">Explore CodeAct with Hyperlight<\/a><\/div>\n<h3>Agent inventory in Foundry Control Plane<\/h3>\n<p><strong>Foundry Control Plane<\/strong> gives teams a subscription-level inventory for supported agents across projects and platforms. 
The <strong>Operate &gt; Assets &gt; Agents<\/strong> view helps you find agents, check status, review versions, inspect runs and error rates when observability is configured, and perform supported lifecycle operations.<\/p>\n<p>Supported discovery includes Foundry agents, Azure Site Reliability Engineering (SRE) Agent, Azure Logic Apps agent loops, and registered custom agents. Logic Apps agent loops appear in the inventory, but observability features such as traces and metrics are not supported for those loops.<\/p>\n<blockquote><p><strong>Action:<\/strong> Use the Agents inventory to find agent assets across a subscription before you troubleshoot, stop, block, or register agents.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/control-plane\/how-to-manage-agents\" target=\"_blank\" rel=\"noopener\">Manage Agents in Control Plane<\/a><\/div>\n<hr \/>\n<h2>Evaluations &amp; Observability<\/h2>\n<h3>Continuous evaluation custom evaluators (Preview)<\/h3>\n<p><strong>Continuous evaluation<\/strong> now supports custom evaluators, so production quality checks can match the way your application actually works. 
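In spirit, a code-based evaluator is just a deterministic scoring function over an agent response. A hypothetical sketch of a format check (the `{"score": ..., "reason": ...}` contract shown is illustrative, not the exact Foundry evaluator interface):

```python
import json

def json_format_evaluator(response_text: str) -> dict:
    """Hypothetical code-based evaluator: score 1.0 when the agent
    returned valid JSON containing the required keys, else 0.0."""
    required_keys = {"answer", "sources"}
    try:
        payload = json.loads(response_text)
    except json.JSONDecodeError:
        return {"score": 0.0, "reason": "response is not valid JSON"}
    missing = required_keys - payload.keys()
    if missing:
        return {"score": 0.0, "reason": f"missing keys: {sorted(missing)}"}
    return {"score": 1.0, "reason": "valid JSON with required keys"}

print(json_format_evaluator('{"answer": "42", "sources": []}')["score"])  # 1.0
print(json_format_evaluator("not json")["score"])  # 0.0
```

Deterministic checks like this are cheap to run on every sampled production response, which is exactly where continuous evaluation applies them.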
Add code-based evaluators for deterministic checks like format validation, or prompt-based evaluators for subjective checks like tone, helpfulness, and domain-specific answer quality.<\/p>\n<blockquote><p><strong>Action:<\/strong> Move one team-specific acceptance criterion into a custom evaluator, then add it to continuous evaluation from the agent monitoring settings.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/observability\/how-to\/how-to-monitor-agents-dashboard#use-custom-evaluators-for-continuous-evaluations\" target=\"_blank\" rel=\"noopener\">Use Custom Evaluators<\/a><\/div>\n<h3>Agent Monitoring Dashboard (Preview)<\/h3>\n<p><strong>Agent Monitoring Dashboard<\/strong> brings operational metrics and evaluation results into one view: token usage, latency, run success rate, evaluator scores, and red-teaming results when enabled. This makes quality drift easier to spot alongside the runtime signals your team already watches.<\/p>\n<blockquote><p><strong>Action:<\/strong> Connect Application Insights to your Foundry project, open your agent&#8217;s monitoring view, and review evaluation scores next to latency and success-rate trends.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/observability\/how-to\/how-to-monitor-agents-dashboard\" target=\"_blank\" rel=\"noopener\">Explore Agent Monitoring<\/a><\/div>\n<h3>Monitoring custom agents (Preview)<\/h3>\n<p><strong>Custom agent monitoring<\/strong> lets Foundry centralize observability for agents that do not run directly on the platform. Register a custom agent through Foundry Control Plane, route it through AI Gateway, and send OTel traces to the same Application Insights resource used by your Foundry project. 
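As an offline illustration of the kind of signal this unlocks, here is a sketch of computing an error rate from run records. The records and field names are hypothetical; in practice they would be derived from the OTel traces your agent emits:

```python
# Hypothetical run records, as might be derived from exported traces.
runs = [
    {"run_id": "r1", "status": "succeeded", "latency_ms": 820},
    {"run_id": "r2", "status": "failed", "latency_ms": 150},
    {"run_id": "r3", "status": "succeeded", "latency_ms": 940},
    {"run_id": "r4", "status": "succeeded", "latency_ms": 610},
]

def error_rate(runs) -> float:
    """Fraction of runs that did not succeed."""
    if not runs:
        return 0.0
    failed = sum(1 for run in runs if run["status"] != "succeeded")
    return failed / len(runs)

print(f"{error_rate(runs):.0%}")  # 25%
```

Foundry computes this for registered custom agents once traces flow in; the sketch only shows what the metric means.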
From there, you can monitor metrics like error rate and configure continuous evaluations for production traffic.<\/p>\n<blockquote><p><strong>Action:<\/strong> If you have a LangGraph, Agent-to-Agent (A2A), or HTTP-based agent running outside Foundry, register it as a custom agent and instrument it with OTel semantic conventions.<\/p><\/blockquote>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/control-plane\/register-custom-agent\" target=\"_blank\" rel=\"noopener\">Register a Custom Agent<\/a><\/div>\n<hr \/>\n<h2>SDK &amp; Language Changelog (April 2026)<\/h2>\n<p>After March&#8217;s GA wave, April was about rounding out the day-to-day developer experience. The SDKs are starting to expose more of the hosted-agent lifecycle directly from code: sessions, session files, skills, toolboxes, typed evaluation inputs, and safer streaming behavior.<\/p>\n<h3>Python<\/h3>\n<p><strong><code>azure-ai-projects<\/code> 2.1.0 (Apr 20)<\/strong><\/p>\n<p>If you&#8217;re building hosted agents in Python, this is the release to look at. The <code>azure-ai-projects<\/code> 2.1.0 package adds more of the hosted-agent management surface under the preview <code>.beta<\/code> namespace, so you can script the same workflows you test in the portal.<\/p>\n<p>The most useful change is that <code>get_openai_client(agent_name=...)<\/code> can now return an OpenAI client scoped to an agent endpoint when your <code>AIProjectClient<\/code> is created with <code>allow_preview=True<\/code>. That makes it easier to keep application code on the familiar OpenAI client shape while routing requests through a specific agent.<\/p>\n<p>The release also adds <code>project_client.beta.agents<\/code> operations for hosted-agent sessions and session files, plus <code>patch_agent_details()<\/code> for updating agent metadata. 
New <code>project_client.beta.skills<\/code> and <code>project_client.beta.toolboxes<\/code> clients bring CRUD-style operations for packaged skills and toolboxes, and evaluation authors get TypedDict helpers for <code>.evals.create()<\/code> and <code>.evals.runs.create()<\/code>.<\/p>\n<p>One tracing change to note: trace context propagation is now enabled by default when tracing is enabled.<\/p>\n<blockquote><p><strong>Action:<\/strong> Upgrade to <code>azure-ai-projects==2.1.0<\/code> if you are using hosted-agent sessions, skills, toolboxes, or typed evaluation inputs. Keep <code>allow_preview=True<\/code> for agent endpoint preview flows.<\/p><\/blockquote>\n<p><a href=\"https:\/\/pypi.org\/project\/azure-ai-projects\/2.1.0\/\">Changelog<\/a><\/p>\n<h3>JavaScript \/ TypeScript<\/h3>\n<p><strong><code>@azure\/ai-projects<\/code> 2.0.2 (Apr 6) + 2.1.0 (Apr 17)<\/strong><\/p>\n<p>JavaScript and TypeScript followed the same direction as Python. Version 2.1.0 adds preview routes for the pieces you need when an agent becomes more than a prompt: beta agent operations, skills, and toolboxes.<\/p>\n<p>Use <code>project.beta.agents<\/code> for beta agent operations such as managed agent identity blueprints, sessions, and session files. 
Use <code>project.beta.skills<\/code> and <code>project.beta.toolboxes<\/code> when you want to manage skills and toolbox features from your build or deployment tooling instead of doing everything manually.<\/p>\n<p>There is also an important safety fix in <code>2.0.2<\/code>: unconditional <code>console.debug<\/code> calls were replaced with Azure SDK logging, which helps avoid exposing sensitive values such as SAS URIs in console output.<\/p>\n<p>Breaking changes in <code>2.1.0<\/code> are small but worth checking: <code>container_protocol_versions<\/code> and <code>code_type<\/code> changed from required to optional in hosted-agent output types, and <code>Schedule.id<\/code> was renamed to <code>schedule_id<\/code>.<\/p>\n<blockquote><p><strong>Action:<\/strong> Upgrade to <code>@azure\/ai-projects@2.1.0<\/code> for the new beta routes. If you are pinned to <code>2.0.0<\/code> or <code>2.0.1<\/code>, take the <code>2.0.2<\/code> logging fix at minimum.<\/p><\/blockquote>\n<p><a href=\"https:\/\/www.npmjs.com\/package\/@azure\/ai-projects\">Changelog<\/a><\/p>\n<h3>.NET<\/h3>\n<p><strong><code>Azure.AI.Projects<\/code> 2.0.0 (Apr 1) + 2.0.1 (Apr 22)<\/strong><\/p>\n<p>For .NET developers, April marks the move to the <code>Azure.AI.Projects<\/code> 2.0 GA line on the v1 REST surface. If you held off during the beta cycle, this is the stable package to start from.<\/p>\n<p>The migration from beta includes several naming cleanups. <code>Insights<\/code> became <code>ProjectInsights<\/code>, <code>Evaluators<\/code> became <code>ProjectEvaluators<\/code>, <code>AIProjectClient.OpenAI<\/code> became <code>AIProjectClient.ProjectOpenAIClient<\/code>, and <code>AIProjectClient.Agents<\/code> became <code>AIProjectClient.AgentAdministrationClient<\/code>. 
Evaluation and memory operations also moved into <code>Azure.AI.Projects.Evaluation<\/code> and <code>Azure.AI.Projects.Memory<\/code> namespaces.<\/p>\n<p><code>2.0.1<\/code> adopts <code>Azure.Core<\/code> 1.53.0, which type-forwards the <code>Azure.Identity<\/code> namespace, so the explicit <code>Azure.Identity<\/code> dependency is no longer required.<\/p>\n<p>Preview note: <code>2.1.0-beta.1<\/code> added a Toolboxes sample for teams trying the preview toolbox surface.<\/p>\n<blockquote><p><strong>Action:<\/strong> Upgrade to <code>Azure.AI.Projects<\/code> 2.0.1 for the stable .NET package, and review the rename list if you are moving from the beta line.<\/p><\/blockquote>\n<p><a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-net\/blob\/main\/sdk\/ai\/Azure.AI.Projects\/CHANGELOG.md\">Changelog<\/a><\/p>\n<h3>Java<\/h3>\n<p><strong><code>azure-ai-projects<\/code> 2.0.1 (Apr 16)<\/strong><\/p>\n<p>Java&#8217;s April update is smaller, but important if you stream responses. Version 2.0.1 fixes streaming APIs so they stream response data instead of eagerly buffering the full response body in memory. Async completions also moved off I\/O threads to avoid blocking.<\/p>\n<blockquote><p><strong>Action:<\/strong> Upgrade to <code>com.azure:azure-ai-projects:2.0.1<\/code> if your Java app uses streaming responses or long-running streamed output.<\/p><\/blockquote>\n<p><a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-java\/blob\/main\/sdk\/ai\/azure-ai-projects\/CHANGELOG.md\">Changelog<\/a><\/p>\n<hr \/>\n<h2>Resources &amp; Community<\/h2>\n<p><div class=\"alert alert-info\"><p class=\"alert-divider\"><i class=\"fabric-icon fabric-icon--Info\"><\/i><strong>Register for Microsoft Build<\/strong><\/p>Microsoft Build runs June 2-3, 2026, in San Francisco and online. Register now, sign in, and save Microsoft Foundry sessions to your schedule so you can watch them online. 
<a href=\"https:\/\/build.microsoft.com\/\">Register for Microsoft Build<\/a><\/div><\/p>\n<div class=\"d-flex\"><a class=\"cta_button_link btn-secondary\" href=\"https:\/\/build.microsoft.com\/\" target=\"_blank\" rel=\"noopener\">Register for Microsoft Build<\/a><\/div>\n<ul>\n<li><strong>Foundry docs:<\/strong> Start with the <a href=\"https:\/\/learn.microsoft.com\/azure\/ai-foundry\/\">Microsoft Foundry documentation<\/a><\/li>\n<li><strong>Microsoft Build:<\/strong> <a href=\"https:\/\/build.microsoft.com\/\">Register for Microsoft Build<\/a> and sign in to save Microsoft Foundry sessions to your online schedule<\/li>\n<li><strong>Discord:<\/strong> Join the <a href=\"https:\/\/aka.ms\/foundry\/discord\">Foundry Discord<\/a><\/li>\n<li><strong>GitHub Discussions:<\/strong> Ask questions in <a href=\"https:\/\/aka.ms\/foundry\/forum\">the forum<\/a><\/li>\n<li><strong>RSS:<\/strong> <a href=\"https:\/\/devblogs.microsoft.com\/foundry\/category\/whats-new\/feed\/\">Subscribe<\/a> to get this digest monthly<\/li>\n<li><strong>Model catalog:<\/strong> Browse models in <a href=\"https:\/\/ai.azure.com\/catalog\">Microsoft Foundry<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>April brings Foundry Local GA for local AI development, GPT-5.5 model support with Tier 5 and Tier 6 default quota in Microsoft Foundry, new tracing paths for Microsoft Agent Framework and hosted agents, CodeAct with Hyperlight for sandboxed agent code execution, expanded monitoring and continuous evaluation capabilities for production agents, SDK updates across Python, JavaScript\/TypeScript, .NET, and Java, and a reminder to register for Microsoft 
Build.<\/p>\n","protected":false},"author":185793,"featured_media":2216,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,27],"tags":[87,25,131,66,38,132,34,2,103,83,104],"class_list":["post-2215","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-microsoft-foundry","category-whats-new","tag-agent-framework","tag-agents","tag-control-plane","tag-evaluations","tag-foundry-local","tag-gpt-5-5","tag-microsoft-build","tag-microsoft-foundry","tag-models","tag-observability","tag-sdk"],"acf":[],"blog_post_summary":"<p>April brings Foundry Local GA for local AI development, GPT-5.5 model support with Tier 5 and Tier 6 default quota in Microsoft Foundry, new tracing paths for Microsoft Agent Framework and hosted agents, CodeAct with Hyperlight for sandboxed agent code execution, expanded monitoring and continuous evaluation capabilities for production agents, SDK updates across Python, JavaScript\/TypeScript, .NET, and Java, and a reminder to register for Microsoft 
Build.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/2215","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/users\/185793"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/comments?post=2215"}],"version-history":[{"count":1,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/2215\/revisions"}],"predecessor-version":[{"id":2218,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/2215\/revisions\/2218"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/media\/2216"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/media?parent=2215"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/categories?post=2215"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/tags?post=2215"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}