{"id":1316,"date":"2025-10-14T12:01:20","date_gmt":"2025-10-14T19:01:20","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/foundry\/?p=1316"},"modified":"2025-11-12T12:27:59","modified_gmt":"2025-11-12T20:27:59","slug":"the-developers-guide-to-smarter-fine-tuning","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/foundry\/the-developers-guide-to-smarter-fine-tuning\/","title":{"rendered":"The Developer\u2019s Guide to Smarter Fine-tuning: Unlock custom AI for every business challenge"},"content":{"rendered":"<p><span data-ccp-props=\"{}\"><a href=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/DevBlog-FT-Banner.png\"><img decoding=\"async\" class=\"aligncenter size-full wp-image-1342\" src=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/DevBlog-FT-Banner.png\" alt=\"DevBlog FT Banner image\" width=\"1890\" height=\"1057\" srcset=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/DevBlog-FT-Banner.png 1890w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/DevBlog-FT-Banner-300x168.png 300w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/DevBlog-FT-Banner-1024x573.png 1024w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/DevBlog-FT-Banner-768x430.png 768w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/DevBlog-FT-Banner-1536x859.png 1536w\" sizes=\"(max-width: 1890px) 100vw, 1890px\" \/><\/a><a href=\"https:\/\/ai.azure.com\/\" target=\"_blank\" rel=\"noopener\">Azure AI Foundry<\/a> makes fine-tuning smarter, faster, and more accessible than ever. Whether you\u2019re building agents that reason, tools that adapt, or workflows that scale, this is your launchpad for customizing models to solve real business challenges. 
Dive in to discover best practices, hands-on resources, and the latest innovations so you can build, test, and deploy specialized AI with confidence. <\/span><\/p>\n<h4 aria-level=\"2\"><span data-contrast=\"none\">What is Fine-tuning?<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\">\u00a0<\/span><\/h4>\n<p><span data-contrast=\"auto\">Fine-tuning refers to customizing a pre-trained LLM with additional training on a specific task or new dataset for enhanced performance, new skills, or improved accuracy. <\/span><span data-contrast=\"auto\">So instead of building an AI model from scratch, you take a powerful pre-trained model and give it extra training using your own examples, and it learns your style, your needs, and your domain.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><center><a href=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/Data-Set-Image.png\"><img decoding=\"async\" class=\"aligncenter wp-image-1330\" src=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/Data-Set-Image.png\" alt=\"Data Set Image image\" width=\"423\" height=\"140\" srcset=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/Data-Set-Image.png 928w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/Data-Set-Image-300x99.png 300w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/Data-Set-Image-768x254.png 768w\" sizes=\"(max-width: 423px) 100vw, 423px\" \/><\/a><\/center><\/p>\n<h4 aria-level=\"2\"><span data-contrast=\"none\">Fine-tune for Specialized AI Tasks<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\">\u00a0<\/span><\/h4>\n<p><span data-contrast=\"auto\">Foundation models are powerful, but they\u2019re 
generalists by design. When precision matters, especially in domain-specific applications, <strong>fine-tuning is the unlock.<\/strong> It lets teams shape models to their exact needs, boosting accuracy, slashing latency, and reducing inference costs.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><span data-contrast=\"auto\">Whether you&#8217;re building tools for legal contract analysis, conversational agents that understand nuance, or wealth advisory copilots that deliver tailored insights, fine-tuning turns a capable model into a specialized expert. It\u2019s not just about better answers; it\u2019s about delivering <\/span><i><span data-contrast=\"auto\">relevant, reliable, and ready-to-deploy<\/span><\/i><span data-contrast=\"auto\"> intelligence.<\/span><\/p>\n<p style=\"text-align: center;\"><iframe src=\"\/\/www.youtube.com\/embed\/L1LMzcqGQ8w\" width=\"560\" height=\"314\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p style=\"text-align: center;\"><strong><span style=\"font-size: 8pt;\" data-contrast=\"auto\">o4-mini RFT Wealth Advisory Demo<\/span><\/strong><\/p>\n<h5><b><span data-contrast=\"auto\">Use Case: From Generalist to Expert <\/span><\/b><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335551550&quot;:1,&quot;335551620&quot;:1,&quot;335559685&quot;:0,&quot;335559737&quot;:0,&quot;335559738&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:278}\">\u00a0<\/span><\/h5>\n<p><span data-contrast=\"auto\">When preparing for a trip to Patagonia, one common concern is whether a sleeping bag will be warm enough for the conditions. The question <\/span><i><span data-contrast=\"auto\">\u201cWill my sleeping bag work for Patagonia?\u201d <\/span><\/i><span data-contrast=\"auto\">becomes a perfect test case for fine-tuning. By combining <\/span><b><span data-contrast=\"auto\">user input<\/span><\/b><span data-contrast=\"auto\">, <\/span><b><span data-contrast=\"auto\">prompt engineering<\/span><\/b><span data-contrast=\"auto\">, <\/span><b><span data-contrast=\"auto\">RAG<\/span><\/b><span data-contrast=\"auto\">, and <\/span><b><span data-contrast=\"auto\">fine-tuning<\/span><\/b><span data-contrast=\"auto\">, the model evolves from giving vague travel advice to delivering a precise, context-aware recommendation.<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:281,&quot;335559739&quot;:281}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Think of the LLM as a language calculator: user input is the raw data, prompt engineering defines the operation, and fine-tuning adds the specialized logic. The result? A model that factors in regional weather, seasonal temperature ranges, and gear specs to give an answer that\u2019s not just helpful but actionable.<\/span><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;335559738&quot;:240,&quot;335559739&quot;:240}\">\u00a0<\/span><\/p>\n<p><center><a href=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/LLM-Language-Calculator.png\"><img decoding=\"async\" class=\"aligncenter wp-image-1331\" src=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/LLM-Language-Calculator.png\" alt=\"LLM Language Calculator image\" width=\"558\" height=\"227\" srcset=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/LLM-Language-Calculator.png 1097w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/LLM-Language-Calculator-300x122.png 300w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/LLM-Language-Calculator-1024x416.png 1024w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/LLM-Language-Calculator-768x312.png 768w\" sizes=\"(max-width: 
558px) 100vw, 558px\" \/><\/a><\/center><\/p>\n<h4><span data-contrast=\"none\">Best Practices in the Agentic Era<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\">\u00a0<\/span><\/h4>\n<p><span data-contrast=\"auto\">Done right, fine-tuning is a superpower. Done wrong, it\u2019s a liability. A robust, iterative approach, grounded in clear objectives, high-quality data, and continuous feedback, is essential for safe, effective, and innovative agent deployments.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<ul>\n<li><b><span data-contrast=\"auto\">Start with Clear Objectives<\/span><\/b><span data-contrast=\"auto\">: Define the specific use case, tasks, and outcomes you want your model to achieve. Common use cases include reducing prompt length, teaching new skills, improving tool use, and domain adaptation. Consider vertical use cases such as improving dialect\/tone responses, customer-specific knowledge inclusion, or natural-language-to-code applications.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<li><b><span data-contrast=\"auto\">Invest in Data Quality<\/span><\/b><span data-contrast=\"auto\">: High-quality training data is the backbone of effective fine-tuning. The better your training data, the more effective your fine-tuned model will be.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<li><b><span data-contrast=\"auto\">Establish a Signals Loop: <\/span><\/b><span data-contrast=\"auto\">Agents evolve, and so should your model.<\/span> <span data-contrast=\"auto\">Continuously evaluate model performance and retrain as needed using observed reasoning, tool use, and performance metrics to ensure alignment, safety, and accuracy.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<li><strong>Build Fine-tuning into the Agent-building Process:<\/strong> Treat fine-tuning as a core step in agent development so organizations can enhance the performance of pre-trained models and tailor agents to unique business requirements without starting from scratch.<\/li>\n<li><b><span data-contrast=\"auto\">Leverage Azure\u2019s Ecosystem<\/span><\/b><span data-contrast=\"auto\">: Azure offers a full-stack toolkit for building and deploying fine-tuned agents. Take advantage of integrated tools, documentation, and community resources for support and innovation.<\/span><\/li>\n<\/ul>\n<h4 aria-level=\"2\"><span data-contrast=\"none\">Fine-tuning<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\"> Innovations in Azure AI Foundry<\/span><\/h4>\n<p>Recent Azure updates have made fine-tuning more flexible, affordable, and developer-friendly, enabling teams to build agents that are not just reactive, but adaptive.<\/p>\n<p><center><a href=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/FT-Methods.png\"><img decoding=\"async\" class=\"aligncenter wp-image-1333\" src=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/FT-Methods.png\" alt=\"FT Methods image\" width=\"534\" height=\"268\" srcset=\"https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/FT-Methods.png 904w, 
https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/FT-Methods-300x151.png 300w, https:\/\/devblogs.microsoft.com\/foundry\/wp-content\/uploads\/sites\/89\/2025\/10\/FT-Methods-768x386.png 768w\" sizes=\"(max-width: 534px) 100vw, 534px\" \/><\/a><\/center><\/p>\n<ul>\n<li><strong>Reinforcement Fine-Tuning (RFT) Upgrades<\/strong>: <a href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/o4-mini-reinforcement-fine-tuning-rft-now-generally-available-on-azure-ai-foundr\/4452597\">o4-mini RFT<\/a> is now generally available. RFT enables models to learn from reward signals and optimize for complex objectives, which is essential for agentic workflows. <span data-contrast=\"auto\">It is also fully API- and Swagger-ready, so users can fine-tune models with reinforcement learning techniques programmatically.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<li><strong>More Global Training Options:<\/strong> Expanded support for Azure OpenAI model fine-tuning to all 26 supported regions, making it easier to deploy customized models worldwide. We\u2019ve also promoted Global Training to general availability and added support for gpt-4o and gpt-4o-mini.<\/li>\n<li><strong>Developer Tier for hosting\/inferencing:<\/strong> Experiment faster with this cost-effective way to evaluate and test fine-tuned models before scaling to production, now generally available. Key features include:\n<ul>\n<li><span style=\"font-size: 10pt;\">Deploy fine-tuned GPT-4.1 and GPT-4.1-mini models from any training region, including Global Training.\u00a0<\/span><\/li>\n<li><span style=\"font-size: 10pt;\">Free hosting for 24 hours per deployment.\u00a0<\/span><\/li>\n<li><span style=\"font-size: 10pt;\">Pay per token at the same rate as Global Standard, helping you budget your testing costs.\u00a0<\/span><\/li>\n<li><span style=\"font-size: 10pt;\">Simultaneously evaluate multiple models to choose the best candidate for production.\u00a0<\/span><\/li>\n<\/ul>\n<\/li>\n<li><strong>Evaluations Enhancements<\/strong>: Added <a href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/what%E2%80%99s-new-in-azure-ai-foundry-finetuning-july-2025\/4438850\">several new capabilities<\/a>, now in public preview: RFT Observability (Auto-Evals), Quick Evals, and Python Grader, streamlining evals and debugging during fine-tuning.<\/li>\n<li><strong>More Granular Control:<\/strong> Added the ability to <a href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/what%E2%80%99s-new-in-azure-ai-foundry-fine-tuning-august-2025\/4445176\">copy fine-tuned models<\/a> across Azure regions via REST API, enabling flexible multi-region deployment, and added <a href=\"https:\/\/techcommunity.microsoft.com\/blog\/azure-ai-foundry-blog\/what%E2%80%99s-new-in-azure-ai-foundry-fine-tuning-august-2025\/4445176\">Pause &amp; Resume capabilities<\/a>, giving customers control to halt and resume fine-tuning jobs without losing progress.<\/li>\n<\/ul>\n<h4 aria-level=\"2\"><span data-contrast=\"none\">Getting Started with Fine-Tuning in Azure AI Foundry<\/span><span data-ccp-props=\"{&quot;134245418&quot;:true,&quot;134245529&quot;:true,&quot;335559738&quot;:160,&quot;335559739&quot;:80}\">\u00a0<\/span><\/h4>\n<p><span data-contrast=\"auto\">Azure AI Foundry makes it easy to begin fine-tuning advanced language models for your specific use 
case. Once you have your use case(s) figured out, you can dive into:<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<ol>\n<li><b><span data-contrast=\"auto\">Data Preparation<\/span><\/b><span data-contrast=\"auto\">: Gather and curate high-quality, domain-specific datasets.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<li><b><span data-contrast=\"auto\">Model Selection<\/span><\/b><span data-contrast=\"auto\">: Choose the right foundation model (e.g., GPT-4o, GPT-4.1-nano).<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<li><b><span data-contrast=\"auto\">Training &amp; Optimization<\/span><\/b><span data-contrast=\"auto\">: Use advanced techniques like Direct Preference Optimization (DPO), RFT, and distillation to enhance model performance.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<li><b><span data-contrast=\"auto\">Deployment<\/span><\/b><span data-contrast=\"auto\">: Seamlessly deploy fine-tuned models with automatic scaling and monitoring.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<li><b><span data-contrast=\"auto\">Iterate and Evaluate:<\/span><\/b><span data-contrast=\"auto\"> Fine-tuning is an iterative process\u2014start with a baseline, measure performance, and refine your approach based on results to build a reliable signals loop.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/li>\n<\/ol>\n<p>&nbsp;<\/p>\n<p style=\"text-align: center;\"><div  class=\"d-flex justify-content-left\"><a class=\"cta_button_link btn-primary mb-24\" href=\"https:\/\/ai.azure.com\" target=\"_blank\">Create with Azure AI Foundry<\/a><\/div><\/p>\n<p>&nbsp;<\/p>\n<h4 aria-level=\"2\"><span data-contrast=\"none\">Featured Hands-on Technical Content &amp; Community Resources<\/span><\/h4>\n<p>Whether you&#8217;re just getting started or scaling production-grade agents, Azure offers a rich ecosystem of developer-first resources to support every stage of your fine-tuning workflow:<\/p>\n<h5 aria-level=\"2\">Code 
Samples<\/h5>\n<table style=\"border-collapse: collapse; width: 100%; height: 421px;\">\n<tbody>\n<tr style=\"height: 24px;\">\n<td style=\"width: 37.7137%; height: 24px;\"><strong>Name<\/strong><\/td>\n<td style=\"width: 48.1019%; height: 24px;\"><strong>Description<\/strong><\/td>\n<td style=\"width: 14.1843%; height: 24px;\"><strong>Links<\/strong><\/td>\n<\/tr>\n<tr style=\"height: 96px;\">\n<td style=\"width: 37.7137%; height: 96px;\">O4-mini RFT Code Sample<\/td>\n<td style=\"width: 48.1019%; height: 96px;\">Dive into a hands-on example of fine-tuning a reasoning model (o4-mini) using RFT on the Countdown dataset.<\/td>\n<td style=\"width: 14.1843%; height: 96px;\"><a href=\"https:\/\/github.com\/azure-ai-foundry\/fine-tuning\/blob\/main\/Demos\/RFT_Countdown\/demo_with_python_grader.ipynb\" target=\"_blank\" rel=\"noopener\">GitHub Repo<\/a><\/td>\n<\/tr>\n<tr style=\"height: 120px;\">\n<td style=\"width: 37.7137%; height: 120px;\">RAFT Fine-tuning on Azure AI Foundry Code Sample<\/td>\n<td style=\"width: 48.1019%; height: 120px;\">Explore a recipe for using Meta Llama 3.1 405B or OpenAI GPT-4o deployed on Azure AI to generate synthetic datasets with the RAFT method.<\/td>\n<td style=\"width: 14.1843%; height: 120px;\"><a href=\"https:\/\/github.com\/Azure-Samples\/azureai-foundry-finetuning-raft\">GitHub Repo<\/a><\/td>\n<\/tr>\n<tr style=\"height: 96px;\">\n<td style=\"width: 37.7137%; height: 96px;\">Fine-tuning gpt-oss-20B Code Sample<\/td>\n<td style=\"width: 48.1019%; height: 96px;\">Fine-tune gpt\u2011oss\u201120b using Managed Compute on Azure \u2014 available in preview and accessible via notebook.<\/td>\n<td style=\"width: 14.1843%; height: 96px;\"><a href=\"https:\/\/github.com\/Azure\/azureml-examples\/blob\/4df413cccaef14bd6f6c7efc6f41fdad0cf0533d\/sdk\/python\/jobs\/finetuning\/standalone\/model-as-a-platform\/chat-completion\/gpt-oss-20b\/gpt-oss-20b-chat-completion.ipynb\" target=\"_blank\" rel=\"noopener\">GitHub Repo<\/a><\/td>\n<\/tr>\n<tr 
style=\"height: 18px;\">\n<td style=\"width: 37.7137%; height: 18px;\">Distill DeepSeek V3 into Phi-4-mini Code Sample<\/td>\n<td style=\"width: 48.1019%; height: 18px;\">Distill knowledge from DeepSeek V3 into Phi-4-mini using Azure\u2019s powerful AI stack.<\/td>\n<td style=\"width: 14.1843%; height: 18px;\"><a href=\"https:\/\/github.com\/microsoft\/Build25-LAB329\">GitHub Repo<\/a><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 37.7137%; height: 48px;\">AI Tour 2025: Efficient Model Customization Code Sample<\/td>\n<td style=\"width: 48.1019%; height: 48px;\">Zava retail demo shows how Azure AI Foundry transforms retail with intelligence fine-tuning agents to deliver truly personalized experiences, precise answers, and cost-efficient innovation. This presentation and repo focuses on Distillation, RFT and RAFT approaches using Azure AI Foundry.<\/td>\n<td style=\"width: 14.1843%; height: 48px;\"><a href=\"https:\/\/github.com\/microsoft\/BRK443GHrepo\" target=\"_blank\" rel=\"noopener\">GitHub Repo<\/a><\/p>\n<p><a href=\"https:\/\/speakerdeck.com\/nitya\/aitour-26-efficient-model-customization-with-azure-ai-foundry\">Slides<\/a><\/td>\n<\/tr>\n<tr style=\"height: 19px;\">\n<td style=\"width: 37.7137%; height: 19px;\">Models for Beginners Course<\/td>\n<td style=\"width: 48.1019%; height: 19px;\">Free course on GitHub coming soon! 
Check out the preview.<\/td>\n<td style=\"width: 14.1843%; height: 19px;\"><a href=\"https:\/\/aka.ms\/models-for-beginners\" target=\"_blank\" rel=\"noopener\">GitHub Repo Preview<\/a><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h5 aria-level=\"2\">Videos with Demos<\/h5>\n<p><b><span data-contrast=\"auto\">Azure AI Show: There\u2019s no reason not to fine-tune (<a href=\"https:\/\/aka.ms\/AIShow\/FineTuning\" target=\"_blank\" rel=\"noopener\">YouTube Video<\/a>)<\/span><\/b><\/p>\n<p><span data-contrast=\"auto\">Learn how to fine-tune foundation models in Azure AI Foundry for improved performance, lower costs, and agentic scenarios. Watch Alicia and Omkar break it down in this episode.<\/span><\/p>\n<p><strong>Model Mondays: Fine-tuning &amp; Distillation (<a href=\"https:\/\/youtu.be\/VSNGzBB20aw\" target=\"_blank\" rel=\"noopener\">YouTube Video<\/a>)\u00a0<\/strong><\/p>\n<p>Dave Voutila shows how Azure AI Foundry makes it easy to fine-tune existing models and use distillation techniques without needing deep ML expertise.<\/p>\n<h5 aria-level=\"3\"><span data-contrast=\"none\">Helpful Docs\u00a0<\/span><\/h5>\n<ul>\n<li><span style=\"font-size: 10pt;\"><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/concepts\/fine-tuning-overview\">Fine-tune models with Azure AI Foundry<\/a> (Comprehensive Guide)\u00a0<\/span><\/li>\n<li><span style=\"font-size: 10pt;\"><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/openai\/how-to\/reinforcement-fine-tuning\">Reinforcement Fine-Tuning (RFT) Overview<\/a>\u00a0<\/span><\/li>\n<li><span style=\"font-size: 10pt;\"><a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/ai-foundry\/\">Developer Tier Details<\/a>\u00a0<\/span><\/li>\n<li><span style=\"font-size: 10pt;\"><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/ai-foundry\/how-to\/evaluate-generative-ai-app\">Evaluations Enhancements &amp; Auto-Evals<\/a>\u00a0<\/span><\/li>\n<\/ul>\n<h5 aria-level=\"2\">Community<\/h5>\n<p><span 
data-contrast=\"none\">\ud83d\udc4b Continue the conversation on\u00a0<\/span><a href=\"https:\/\/aka.ms\/model-mondays\/discord\" target=\"_blank\" rel=\"noopener noreferrer\"><span data-contrast=\"none\">Discord<\/span><\/a><span data-ccp-props=\"{&quot;134233117&quot;:false,&quot;134233118&quot;:false,&quot;201341983&quot;:0,&quot;335551550&quot;:1,&quot;335551620&quot;:1,&quot;335557856&quot;:16777215,&quot;335559738&quot;:0,&quot;335559739&quot;:150,&quot;335559740&quot;:240}\">\u00a0<\/span><\/p>\n<p><b><span data-contrast=\"auto\">About the Authors<\/span><\/b><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">Hi! I\u2019m Jacques \u201cJack\u201d, Microsoft Technical Trainer at Microsoft. I help learners and organizations adopt intelligent automation through Microsoft technologies. This blog reflects my experience guiding teams in building agentic AI solutions that are not only powerful but also secure, ethical, and scalable.<\/span><span data-ccp-props=\"{}\">\u00a0<\/span><\/p>\n<p><span data-contrast=\"auto\">I\u2019m Malena, I lead product marketing for fine-tuning at Microsoft. I help developers make Azure AI real by <\/span><span data-ccp-props=\"{}\">activating hands-on content and building community online and offline.\u00a0<\/span><\/p>\n<p><span data-teams=\"true\"> <strong>#SkilledByMTT<\/strong><\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Azure AI Foundry makes fine-tuning smarter, faster, and more accessible than ever. Whether you\u2019re building agents that reason, tools that adapt, or workflows that scale, this is your launchpad for customizing models to solve real business challenges. 
Dive in to discover best practices, hands-on resources, and the latest innovations so you can build, test, and [&hellip;]<\/p>\n","protected":false},"author":190160,"featured_media":1342,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[71,72,70,73],"class_list":["post-1316","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-microsoft-foundry","tag-fine-tuning","tag-microsoftlearn","tag-model-customization","tag-skilledbymtt"],"acf":[],"blog_post_summary":"<p>Azure AI Foundry makes fine-tuning smarter, faster, and more accessible than ever. Whether you\u2019re building agents that reason, tools that adapt, or workflows that scale, this is your launchpad for customizing models to solve real business challenges. Dive in to discover best practices, hands-on resources, and the latest innovations so you can build, test, and [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/1316","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/users\/190160"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/comments?post=1316"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/posts\/1316\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/media\/1342"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/media?parent=1316"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.micros
oft.com\/foundry\/wp-json\/wp\/v2\/categories?post=1316"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/foundry\/wp-json\/wp\/v2\/tags?post=1316"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}