{"id":16557,"date":"2026-01-23T00:00:00","date_gmt":"2026-01-23T08:00:00","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/ise\/?p=16557"},"modified":"2026-01-23T04:57:14","modified_gmt":"2026-01-23T12:57:14","slug":"bridging-local-development-cloud-evaluation-devtunnels-azure-ml","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/ise\/bridging-local-development-cloud-evaluation-devtunnels-azure-ml\/","title":{"rendered":"Bridging Local Development and Cloud Evaluation: Using Microsoft Devtunnels with Azure Machine Learning"},"content":{"rendered":"<h1>Introduction<\/h1>\n<p>Picture this scenario: you&#8217;ve implemented a critical fix in your AI service and want to validate it against your comprehensive evaluation dataset. However, the prospect of deploying to Azure for a quick test creates an unwelcome delay in your development cycle. This challenge is common among AI developers.<\/p>\n<p>When developing AI applications, one of the most significant pain points is efficiently testing locally running services against cloud-based evaluation frameworks. While local development offers speed and debugging convenience, robust evaluations require the comprehensive pipelines that Azure Machine Learning provides. This creates tension between rapid iteration and thorough validation.<\/p>\n<p>We&#8217;re excited to share an approach using Microsoft Devtunnels with Azure Machine Learning that has transformed our development workflow. This solution enables you to test local services against cloud evaluations seamlessly, maintaining both development velocity and evaluation rigor.<\/p>\n<h1>The Challenge: Local Development vs. Cloud Evaluation<\/h1>\n<p>Consider a common development scenario: you&#8217;re working on an AI-powered service that generates business intelligence dashboards from natural language queries (transforming requests like &#8220;show me last quarter&#8217;s sales by region&#8221; into interactive visualizations). 
During development, teams often find themselves navigating between competing requirements:<\/p>\n<ol>\n<li><strong>Running services locally<\/strong> for rapid iteration and simplified debugging<\/li>\n<li><strong>Executing comprehensive evaluations<\/strong> using proven Azure Machine Learning pipelines<\/li>\n<li><strong>Maintaining centralized tracking<\/strong> where team members can monitor progress and results<\/li>\n<li><strong>Avoiding frequent deployments<\/strong> to reduce overhead and development friction<\/li>\n<\/ol>\n<p>Traditional approaches present significant limitations:<\/p>\n<ul>\n<li>Deploying every change for cloud testing introduces substantial feedback delays<\/li>\n<li>Maintaining separate local evaluation scripts often leads to inconsistencies and drift from production evaluation methods<\/li>\n<\/ul>\n<p>These constraints can significantly impact development velocity and team productivity.<\/p>\n<h1>The Solution: Microsoft Devtunnels<\/h1>\n<p>Microsoft Devtunnels provides an elegant solution to this challenge. 
It acts as a secure bridge between your local development environment and the internet, creating HTTPS tunnels that make locally running services accessible from anywhere, including Azure Machine Learning pipelines.<\/p>\n<p>The key advantage is that Azure ML can communicate with your local development server as if it were any other deployed service, eliminating the traditional &#8220;works locally but fails in cloud testing&#8221; disconnect.<\/p>\n<h1>Architecture Overview<\/h1>\n<p>The following diagram illustrates how this architecture functions in practice:<\/p>\n<pre><code class=\"language-text\">\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510    \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510    \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502   Local Dev     \u2502    \u2502   Devtunnels     \u2502    \u2502  Azure ML      \u2502\r\n\u2502   Environment   \u2502    \u2502   (Bridge)       \u2502    \u2502  (Cloud Eval)  \u2502\r\n\u2502                 \u2502    \u2502                  \u2502    \u2502                \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502    \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502    \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502AI Service   \u2502\u25c4\u251c\u2500\u2500\u2500\u2500\u2524 \u2502HTTPS Tunnel  \u2502\u25c4\u251c\u2500\u2500\u2500\u2500\u2524 \u2502Evaluation  \u2502 \u2502\r\n\u2502 \u2502Port 4301    \u2502 \u2502    \u2502 \u2502g276r7b9...   
\u2502 \u2502    \u2502 \u2502Component   \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502    \u2502 \u2502.devtunnels.ms\u2502 \u2502    \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502                 \u2502    \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502    \u2502                \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518    \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518    \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518<\/code><\/pre>\n<p>The solution maintains clear separation between local development and cloud evaluation while providing seamless connectivity through the Devtunnels bridge.<\/p>\n<h1>Implementation Guide<\/h1>\n<p>Let&#8217;s walk through the complete setup process.<\/p>\n<h2>Step 1: Install and Configure Microsoft Devtunnels<\/h2>\n<p>Begin by installing the Devtunnels CLI. 
The installation process varies by operating system:<\/p>\n<h3>macOS Installation<\/h3>\n<p>For macOS users, use the automated installer script:<\/p>\n<pre><code class=\"language-bash\"># Install Devtunnels CLI\r\ncurl -sL https:\/\/aka.ms\/DevTunnelCliInstall | bash\r\nsource ~\/.zshrc\r\n\r\n# Verify successful installation\r\ndevtunnel -h<\/code><\/pre>\n<h3>Windows Installation<\/h3>\n<p>For Windows users, you have several installation options:<\/p>\n<h4>Option 1: Using PowerShell (Recommended)<\/h4>\n<pre><code class=\"language-powershell\"># Install using the automated PowerShell script\r\nInvoke-WebRequest -Uri \"https:\/\/aka.ms\/DevTunnelCliInstall\" -UseBasicParsing | Invoke-Expression\r\n\r\n# Verify installation\r\ndevtunnel -h<\/code><\/pre>\n<h4>Option 2: Using Windows Package Manager (winget)<\/h4>\n<pre><code class=\"language-cmd\"># Install via winget\r\nwinget install Microsoft.devtunnel\r\n\r\n# Verify installation\r\ndevtunnel -h<\/code><\/pre>\n<h4>Option 3: Manual Download<\/h4>\n<ol>\n<li>Download the latest release from the <a href=\"https:\/\/github.com\/microsoft\/dev-tunnels\/releases\">Microsoft Devtunnels releases page<\/a><\/li>\n<li>Extract the executable to a directory in your PATH<\/li>\n<li>Verify installation with <code>devtunnel -h<\/code><\/li>\n<\/ol>\n<h3>Linux Installation<\/h3>\n<p>For Linux distributions, use the automated installer script:<\/p>\n<p><strong>Ubuntu\/Debian:<\/strong><\/p>\n<pre><code class=\"language-bash\"># Install Devtunnels CLI\r\ncurl -sL https:\/\/aka.ms\/DevTunnelCliInstall | bash\r\n\r\n# Reload your shell configuration\r\nsource ~\/.bashrc\r\n\r\n# Verify successful installation\r\ndevtunnel -h<\/code><\/pre>\n<p><strong>CentOS\/RHEL\/Fedora:<\/strong><\/p>\n<pre><code class=\"language-bash\"># Install Devtunnels CLI\r\ncurl -sL https:\/\/aka.ms\/DevTunnelCliInstall | bash\r\n\r\n# Reload your shell configuration\r\nsource ~\/.bashrc\r\n\r\n# Verify successful installation\r\ndevtunnel 
-h<\/code><\/pre>\n<p><strong>Manual Installation (All Linux Distributions):<\/strong><\/p>\n<pre><code class=\"language-bash\"># Download the binary manually; find the current download link in the\r\n# install instructions: https:\/\/learn.microsoft.com\/en-us\/azure\/developer\/dev-tunnels\/get-started\r\nwget -O devtunnel-linux-x64 &lt;download URL&gt;\r\n\r\nchmod +x devtunnel-linux-x64\r\nsudo mv devtunnel-linux-x64 \/usr\/local\/bin\/devtunnel\r\n\r\n# Verify installation\r\ndevtunnel -h<\/code><\/pre>\n<h3>Authentication (All Platforms)<\/h3>\n<p>Once installed, authenticate with your Microsoft account:<\/p>\n<pre><code class=\"language-bash\">devtunnel user login<\/code><\/pre>\n<p>This command opens your browser and guides you through the standard Microsoft authentication flow regardless of your operating system.<\/p>\n<h2>Step 2: Create and Host a Tunnel<\/h2>\n<p>There are two primary approaches for creating tunnels, depending on whether you&#8217;re conducting initial testing or establishing a more permanent development setup.<\/p>\n<h3>Option A: Temporary Tunnel (Ideal for Initial Testing)<\/h3>\n<p>For quick validation of the overall approach, use this command:<\/p>\n<pre><code class=\"language-bash\"># Create a temporary tunnel to your local service\r\ndevtunnel host -p 4301<\/code><\/pre>\n<p>This command will generate output similar to:<\/p>\n<pre><code class=\"language-text\">Hosting port 4301 at &lt;devtunnels URL&gt;\r\nReady to accept connections for tunnel: swift-horse-fj051061<\/code><\/pre>\n<p>The generated URL immediately provides access to your local port 4301.<\/p>\n<h3>Option B: Persistent Tunnel (Recommended for Regular Development)<\/h3>\n<p>For ongoing development work, establish a more robust configuration:<\/p>\n<pre><code class=\"language-bash\"># Create a persistent tunnel with authentication options\r\ndevtunnel create --allow-anonymous\r\n\r\n# Alternatively, set an expiration to maintain security hygiene\r\ndevtunnel create --allow-anonymous --expiration 7d\r\n\r\n# Configure the target port and begin hosting\r\ndevtunnel port 
create -p 4301\r\ndevtunnel host<\/code><\/pre>\n<p>The persistent approach provides greater control and enables reusing the same tunnel URL across multiple development sessions, simplifying Azure ML pipeline configuration management.<\/p>\n<h3>Authentication Methods<\/h3>\n<p>Devtunnels provides multiple security options to control access to your local service:<\/p>\n<ol>\n<li><strong>Anonymous Access<\/strong> (<code>--allow-anonymous<\/code>): Allows unrestricted access to anyone with the URL. Suitable for development environments but requires careful consideration of security implications.<\/li>\n<li><strong>Token-based Authentication<\/strong>: Generates JWT tokens with configurable expiration times (24 hours by default). This approach provides an optimal balance of security and convenience for Azure ML integration.<\/li>\n<li><strong>Microsoft Entra ID<\/strong>: Offers enterprise-grade authentication suitable for corporate environments with strict access control requirements.<\/li>\n<\/ol>\n<p>For Azure ML integration scenarios, token-based authentication typically provides the most practical solution.<\/p>\n<h2>Step 3: Authentication and Token Management<\/h2>\n<p>For Azure Machine Learning pipelines to authenticate with your tunnel, you have several options including anonymous access, token-based authentication, or Microsoft Entra ID authentication (detailed in the Complete Authentication Workflows section below). 
For most Azure ML scenarios, token-based authentication provides the best balance of security and usability.<\/p>\n<p><strong>Token Generation (Recommended):<\/strong><\/p>\n<pre><code class=\"language-bash\"># Generate a token with the default 24-hour expiration\r\ndevtunnel token swift-horse-fj051061 --scopes connect\r\n\r\n# Configure longer expiration if needed\r\ndevtunnel token swift-horse-fj051061 --scopes connect --expiration 48h<\/code><\/pre>\n<p><strong>Important considerations:<\/strong><\/p>\n<ul>\n<li>Default expiration is 24 hours, which accommodates most evaluation scenarios<\/li>\n<li>Duration can range from 1 hour to several weeks<\/li>\n<li>Token validity cannot exceed the tunnel&#8217;s own expiration period<\/li>\n<li><strong>Critical limitation:<\/strong> Tokens cannot be automatically refreshed once Azure ML pipelines begin execution. Ensure your token lifetime exceeds the expected completion time.<\/li>\n<\/ul>\n<h2>Complete Authentication Workflows for Azure ML<\/h2>\n<p>Choose the workflow that best matches your security requirements:<\/p>\n<h3>Workflow 1: Anonymous Access (Quick Testing)<\/h3>\n<pre><code class=\"language-bash\"># Step 1: Create and host tunnel\r\ndevtunnel create --allow-anonymous --expiration 7d\r\n# Note the tunnel ID from output (example: swift-horse-fj051061)\r\ndevtunnel port create swift-horse-fj051061 -p 4301\r\ndevtunnel host swift-horse-fj051061\r\n\r\n# Step 2: Get tunnel URL (in a separate terminal)\r\ndevtunnel show swift-horse-fj051061\r\n\r\n# Step 3: Azure ML pipeline configuration\r\n# url: \"&lt;devtunnels.ms URL&gt;\"\r\n# devtunnel_token: \"\"  # Leave empty for anonymous access<\/code><\/pre>\n<h3>Workflow 2: Token-based Authentication (Recommended)<\/h3>\n<pre><code class=\"language-bash\"># Step 1: Create secured tunnel\r\ndevtunnel create --expiration 7d\r\n# Note the tunnel ID from output (example: clever-cat-gh892341)\r\ndevtunnel port create clever-cat-gh892341 -p 4301\r\n\r\n# Step 2: Generate 
authentication token\r\ndevtunnel token clever-cat-gh892341 --scopes connect --expiration 48h\r\n\r\n# Step 3: Begin hosting\r\ndevtunnel host clever-cat-gh892341\r\n\r\n# Step 4: Get tunnel URL\r\ndevtunnel show clever-cat-gh892341\r\n\r\n# Step 5: Azure ML pipeline configuration\r\n# url: \"&lt;devtunnels.ms URL&gt;\"\r\n# devtunnel_token: \"eyJhbGciOiJFUzI1NiIsImtpZCI6...\"<\/code><\/pre>\n<h3>Workflow 3: Enterprise Authentication<\/h3>\n<pre><code class=\"language-bash\"># Step 1: Create tunnel with tenant access\r\ndevtunnel create --expiration 30d\r\ndevtunnel access create brave-lion-kx456789 --tenant\r\ndevtunnel port create brave-lion-kx456789 -p 4301\r\n\r\n# Step 2: Generate token and start hosting\r\ndevtunnel token brave-lion-kx456789 --scopes connect --expiration 24h\r\ndevtunnel host brave-lion-kx456789\r\n\r\n# Step 3: Get tunnel URL\r\ndevtunnel show brave-lion-kx456789<\/code><\/pre>\n<h2>Step 4: Configure Your Azure ML Pipeline<\/h2>\n<p>Update your Azure ML pipeline configuration to utilize the Devtunnel endpoint. 
Here&#8217;s an example configuration:<\/p>\n<pre><code class=\"language-yaml\"># pipeline.yaml\r\njobs:\r\n  service_evaluation:\r\n    type: command\r\n    component: azureml:service_evaluation@latest\r\n    environment_variables:\r\n      SERVICE_USERNAME: ${{secrets.SERVICE_USERNAME}}\r\n      SERVICE_PASSWORD: ${{secrets.SERVICE_PASSWORD}}\r\n    inputs:\r\n      input_data: ${{parent.inputs.input_data}}\r\n      # Replace production URL with tunnel URL for local development\r\n      url: \"&lt;devtunnels.ms\/api\/visualization URL&gt;\"\r\n      # Include tunnel authentication token\r\n      devtunnel_token: \"eyJhbGciOiJFUzI1NiIsImtpZCI6...\"\r\n      retry_count: 3\r\n      retry_delay: 5<\/code><\/pre>\n<p>This configuration enables seamless switching between local development and production environments by simply modifying the URL and token parameters.<\/p>\n<h2>Step 5: Update Your API Client Code<\/h2>\n<p>Your Azure ML component requires modification to include tunnel authorization headers in requests. Here&#8217;s the implementation approach:<\/p>\n<pre><code class=\"language-python\">import requests\r\n\r\ndef call_api(url, bearer_token, data, tunnel_token=\"\", logger=None):\r\n    headers = {\r\n        \"Authorization\": f\"Bearer {bearer_token}\",\r\n        \"Content-Type\": \"application\/json\",\r\n        \"Use-Advanced-Mode\": \"true\",\r\n    }\r\n\r\n    # Add tunnel authorization header when token is provided\r\n    if tunnel_token:\r\n        headers[\"X-Tunnel-Authorization\"] = f\"tunnel {tunnel_token}\"\r\n\r\n    try:\r\n        # Timeout guards against requests hanging on a dead tunnel\r\n        response = requests.post(url, json=data, headers=headers, timeout=30)\r\n        response.raise_for_status()\r\n        return response.json()\r\n    except requests.exceptions.RequestException as e:\r\n        if logger:\r\n            logger.error(f\"API call failed: {e}\")\r\n        raise<\/code><\/pre>\n<p>The <code>X-Tunnel-Authorization<\/code> header is essential for Devtunnels to authenticate and authorize requests to your local service. 
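<\/p>\n<p>To make the difference concrete: production mode and development mode produce identical headers apart from this one addition. A minimal, self-contained sketch (the <code>build_headers<\/code> helper and the token values are hypothetical, not part of the service&#8217;s actual code):<\/p>

```python
def build_headers(bearer_token, tunnel_token=""):
    """Hypothetical helper mirroring the header logic shown above."""
    headers = {
        "Authorization": f"Bearer {bearer_token}",
        "Content-Type": "application/json",
    }
    if tunnel_token:
        # Devtunnels expects the lowercase scheme name "tunnel"
        headers["X-Tunnel-Authorization"] = f"tunnel {tunnel_token}"
    return headers

# Production mode: no tunnel header is added
assert "X-Tunnel-Authorization" not in build_headers("service-token")

# Development mode: the tunnel header uses the "tunnel" scheme
dev_headers = build_headers("service-token", tunnel_token="eyJhbGci...")
assert dev_headers["X-Tunnel-Authorization"] == "tunnel eyJhbGci..."
```

<p>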
Without this header, authentication will fail and requests will be rejected.<\/p>\n<h2>Step 6: Component Parameter Configuration<\/h2>\n<p>Design your Azure ML component to detect whether it&#8217;s communicating with a local tunnel or a production service. This approach eliminates the need for multiple component versions:<\/p>\n<pre><code class=\"language-python\">import argparse\r\nimport logging\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\nlogger = logging.getLogger(__name__)\r\n\r\ndef main():\r\n    parser = argparse.ArgumentParser()\r\n    parser.add_argument(\"--url\", type=str, required=True,\r\n                        help=\"API endpoint URL\")\r\n    parser.add_argument(\"--devtunnel_token\", type=str, default=\"\",\r\n                        help=\"Devtunnel token for local development\")\r\n\r\n    args = parser.parse_args()\r\n\r\n    # Log the operational mode for debugging clarity\r\n    if args.devtunnel_token:\r\n        logger.info(f\"Operating in development mode: {args.url}\")\r\n    else:\r\n        logger.info(f\"Operating in production mode: {args.url}\")<\/code><\/pre>\n<p>This logging approach provides clear operational context when reviewing Azure ML Studio logs, particularly valuable during troubleshooting sessions.<\/p>\n<h1>Real-World Example: AI-Powered Business Intelligence Service<\/h1>\n<p>To illustrate this approach, let&#8217;s examine an AI-powered business intelligence service that transforms natural language queries into interactive charts and dashboards. 
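<\/p>\n<p>The <code>retry_count<\/code> and <code>retry_delay<\/code> inputs shown in the Step 4 configuration can be honored with a small wrapper around the API call. A minimal sketch (the helper below is hypothetical, not the service&#8217;s actual code; it retries on any exception and re-raises after the final attempt):<\/p>

```python
import time

def call_with_retries(func, retry_count=3, retry_delay=5):
    """Retry func() up to retry_count times, sleeping retry_delay seconds between attempts."""
    for attempt in range(1, retry_count + 1):
        try:
            return func()
        except Exception:
            if attempt == retry_count:
                raise  # all attempts exhausted; surface the last error
            time.sleep(retry_delay)

# Example: a flaky call that fails twice, then succeeds on the third attempt
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = call_with_retries(flaky_call, retry_count=3, retry_delay=0)
assert result == "ok" and attempts["count"] == 3
```

<p>In the evaluation component, the API call can then be wrapped as, for example, <code>call_with_retries(lambda: call_api(url, token, payload, tunnel_token), retry_count=3, retry_delay=5)<\/code>.<\/p>\n<p>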
The service exposes REST API endpoints for visualization generation, semantic search, and data analysis.<\/p>\n<h2>The Evaluation Pipeline<\/h2>\n<p>The Azure ML pipeline consists of two components:<\/p>\n<ol>\n<li><strong>Service Evaluation Component<\/strong>: Executes test queries against service endpoints<\/li>\n<li><strong>Metrics Processor Component<\/strong>: Analyzes responses and calculates evaluation metrics<\/li>\n<\/ol>\n<h2>Local Development Workflow<\/h2>\n<p>The typical workflow includes:<\/p>\n<ol>\n<li>Initialize the local service (port 4301) with required endpoints<\/li>\n<li>Establish a Devtunnel connection<\/li>\n<li>Generate authentication tokens as needed<\/li>\n<li>Update pipeline configuration with tunnel URL and credentials<\/li>\n<li>Execute the Azure ML pipeline<\/li>\n<li>Review results in Azure ML Studio<\/li>\n<\/ol>\n<p>This workflow reduces development cycles from 30+ minutes to 5-10 minutes for complete evaluation feedback.<\/p>\n<h2>Pipeline Configuration<\/h2>\n<p>Here&#8217;s the actual configuration used during local development runs (the production URL is simply commented out):<\/p>\n<pre><code class=\"language-yaml\">service_evaluation:\r\n  inputs:\r\n    # Production URL (commented out during development)\r\n    # url: \"&lt;PROD_URL&gt;\"\r\n\r\n    # Devtunnel URL for local development\r\n    url: \"&lt;DEV_URL&gt;\"\r\n\r\n    # Devtunnel authentication token\r\n    devtunnel_token: \"eyJhbGciOiJFUzI1NiIsImtpZCI6...\"\r\n\r\n    # Other configuration\r\n    user_tenant: \"Development Team\"\r\n    truth_column: \"expected\"\r\n    query_column: \"User Query\"\r\n    retry_count: 3\r\n    retry_delay: 5<\/code><\/pre>\n<h1>Best Practices and Tips<\/h1>\n<p>The following recommendations are based on practical experience implementing this approach across multiple development scenarios:<\/p>\n<h2>Service Limits and Quotas<\/2>\n<p>Before implementing Devtunnels in your development workflow, it&#8217;s important to understand the service limits that apply. 
The bandwidth allowance resets monthly; the other limits are fixed caps:<\/p>\n<table>\n<thead>\n<tr>\n<th>Resource<\/th>\n<th>Limit<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Bandwidth<\/td>\n<td>5 GB per user per month<\/td>\n<\/tr>\n<tr>\n<td>Tunnels<\/td>\n<td>10 per user<\/td>\n<\/tr>\n<tr>\n<td>Active connections<\/td>\n<td>1000 per port<\/td>\n<\/tr>\n<tr>\n<td>Ports<\/td>\n<td>10 per tunnel<\/td>\n<\/tr>\n<tr>\n<td>HTTP request rate<\/td>\n<td>1500\/min per port<\/td>\n<\/tr>\n<tr>\n<td>Data transfer rate<\/td>\n<td>Up to 20 MB\/s per tunnel<\/td>\n<\/tr>\n<tr>\n<td>Max web-forwarding HTTP request body size<\/td>\n<td>16 MB<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Planning considerations:<\/strong><\/p>\n<ul>\n<li><strong>Bandwidth usage<\/strong>: Monitor your evaluation pipeline&#8217;s data transfer to stay within the 5 GB monthly limit<\/li>\n<li><strong>Request rate limits<\/strong>: Azure ML evaluation pipelines should respect the 1500 requests\/minute limit per port<\/li>\n<li><strong>Connection limits<\/strong>: Large-scale parallel evaluations may hit the 1000 concurrent connections per port limit<\/li>\n<li><strong>Tunnel management<\/strong>: With a 10-tunnel limit per user, consider reusing tunnels across development sessions<\/li>\n<\/ul>\n<p>For questions about these limits or requests for increases, you can open an issue in the <a href=\"https:\/\/github.com\/Microsoft\/dev-tunnels\/issues\">Microsoft dev-tunnels GitHub repository<\/a>.<\/p>\n<h2>Security Considerations<\/h2>\n<p><strong>Authentication Method Selection:<\/strong><\/p>\n<ul>\n<li><strong>Anonymous access<\/strong>: Convenient for rapid prototyping but exposes services publicly. Reserve for temporary development use only.<\/li>\n<li><strong>Token-based authentication<\/strong>: Provides balanced security and usability. 
Recommended for regular development workflows.<\/li>\n<li><strong>Microsoft Entra ID<\/strong>: Enterprise-grade solution suitable for corporate environments with strict compliance requirements.<\/li>\n<\/ul>\n<p><strong>Token Management Best Practices:<\/strong> Devtunnel tokens cannot be automatically refreshed within running Azure ML pipelines. Plan token lifetime to exceed expected pipeline completion time.<\/p>\n<p><strong>Credential Security:<\/strong> Never commit devtunnel tokens to version control. Use Azure ML secrets or environment variables.<\/p>\n<h2>Development Workflow Optimization<\/h2>\n<p><strong>Environment Configuration Management:<\/strong> Use configuration files or environment variables to enable seamless switching between local and production endpoints.<\/p>\n<p><strong>Comprehensive Logging:<\/strong> Implement clear logging that identifies whether requests target local or production services to reduce debugging time.<\/p>\n<p><strong>Network Resilience:<\/strong> Implement retry logic and appropriate timeouts. Network latency can vary when routing requests through tunnels.<\/p>\n<h2>Performance Optimization<\/h2>\n<p><strong>Rate Limiting:<\/strong> Local services may not handle full Azure ML evaluation pipeline volume. Implement rate limiting to prevent resource exhaustion.<\/p>\n<p><strong>Geographic Considerations:<\/strong> Network latency varies based on proximity between development environments and Azure regions.<\/p>\n<h1>Troubleshooting Common Issues<\/h1>\n<p>The following troubleshooting guide addresses frequently encountered issues and their resolutions:<\/p>\n<h2>Connection Problems<\/h2>\n<ul>\n<li><strong>Tunnel connectivity failures<\/strong>: Verify tunnel status using <code>devtunnel show<\/code> to confirm the tunnel is active and properly configured.<\/li>\n<li><strong>Authentication errors<\/strong>: Confirm that tokens haven&#8217;t expired and match the tunnel&#8217;s configured authentication method. 
Both tokens and tunnels have expiration times that must be managed appropriately.<\/li>\n<li><strong>Port configuration issues<\/strong>: Verify that your local service is running on the expected port.<\/li>\n<\/ul>\n<h2>Azure ML Pipeline Issues<\/h2>\n<ul>\n<li><strong>Component execution failures<\/strong>: Review Azure ML logs for specific error messages related to URL accessibility or authentication problems.<\/li>\n<li><strong>Token format errors<\/strong>: Ensure the <code>X-Tunnel-Authorization<\/code> header uses the exact format <code>tunnel &lt;your-token&gt;<\/code> (not <code>Token<\/code> or <code>Bearer<\/code>).<\/li>\n<li><strong>Token expiration during execution<\/strong>: If evaluation pipelines exceed token lifetime, they will fail mid-execution. Set token expiration to at least twice the expected pipeline duration.<\/li>\n<\/ul>\n<h2>Performance Issues<\/h2>\n<ul>\n<li><strong>Response latency<\/strong>: Consider geographic distance between development environments and Azure regions when investigating performance concerns.<\/li>\n<li><strong>Rate limiting effects<\/strong>: Monitor local service logs to identify whether rate limiting or resource constraints are causing request failures or delays.<\/li>\n<\/ul>\n<h1>Conclusion<\/h1>\n<p>This approach using Microsoft Devtunnels with Azure Machine Learning has significantly transformed our AI development workflows. 
The ability to test local modifications against production-quality evaluation pipelines without deployment overhead provides substantial development velocity improvements.<\/p>\n<p>Key benefits we&#8217;ve observed include:<\/p>\n<ol>\n<li><strong>Accelerated development cycles<\/strong>: Complete evaluation feedback is available within minutes rather than the traditional deployment timeframes<\/li>\n<li><strong>Evaluation consistency<\/strong>: The same pipeline validates both local development and production code, eliminating environment-specific discrepancies<\/li>\n<li><strong>Reduced infrastructure costs<\/strong>: Decreased cloud compute requirements during development iterations<\/li>\n<li><strong>Enhanced debugging capabilities<\/strong>: Full local debugging access while maintaining comprehensive cloud-based evaluation<\/li>\n<\/ol>\n<p>This implementation demonstrates how to establish secure connectivity between local development and cloud evaluation environments, configure Azure ML pipelines for both local and production targets, implement proper authentication, and optimize workflows for efficient development practices.<\/p>\n<p>This pattern proves particularly valuable for AI applications where rapid iteration and comprehensive evaluation are critical success factors.<\/p>\n<h1>Additional Resources<\/h1>\n<ul>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/developer\/dev-tunnels\/\">Microsoft Devtunnels Documentation<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-ml-pipelines\">Azure Machine Learning Pipeline Documentation<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-component\">Azure ML Components Documentation<\/a><\/li>\n<li><a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/architecture\/best-practices\/api-design\">REST API Best Practices for AI Services<\/a><\/li>\n<\/ul>\n<hr \/>\n<p><em>This blog post is based on a real-world 
implementation of AI service evaluation using Azure Machine Learning and Microsoft Devtunnels. The code examples and configurations shown are adapted from production use cases for evaluating business intelligence and analytics services.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Learn how to streamline AI development by using Microsoft Devtunnels to connect local services with Azure Machine Learning evaluation pipelines, eliminating deployment delays while maintaining comprehensive cloud-based validation.<\/p>\n","protected":false},"author":196019,"featured_media":16558,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[10,1,17,19],"tags":[3611,85,3634,3635,3632,3400,3633],"class_list":["post-16557","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-azure-app-services","category-cse","category-frameworks","category-machine-learning","tag-ai-development","tag-azure-ml","tag-cloud-evaluation","tag-development-workflow","tag-devtunnels","tag-ise","tag-local-development"],"acf":[],"blog_post_summary":"<p>Learn how to streamline AI development by using Microsoft Devtunnels to connect local services with Azure Machine Learning evaluation pipelines, eliminating deployment delays while maintaining comprehensive cloud-based 
validation.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/16557","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/users\/196019"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/comments?post=16557"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/16557\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media\/16558"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media?parent=16557"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/categories?post=16557"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/tags?post=16557"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}