{"id":252,"date":"2026-01-14T09:00:00","date_gmt":"2026-01-14T15:33:04","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/aspire\/?p=252"},"modified":"2026-01-08T08:30:51","modified_gmt":"2026-01-08T16:30:51","slug":"adding-aspire-to-a-python-rag-application","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/aspire\/adding-aspire-to-a-python-rag-application\/","title":{"rendered":"Adding Aspire to a Python RAG Application"},"content":{"rendered":"<h1>Adding Aspire to a Python RAG Application<\/h1>\n<p>Imagine you&#8217;re developing a RAG (Retrieval Augmented Generation) application with a Python backend, a TypeScript frontend, and a few different Azure services. Your local development setup requires multiple terminal windows, hardcoded ports scattered across config files, and a workflow that demands a full Azure deployment just to test locally. Onboarding a new developer? Good luck walking them through all of this.<\/p>\n<p>This was our situation with the <a href=\"https:\/\/github.com\/Azure-Samples\/azure-search-openai-demo\">azure-search-openai-demo<\/a>, a sample showing how to build ChatGPT-like experiences over your own documents. The developer experience left room for improvement\u2014especially for new team members trying to get up and running.<\/p>\n<p>This post walks through adding Aspire to the azure-search-openai-demo application.<\/p>\n<h2>The Application: A RAG Chat Experience<\/h2>\n<p>Before diving into the Aspire integration, let&#8217;s understand what we&#8217;re working with. 
The azure-search-openai-demo is a production-ready RAG application that allows users to ask questions about their documents:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/aspire\/wp-content\/uploads\/sites\/90\/2026\/01\/chatscreen.png\" alt=\"The chat screen of the azure-search-openai-demo application\" \/><\/p>\n<p>Its architecture combines:<\/p>\n<ul>\n<li><strong>Backend<\/strong>: Python 3.11+ with Quart (an async Flask-like framework)<\/li>\n<li><strong>Frontend<\/strong>: React with TypeScript, built using Vite<\/li>\n<li><strong>Azure Services<\/strong>: OpenAI (GPT-4), AI Search, Blob Storage<\/li>\n<li><strong>Features<\/strong>: Document ingestion pipeline, vector search, semantic ranking, and citation rendering<\/li>\n<\/ul>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/aspire\/wp-content\/uploads\/sites\/90\/2026\/01\/appcomponents.png\" alt=\"The architecture of the azure-search-openai-demo application\" \/><\/p>\n<p>It&#8217;s a comprehensive example of how to build AI applications on Azure, complete with authentication, multimodal support, and deployment infrastructure.<\/p>\n<h3>The Developer Experience Challenges<\/h3>\n<p>Despite its sophistication, the local development workflow had some pain points:<\/p>\n<table>\n<thead>\n<tr>\n<th>Challenge<\/th>\n<th>Before Aspire<\/th>\n<th>After Aspire<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><strong>Local setup<\/strong><\/td>\n<td>Deploy to Azure first, then run locally<\/td>\n<td>Run locally without cloud deployment<\/td>\n<\/tr>\n<tr>\n<td><strong>Starting services<\/strong><\/td>\n<td>Run <code>start.ps1<\/code>\/<code>start.sh<\/code> scripts<\/td>\n<td>Single command startup<\/td>\n<\/tr>\n<tr>\n<td><strong>Environment variables<\/strong><\/td>\n<td>Multiple config files (.env, azure.yaml, etc.)<\/td>\n<td>Declarative configuration in code<\/td>\n<\/tr>\n<tr>\n<td><strong>Observability<\/strong><\/td>\n<td>Application Insights (production only)<\/td>\n<td>Aspire Dashboard (local + 
production)<\/td>\n<\/tr>\n<tr>\n<td><strong>Port management<\/strong><\/td>\n<td>Hardcoded ports (backend: 50505, frontend: 5173)<\/td>\n<td>Dynamic assignment<\/td>\n<\/tr>\n<tr>\n<td><strong>Deployment<\/strong><\/td>\n<td>Write Bicep infrastructure templates<\/td>\n<td>Define in AppHost, Aspire generates Bicep<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>What is Aspire?<\/h2>\n<p>Aspire streamlines building, running, debugging, and deploying distributed apps. With Aspire, you define your services in code and run everything with one command:<\/p>\n<ul>\n<li>Start your entire app locally with <code>aspire run<\/code>\u2014no terminal juggling<\/li>\n<li>Services automatically discover each other through environment variables<\/li>\n<li>The dashboard collects logs, traces, and metrics using OpenTelemetry, an open standard you can take anywhere<\/li>\n<li>New team members can clone and run immediately<\/li>\n<\/ul>\n<h3>Language Support<\/h3>\n<p>Aspire includes dedicated packages for polyglot development:<\/p>\n<ul>\n<li><code>Aspire.Hosting.Python<\/code> &#8211; Run Python applications, scripts, and modules<\/li>\n<li><code>Aspire.Hosting.JavaScript<\/code> &#8211; Run Node.js and Vite applications<\/li>\n<li><code>Aspire.Hosting.Azure.*<\/code> &#8211; Declaratively define Azure resources<\/li>\n<\/ul>\n<p>This means you can run your Python and JavaScript applications alongside Azure services, all from a single AppHost file, with automatic environment variable injection and service discovery.<\/p>\n<h2>The Integration Journey<\/h2>\n<p>The transformation happened incrementally across four commits. Let&#8217;s walk through each one. Don&#8217;t worry if you&#8217;re not familiar with C# &#8211; I&#8217;ll explain what each part does.<\/p>\n<h3>Commit 1: Creating the AppHost Foundation<\/h3>\n<p>The <strong>AppHost<\/strong> is a single file where you define your services. There&#8217;s no need for a separate folder or full project structure. 
We created an <code>apphost.cs<\/code> file in the top-level <code>app<\/code> folder:<\/p>\n<pre><code class=\"language-csharp\">#:sdk Aspire.AppHost.Sdk@13.1.0\n#:package Aspire.Hosting.Azure.AppContainers@13.1.0\n#:package Aspire.Hosting.Azure.CognitiveServices@13.1.0\n#:package Aspire.Hosting.Azure.Search@13.1.0\n#:package Aspire.Hosting.Azure.Storage@13.1.0\n#:package Aspire.Hosting.JavaScript@13.1.0\n#:package Aspire.Hosting.Python@13.1.0\n\nusing Aspire.Hosting.Azure;\n\nvar builder = DistributedApplication.CreateBuilder(args);<\/code><\/pre>\n<h4>Defining Azure Resources<\/h4>\n<p>You define your Azure resources in the AppHost, and Aspire generates the Bicep for deployment:<\/p>\n<pre><code class=\"language-csharp\">\/\/ Storage account with blob container\nvar storage = builder.AddAzureStorage(\"storage\");\nvar content = storage.AddBlobContainer(\"content\");\n\n\/\/ Azure AI Search\nvar search = builder.AddAzureSearch(\"search\");\n\n\/\/ Azure OpenAI with two model deployments\nvar openai = builder.AddAzureOpenAI(\"openai\");\nvar chatModel = openai.AddDeployment(\"chat\", \"gpt-4o\", \"2024-08-06\")\n    .WithProperties(m =&gt; m.SkuCapacity = 30);\nvar textEmbedding = openai.AddDeployment(\"text-embedding\", \"text-embedding-3-large\", \"1\")\n    .WithProperties(m =&gt; m.SkuCapacity = 200);<\/code><\/pre>\n<p>Aspire handles provisioning these resources in Azure during deployment and provides connection information to your application automatically. No more copying connection strings or managing secrets in config files.<\/p>\n<h4>Configuring the Python Backend<\/h4>\n<p>Aspire can run Python applications. 
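<\/p>
<p>On the Python side, the resources defined above surface to the backend as ordinary environment variables. As a hypothetical sketch (not code from the repo; <code>require_endpoint<\/code> is an illustrative name), the backend could fail fast at startup if an injected endpoint is missing or malformed:<\/p>

```python
import os
from urllib.parse import urlparse


def require_endpoint(name: str) -> str:
    """Read an endpoint URL injected by the AppHost, failing fast if malformed."""
    value = os.environ.get(name, "")
    parsed = urlparse(value)
    if parsed.scheme != "https" or not parsed.netloc:
        raise RuntimeError(f"{name} must be a full https:// URL, got {value!r}")
    return value


# Simulate the variable Aspire would inject when running under the AppHost
os.environ.setdefault("AZURE_SEARCH_ENDPOINT", "https://example.search.windows.net")
print(require_endpoint("AZURE_SEARCH_ENDPOINT"))
```

<p>The helper is purely illustrative; the demo reads these variables directly with <code>os.environ<\/code>.<\/p>
<p>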
Instead of having <code>start.ps1<\/code> and <code>start.sh<\/code> scripts with:<\/p>\n<pre><code class=\"language-bash\">.\/.venv\/bin\/python -m pip install -r app\/backend\/requirements.txt\n\nport=50505\nhost=localhost\n..\/..\/.venv\/bin\/python -m quart --app main:app run --port \"$port\" --host \"$host\" --reload<\/code><\/pre>\n<p>We can model starting the Python application in the AppHost:<\/p>\n<pre><code class=\"language-csharp\">var backend = builder.AddPythonModule(\"backend\", \".\/backend\", \"quart\")\n    .WithHttpEndpoint(env: \"PORT\")\n    .WithArgs(c =&gt;\n    {\n        c.Args.Add(\"--app\");\n        c.Args.Add(\"main.py\");\n        c.Args.Add(\"run\");\n\n        var endpoint = ((IResourceWithEndpoints)c.Resource).GetEndpoint(\"http\");\n        c.Args.Add(\"--port\");\n        c.Args.Add(endpoint.Property(EndpointProperty.TargetPort));\n        c.Args.Add(\"--host\");\n        c.Args.Add(endpoint.EndpointAnnotation.TargetHost);\n\n        \/\/ Hot reload for local development\n        c.Args.Add(\"--reload\");\n    })\n    .WithEnvironment(\"AZURE_STORAGE_ACCOUNT\", storage.Resource.NameOutputReference)\n    .WithEnvironment(\"AZURE_STORAGE_CONTAINER\", content.Resource.BlobContainerName)\n    .WithEnvironment(\"AZURE_SEARCH_ENDPOINT\", search.Resource.UriExpression)\n    .WithEnvironment(\"AZURE_OPENAI_ENDPOINT\", openai.Resource.UriExpression)\n    .WithEnvironment(\"AZURE_OPENAI_CHATGPT_DEPLOYMENT\", chatModel.Resource.DeploymentName)\n    .WithEnvironment(\"AZURE_OPENAI_EMB_DEPLOYMENT\", textEmbedding.Resource.DeploymentName)<\/code><\/pre>\n<p>Key features here:<\/p>\n<ul>\n<li><strong>Dynamic port assignment<\/strong>: No more hardcoded <code>50505<\/code><\/li>\n<li><strong>Hot reload<\/strong>: Changes to Python files automatically restart the server<\/li>\n<li><strong>Type-safe resource references<\/strong>: The Azure resources flow as strongly-typed objects<\/li>\n<\/ul>\n<p>Aspire resolves the Azure expressions at runtime 
based on whether you&#8217;re running locally or deployed to Azure.<\/p>\n<h4>Configuring the Vite Frontend<\/h4>\n<p>At development time, Vite runs its own server to host the frontend files, which enables features like Hot Module Replacement. When the app is deployed, the frontend is packaged into the backend app and served from a <code>static<\/code> folder.<\/p>\n<p>The frontend integration is simple:<\/p>\n<pre><code class=\"language-csharp\">var frontend = builder.AddViteApp(\"frontend\", \".\/frontend\")\n    .WithReference(backend)\n    .WaitFor(backend);<\/code><\/pre>\n<p><code>WithReference(backend)<\/code> automatically injects environment variables (<code>BACKEND_HTTP<\/code>, <code>BACKEND_HTTPS<\/code>) that tell the frontend how to reach the backend. No more hardcoded URLs.<\/p>\n<p>We updated <code>vite.config.ts<\/code> to use these environment variables:<\/p>\n<pre><code class=\"language-typescript\">\/\/ Before:\nproxy: {\n    \"\/ask\": \"http:\/\/localhost:50505\",\n    \"\/chat\": \"http:\/\/localhost:50505\",\n    \/\/ ... repeated\n}\n\n\/\/ After:\nconst proxyTarget = {\n    target: process.env.BACKEND_HTTPS || process.env.BACKEND_HTTP,\n    changeOrigin: true\n};\n\nproxy: {\n    \"\/ask\": proxyTarget,\n    \"\/chat\": proxyTarget,\n    \"\/speech\": proxyTarget,\n    \"\/config\": proxyTarget,\n    \/\/ ... all routes use the same dynamic target\n}<\/code><\/pre>\n<p>Now the frontend automatically discovers the backend URL when running locally on a dynamic port.<\/p>\n<p><code>WaitFor(backend)<\/code> tells the frontend to wait until the backend is healthy before starting. 
This ensures the backend is ready to accept requests before the frontend tries connecting to it.<\/p>\n<p>When deployed to production, the frontend is hosted from the backend:<\/p>\n<pre><code class=\"language-csharp\">backend.PublishWithContainerFiles(frontend, \".\/static\");<\/code><\/pre>\n<p>This tells the <code>backend<\/code> to take the published files from the <code>frontend<\/code> resource and place them in its <code>.\/static<\/code> folder. The backend Python code already supports this with routes like:<\/p>\n<pre><code class=\"language-python\">bp = Blueprint(\"routes\", __name__, static_folder=\"static\")\n\n@bp.route(\"\/\")\nasync def index():\n    return await bp.send_static_file(\"index.html\")\n\n@bp.route(\"\/assets\/&lt;path:path&gt;\")\nasync def assets(path):\n    return await send_from_directory(Path(__file__).resolve().parent \/ \"static\" \/ \"assets\", path)<\/code><\/pre>\n<h3>Commit 2: Refactoring Environment Variables<\/h3>\n<p>We updated the application to use full endpoint URLs instead of service names:<\/p>\n<pre><code class=\"language-python\"># Before:\nAZURE_SEARCH_SERVICE = os.environ[\"AZURE_SEARCH_SERVICE\"]\nAZURE_SEARCH_ENDPOINT = f\"https:\/\/{AZURE_SEARCH_SERVICE}.search.windows.net\"\n\n# After:\nAZURE_SEARCH_ENDPOINT = os.environ[\"AZURE_SEARCH_ENDPOINT\"]<\/code><\/pre>\n<p>Why does this matter?<\/p>\n<ul>\n<li><strong>Simpler code<\/strong>: No string concatenation to build URLs<\/li>\n<li><strong>Flexibility<\/strong>: Works with custom domains and different cloud environments<\/li>\n<li><strong>Aspire-friendly<\/strong>: Aspire provides full URIs via <code>Resource.UriExpression<\/code><\/li>\n<\/ul>\n<p>The same pattern was applied to <code>AZURE_OPENAI_ENDPOINT<\/code> and other Azure service connections. 
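<\/p>
<p>If a codebase needs to migrate gradually, a small transitional helper can prefer the new endpoint variable and fall back to building the URL from the legacy service name. This is a hypothetical sketch, not code from the repo:<\/p>

```python
import os


def search_endpoint() -> str:
    """Prefer the full endpoint URL; fall back to the legacy service-name variable."""
    endpoint = os.environ.get("AZURE_SEARCH_ENDPOINT")
    if endpoint:
        return endpoint
    # Legacy path: reconstruct the public-cloud URL from the service name
    service = os.environ["AZURE_SEARCH_SERVICE"]
    return f"https://{service}.search.windows.net"
```

<p>Once every caller reads the new variable, the fallback (and its hardcoded <code>search.windows.net<\/code> suffix) can be deleted.<\/p>
<p>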
This change touched 10 files across the codebase but resulted in cleaner, more maintainable code.<\/p>\n<h3>Commit 3: Adding OpenTelemetry Observability<\/h3>\n<p>Observability is where Aspire shines. OpenTelemetry is the industry standard for telemetry, and Aspire&#8217;s dashboard natively consumes OTLP (OpenTelemetry Protocol).<\/p>\n<p>To enable OpenTelemetry in the app, we added <code>app\/backend\/telemetry.py<\/code>:<\/p>\n<pre><code class=\"language-python\">import logging\n\nimport opentelemetry._logs as otel_logs\nimport opentelemetry.metrics as otel_metrics\nimport opentelemetry.sdk._logs as otel_sdk_logs\nimport opentelemetry.sdk._logs.export as otel_logs_export\nimport opentelemetry.sdk.metrics as otel_sdk_metrics\nimport opentelemetry.sdk.metrics.export as otel_metrics_export\nimport opentelemetry.sdk.trace as otel_sdk_trace\nimport opentelemetry.sdk.trace.export as otel_trace_export\nimport opentelemetry.trace as otel_trace\n# OTLP\/gRPC exporter variants shown here; the HTTP variants work the same way\nfrom opentelemetry.exporter.otlp.proto.grpc import _log_exporter as log_exporter\nfrom opentelemetry.exporter.otlp.proto.grpc import metric_exporter, trace_exporter\n\ndef configure_opentelemetry():\n    # Configure Traces\n    otel_trace.set_tracer_provider(otel_sdk_trace.TracerProvider())\n    otlp_span_exporter = trace_exporter.OTLPSpanExporter()\n    span_processor = otel_trace_export.BatchSpanProcessor(otlp_span_exporter)\n    otel_trace.get_tracer_provider().add_span_processor(span_processor)\n\n    # Configure Metrics\n    otlp_metric_exporter = metric_exporter.OTLPMetricExporter()\n    metric_reader = otel_metrics_export.PeriodicExportingMetricReader(\n        otlp_metric_exporter,\n        export_interval_millis=5000\n    )\n    otel_metrics.set_meter_provider(\n        otel_sdk_metrics.MeterProvider(metric_readers=[metric_reader])\n    )\n\n    # Configure Logs\n    otel_logs.set_logger_provider(otel_sdk_logs.LoggerProvider())\n    otlp_log_exporter = log_exporter.OTLPLogExporter()\n    log_processor = otel_logs_export.BatchLogRecordProcessor(otlp_log_exporter)\n    otel_logs.get_logger_provider().add_log_record_processor(log_processor)\n\n    # Configure standard logging to also emit to OpenTelemetry\n    logging.basicConfig(\n        level=logging.INFO,\n        handlers=[\n            logging.StreamHandler(),\n            otel_sdk_logs.LoggingHandler(logger_provider=otel_logs.get_logger_provider())\n        ]\n    )<\/code><\/pre>\n<p>This configures the three signals of 
observability:<\/p>\n<ul>\n<li><strong>Traces<\/strong>: Track requests as they flow through the system<\/li>\n<li><strong>Metrics<\/strong>: Measure performance indicators<\/li>\n<li><strong>Logs<\/strong>: Structured logging with automatic correlation<\/li>\n<\/ul>\n<p>In <code>app.py<\/code>, we integrated this early in the startup process:<\/p>\n<pre><code class=\"language-python\">import telemetry\n\n# Configure OpenTelemetry BEFORE creating the app\ntelemetry.configure_opentelemetry()\n\n# ... later in create_app() ...\nif os.getenv(\"APPLICATIONINSIGHTS_CONNECTION_STRING\") or os.getenv(\"OTEL_EXPORTER_OTLP_ENDPOINT\"):\n    # Instrument HTTP clients\n    AioHttpClientInstrumentor().instrument()\n    HTTPXClientInstrumentor().instrument()\n\n    # Instrument OpenAI SDK calls\n    OpenAIInstrumentor().instrument()\n\n    # Instrument the ASGI app to capture request\/response\n    app.asgi_app = OpenTelemetryMiddleware(app.asgi_app)<\/code><\/pre>\n<p>The conditional check ensures instrumentation only runs when telemetry endpoints are configured. Aspire automatically sets <code>OTEL_EXPORTER_OTLP_ENDPOINT<\/code> to point to its dashboard.<\/p>\n<p>When running the application now, we get traces and metrics in the Aspire Dashboard:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/aspire\/wp-content\/uploads\/sites\/90\/2026\/01\/dashboard-tracing.png\" alt=\"The Aspire Dashboard shows traces of the application\" \/><\/p>\n<p>You can see the 3 sparkle icons on the calls to <code>openai<\/code>. 
Clicking on these gives you insight into what is sent to the LLM:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/aspire\/wp-content\/uploads\/sites\/90\/2026\/01\/dashboard-genai.png\" alt=\"The Aspire Dashboard shows generative AI information\" \/><\/p>\n<p>You can even view how many tokens were used:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/aspire\/wp-content\/uploads\/sites\/90\/2026\/01\/dashboard-\" alt=\"The Aspire Dashboard shows tokens used\" \/><\/p>\n<h3>Commit 4: Enabling Azure Container Apps Deployment<\/h3>\n<p>The final piece was configuring deployment to Azure Container Apps. The publish-specific parts of this configuration only take effect when the app is deployed; in local run mode, the development setup from Commit 1 is used.<\/p>\n<pre><code class=\"language-csharp\">builder.AddAzureContainerAppEnvironment(\"env\");\n\nvar backend = builder.AddPythonModule(\n        \"backend\",\n        \".\/backend\", \n        builder.ExecutionContext.IsRunMode ? \"quart\" : \"gunicorn\")\n    .WithArgs(c =&gt;\n    {\n        \/\/ In run mode, set up for local development with hot reload\n        \/\/ In publish mode, use a production server\n        if (builder.ExecutionContext.IsRunMode)\n        {\n           ... \/\/ run args\n        }\n        else\n        {\n            c.Args.Add(\"-k\");\n            c.Args.Add(\"uvicorn.workers.UvicornWorker\");\n\n            c.Args.Add(\"-b\");\n            c.Args.Add(\"0.0.0.0:8000\");\n\n            c.Args.Add(\"main:app\");\n        }\n    })\n    \/\/ ... 
all the previous configuration ...\n    .WithExternalHttpEndpoints()\n    .PublishAsAzureContainerApp((infra, app) =&gt;\n    {\n        var c = app.Template.Containers.Single().Value;\n        if (c != null)\n        {\n            c.Resources.Cpu = 1.0;\n            c.Resources.Memory = \"2.0Gi\";\n        }\n    });<\/code><\/pre>\n<p><code>PublishAsAzureContainerApp<\/code> tells Aspire to:<\/p>\n<ol>\n<li>Build a Docker container for the application<\/li>\n<li>Push it to Azure Container Registry<\/li>\n<li>Deploy it to Azure Container Apps with the specified settings \n<ul>\n<li>The existing Bicep templates set CPU to 1.0 and memory to &#8220;2.0Gi&#8221;, so we use the same values here.<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>The frontend builds into the backend&#8217;s static folder:<\/p>\n<pre><code class=\"language-csharp\">backend.PublishWithContainerFiles(frontend, \".\/static\");<\/code><\/pre>\n<p>This creates a single-container deployment with the frontend embedded, simplifying the deployment architecture.<\/p>\n<p>Now deploying the app is as simple as:<\/p>\n<pre><code class=\"language-bash\">aspire deploy<\/code><\/pre>\n<p>After answering a few questions (which subscription, resource group, and region to deploy to), the application is running in Azure:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/aspire\/wp-content\/uploads\/sites\/90\/2026\/01\/azure-portal.png\" alt=\"Azure services in the portal.\" \/><\/p>\n<h2>Developer Experience Transformation<\/h2>\n<p>Here&#8217;s what local development looks like now.<\/p>\n<h3>Local Development: Before and After<\/h3>\n<p><strong>Before Aspire:<\/strong><\/p>\n<pre><code class=\"language-bash\"># Create an azd environment\nazd env new\n\n# Deploy to Azure\nazd up\n\n# Start the app\n.\/app\/start.sh\n\n# Check logs in terminal output<\/code><\/pre>\n<p><strong>After Aspire:<\/strong><\/p>\n<pre><code class=\"language-bash\"># Single command\naspire run\n\n# A link to the Aspire Dashboard is 
displayed<\/code><\/pre>\n<h3>The Aspire Dashboard<\/h3>\n<p>When you run the AppHost, the Aspire Dashboard launches automatically. It provides:<\/p>\n<h4>Live Resource Monitoring<\/h4>\n<ul>\n<li>All services (backend, frontend, prepdocs) in one view<\/li>\n<li>Real-time status: starting, running, exited<\/li>\n<li>Console logs for each service with filtering<\/li>\n<li>Environment variables inspection<\/li>\n<\/ul>\n<h4>Distributed Tracing<\/h4>\n<ul>\n<li>See request flows: Frontend \u2192 Backend \u2192 OpenAI \u2192 AI Search<\/li>\n<li>Performance timing for each operation<\/li>\n<li>Automatic correlation across services<\/li>\n<li>Click on any trace to see detailed spans<\/li>\n<\/ul>\n<h4>Structured Logs<\/h4>\n<ul>\n<li>All service logs in one searchable interface<\/li>\n<li>Automatic correlation with traces via span IDs<\/li>\n<li>Filter by severity, service, time range<\/li>\n<li>Search across all logs simultaneously<\/li>\n<\/ul>\n<h4>Metrics<\/h4>\n<ul>\n<li>Request rates and response times<\/li>\n<li>Resource utilization (CPU, memory)<\/li>\n<li>Custom application metrics<\/li>\n<li>Historical trends and graphs<\/li>\n<\/ul>\n<h3>Deployment: The New Way<\/h3>\n<p><strong>Before Aspire:<\/strong><\/p>\n<pre><code class=\"language-bash\"># Write and maintain Bicep files\nazd init\nazd up\n# Edit .env file\nazd deploy<\/code><\/pre>\n<p><strong>After Aspire:<\/strong><\/p>\n<pre><code class=\"language-bash\">aspire deploy                # Aspire handles everything\n# Aspire generates Bicep \u2192 builds containers \u2192 deploys to ACA\n# Environment variables flow automatically from AppHost<\/code><\/pre>\n<h2>Key Learnings and Best Practices<\/h2>\n<h3>What Worked Well<\/h3>\n<p><strong>1&#46; Start with Local Development First.<\/strong> Get the AppHost working locally before tackling deployment. 
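<\/p>
<p>A quick sanity check during that first local run is to dump the variables the AppHost injects when the backend starts. This is a hypothetical debugging snippet, not part of the repo:<\/p>

```python
import os


def aspire_env(prefixes: tuple = ("AZURE_", "OTEL_", "BACKEND_")) -> dict:
    """Collect the service-discovery variables injected by the AppHost."""
    return {k: v for k, v in sorted(os.environ.items()) if k.startswith(prefixes)}


# Print the injected variables at startup to confirm the wiring
for name, value in aspire_env().items():
    print(f"{name}={value}")
```

<p>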
The Aspire Dashboard makes it easy to verify that all services connect correctly and environment variables flow as expected.<\/p>\n<p><strong>2&#46; Incremental Migration.<\/strong> We added Aspire over four commits, not in a &#8220;big bang&#8221; rewrite. Each commit was testable and deployable independently. This approach reduces risk and makes it easier to learn Aspire incrementally.<\/p>\n<h2>What&#8217;s Next: Document Ingestion<\/h2>\n<p>The azure-search-openai-demo app provides two ways to ingest data: manual ingestion and cloud ingestion. Both approaches use the same code for processing the data, but manual ingestion runs locally while cloud ingestion runs in Azure Functions as Azure AI Search custom skills.<\/p>\n<p>For this first step of moving to Aspire, we only enabled manual ingestion. To enable cloud ingestion, we will need to add support for:<\/p>\n<ol>\n<li>The Azure Functions that provide the custom skills to Azure AI Search.<\/li>\n<li>Running the <code>setup_cloud_ingestion.py<\/code> script during or after deployment.<\/li>\n<\/ol>\n<p>In a future post, we\u2019ll tackle these gaps and extend the Aspire AppHost to support cloud ingestion end to end, completing the document ingestion story for the azure-search-openai-demo app.<\/p>\n<h2>Wrapping Up<\/h2>\n<p>Adding Aspire to our Python RAG application gave us:<\/p>\n<ul>\n<li>\u2705 <strong>Simplified local development<\/strong> &#8211; One command instead of managing multiple scripts<\/li>\n<li>\u2705 <strong>Built-in observability<\/strong> &#8211; Aspire Dashboard with traces, logs, and metrics<\/li>\n<li>\u2705 <strong>Streamlined deployment<\/strong> &#8211; AppHost \u2192 generated Bicep \u2192 Azure Container Apps<\/li>\n<li>\u2705 <strong>Maintained polyglot architecture<\/strong> &#8211; Python and TypeScript, no rewrites needed<\/li>\n<\/ul>\n<p>We added Aspire alongside our existing code with minimal changes.<\/p>\n<h3>Try It Yourself<\/h3>\n<p>Try it out 
now:<\/p>\n<ol>\n<li>\n<p><strong>Install Aspire<\/strong>: Follow the <a href=\"https:\/\/aspire.dev\/get-started\/install-cli\/\">instructions<\/a>.<\/p>\n<\/li>\n<li>\n<p><strong>Clone the sample<\/strong>:<\/p>\n<pre><code class=\"language-bash\">git clone -b Aspirify https:\/\/github.com\/eerhardt\/azure-search-openai-demo\ncd azure-search-openai-demo<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Run it locally with Aspire<\/strong>:<\/p>\n<pre><code class=\"language-bash\">aspire run<\/code><\/pre>\n<\/li>\n<li>\n<p><strong>Explore the Dashboard<\/strong>: It opens automatically at <code>https:\/\/localhost:17216<\/code>.<\/p>\n<p>You will be prompted to enter Azure information to deploy the Azure Search, OpenAI, and Storage resources.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/aspire\/wp-content\/uploads\/sites\/90\/2026\/01\/azure-prompt.png\" alt=\"Prompt for Azure information\" \/><\/p>\n<p>Once this completes, you can run the <code>prepdocs<\/code> resource by clicking the play button, which will ingest the documents contained in the <code>data<\/code> folder.<\/p>\n<p>You can then open the <code>frontend<\/code> URL to see the app running, and ask it a question about the documents.<\/p>\n<\/li>\n<li>\n<p><strong>Deploy to Azure<\/strong>:<\/p>\n<pre><code class=\"language-bash\">aspire deploy<\/code><\/pre>\n<blockquote>\n<p><strong>Note:<\/strong> If you&#8217;re already using <code>azd<\/code>, it also supports deploying Aspire AppHosts via <code>azd up<\/code>.<\/p>\n<\/blockquote>\n<\/li>\n<li>\n<p><strong>Study the commits<\/strong>: Check out the <a href=\"https:\/\/github.com\/Azure-Samples\/azure-search-openai-demo\/compare\/main...eerhardt:azure-search-openai-demo:Aspirify\">four commits that added Aspire<\/a>.<\/p>\n<\/li>\n<li>\n<p><strong>Read the docs<\/strong>: The <a href=\"https:\/\/aspire.dev\/docs\/\">Aspire documentation<\/a> has excellent Python and JavaScript examples.<\/p>\n<\/li>\n<\/ol>\n<hr 
\/>\n<h3>Resources<\/h3>\n<ul>\n<li><a href=\"https:\/\/aspire.dev\/docs\/\">Aspire Documentation<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/Azure-Samples\/azure-search-openai-demo\">Azure Search OpenAI Demo Repository<\/a><\/li>\n<li><a href=\"https:\/\/aspire.dev\/integrations\/frameworks\/python\/\">Aspire Python Hosting<\/a><\/li>\n<li><a href=\"https:\/\/aspire.dev\/integrations\/frameworks\/javascript\/\">Aspire JavaScript Hosting<\/a><\/li>\n<li><a href=\"https:\/\/opentelemetry.io\/docs\/languages\/python\/\">OpenTelemetry Python<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>This blog post documents the transformation of the azure-search-openai-demo (a Python\/TypeScript RAG application) through the addition of Aspire for local development and observability. It demonstrates how Aspire can enhance polyglot applications without requiring rewrites.<\/p>\n","protected":false},"author":112653,"featured_media":246,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[20,1],"tags":[8,9,12,24],"class_list":["post-252","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-aspire-category","tag-ai","tag-aspire","tag-javascript","tag-python"],"acf":[],"blog_post_summary":"<p>This blog post documents the transformation of the azure-search-openai-demo (a Python\/TypeScript RAG application) through the addition of Aspire for local development and observability. 
It demonstrates how Aspire can enhance polyglot applications without requiring rewrites.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/posts\/252","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/users\/112653"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/comments?post=252"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/posts\/252\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/media\/246"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/media?parent=252"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/categories?post=252"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/aspire\/wp-json\/wp\/v2\/tags?post=252"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}