September 17th, 2025

Ground Your Agents Faster with Native Azure AI Search Indexing in Foundry

Farzad Sunavala
Principal Product Manager

TL;DR

Azure AI Foundry now lets you ingest data directly from Azure Blob Storage, ADLS Gen2, or Microsoft OneLake and create an Azure AI Search index in just one click. 

When you create an agent in Azure AI Foundry, one of the most powerful steps is “Add knowledge”: grounding your agent with your enterprise data so it can answer questions and act with context.

Previously, this required you to bring an existing Azure AI Search index and configure it before you could connect your data. That meant extra setup steps and more friction, especially if you were just experimenting. 

Today, we’re making this much simpler. 

Why This Matters

Grounding (a.k.a. retrieval augmentation) is one of the highest‑leverage steps in agent development. But the traditional workflow—provision a search service, design an index, run an ingestion pipeline, create skillsets, then wire it to your agent—adds friction when you simply want to test a hypothesis or enable a new scenario.

Now you can collapse that entire path into a single, integrated flow inside Azure AI Foundry. You focus on: (1) choosing a data source, (2) selecting an embedding model, and (3) clicking create. Foundry orchestrates ingestion, chunking, embedding, and vector index creation for you.

What’s New

You can now natively create an Azure AI Search vector index inside Foundry during the “Add knowledge” step of agent creation or editing.

Supported data sources (initial wave)

  • Azure Blob Storage
  • Azure Data Lake Storage (ADLS) Gen2
  • Microsoft OneLake (Fabric)

Key capabilities

  • Inline index creation: no pre-existing Search index required.
  • Automatic ingestion: content is pulled, chunked, and prepared for embeddings.
  • Embedding model selection: choose from supported embedding models at creation time.
  • Hybrid-ready: the index is configured for combined vector + keyword retrieval.
  • Secure by design: respects Azure RBAC and the network isolation of the underlying resources.
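“Hybrid-ready” means the index serves both vector-similarity and keyword (BM25) queries and merges the two result lists, typically with Reciprocal Rank Fusion (RRF). As a minimal sketch of the RRF idea only (the document IDs below are illustrative, not real search results):

```python
def rrf_merge(rankings, k=60):
    """Merge ranked result lists with Reciprocal Rank Fusion.

    Each ranking is a list of document IDs ordered best-first. RRF gives
    a document 1 / (k + rank) per list and sums across lists, so documents
    that rank well in either retrieval mode rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from the two halves of a hybrid query.
keyword_hits = ["doc3", "doc1", "doc7"]   # keyword (BM25) ranking
vector_hits = ["doc1", "doc5", "doc3"]    # vector-similarity ranking

merged = rrf_merge([keyword_hits, vector_hits])
print(merged[0])  # a document ranked highly in both lists wins
```

The service handles this fusion for you at query time; the sketch is just to show why a single index configured for both modes beats running the two retrieval paths separately.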

How It Works

  1. Open (or create) an agent in Azure AI Foundry.
  2. Select Add knowledge.
  3. Choose a supported data source (Blob / ADLS Gen2 / OneLake).
  4. Authorize the connection (if first time) and pick containers / paths.
  5. Select an Azure OpenAI embedding model (e.g., text-embedding-*).
  6. Click Create index & ingest.
  7. Foundry: pulls content → chunks documents → generates embeddings → provisions (or reuses) an Azure AI Search index optimized for hybrid queries.
  8. Your agent can now answer grounded questions immediately.
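The ingestion in step 7 boils down to splitting each document into overlapping chunks, embedding each chunk, and writing the vectors to the index. A minimal sketch of the chunking stage (the chunk size and overlap values here are illustrative choices, not Foundry's actual defaults):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping fixed-size chunks for embedding.

    The overlap keeps sentences that straddle a chunk boundary visible
    to both neighboring chunks, which helps retrieval recall.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "x" * 1200
chunks = chunk_text(doc, chunk_size=500, overlap=50)
# Each chunk would then go to the selected embedding model, and the
# resulting vectors would be written to the Azure AI Search index.
print(len(chunks))
```

Foundry performs this pipeline for you; the point of the sketch is to show what you no longer have to build and operate yourself.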

[Animated walkthrough: creating an index from Blob Storage during the Add knowledge flow]

No separate indexing pipeline. No manual schema definition. No script to run. Just connect data and go.

Try It Today

Get started with our tutorial, How to create an Azure AI Search index in Foundry.

Happy grounding! We can’t wait to see what you build. Share your launches with #AzureAIFoundry.

Author

Farzad Sunavala
Principal Product Manager

Building knowledge retrieval capabilities for AI Agents.
