TL;DR
Azure AI Foundry now lets you ingest data directly from Azure Blob Storage, ADLS Gen2, or Microsoft OneLake and create an Azure AI Search index in just one click.
When you create an agent in Azure AI Foundry, one of the most powerful steps is "Add knowledge": grounding your agent with your enterprise data so it can answer questions and act with context.
Previously, this required you to bring an existing Azure AI Search index and configure it before you could connect your data. That meant extra setup steps and more friction, especially if you were just experimenting.
Today, we’re making this much simpler.
Why This Matters
Grounding (a.k.a. retrieval augmentation) is one of the highest‑leverage steps in agent development. But the traditional workflow—provision a search service, design an index, run an ingestion pipeline, create skillsets, then wire it to your agent—adds friction when you simply want to test a hypothesis or enable a new scenario.
Now you can collapse that entire path into a single, integrated flow inside Azure AI Foundry. You focus on: (1) choosing a data source, (2) selecting an embedding model, and (3) clicking create. Foundry orchestrates ingestion, chunking, embedding, and vector index creation for you.
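To make the "chunking" step concrete, here is a minimal illustrative sketch of fixed-size chunking with overlap, a common ingestion pattern. This is not Foundry's actual chunking logic (chunk size, overlap, and boundary handling are assumptions for illustration):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping fixed-size chunks before embedding.

    Overlap preserves context that would otherwise be cut at chunk
    boundaries, at the cost of some duplicated content.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be greater than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # slide forward, keeping `overlap` chars
    return chunks
```

Each resulting chunk is then embedded and stored as one vector in the index; overlapping chunks trade a little index size for better recall on passages that straddle a boundary.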
What’s New
You can now natively create an Azure AI Search vector index inside Foundry during the “Add knowledge” step of agent creation or editing.
Supported data sources (initial wave)
- Azure Blob Storage
- Azure Data Lake Storage (ADLS) Gen2
- Microsoft OneLake (Fabric)
Key capabilities
| Capability | Description |
|---|---|
| Inline index creation | No pre-existing Search index required. |
| Automatic ingestion | Content is pulled, chunked, and prepared for embeddings. |
| Embedding model selection | Choose from supported embedding models at creation time. |
| Hybrid-ready | Index configured for combined vector + keyword retrieval. |
| Secure by design | Respects Azure RBAC & network isolation of underlying resources. |
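On "hybrid-ready": Azure AI Search merges the keyword and vector result lists using Reciprocal Rank Fusion (RRF), which scores each document by the reciprocal of its rank in every list it appears in. A minimal sketch of that fusion step (the constant `k = 60` is the commonly cited default, assumed here for illustration):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked result lists with Reciprocal Rank Fusion.

    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so items ranked highly by multiple retrievers
    rise to the top.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, a document ranked #2 by keyword search and #1 by vector search will usually outscore one that tops only a single list, which is exactly the behavior you want from a hybrid index.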
How It Works
- Open (or create) an agent in Azure AI Foundry.
- Select Add knowledge.
- Choose a supported data source (Blob / ADLS Gen2 / OneLake).
- Authorize the connection (if first time) and pick containers / paths.
- Select an Azure OpenAI embedding model (e.g., `text-embedding-*`).
- Click Create index & ingest.
- Foundry: pulls content → chunks documents → generates embeddings → provisions (or reuses) an Azure AI Search index optimized for hybrid queries.
- Your agent can now answer grounded questions immediately.
No separate indexing pipeline. No manual schema definition. No script to run. Just connect data and go.
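The pipeline Foundry automates (embed chunks, store vectors, retrieve by similarity) can be sketched end to end with a toy in-memory index. The `toy_embed` function below is a stand-in only; in the real flow, Azure OpenAI embedding models produce the vectors and Azure AI Search stores and queries them:

```python
import math

def toy_embed(text: str, dim: int = 32) -> list[float]:
    # Illustrative only: a deterministic bag-of-characters vector,
    # NOT a real embedding model.
    vec = [0.0] * dim
    for ch in text.lower():
        vec[ord(ch) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# "Ingestion": embed each document once and keep the vectors.
docs = {
    "doc1": "quarterly sales report",
    "doc2": "employee onboarding guide",
}
index = {doc_id: toy_embed(text) for doc_id, text in docs.items()}

def search(query: str, top_k: int = 1) -> list[str]:
    # "Retrieval": embed the query and rank documents by similarity.
    q = toy_embed(query)
    ranked = sorted(index, key=lambda d: cosine(q, index[d]), reverse=True)
    return ranked[:top_k]
```

Everything above (plus chunking, persistence, and hybrid keyword scoring) is what the one-click flow provisions for you.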
Try It Today
Get started with our tutorial, How to create an Azure AI Search index in Foundry.
Related Resources
- Azure AI Search Concepts
- Hybrid Retrieval Overview
- Embeddings Models in Foundry
- Latest Agentic Retrieval (preview) in Azure AI Search
Happy grounding—can’t wait to see what you build. Share launches with #AzureAIFoundry!