Azure Cosmos DB was named by Bloomberg as the no. 1 database of choice for Retrieval-Augmented Generation (RAG) and Large Language Model (LLM) customization. It is used by OpenAI to scale its ChatGPT service, and by many thousands of customers worldwide, serving billions of end users with globally distributed apps at planet scale.
In this blog we will explore a simple example of lightweight multi-agent orchestration in Python using OpenAI Swarm. We'll also see the benefits of using Azure Cosmos DB as both a vector store and an operational database.
What are multi-agent apps?
Multi-agent AI apps involve multiple autonomous agents designed to work together or independently to solve problems, achieve goals, or simulate behaviors. Each agent has specific capabilities, decision-making processes, and communication mechanisms, enabling them to collaborate, compete, or adapt within a shared environment. They are used where decentralized and cooperative strategies are essential.
Why is a multi-agent approach useful for building AI apps?
When building AI apps with Large Language Models (LLMs), think of LLMs as indexes over a database of unstructured text. Unlike structured queries, LLMs use natural language to interpolate and transform data, offering flexible “queries” but non-deterministic and sometimes inaccurate “results.” This is in contrast to traditional databases, which prioritize accuracy and predictability with minimal input flexibility. To optimize bespoke AI apps with LLMs, it's crucial to balance their power with controlled accuracy. Building multi-agent apps, where the task each agent performs is clearly defined and narrowly scoped, mitigates the problems that arise when a single agent is overloaded with too many capabilities through prompting.
Building a multi-agent AI app
OpenAI Swarm makes agent coordination and execution lightweight, highly controllable, and easily testable. The semantics for creating an agent are simple, allowing you to build scalable, real-world solutions while avoiding a steep learning curve:
agent_a = Agent(
    name="Agent A",
    instructions="Instructions that tightly define the scope of agent A",
    functions=[transfer_to_agent_b, do_some_specific_function],
)
agent_b = Agent(
    name="Agent B",
    instructions="Instructions that tightly define the scope of agent B",
    functions=[transfer_to_agent_a, do_some_specific_function],
)
In our sample app for a personal shopping AI assistant, four specialized agents are created:
- Triage Agent: Determines the type of request and transfers to the appropriate agent.
- Product Agent: Answers customer queries, applying the RAG pattern with vector search in Azure Cosmos DB.
- Refund Agent: Manages customer refunds, storing the transactional data in Azure Cosmos DB.
- Sales Agent: Handles actions related to placing orders, storing the transactional data in Azure Cosmos DB.
Each agent transfer initiates a new interaction with Azure OpenAI’s chat API using the receiving agent’s initial prompt. This keeps each agent coherent and prevents model collapse. Once a request falls outside an agent’s scope, it hands the task to an agent better suited to handle it. We can see agent transfers in action when running the command-line app, where different demands trigger quick hand-offs between agents.
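The hand-off mechanism itself is simple: a transfer function returns another agent, and the run loop switches the active agent whenever it sees one. Below is a minimal stand-in sketch of that pattern in plain Python (not the Swarm library itself; the simplified Agent class, the run_step helper, and the agent names here are illustrative assumptions):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    # Simplified stand-in for Swarm's Agent: a name, a tightly scoped
    # prompt, and the functions (including transfers) the agent may call
    name: str
    instructions: str
    functions: List[Callable] = field(default_factory=list)

refund_agent = Agent(name="Refund Agent", instructions="Handle refunds only.")

def transfer_to_refund_agent():
    # A transfer function simply returns the target agent;
    # the run loop treats an Agent return value as a hand-off
    return refund_agent

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route each request to the agent suited to handle it.",
    functions=[transfer_to_refund_agent],
)

def run_step(active_agent, tool_result):
    # If a tool call returned an Agent, switch to it so the next
    # chat-completion exchange starts from that agent's own prompt
    if isinstance(tool_result, Agent):
        return tool_result
    return active_agent

current = run_step(triage_agent, transfer_to_refund_agent())
```

In the real Swarm run loop the conversation history is carried across the transfer as well, so the receiving agent continues the same chat under its own, narrowly scoped instructions.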
When the same app is presented to the user without signaling agent transfers, the interaction remains seamless, accurate, and predictable, while still leveraging the powerful language-processing capabilities of Azure OpenAI.
Combining vector search with operational transactions enables up-to-date contextual input for chat completion. The product container serves as both an operational data store and a vector store: inventory updates are instantly reflected in vector search, and similarity-search results from the product_information function are seamlessly integrated into the chat completion stream.
def product_information(user_prompt):
    """Provide information about a product based on the user prompt.
    Takes as input the user prompt as a string."""
    # Embed the prompt, then perform a vector search on the Cosmos DB
    # container and return the results to the agent
    vectors = azure_open_ai.generate_embedding(user_prompt)
    vector_search_results = vector_search(azure_cosmos_db.products_container_name, vectors)
    return vector_search_results
Go ahead and try the sample out! Leave a comment to let us know how you get on!
Leave a review
Tell us about your Azure Cosmos DB experience! Leave a review on PeerSpot and we’ll gift you $50. Get started here.
About Azure Cosmos DB
Azure Cosmos DB is a fully managed and serverless NoSQL and vector database for modern app development, including AI applications. With its SLA-backed speed and availability as well as instant dynamic scalability, it is ideal for real-time NoSQL and MongoDB applications that require high performance and distributed computing over massive volumes of NoSQL and vector data.
Try Azure Cosmos DB for free here. To stay in the loop on Azure Cosmos DB updates, follow us on X, YouTube, and LinkedIn.