{"id":56830,"date":"2025-05-21T09:05:00","date_gmt":"2025-05-21T16:05:00","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/dotnet\/?p=56830"},"modified":"2025-05-21T09:05:00","modified_gmt":"2025-05-21T16:05:00","slug":"ai-vector-data-dotnet-extensions-ga","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/dotnet\/ai-vector-data-dotnet-extensions-ga\/","title":{"rendered":"AI and Vector Data Extensions are now Generally Available (GA)"},"content":{"rendered":"<p>A few months ago, we introduced the first preview releases of the AI and Vector Data extensions\u2014powerful .NET libraries designed to simplify working with AI models and vector stores.<\/p>\n<p>Since then, we&#8217;ve been collaborating closely with partners and the community to refine these libraries, stabilize the APIs, and incorporate valuable feedback.<\/p>\n<p>Today, we\u2019re excited to announce that these extensions are now <strong>generally available<\/strong>, providing developers with a robust foundation to build scalable, maintainable, and interoperable AI-powered applications.<\/p>\n<h2>What are the AI and Vector Data Extensions<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/05\/extensions-layers.png\" alt=\"Diagram showing AI and Vector Data Extension Layering\" \/><\/p>\n<p>The <strong>AI and Vector Data Extensions<\/strong> are a set of .NET libraries that offer shared abstractions and utilities for working with AI models and vector stores.<\/p>\n<p>These libraries are available as NuGet packages:<\/p>\n<ul>\n<li><strong><a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.Extensions.AI.Abstractions\">Microsoft.Extensions.AI.Abstractions<\/a><\/strong> &#8211; Defines common types and abstractions for AI models.<\/li>\n<li><strong><a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.Extensions.AI\">Microsoft.Extensions.AI<\/a><\/strong> &#8211; AI extension 
utilities for working with AI services, such as caching, telemetry, and function invocation.<\/li>\n<li><strong><a href=\"https:\/\/www.nuget.org\/packages\/Microsoft.Extensions.VectorData.Abstractions\/\">Microsoft.Extensions.VectorData.Abstractions<\/a><\/strong> &#8211; Provides exchange types and abstractions for vector stores.<\/li>\n<\/ul>\n<p>These packages serve as foundational building blocks for higher-level components, promoting:<\/p>\n<ul>\n<li><strong>Interoperability<\/strong> \u2013 Libraries can work together more easily by targeting the same abstractions.<\/li>\n<li><strong>Extensibility<\/strong> \u2013 Developers can build on top of shared types to add new capabilities.<\/li>\n<li><strong>Consistency<\/strong> \u2013 A unified programming model across different implementations reduces integration complexity.<\/li>\n<\/ul>\n<h3>Why target these abstractions?<\/h3>\n<p>If you&#8217;re building a <strong>library<\/strong>, it\u2019s critical to remain agnostic to specific AI or vector systems. By depending only on these shared abstractions, you avoid locking your consumers into particular providers and ensure your library can interoperate with others. This promotes flexibility and broad compatibility across the ecosystem.<\/p>\n<p>If you\u2019re building an <strong>application<\/strong>, you have more freedom to choose concrete implementations.<\/p>\n<p>What does this mean in practice?<\/p>\n<ul>\n<li><strong>Providers<\/strong> can implement these abstractions to integrate smoothly with the ecosystem.<\/li>\n<li><strong>Library authors<\/strong> should build on the abstractions to enable composability and avoid forcing provider choices on consumers.<\/li>\n<li><strong>Application developers<\/strong> benefit from a consistent API, making it easier to switch or combine providers without major code changes.<\/li>\n<\/ul>\n<h2>Key Scenarios and Use Cases<\/h2>\n<p>The AI and Vector Data Extensions provide the essential building blocks that make it easier to implement advanced AI capabilities in your applications. 
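<\/p>\n<p>As a concrete illustration of building on the abstractions, a reusable library can accept an <code>IChatClient<\/code> and never reference a specific provider. The <code>Summarizer<\/code> class below is a hypothetical sketch, not a type from any shipped package:<\/p>\n<pre><code class=\"language-csharp\">using Microsoft.Extensions.AI;\n\n\/\/ A hypothetical library type: it depends only on the IChatClient abstraction,\n\/\/ so consumers can plug in any provider (Azure OpenAI, Ollama, and so on).\npublic sealed class Summarizer(IChatClient chatClient)\n{\n    public async Task&lt;string&gt; SummarizeAsync(string text, CancellationToken ct = default)\n    {\n        var response = await chatClient.GetResponseAsync(\n            $\"Summarize the following in one sentence: {text}\", cancellationToken: ct);\n        return response.Text;\n    }\n}<\/code><\/pre>\n<p>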
By offering consistent abstractions for features like structured output, tool invocation, and observability, these libraries enable you to build robust, maintainable, and production-ready solutions tailored to your specific needs.<\/p>\n<p>Below are examples of common scenarios where these building blocks come together to empower real-world AI-powered applications.<\/p>\n<h3>Portability across model and vector store providers<\/h3>\n<p>Whether you&#8217;re using different model providers for local development and production, or building agents that rely on various models, the AI and Vector Data extensions offer a consistent set of APIs across environments.<\/p>\n<p>Thanks to a growing ecosystem of official and community-supported packages that implement these abstractions, it&#8217;s easy to integrate models and vector databases into your applications.<\/p>\n<p>Below is an example of how you can switch between providers based on the environment, while keeping your code clean and consistent:<\/p>\n<pre><code class=\"language-csharp\">IChatClient chatClient = \n    environment == \"Development\"\n        ? new OllamaApiClient(\"YOUR-OLLAMA-ENDPOINT\", \"qwen3\")\n        : new AzureOpenAIClient(new Uri(\"YOUR-AZURE-OPENAI-ENDPOINT\"), new DefaultAzureCredential())\n            .GetChatClient(\"gpt-4.1\")\n            .AsIChatClient();\n\nawait foreach (var message in chatClient.GetStreamingResponseAsync(\"What is AI?\"))\n{\n    Console.Write($\"{message.Text}\");\n}\n\nIEmbeddingGenerator&lt;string, Embedding&lt;float&gt;&gt; embeddingGenerator = \n    environment == \"Development\"\n        ? 
new OllamaApiClient(\"YOUR-OLLAMA-ENDPOINT\", \"all-minilm\")\n        : new AzureOpenAIClient(new Uri(\"YOUR-AZURE-OPENAI-ENDPOINT\"), new DefaultAzureCredential())\n            .GetEmbeddingClient(\"text-embedding-3-small\")\n            .AsIEmbeddingGenerator();\n\nvar embedding = await embeddingGenerator.GenerateAsync(\"What is AI?\");\n\nVectorStoreCollection&lt;int, Product&gt; collection =\n    environment == \"Development\"\n        ? new SqliteCollection&lt;int, Product&gt;(\n            \"Data Source=products.db\",\n            \"products\", \n            new SqliteCollectionOptions { EmbeddingGenerator = embeddingGenerator })\n        : new QdrantCollection&lt;int, Product&gt;(\n            new QdrantClient(\"YOUR-HOSTED-ENDPOINT\"), \n            \"products\", \n            true, \n            new QdrantCollectionOptions { EmbeddingGenerator = embeddingGenerator });\n\nawait collection.UpsertAsync(...);<\/code><\/pre>\n<h3>Progressively add functionality<\/h3>\n<p>Using AI models is just the beginning. Building production-grade applications requires logging, caching, and observability through tools like OpenTelemetry.<\/p>\n<p>The AI extensions support these needs out of the box. They integrate seamlessly with existing .NET primitives, allowing you to plug in your own <code>ILogger<\/code>, <code>IDistributedCache<\/code>, and OpenTelemetry-compliant tools without reinventing the wheel.<\/p>\n<p>Here\u2019s a simple example of how to enable these features:<\/p>\n<pre><code class=\"language-csharp\">IChatClient chatClient = \n    new ChatClientBuilder(...)\n        .UseLogging()\n        .UseDistributedCache()\n        .UseOpenTelemetry()\n        .Build();<\/code><\/pre>\n<p>Need more control? The extensions are fully extensible. 
You can inject custom logic\u2014like rate limiting\u2014directly into the pipeline.<\/p>\n<pre><code class=\"language-csharp\">RateLimiter rateLimiter = new ConcurrencyLimiter(new()\n{\n    PermitLimit = 1,\n    QueueLimit = int.MaxValue\n});\n\nIChatClient client = \n    new ChatClientBuilder(...)\n        .UseDistributedCache()\n        .Use(async (messages, options, nextAsync, cancellationToken) =&gt;\n        {\n            using var lease = await rateLimiter.AcquireAsync(permitCount: 1, cancellationToken).ConfigureAwait(false);\n            if (!lease.IsAcquired)\n                throw new InvalidOperationException(\"Unable to acquire lease.\");\n\n            await nextAsync(messages, options, cancellationToken);\n        })\n        .UseOpenTelemetry()\n        .Build();<\/code><\/pre>\n<h3>Handle different content types and structure model output<\/h3>\n<p>Generative AI models can process more than just text\u2014they\u2019re also capable of handling images, audio, and other data types.<\/p>\n<p>To support this, the AI extensions provide flexible primitives for representing diverse data formats.<\/p>\n<p>One common challenge is that model outputs are often unstructured, making integration with your application more difficult.<\/p>\n<p>Fortunately, many models now support structured output\u2014a feature where responses are formatted according to a predefined schema, such as JSON. 
This adds reliability and predictability to the model\u2019s responses, simplifying integration.<\/p>\n<p>The AI extensions are designed to work seamlessly with structured outputs, making it easy to map model responses directly to your C# types.<\/p>\n<pre><code class=\"language-csharp\">record Item(string Name, float Price);\nenum Category { Food, Electronics, Clothing, Services };\nrecord Receipt(string Merchant, List&lt;Item&gt; Items, float Total, Category Category);\n\nvar imageUri = new Uri(\"https:\/\/host\/someimage.jpg\");\n\nList&lt;AIContent&gt; content = [\n    new TextContent(\"Process this receipt\"),\n    new UriContent(imageUri, mediaType: \"image\/jpeg\")\n];\n\nvar message = new ChatMessage(ChatRole.User, content);\nvar response = await chatClient.GetResponseAsync&lt;Receipt&gt;(message);\n\nresponse.TryGetResult(out var receiptData);\n\nConsole.WriteLine($\"Merchant: {receiptData.Merchant} | Total: {receiptData.Total} | Category: {receiptData.Category}\");\n\/\/ Merchant: ECOSPACE | Total: 49.64 | Category: Food<\/code><\/pre>\n<h3>Tool Calling<\/h3>\n<p>AI models can process data and understand natural language, but they can&#8217;t perform actions on their own. To take meaningful action, they need access to external tools and systems.<\/p>\n<p>This is where tool calling comes in\u2014a feature supported by many modern generative AI models that allows them to invoke functions based on user intent.<\/p>\n<p>Similar to structured output, the AI extensions make it easy to take advantage of this capability in your applications.<\/p>\n<p>In this example, the <code>CalculateTax<\/code> method is registered as an AI-invokable function. The model can automatically decide when to call it based on the user\u2019s request. 
If you need more control over this behavior, it can be easily configured.<\/p>\n<pre><code class=\"language-csharp\">record ReceiptTotal(float SubTotal, float TaxAmount, float TaxRate, float Total);\n[Description(\"Calculate the total after tax given a receipt and tax rate\")]\nfloat CalculateTax(Receipt receipt, float taxRate)\n{\n    return receipt.Total * (1 + taxRate);\n}\n\nIChatClient functionChatClient = \n    chatClient\n        .AsBuilder()\n        .UseFunctionInvocation()\n        .Build();\n\nvar message = new ChatMessage(ChatRole.User, [\n    new TextContent(\"Here is information from a recent purchase\"),\n    new TextContent($\"{JsonSerializer.Serialize(receiptData)}\"),\n    new TextContent(\"What is the total price after tax given a tax rate of 10%?\")\n]);\n\nvar response = await functionChatClient.GetResponseAsync&lt;ReceiptTotal&gt;(message, new ChatOptions { Tools = [AIFunctionFactory.Create(CalculateTax)] });\n\nresponse.TryGetResult(out var receiptTotal);\nConsole.WriteLine($\"SubTotal: {receiptTotal.SubTotal} | TaxAmount: {receiptTotal.TaxAmount} | TaxRate: {receiptTotal.TaxRate} | Total: {receiptTotal.Total}\");\n\/\/ SubTotal: 49.64 | TaxAmount: 4.96 | TaxRate: 0.1 | Total: 54.6<\/code><\/pre>\n<h3>Simplified embedding generation<\/h3>\n<p>Semantic search\u2014and by extension, Retrieval-Augmented Generation (RAG)\u2014is a powerful pattern for building intelligent applications. While the core tasks, like embedding generation, are conceptually straightforward, they often involve repetitive, boilerplate code.<\/p>\n<p>At the heart of this pattern are vector databases, which store numerical representations of data (called embeddings) and enable fast similarity search. 
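<\/p>\n<p>The \u201csimilarity\u201d in similarity search is typically a cosine (or dot-product) comparison between embedding vectors. As a rough sketch of what a vector database computes under the hood, here\u2019s cosine similarity via <code>System.Numerics.Tensors<\/code> (the vector values below are made up for illustration; real embeddings have hundreds or thousands of dimensions):<\/p>\n<pre><code class=\"language-csharp\">using System.Numerics.Tensors;\n\nfloat[] query = [0.9f, 0.1f, 0.2f];    \/\/ embedding of the search text (illustrative values)\nfloat[] docA  = [0.85f, 0.15f, 0.25f]; \/\/ a closely related record\nfloat[] docB  = [0.1f, 0.9f, 0.3f];    \/\/ an unrelated record\n\n\/\/ Values near 1.0 mean the vectors point in nearly the same direction.\nfloat simA = TensorPrimitives.CosineSimilarity(query, docA);\nfloat simB = TensorPrimitives.CosineSimilarity(query, docB);\n\nConsole.WriteLine($\"docA: {simA:F3} | docB: {simB:F3}\"); \/\/ docA scores higher<\/code><\/pre>\n<p>Vector databases index these vectors so the nearest matches can be found efficiently at scale. 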
They let applications retrieve semantically relevant information\u2014not just exact keyword matches\u2014making them essential for tasks like search, question answering, and recommendations.<\/p>\n<p>The AI and Vector Data extensions are designed to streamline this process by working seamlessly together.<\/p>\n<p>When you combine a vector store with an embedding generator, the extensions make it easy to map your existing C# data models directly to the vector store. These abstractions eliminate much of the boilerplate, so you can focus on your application logic\u2014not infrastructure details.<\/p>\n<p>And because these are just abstractions, they\u2019re inherently modular. You can swap out vector store or embedding providers without changing your application logic\u2014making it easy to adapt to new tools or scale as your needs evolve.<\/p>\n<p>In the example below, we provide plain text. The extensions handle the rest\u2014automatically generating the embedding using the configured generator and storing it in the appropriate vector field, all with minimal code.<\/p>\n<pre><code class=\"language-csharp\">\/\/ Using Microsoft.SemanticKernel.Connectors.SqliteVec package\nIEmbeddingGenerator&lt;string, Embedding&lt;float&gt;&gt; embeddingGenerator = ...;\nVectorStoreCollection&lt;int, Product&gt; collection = new SqliteCollection&lt;int, Product&gt;(\"Data Source=products.db\", \"products\", new SqliteCollectionOptions {\n    EmbeddingGenerator = embeddingGenerator,\n});\n\nawait collection.UpsertAsync(new Product\n{\n    Id = 1,\n    Name = \"Kettle\",\n    TenantId = 5,\n    Embedding = \"This kettle is great for making tea, it heats up quickly and has a large capacity.\"\n});\n\nrecord Product\n{\n    [VectorStoreKey]\n    public int Id { get; set; }\n\n    [VectorStoreData]\n    public required string Name { get; set; }\n\n    [VectorStoreData]\n    public int TenantId { get; set; }\n\n    [VectorStoreVector(Dimensions: 1536)]\n    public string? 
Embedding { get; set; }\n}<\/code><\/pre>\n<h3>Powerful search capabilities<\/h3>\n<p>Depending on your scenario and data model, you may need more advanced search capabilities. The Vector Data abstractions support a range of powerful search features\u2014including multiple similarity metrics, vector search, hybrid search, and filtering.<\/p>\n<p>Just like with embedding generation, the querying process is streamlined. You can pass in plain text, and the abstractions handle generating the embedding, applying the appropriate similarity metric, and retrieving the most relevant results.<\/p>\n<p>Filtering is also built-in and designed to feel familiar. The filter expressions use a syntax similar to LINQ predicates, so you can leverage your existing C# skills without needing to learn a new query language.<\/p>\n<p>In the example below, we&#8217;re searching for products that match a natural language query, filtered by tenant:<\/p>\n<pre><code class=\"language-csharp\">IEmbeddingGenerator&lt;string, Embedding&lt;float&gt;&gt; embeddingGenerator = ...;\nVectorStoreCollection&lt;int, Product&gt; collection = new SqliteCollection&lt;int, Product&gt;(\"Data Source=products.db\", \"products\", new SqliteCollectionOptions {\n    EmbeddingGenerator = embeddingGenerator,\n});\n\nvar query = \"Find me kettles that can hold a lot of water\";\n\nawait foreach (var result in collection.SearchAsync(query, top: 5, new() { Filter = r =&gt; r.TenantId == 8 }))\n{\n    Console.WriteLine(result.Record.Name);\n}<\/code><\/pre>\n<h3>Plugs right into your existing dependency injection configurations<\/h3>\n<p>Modern .NET applications rely on dependency injection (DI) to manage configuration and lifetimes of services\u2014and the AI and Vector Data extensions are built to align with that model.<\/p>\n<p>Whether you&#8217;re wiring up components for local development or configuring services for production, the extensions register cleanly into your existing DI container. 
This means you can compose and configure AI components just like any other part of your application.<\/p>\n<pre><code class=\"language-csharp\">builder.Services.AddChatClient(sp =&gt; {...})\n    .UseLogging()\n    .UseDistributedCache()\n    .UseOpenTelemetry();\n\nbuilder.Services.AddEmbeddingGenerator(sp =&gt; {...})\n    .UseLogging()\n    .UseDistributedCache()\n    .UseOpenTelemetry();\n\n\/\/ SQLite implementation of Vector Data\nbuilder.Services.AddSqliteCollection&lt;int, Product&gt;(\"Products\", \"Data Source=\/tmp\/products.db\");<\/code><\/pre>\n<h2>A growing ecosystem<\/h2>\n<p>Adoption of these extensions has been strong and continues to grow across the ecosystem.<\/p>\n<p>In just a few months, the extensions have surpassed <strong>3 million downloads<\/strong>, with nearly <strong>100 public NuGet packages<\/strong> combined taking a dependency on them.<\/p>\n<p>We&#8217;re seeing adoption across a wide range of official and community projects, including:<\/p>\n<ul>\n<li><strong>Libraries<\/strong>: Model Context Protocol (MCP), AI Evaluations, Pieces<\/li>\n<li><strong>Agent Frameworks<\/strong>: Semantic Kernel, AutoGen<\/li>\n<li><strong>Playgrounds<\/strong>: AI Dev Gallery<\/li>\n<li><strong>Provider SDKs<\/strong>: Azure OpenAI, OllamaSharp, Anthropic, Google, HuggingFace, Sqlite, Qdrant, CosmosDB, AzureSQL<\/li>\n<li><strong>UI Components<\/strong>: DevExpress, Syncfusion, Progress Telerik<\/li>\n<\/ul>\n<p>Below, we highlight a few of these integrations in more detail.<\/p>\n<h3>Model Context Protocol (MCP) C# SDK<\/h3>\n<p>MCP is an open standard that acts as a universal adapter for AI models. It enables models to interact with external data sources, tools, and APIs through a consistent, standardized interface. 
This simplifies integration by allowing models to invoke functions or access data without needing custom code for each service.<\/p>\n<p>We partnered with Anthropic to deliver an <a href=\"https:\/\/devblogs.microsoft.com\/blog\/microsoft-partners-with-anthropic-to-create-official-c-sdk-for-model-context-protocol\">official MCP C# SDK<\/a>. Built on top of shared AI abstractions like <code>AIContent<\/code>, <code>AIFunction<\/code>, and others, the SDK enables MCP clients and servers to easily define tools and invoke them using <code>IChatClient<\/code> implementations.<\/p>\n<pre><code class=\"language-csharp\">var mcpClient = await McpClientFactory.CreateAsync(clientTransport, mcpClientOptions, loggerFactory);\n\nvar tools = await mcpClient.ListToolsAsync();\n\nvar response = await _chatClient.GetResponseAsync&lt;List&lt;TripOption&gt;&gt;(messages, new ChatOptions { Tools = [.. tools ] });<\/code><\/pre>\n<h3>Evaluations<\/h3>\n<p>Evaluations play a crucial role in building trustworthy AI applications by helping ensure safety, reliability, and alignment with intended behavior. 
They allow developers to systematically test and validate AI models against real-world scenarios and quality standards.<\/p>\n<p>The .NET AI Evaluation libraries build on top of the AI Extensions to create powerful evaluation tools that integrate seamlessly into your development workflow, enabling continuous monitoring and improvement of your AI systems.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/05\/aivectorreport.png\" alt=\"AI Evaluations Report\" \/><\/p>\n<p>To dive deeper into these evaluation capabilities and see practical examples, check out the following blog posts.<\/p>\n<ul>\n<li><a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/start-using-the-microsoft-ai-evaluations-library-today\/\">Unlock new possibilities for AI Evaluations for .NET<\/a><\/li>\n<li><a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/evaluating-ai-content-safety\/\">Evaluating content safety in your .NET AI applications<\/a><\/li>\n<\/ul>\n<h3>Progress Telerik<\/h3>\n<p>Since the launch of ChatGPT, chat has become the primary way users interact with language models.<\/p>\n<p>To offer a similar experience in their own applications, developers have often had to build custom chat UI components from scratch.<\/p>\n<p>Telerik has made this much easier by providing a set of ready-to-use chat UI components for Blazor. 
These components simplify the process of adding conversational interfaces to web apps.<\/p>\n<p>Built on top of the AI Extensions, Telerik\u2019s solution also allows developers to switch between different model providers with minimal code changes\u2014making it both flexible and future-proof.<\/p>\n<p><div style=\"width: 640px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-56830-1\" width=\"640\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/05\/telerik-ai-extensions.mp4?_=1\" \/><a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/05\/telerik-ai-extensions.mp4\">https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/05\/telerik-ai-extensions.mp4<\/a><\/video><\/div><\/p>\n<h3>Semantic Kernel<\/h3>\n<p>Semantic Kernel offers high-level components that make it easier to integrate AI into your applications.<\/p>\n<p>We\u2019re entering the agentic era\u2014where AI agents need access to <strong>models<\/strong>, <strong>data<\/strong>, and <strong>tools<\/strong> to perform tasks effectively. 
With Semantic Kernel, you can build agents using the same AI extension primitives you&#8217;re already familiar with, such as <code>IChatClient<\/code>.<\/p>\n<p>Here\u2019s an example of how to use <code>IChatClient<\/code> as the foundation for an agent in Semantic Kernel&#8217;s Agent Framework:<\/p>\n<pre><code class=\"language-csharp\">var builder = Kernel.CreateBuilder();\n\n\/\/ Add your IChatClient\nbuilder.Services.AddChatClient((sp) =&gt;\n{\n    return new ChatClientBuilder(...)\n        .UseLogging(...)\n        .UseFunctionInvocation()\n        .Build();\n});\n\nbuilder.Plugins.AddFromFunctions(\n    nameof(CalculateTax), \n    [AIFunctionFactory.Create(CalculateTax).AsKernelFunction()]);\n\nvar kernel = builder.Build();\n\nvar agent = new ChatCompletionAgent\n{\n    Name = \"TravelAgent\",\n    Description = \"A travel agent that helps users with travel plans\",\n    Instructions = \"Help users come up with a travel itinerary\",\n    Kernel = kernel,\n    Arguments = new KernelArguments(\n        new PromptExecutionSettings {\n            FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()})\n};<\/code><\/pre>\n<p>This is just a basic example. Check out the following post for a deeper look at <a href=\"https:\/\/aka.ms\/dotnet\/ai\/sk\/data\/extensions\">how Semantic Kernel builds on top of the AI Extensions<\/a>.<\/p>\n<p>Semantic Kernel also offers a unified set of connectors for vector databases, built on top of the Vector Data Extensions. These connectors simplify integration by providing a consistent programming model.<\/p>\n<p>For more details, check out this post on how <a href=\"https:\/\/aka.ms\/dotnet\/ai\/sk\/data\/extensions\">Semantic Kernel builds its connectors using the Vector Data Extensions<\/a>.<\/p>\n<h3>AI Dev Gallery<\/h3>\n<p>The AI Dev Gallery is a Windows application that serves as a comprehensive playground for AI development with .NET. 
It offers everything you need to explore, experiment with, and implement AI features in your applications\u2014entirely offline, with no dependency on cloud services.<\/p>\n<p><div style=\"width: 1499px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-56830-2\" width=\"1499\" height=\"900\" poster=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/04\/ai-dev-gallery-main.jpg\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/04\/ai-dev-gallery.mp4?_=2\" \/><a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/04\/ai-dev-gallery.mp4\">https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2025\/04\/ai-dev-gallery.mp4<\/a><\/video><\/div><\/p>\n<p>The AI Dev Gallery is built on top of the AI and Vector Data Extensions, providing a solid foundation for model and data integrations. 
It also leverages:<\/p>\n<ul>\n<li><strong>Microsoft.ML.Tokenizers<\/strong> for efficient text preprocessing and tokenization.<\/li>\n<li><strong>System.Numerics.Tensors<\/strong> for high-performance processing of model outputs.<\/li>\n<\/ul>\n<p>Together, these components make the AI Dev Gallery a powerful tool for local, end-to-end AI experimentation and development.<\/p>\n<p>To learn more about the AI Dev Gallery, see the following <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/introducing-ai-dev-gallery-gateway-to-local-ai-development\/\">blog post<\/a>.<\/p>\n<h2>Get Started today<\/h2>\n<p>Try out the <a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/quickstarts\/ai-templates?tabs=visual-studio%2Cconfigure-visual-studio&amp;pivots=github-models\">.NET AI Templates<\/a> to get started with the AI and Vector Data extensions.<\/p>\n<p>Make sure to check out the <a href=\"https:\/\/learn.microsoft.com\/dotnet\/ai\/\">documentation<\/a> to learn more.<\/p>\n<p>We can&#8217;t wait to see what you build.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We\u2019re excited to announce that these extensions are now generally available, providing developers with a robust foundation to build scalable, maintainable, and interoperable AI-powered applications.<\/p>\n","protected":false},"author":26108,"featured_media":56831,"comment_status":"open","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[685,7781],"tags":[568,7805,8044,8043],"class_list":["post-56830","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-dotnet","category-ai","tag-ai","tag-llm","tag-samples","tag-vectordata"],"acf":[],"blog_post_summary":"<p>We\u2019re excited to announce that these extensions are now generally available, providing developers with a robust foundation to build scalable, maintainable, and interoperable AI-powered 
applications.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts\/56830","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/users\/26108"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/comments?post=56830"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts\/56830\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/media\/56831"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/media?parent=56830"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/categories?post=56830"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/tags?post=56830"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}