Announcing the official OpenAI library for .NET

.NET Team

At Microsoft Build 2024, we announced new investments that expand the AI ecosystem for .NET developers. We’re excited to share more detailed plans around Microsoft’s collaboration with OpenAI on their official .NET library.

Today, the OpenAI team released their first beta, version 2.0.0-beta.1, of the official OpenAI library for .NET. Features include:

  • Support for the entire OpenAI API, including Assistants v2 and Chat Completions
  • Support for GPT-4o, OpenAI’s latest flagship model
  • Extensibility to enable the community to build libraries on top
  • Sync and async APIs for ease of use and efficiency
  • Access to streaming completions via IAsyncEnumerable<T>
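The streaming support in the last bullet can be sketched as follows. This is a minimal example based on the 2.0.0-beta.1 surface (type and method names may shift between prereleases), and it assumes an `OPENAI_API_KEY` environment variable is set:

```csharp
using System;
using OpenAI.Chat;

ChatClient client = new(
    model: "gpt-4o",
    Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

// CompleteChatStreamingAsync yields partial updates that are consumed
// with await foreach, i.e. IAsyncEnumerable<T>-style iteration.
await foreach (StreamingChatCompletionUpdate update
    in client.CompleteChatStreamingAsync("Say 'this is a test.'"))
{
    foreach (ChatMessageContentPart part in update.ContentUpdate)
    {
        Console.Write(part.Text);
    }
}
```

The same call exists in a synchronous form (`CompleteChatStreaming`), reflecting the sync-and-async design noted above.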

This official .NET library ensures a smooth and supported integration with OpenAI and Azure OpenAI. It also complements OpenAI’s official libraries for Python and TypeScript/JavaScript developers.

The .NET library is developed and supported on GitHub and will be kept up to date with the latest features from OpenAI. Work will continue over the next few months to gather feedback to improve the library and release a stable NuGet package.

Thank you to the .NET community

We’d like to thank and recognize Roger Pincombe for his library, published under the OpenAI v1.x NuGet package name. Roger first published the library in June 2020, making it the first known OpenAI package for .NET, and he has volunteered countless hours of personal time ever since to maintain the project on GitHub. Roger has worked closely with OpenAI and Microsoft on our plans for the official .NET package, and he is also helping with a migration guide from his package to the new official one.

Of course, developers may choose to continue using their favorite community libraries.

OpenAI and the .NET team also thank these project maintainers for their extraordinary efforts in filling a void within the community. Even with the release of the official package from OpenAI, there are opportunities for community libraries to add significant value on top. We look forward to collaborating with the community in this space.

Next steps

Here’s how you can get involved:

  • Try the library: Install the OpenAI .NET library and start experimenting with its features.
  • Join the community: Engage with us and other developers on GitHub. Share your experiences, report issues, and contribute to discussions.
  • Attend the live stream: Join us live at 10:00 AM PDT on June 19 for the .NET AI Community Standup. Ask questions, learn more about the library, and see demos of its capabilities.
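To get started with the first step above, the prerelease package can be added from NuGet (the `--prerelease` flag is required while the library is still in beta):

```shell
dotnet add package OpenAI --prerelease
```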

18 comments

  • Thomas Ardal 2

    What does this mean for the Azure.AI.OpenAI package?

    • Travis Wilson (Microsoft employee) 4

      Hello, Thomas! That’s a great question. We’ve worked closely with OpenAI on overall .NET convergence, and the existing Azure.AI.OpenAI package is being converted into the Azure OpenAI Service “companion library.” It will provide a dedicated AzureOpenAIClient for connecting to Azure OpenAI resources, together with extensions for Azure-specific concepts like Responsible AI content filter results and On Your Data integration. All of the common capabilities between OpenAI and Azure OpenAI will share the same scenario clients, methods, and request/response types, so interoperating between Azure and non-Azure should be easier than ever.

      We’re hoping to have that update published very soon (I’m waiting on a build right now!) and you can keep an eye on the Azure.AI.OpenAI readme and NuGet package.

      Noting here, as we’ll note in the repository: this will bring some very significant changes to the usage patterns, and we’re eager to hear your feedback. Discussions are open on the openai-dotnet repository, and we’ll also regularly percolate all of the 2.0.0-beta input into the azure-sdk-for-net repository. Our goal is to stabilize the preview and produce GA-status libraries as soon as we can, and that means great input from the community is critical to ensuring this .NET support is as awesome as it deserves to be.

      • Thomas Ardal 0

        Thank you for the detailed answer. So, to make sure I understand the split here. Previously, we would use an OpenAIClient to communicate with both ChatGPT from OpenAI and Azure OpenAI Service. With different endpoints and settings, of course. But going forward, we will use AzureOpenAIClient from the Azure.AI.OpenAI package to communicate with Azure OpenAI Service and OpenAIAPI from the OpenAI package to communicate with OpenAI?

        • Travis Wilson (Microsoft employee) 2

          OpenAIAPI is from the predecessor v1.x OpenAI library, prior to the package name being graciously transferred; please be sure you’re using a v2.0.0 prerelease of the official library!

          The OpenAI library has OpenAIClient; the Azure.AI.OpenAI package adds AzureOpenAIClient, which derives from OpenAIClient and specifically configures things for Azure OpenAI. With either, you then instantiate scenario clients from the factory-like top-level client, e.g. client.GetChatClient("gpt-4o"). From there, the ChatClient (or other scenario client instance) is used the same way whether you’re targeting OpenAI’s v1 endpoint or an Azure OpenAI Service resource endpoint.
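          As a minimal sketch of that pattern (the endpoint URI and deployment name below are placeholders, and Azure.Identity is assumed for credentials; exact type names follow the current betas and may change):

          ```csharp
          using System;
          using Azure.AI.OpenAI;
          using Azure.Identity;
          using OpenAI;
          using OpenAI.Chat;

          // AzureOpenAIClient derives from OpenAIClient, so both can be
          // held as the base type. Replace the endpoint with your resource.
          OpenAIClient azureClient = new AzureOpenAIClient(
              new Uri("https://your-resource.openai.azure.com/"),
              new DefaultAzureCredential());

          OpenAIClient openAIClient = new(
              Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

          // Either top-level client hands out the same scenario clients;
          // on Azure, "gpt-4o" is a deployment name rather than a model name.
          ChatClient chat = azureClient.GetChatClient("gpt-4o");
          ChatCompletion completion = chat.CompleteChat("Hello!");
          Console.WriteLine(completion.Content[0].Text);
          ```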

          • Thomas Ardal 0

            Makes sense. Thanks!

    • Thomas Ardal 0

      A quick update on this for anyone interested in making the switch. I migrated my code to the new prerelease of the Azure.AI.OpenAI package. Everything is running great. As already mentioned in this thread, you use AzureOpenAIClient when communicating with a model hosted on Azure and OpenAIClient when communicating with a model on OpenAI. One thing you should be aware of is that the new clients now throw ClientResultException instead of RequestFailedException. So, if you catch an exception to detect an invalid API key or similar, you need to change the catch block.
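      A hedged sketch of the updated catch block Thomas describes (the 401 status check for an invalid key is illustrative, and the clients come from the System.ClientModel-based betas):

      ```csharp
      using System;
      using System.ClientModel;
      using OpenAI.Chat;

      ChatClient client = new(
          model: "gpt-4o",
          Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

      try
      {
          ChatCompletion completion = client.CompleteChat("Hello!");
      }
      catch (ClientResultException ex) when (ex.Status == 401)
      {
          // The older Azure.AI.OpenAI 1.x clients threw
          // Azure.RequestFailedException in this situation.
          Console.Error.WriteLine($"Invalid or missing API key: {ex.Message}");
      }
      ```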

  • José Luis Latorre Millás 0

    Does this include support for the latest GPT-4o features? audio stream in & out as well as image & video recognition?

  • Sergiu Perju 2

    Interesting

  • Jeff Jones 0

    How does this affect those using the ML.NET package from Microsoft? Are the two packages related, or is this package a “next version” of ML.NET?

      • Jeff Jones 0

        I appreciate the reply. Unfortunately, this is another example of MS having one hand not knowing what the other hand is doing. ML.NET has been around, and been updated, for years. MS should have provided a clear path of transition or map of interaction of the two. ML.NET does not require any subscriptions (and thus no revenue stream for MS), whereas “OpenAI library for .NET” does require a subscription and incurs cost for the developer and end user.

        Having ML.NET, Azure’s AI offering, plus “OpenAI library for .NET”, with no clear map of how they interact, comes across as another example of MS internal verticals not coordinating, and thus becoming an impediment to their success.

        MS should have learned from not coordinating their UI strategy with competing UI formats (WinForms, WinUI, Xamarin/MAUI, and Blazor) and failing to coordinate them and provide the #1 tool that is necessary in Visual Studio – a visual designer like WinForms has (and what made VB and its successor, Visual Studio, dominate the market).

        • Luis Quintanilla (Microsoft employee) 0

          Hi Jeff,

          Thanks for the feedback.

          > MS should have provided a clear path of transition or map of interaction of the two. ML.NET does not require any subscriptions (and thus no revenue stream for MS), whereas “OpenAI library for .NET” does require a subscription and incurs cost for the developer and end user.

          At a high-level, the intents of the libraries are different.

          OpenAI services can be thought of like any other (Azure) AI Service, for which there has never been support in ML.NET (other than exporting a custom trained model from services like Custom Vision and running inference using the ONNX APIs in ML.NET).

          The OpenAI services are largely intended for consumption via HTTP requests with no local / offline support. The OpenAI set of libraries provide client implementations so that developers don’t have to write their own clients.

          To that point, there is little ML.NET can add for commonly used chat completion models (e.g. GPT-3 / GPT-4).

          That said, there are opportunities to leverage the ML.NET training / inference pipeline programming model to provide more seamless integration with the embedding set of models. These could be used as part of embedding generation / featurization pipelines for downstream tasks. In this area, we’d appreciate feedback on whether this is something folks would like to see in ML.NET.

          Even then, when using OpenAI services, you’d still need to make requests over HTTP. The OpenAI libraries wouldn’t be built into ML.NET; rather, ML.NET would provide a higher-level interface / transform for generating embeddings that wraps the OpenAI set of libraries as part of a pipeline.

          Another area where ML.NET could provide value is fine-tuning. However, fine-tuning LLMs is still a relatively expensive (cost / resource) endeavor. Small(er) language models show promise in this space, and we’re actively working in those areas and identifying natural integration points with ML.NET and the .NET AI set of libraries (e.g. Tokenizers, Tensors, TorchSharp). As mentioned earlier, though, at this time those efforts are independent of OpenAI and its client libraries.

  • Reelix 1

    Official .NET libraries requiring API keys for third-party services to use?

    That’s a new one.

    Maybe in 2030 we’ll require the user having a Microsoft account to run a program that has using System; :p

    • Martin Richards 1

      It’s OpenAI’s official API; they require a key, like a million other services do. Or do you expect them to provide everything, unlimited, completely free?

    • saint4eva 0

      It is required in whatever programming platform or language you are using. Lol. You are consuming their pretrained models or services through their endpoints, so you need the OpenAI keys. When consuming AI services on Azure or AWS, you would need their keys as well.
      But if you want to consume an SLM or MLM locally or on the edge (on device), you can use the Semantic Kernel library to do that. The SK library supports C#, Python, and Java – at least for now.
      ML.NET, meanwhile, is amazing for training on small or large datasets to produce models, which you can export to ONNX format to be consumed by others.

  • Scott Addie (Microsoft employee) 0

    For those following along, we discussed and demonstrated the official library on today’s .NET AI Community Standup. View the recording at https://www.youtube.com/watch?v=GUV2p_9QUo8. Special guest Roger Pincombe joined us!
