The DeepSeek R1 model has been gaining a ton of attention lately. And one of the questions we’ve been getting asked is: “Can I use DeepSeek in my .NET applications?” The answer is absolutely! I’m going to walk you through how to use the Microsoft.Extensions.AI (MEAI) library with DeepSeek R1 on GitHub Models so you can start experimenting with the R1 model today.
MEAI makes using AI services easy
The MEAI library provides a set of unified abstractions and middleware to simplify the integration of AI services into .NET applications.
In other words, if you develop your application with MEAI, your code will use the same APIs no matter which model you decide to use “under the covers”. This lowers the friction of building a .NET AI application: you only have to learn a single library’s (MEAI’s) way of doing things, regardless of which AI service you use.
And for MEAI, the main interface you’ll use is IChatClient.
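For example, sending a prompt looks the same regardless of the backing provider. Here’s a minimal sketch (GetChatClient is a hypothetical factory standing in for whichever provider you pick):

```csharp
// The calling code is identical whether the IChatClient is backed by
// GitHub Models, Azure AI Foundry, Ollama, or another provider.
IChatClient client = GetChatClient(); // hypothetical: returns any IChatClient implementation

var reply = await client.CompleteAsync("What is Microsoft.Extensions.AI?");
Console.WriteLine(reply);
```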
Let’s chat with DeepSeek R1
GitHub Models allows you to experiment with a ton of different AI models without having to worry about hosting. It’s a great way to get started in your AI development journey for free. And GitHub Models gets updated with new models all the time, like DeepSeek’s R1.
The demo app we’re going to build is a simple console application and it’s available on GitHub at codemillmatt/deepseek-dotnet. You can clone or fork it to follow along, but we’ll talk through the important pieces below too.
First let’s take care of some prerequisites:
- Head on over to GitHub and generate a personal access token (PAT). This will be your key for GitHub Models access. Follow these instructions to create the PAT. You will want a classic token.
- Open the DeepSeek.Console.GHModels project. You can either open the full solution in Visual Studio or just the project folder if using VS Code.
- Create a new user secrets entry for the GitHub PAT. Name it GH_TOKEN and paste in the PAT you generated as the value.
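If you prefer the command line over the IDE, you can create that secret from the project directory with the dotnet user-secrets tool (substitute your own PAT for the placeholder):

```
dotnet user-secrets set "GH_TOKEN" "<your-pat>"
```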
Now let’s explore the code a bit:
- Open the Program.cs file in the DeepSeek.Console.GHModels project.
- The first two things to notice are where we initialize the modelEndpoint and modelName variables. These are standard for the GitHub Models service and will always be the same.
- Now for the fun part! We’re going to initialize our chat client. This is where we’ll connect to the DeepSeek R1 model.
```csharp
IChatClient client =
    new ChatCompletionsClient(modelEndpoint, new AzureKeyCredential(Configuration["GH_TOKEN"]))
        .AsChatClient(modelName);
```
This uses the Microsoft.Extensions.AI.AzureAIInference package to connect to the GitHub Models service. But the AsChatClient function returns an IChatClient implementation. And that’s super cool, because regardless of which model we chose from GitHub Models, we’d still write our application against the IChatClient interface!
- Next up we pass in our question, or prompt, to the model. And we’ll make sure we get a streaming response back, so we can display it as it comes in.
```csharp
var response = client.CompleteStreamingAsync(question);

await foreach (var item in response)
{
    Console.Write(item);
}
```
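For reference, here’s roughly what the whole program looks like once those pieces are assembled. This is a minimal sketch, not the exact sample from the repository: the endpoint URL and model name are what GitHub Models used at the time of writing, and the configuration setup assumes the GH_TOKEN user secrets entry from the prerequisites.

```csharp
using Azure;
using Azure.AI.Inference;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.Configuration;

// Read the GitHub PAT from the GH_TOKEN user secrets entry created earlier.
var configuration = new ConfigurationBuilder()
    .AddUserSecrets<Program>()
    .Build();

// Standard values for the GitHub Models service (assumed current as of writing).
var modelEndpoint = new Uri("https://models.inference.ai.azure.com");
var modelName = "DeepSeek-R1";

IChatClient client =
    new ChatCompletionsClient(modelEndpoint, new AzureKeyCredential(configuration["GH_TOKEN"]!))
        .AsChatClient(modelName);

var question = "If I have 3 apples and eat 2, how many bananas do I have?";

var response = client.CompleteStreamingAsync(question);

await foreach (var item in response)
{
    Console.Write(item);
}
```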
That’s it! Go ahead and run the project. It might take a few seconds to get the response back (lots of people are trying the model out!). You’ll notice the response isn’t like you’d see from a “normal” chat bot. DeepSeek R1 is a reasoning model, so it wants to figure out and reason through problems. The first part of the response will be its reasoning, delimited by <think> tags, and is quite interesting. The second part of the response will be the answer to the question you asked.
Here’s a partial example of a response:
<think>
Okay, let's try to figure this out. The problem says: If I have 3 apples and eat 2, how many bananas do I have? Hmm, at first glance, that seems a bit confusing. Let me break it down step by step.
So, the person starts with 3 apples. Then they eat 2 of them. That part is straightforward. If you eat 2 apples out of 3, you'd have 1 apple left, right? But then the question shifts to bananas. Wait, where did bananas come from? The original problem only mentions apples. There's no mention of bananas at all.
...
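If you want to separate the reasoning from the final answer in your own code, one simple approach is to buffer the full response and split on the closing </think> tag. A hedged sketch:

```csharp
// Splits an R1-style response into its reasoning and answer parts.
// Assumes the streamed response has already been buffered into a single string.
static (string Reasoning, string Answer) SplitR1Response(string full)
{
    const string closeTag = "</think>";
    var idx = full.IndexOf(closeTag, StringComparison.Ordinal);

    if (idx < 0)
    {
        // No reasoning block found; treat the whole response as the answer.
        return (string.Empty, full);
    }

    var reasoning = full[..idx].Replace("<think>", string.Empty).Trim();
    var answer = full[(idx + closeTag.Length)..].Trim();
    return (reasoning, answer);
}
```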
Do I have to use GitHub Models?
You’re not limited to running DeepSeek R1 on GitHub Models. You can run it on Azure or even locally (or on GitHub Codespaces) through Ollama. I’ve provided two additional console applications in the GitHub repository that show you how to do that.
The biggest differences from the GitHub Models version are where the DeepSeek R1 model is deployed, the credentials you use to connect to it, and the specific model name.
If you deploy on Azure AI Foundry, the code is exactly the same. Here are some instructions on how to deploy the DeepSeek R1 model into Azure AI Foundry.
If you want to run locally on Ollama, we’ve provided a devcontainer definition that you can use to run Ollama in Docker. It will automatically pull down a small parameter version of DeepSeek R1 and start it up for you. The only difference is you’ll use the Microsoft.Extensions.AI.Ollama NuGet package and initialize the IChatClient with OllamaChatClient. Interacting with DeepSeek R1 is the same.
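Here’s a minimal sketch of that variant, assuming a local Ollama instance on its default port with a DeepSeek R1 model already pulled (the exact model tag depends on which variant the devcontainer pulled):

```csharp
using Microsoft.Extensions.AI;

// OllamaChatClient comes from the Microsoft.Extensions.AI.Ollama NuGet package.
IChatClient client = new OllamaChatClient(
    new Uri("http://localhost:11434"), // Ollama's default endpoint
    "deepseek-r1");

// From here on, the code is identical to the GitHub Models version.
var response = client.CompleteStreamingAsync("If I have 3 apples and eat 2, how many bananas do I have?");

await foreach (var item in response)
{
    Console.Write(item);
}
```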
Note: If you run this in a GitHub Codespace, it will take a couple of minutes to start up and you’ll use roughly 8GB of space – so be aware depending on your Codespace plan.
Of course these are simple Console applications. If you’re using .NET Aspire, it’s easy to use Ollama and DeepSeek R1. Thanks to the .NET Aspire Community Toolkit’s Ollama integration, all you need to do is add one line and you’re all set!
```csharp
var chat = ollama.AddModel("chat", "deepseek-r1");
```
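For context, a minimal sketch of the Aspire app host around that line (assuming the CommunityToolkit.Aspire.Hosting.Ollama package is referenced) might look like this:

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// Add an Ollama container resource, then register the DeepSeek R1 model on it.
var ollama = builder.AddOllama("ollama");
var chat = ollama.AddModel("chat", "deepseek-r1");

builder.Build().Run();
```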
Check out this blog post with all the details on how to get going.
Summary
DeepSeek R1 is an exciting new reasoning model that’s drawing a lot of attention, and you can build .NET applications that make use of it today using the Microsoft.Extensions.AI library. GitHub Models lowers the friction of getting started and experimenting with it. Go ahead and try out the samples and check out our other MEAI samples!