DeepSeek recently awed the AI community by open-sourcing two new state-of-the-art models: DeepSeek-V3 and a reasoning model, DeepSeek-R1. These models not only claim to be on par with the most capable models from OpenAI but are also extremely cost-effective. We'd like to highlight the recent announcement from the Azure AI Foundry team that DeepSeek-R1 is now available on Azure AI Foundry and GitHub. DeepSeek is also available for use in Semantic Kernel and Azure AI Foundry, and can be called via our inferencing connector as well.
- To learn more about the performance and technical details of DeepSeek-V3, please refer to this paper: DeepSeek-V3 Technical Report
- To learn more about the performance and technical details of DeepSeek-R1, please refer to this paper: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
- To view the inference cost of the DeepSeek models, please visit: Models & Pricing | DeepSeek API Docs
Semantic Kernel is thrilled to see such exciting developments in the open-source AI community, and we think most developers are too. In this blog post, we will show you how to use the DeepSeek service in Semantic Kernel so that you can experiment with these new models in your new or existing workflows.
Prerequisites
- Create a DeepSeek account here.
- Top up the account here.
- Create an API key here (Copy the API key somewhere for later use as you can only see it once).
DeepSeek API is compatible with the OpenAI chat completion API format, so we are going to use the OpenAI connector.
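As a rough illustration of what "OpenAI-compatible" means here, the chat completion request body DeepSeek accepts has the same shape as an OpenAI one (a model name plus a list of role/content messages), which is why the standard OpenAI connectors work once the base URL is swapped. The sketch below builds that payload with the standard library only; the model names and endpoint come from the DeepSeek docs, and the helper function name is our own:

```python
import json

# Hedged sketch: the request body DeepSeek expects has the same shape as an
# OpenAI chat completion request, so any OpenAI-format client can target it
# by pointing its base URL at https://api.deepseek.com.
def build_chat_request(model: str, user_message: str) -> dict:
    return {
        "model": model,  # "deepseek-chat" or "deepseek-reasoner"
        "messages": [
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("deepseek-chat", "Hello, how are you?")
print(json.dumps(payload, indent=2))
```

Sending this JSON (with your API key in an `Authorization: Bearer` header) is exactly what the connectors below do under the hood.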
In .NET
var DEEPSEEK_API_KEY = "...";

OpenAIChatCompletionService chatCompletionService = new(
    "deepseek-chat", // or "deepseek-reasoner"
    new Uri("https://api.deepseek.com"),
    DEEPSEEK_API_KEY);
var chatHistory = new ChatHistory();
chatHistory.AddUserMessage("Hello, how are you?");

var reply = await chatCompletionService.GetChatMessageContentAsync(chatHistory);
Console.WriteLine(reply);
In Python
import asyncio
from openai import AsyncOpenAI
from semantic_kernel.contents import ChatHistory
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
DEEPSEEK_API_KEY = "..."
async def main():
    chat_service = OpenAIChatCompletion(
        ai_model_id="deepseek-chat",  # or "deepseek-reasoner"
        async_client=AsyncOpenAI(
            api_key=DEEPSEEK_API_KEY,
            base_url="https://api.deepseek.com",
        ),
    )

    chat_history = ChatHistory()
    chat_history.add_user_message("Hello, how are you?")

    response = await chat_service.get_chat_message_content(chat_history, OpenAIChatPromptExecutionSettings())
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
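The examples above send a single user message, but the same pattern extends to multi-turn conversations: keep appending messages to the history and re-send the whole thing on each request, since the model is stateless between calls. Semantic Kernel's ChatHistory handles this bookkeeping for you; the stdlib-only sketch below shows the underlying structure with plain dicts (function name is our own):

```python
# Hedged sketch: multi-turn chat state in the OpenAI-compatible message
# format. Each request re-sends the full history so the model has context.
history: list[dict] = []

def add_message(role: str, content: str) -> None:
    # Mirrors ChatHistory.add_user_message / add_assistant_message.
    history.append({"role": role, "content": content})

add_message("user", "Hello, how are you?")
add_message("assistant", "I'm doing well, thanks!")  # the model's reply goes here
add_message("user", "Great - can you summarize our chat?")

print(len(history))  # 3 messages accumulated
```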
Just like that, whether you are new to Semantic Kernel or have already built solutions with it, you can experiment with the DeepSeek models in only a few lines of code.
We have also created a sample within Semantic Kernel using DeepSeek here: Add Deepseek service to concept samples.
The Semantic Kernel team is dedicated to empowering developers by providing access to the latest advancements in the industry. We encourage you to leverage your creativity and build remarkable solutions with SK! Please reach out if you have any questions or feedback through our Semantic Kernel GitHub Discussion Channel. We look forward to hearing from you! We would also love your support, if you’ve enjoyed using Semantic Kernel, give us a star on GitHub.