{"id":4308,"date":"2025-03-06T07:31:25","date_gmt":"2025-03-06T15:31:25","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/semantic-kernel\/?p=4308"},"modified":"2025-03-06T07:31:25","modified_gmt":"2025-03-06T15:31:25","slug":"integrate-sk-with-xai-grok-easily","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/agent-framework\/integrate-sk-with-xai-grok-easily\/","title":{"rendered":"Effortlessly Integrate xAI&#8217;s Grok with Semantic Kernel"},"content":{"rendered":"<p><center><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2025\/03\/grokblog-v2.jpg\"><img decoding=\"async\" class=\"aligncenter wp-image-4321 size-medium\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2025\/03\/grokblog-v2-300x225.jpg\" alt=\"Grok generated image based on the blog content\" width=\"300\" height=\"225\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2025\/03\/grokblog-v2-300x225.jpg 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2025\/03\/grokblog-v2-768x576.jpg 768w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2025\/03\/grokblog-v2.jpg 1024w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/a><\/center><\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"2\">For Semantic Kernel users, integrating xAI&#8217;s Grok API using the OpenAI connector is a breeze thanks to its compatibility with OpenAI&#8217;s API format.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"4\">This tutorial focuses on setting up Grok in your Semantic Kernel projects with minimal fuss, using C# and Python examples.<\/p>\n<h2 id=\"why-grok\" class=\"code-line\" dir=\"auto\" data-line=\"6\">Why Grok?<\/h2>\n<p class=\"code-line\" dir=\"auto\" data-line=\"8\">Grok, built by xAI, is a powerful AI model offering a 128k context window and function-calling support, 
making it a solid choice for complex tasks in Semantic Kernel.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"10\">Grok was announced in November 2023 and is now available via an OpenAI-compatible API, with models like &#8220;grok-beta&#8221; available to developers and the new flagship &#8220;grok-3&#8221; coming soon.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"12\">This compatibility allows the OpenAI connector in Semantic Kernel to interface with Grok by adjusting the base URL and API key, leveraging Semantic Kernel&#8217;s existing infrastructure.<\/p>\n<h2 id=\"pricing\" class=\"code-line\" dir=\"auto\" data-line=\"21\">Pricing<\/h2>\n<p class=\"code-line\" dir=\"auto\" data-line=\"16\">The xAI API offers access to Grok models with pricing based on token usage, as detailed in the\u00a0<a href=\"https:\/\/docs.x.ai\/docs\/models?cluster=us-east-1\" data-href=\"https:\/\/docs.x.ai\/docs\/models?cluster=us-east-1\">xAI Models documentation<\/a>.<\/p>\n<p class=\"code-line\" dir=\"auto\" data-line=\"18\">This page lists available models, their pricing per million tokens, and additional capabilities like context length and multimodal support. 
Below is the current pricing for Grok models in the\u00a0<code>us-east-1<\/code>\u00a0cluster as of March 2025:<\/p>\n<table class=\"code-line\" dir=\"auto\" data-line=\"27\">\n<thead class=\"code-line\" dir=\"auto\" data-line=\"20\">\n<tr class=\"code-line\" dir=\"auto\" data-line=\"20\">\n<th>Model<\/th>\n<th>Input (per 1M tokens)<\/th>\n<th>Output (per 1M tokens)<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"code-line\" dir=\"auto\" data-line=\"22\">\n<tr class=\"code-line\" dir=\"auto\" data-line=\"22\">\n<td>grok-2-vision-1212<\/td>\n<td>$2.00 (Text\/Image)<\/td>\n<td>$10.00<\/td>\n<\/tr>\n<tr class=\"code-line\" dir=\"auto\" data-line=\"23\">\n<td>grok-2-1212<\/td>\n<td>$2.00 (Text)<\/td>\n<td>$10.00<\/td>\n<\/tr>\n<tr class=\"code-line\" dir=\"auto\" data-line=\"24\">\n<td>grok-vision-beta<\/td>\n<td>$5.00 (Text\/Image)<\/td>\n<td>$15.00<\/td>\n<\/tr>\n<tr class=\"code-line\" dir=\"auto\" data-line=\"25\">\n<td>grok-beta<\/td>\n<td>$5.00 (Text)<\/td>\n<td>$15.00<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2 id=\"obtaining-a-grok-api-key\" class=\"code-line\" dir=\"auto\" data-line=\"27\">Obtaining a Grok API Key<\/h2>\n<p class=\"code-line\" dir=\"auto\" data-line=\"29\">To begin, users must obtain a Grok API key from xAI&#8217;s platform:<\/p>\n<ul class=\"code-line\" dir=\"auto\" data-line=\"30\">\n<li class=\"code-line\" dir=\"auto\" data-line=\"30\">Visit\u00a0<a href=\"https:\/\/console.x.ai\/\" data-href=\"https:\/\/console.x.ai\">xAI&#8217;s console<\/a>\u00a0to sign up or log in.<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"31\">Navigate to the &#8220;API Keys&#8221; section, create a new key, and note it down.<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"32\">The base URL for API calls is\u00a0<code>https:\/\/api.x.ai\/v1<\/code>.<\/li>\n<\/ul>\n<h2 id=\"setting-up-in-net-c\" class=\"code-line\" dir=\"auto\" data-line=\"35\">Setting Up in .NET (C#)<\/h2>\n<p class=\"code-line\" dir=\"auto\" data-line=\"36\">Semantic Kernel&#8217;s OpenAI 
connector, specifically the\u00a0<code>OpenAIChatCompletionService<\/code>\u00a0class, can be configured to connect to Grok. The constructor allows specification of the API key and base URL, enabling compatibility with Grok&#8217;s API. Here&#8217;s how to set it up:<\/p>\n<pre><code class=\"code-line language-csharp code-active-line\" dir=\"auto\" data-line=\"38\"><span class=\"hljs-keyword\">using<\/span> Microsoft.SemanticKernel.ChatCompletion;\r\n<span class=\"hljs-keyword\">using<\/span> Microsoft.SemanticKernel.Connectors.OpenAI;\r\n\r\n<span class=\"hljs-meta\">#<span class=\"hljs-keyword\">pragma<\/span> <span class=\"hljs-keyword\">warning<\/span> disable SKEXP0010<\/span>\r\n\r\n<span class=\"hljs-comment\">\/\/ Initialize the OpenAI chat completion service with the grok-beta model.<\/span>\r\n<span class=\"hljs-keyword\">var<\/span> chatService = <span class=\"hljs-keyword\">new<\/span> OpenAIChatCompletionService(\r\n    modelId: <span class=\"hljs-string\">\"grok-beta\"<\/span>,  <span class=\"hljs-comment\">\/\/ Grok API model<\/span>\r\n    apiKey: <span class=\"hljs-string\">\"your_grok_api_key\"<\/span>,  <span class=\"hljs-comment\">\/\/ Your Grok API key from xAI<\/span>\r\n    endpoint: <span class=\"hljs-keyword\">new<\/span> Uri(<span class=\"hljs-string\">\"https:\/\/api.x.ai\/v1\"<\/span>)  <span class=\"hljs-comment\">\/\/ Grok API endpoint<\/span>\r\n);\r\n\r\n<span class=\"hljs-comment\">\/\/ Create a new chat history and add a user message to prompt the model.<\/span>\r\nChatHistory chatHistory = [];\r\nchatHistory.AddUserMessage(<span class=\"hljs-string\">\"Why is the sky blue in one sentence?\"<\/span>);\r\n\r\n<span class=\"hljs-comment\">\/\/ Configure settings for the chat completion request.<\/span>\r\n<span class=\"hljs-keyword\">var<\/span> settings = <span class=\"hljs-keyword\">new<\/span> OpenAIPromptExecutionSettings { MaxTokens = <span class=\"hljs-number\">100<\/span> };\r\n\r\n<span class=\"hljs-comment\">\/\/ Send the 
chat completion request to Grok<\/span>\r\n<span class=\"hljs-keyword\">var<\/span> reply = <span class=\"hljs-keyword\">await<\/span> chatService.GetChatMessageContentAsync(chatHistory, settings);\r\nConsole.WriteLine(<span class=\"hljs-string\">\"Grok reply: \"<\/span> + reply);<\/code><\/pre>\n<h2 id=\"setting-up-in-python\" class=\"code-line\" dir=\"auto\" data-line=\"61\">Setting Up in Python<\/h2>\n<p class=\"code-line\" dir=\"auto\" data-line=\"63\">In Python, leverage\u00a0<code>OpenAIChatCompletion<\/code>\u00a0from\u00a0<code>semantic_kernel.connectors.ai.open_ai<\/code> with an async setup. The constructor allows specification of the API key and base URL, enabling compatibility with Grok&#8217;s API. Here&#8217;s how to set it up:<\/p>\n<pre><code class=\"code-line language-python code-active-line\" dir=\"auto\" data-line=\"65\"><span class=\"hljs-keyword\">import<\/span> asyncio\r\n\r\n<span class=\"hljs-keyword\">from<\/span> openai <span class=\"hljs-keyword\">import<\/span> AsyncOpenAI\r\n<span class=\"hljs-keyword\">from<\/span> semantic_kernel.connectors.ai.open_ai <span class=\"hljs-keyword\">import<\/span> OpenAIChatCompletion, OpenAIChatPromptExecutionSettings\r\n<span class=\"hljs-keyword\">from<\/span> semantic_kernel.contents <span class=\"hljs-keyword\">import<\/span> ChatHistory\r\n\r\n<span class=\"hljs-keyword\">async<\/span> <span class=\"hljs-keyword\">def<\/span> <span class=\"hljs-title function_\">main<\/span>():\r\n    <span class=\"hljs-comment\"># Initialize the OpenAI chat completion service with the grok-beta model.<\/span>\r\n    chat_service = OpenAIChatCompletion(\r\n        ai_model_id=<span class=\"hljs-string\">\"grok-beta\"<\/span>,\r\n        async_client=AsyncOpenAI(\r\n            api_key=<span class=\"hljs-string\">\"your_grok_api_key\"<\/span>,\r\n            base_url=<span class=\"hljs-string\">\"https:\/\/api.x.ai\/v1\"<\/span>,\r\n        ),\r\n    )\r\n\r\n    <span class=\"hljs-comment\"># Create a new chat 
history and add a user message to prompt the model.<\/span>\r\n    chat_history = ChatHistory()\r\n    chat_history.add_system_message(<span class=\"hljs-string\">\"You are a helpful assistant.\"<\/span>)\r\n    chat_history.add_user_message(<span class=\"hljs-string\">\"Why is the sky blue in one sentence?\"<\/span>)\r\n\r\n    <span class=\"hljs-comment\"># Configure settings for the chat completion request.<\/span>\r\n    settings = OpenAIChatPromptExecutionSettings(max_tokens=<span class=\"hljs-number\">100<\/span>)\r\n\r\n    <span class=\"hljs-comment\"># Get the model's response<\/span>\r\n    response = <span class=\"hljs-keyword\">await<\/span> chat_service.get_chat_message_content(chat_history, settings)\r\n    <span class=\"hljs-built_in\">print<\/span>(<span class=\"hljs-string\">\"Grok reply:\"<\/span>, response)\r\n\r\n<span class=\"hljs-comment\"># Run the async main function<\/span>\r\n<span class=\"hljs-keyword\">if<\/span> __name__ == <span class=\"hljs-string\">\"__main__\"<\/span>:\r\n    asyncio.run(main())<\/code><\/pre>\n<p class=\"code-line\" dir=\"auto\" data-line=\"96\">We invite you to dive into Grok models within your Semantic Kernel workflows. With an impressive 128k context window, support for function calling, and an OpenAI-compatible API, Grok brings a fresh perspective to your AI toolkit. Semantic Kernel smooths out the integration process, making it easy to swap in Grok and explore its potential. Try it out and discover how it can enhance your projects\u2014whether you\u2019re crafting intelligent agents, tackling intricate queries, or streamlining multi-step AI processes. Enjoy experimenting!<\/p>\n<h2 id=\"references\" class=\"code-line\" dir=\"auto\" data-line=\"98\">References<\/h2>\n<p class=\"code-line\" dir=\"auto\" data-line=\"100\">Ready to explore more? 
Check out these key resources to fuel your Grok and Semantic Kernel journey:<\/p>\n<ul class=\"code-line\" dir=\"auto\" data-line=\"101\">\n<li class=\"code-line\" dir=\"auto\" data-line=\"101\"><strong>More about SK Chat Completion:<\/strong>\u00a0Get the full scoop on chat services\u00a0<a href=\"https:\/\/learn.microsoft.com\/en-us\/semantic-kernel\/concepts\/ai-services\/chat-completion\" data-href=\"https:\/\/learn.microsoft.com\/en-us\/semantic-kernel\/concepts\/ai-services\/chat-completion\">here<\/a>.<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"102\"><strong>xAI Console:<\/strong>\u00a0Grab your Grok API key and get started\u00a0<a href=\"https:\/\/console.x.ai\/\" data-href=\"https:\/\/console.x.ai\">here<\/a>.<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"103\"><strong>xAI Models &amp; Pricing:<\/strong>\u00a0Curious about costs and options? Peek\u00a0<a href=\"https:\/\/docs.x.ai\/docs\/models?cluster=us-east-1\" data-href=\"https:\/\/docs.x.ai\/docs\/models?cluster=us-east-1\">here<\/a>.<\/li>\n<li class=\"code-line\" dir=\"auto\" data-line=\"104\"><strong>xAI Blog Highlights:<\/strong>\u00a0Stay in the loop with the latest from xAI\u00a0<a href=\"https:\/\/x.ai\/blog\" data-href=\"https:\/\/x.ai\/blog\">here<\/a>.<\/li>\n<li class=\"code-line code-active-line\" dir=\"auto\" data-line=\"105\"><strong>Grok Performance Unveiled:<\/strong>\u00a0Dig into benchmark wins for\u00a0<a href=\"https:\/\/x.ai\/blog\/grok-2\" data-href=\"https:\/\/x.ai\/blog\/grok-2\">Grok-2<\/a>\u00a0and\u00a0<a href=\"https:\/\/x.ai\/blog\/grok-3\" data-href=\"https:\/\/x.ai\/blog\/grok-3\">Grok-3<\/a>.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>For Semantic Kernel users, integrating xAI&#8217;s Grok API using the OpenAI connector is a breeze thanks to its compatibility with OpenAI&#8217;s API format. This tutorial focuses on setting up Grok in your Semantic Kernel projects with minimal fuss, using C# and Python examples. Why Grok? 
Grok, built by xAI, is a powerful AI model, offers [&hellip;]<\/p>\n","protected":false},"author":63983,"featured_media":4319,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[78,47,34,1],"tags":[48,63,9],"class_list":["post-4308","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-net","category-announcement","category-python-2","category-semantic-kernel","tag-ai","tag-microsoft-semantic-kernel","tag-semantic-kernel"],"acf":[],"blog_post_summary":"<p>For Semantic Kernel users, integrating xAI&#8217;s Grok API using the OpenAI connector is a breeze thanks to its compatibility with OpenAI&#8217;s API format. This tutorial focuses on setting up Grok in your Semantic Kernel projects with minimal fuss, using C# and Python examples. Why Grok? Grok, built by xAI, is a powerful AI model, offers [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/4308","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/users\/63983"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/comments?post=4308"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/4308\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media\/4319"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media?parent=4308"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:
\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/categories?post=4308"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/tags?post=4308"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}