{"id":2647,"date":"2024-05-13T18:37:06","date_gmt":"2024-05-14T01:37:06","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/semantic-kernel\/?p=2647"},"modified":"2024-05-14T02:38:36","modified_gmt":"2024-05-14T09:38:36","slug":"how-to-get-started-using-semantic-kernel-net","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/agent-framework\/how-to-get-started-using-semantic-kernel-net\/","title":{"rendered":"How to Get Started using Semantic Kernel .NET"},"content":{"rendered":"<p>Hi all,<\/p>\n<p>With Microsoft Build approaching, we wanted to share some walkthroughs to help you get started with Semantic Kernel if you haven&#8217;t already. Today we&#8217;re going to dive into the Getting Started guide in the main Semantic Kernel GitHub repository for .NET.<\/p>\n<p><strong>Getting Started with Semantic Kernel<\/strong><\/p>\n<p>We are excited to announce new enhancements to the Semantic Kernel documentation and samples. This blog post walks through the new <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/tree\/main\/dotnet\/samples\/GettingStarted\">Getting Started<\/a> steps in detail, making your adoption seamless.<\/p>\n<p>Throughout the steps we are going to introduce many important Semantic Kernel features, such as:<\/p>\n<ul>\n<li>The Kernel<\/li>\n<li>Connector Services<\/li>\n<li>Plugins<\/li>\n<li>Prompt Functions<\/li>\n<li>Method Functions<\/li>\n<li>Prompt Execution Settings<\/li>\n<li>Prompt Templates<\/li>\n<li>Chat Prompting<\/li>\n<li>Filtering<\/li>\n<li>Dependency Injection<\/li>\n<\/ul>\n<p aria-level=\"1\"><b>A Glimpse into the Getting Started Steps:<\/b><\/p>\n<p><span 
data-contrast=\"auto\">In the guide below we\u2019ll start from scratch and walk through each of the example steps with you, explaining the code in detail and running it in real time.<\/span><\/p>\n<p>All sample projects in Semantic Kernel are <b>unit-test enabled<\/b>, so from your IDE\u2019s Test Explorer UI you can easily run each sample individually or all of them at once.<\/p>\n<p aria-level=\"2\"><strong>Preparing to use the samples:<\/strong><\/p>\n<ol>\n<li>Clone the Semantic Kernel repository: <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel.git\">https:\/\/github.com\/microsoft\/semantic-kernel.git<\/a><\/li>\n<li>Configure the secrets used by the samples with <code>dotnet user-secrets<\/code> or environment variables<\/li>\n<\/ol>\n<ul>\n<li>Open a terminal and go to the <b>GettingStarted<\/b> sample project folder:\n<pre>&lt;repository root&gt;\/dotnet\/samples\/GettingStarted<\/pre>\n<\/li>\n<li>Execute <code>dotnet user-secrets set \"Key\" \"Value\"<\/code> for every key and value described below.<\/li>\n<\/ul>\n<table data-tablestyle=\"MsoTableGrid\" data-tablelook=\"1184\" aria-rowcount=\"3\">\n<tbody>\n<tr aria-rowindex=\"1\">\n<td data-celllook=\"0\">Key<\/td>\n<td data-celllook=\"0\">Value<\/td>\n<\/tr>\n<tr aria-rowindex=\"2\">\n<td data-celllook=\"0\">OpenAI:ApiKey<\/td>\n<td data-celllook=\"0\">Your OpenAI API key<\/td>\n<\/tr>\n<tr aria-rowindex=\"3\">\n<td data-celllook=\"0\">OpenAI:ChatModelId<\/td>\n<td data-celllook=\"0\">Model to use (e.g. gpt-3.5-turbo)<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>For example: <code>dotnet user-secrets set \"OpenAI:ChatModelId\" \"gpt-3.5-turbo\"<\/code><\/p>\n<p><span data-contrast=\"auto\">3. 
Open your favorite IDE, e.g.:<\/span><\/p>\n<ul>\n<li>VSCode\n<ul>\n<li>Open the repository root folder<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture1-red-box.png\"><img decoding=\"async\" class=\"alignnone wp-image-2649 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture1-red-box.png\" alt=\"Image Picture1 8211 red box\" width=\"325\" height=\"389\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture1-red-box.png 325w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture1-red-box-251x300.png 251w\" sizes=\"(max-width: 325px) 100vw, 325px\" \/><\/a><\/p>\n<ul>\n<li>Select the <strong>Testing<\/strong> icon in the left menu<\/li>\n<\/ul>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture2.png\"><img decoding=\"async\" class=\"alignnone wp-image-2651 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture2.png\" alt=\"Image Picture2\" width=\"390\" height=\"348\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture2.png 390w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture2-300x268.png 300w\" sizes=\"(max-width: 390px) 100vw, 390px\" \/><\/a><\/p>\n<ul>\n<li>Look for the <strong>GettingStarted<\/strong> project, select the step you want to execute, and click Run or Debug.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture3.png\"><img decoding=\"async\" class=\"alignnone wp-image-2652 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture3.png\" alt=\"Image Picture3\" width=\"424\" height=\"371\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture3.png 424w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture3-300x263.png 300w\" sizes=\"(max-width: 424px) 100vw, 424px\" \/><\/a><\/p>\n<ul>\n<li>After executing the test, a <strong>Test Result<\/strong> panel will appear, displaying the current test result.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture4-1.png\"><img decoding=\"async\" class=\"alignnone wp-image-2653 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture4-1.png\" alt=\"Image Picture4\" width=\"624\" height=\"237\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture4-1.png 624w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture4-1-300x114.png 300w\" sizes=\"(max-width: 624px) 100vw, 624px\" \/><\/a><\/p>\n<p>Currently, xUnit tests in VSCode only show output when they fail. 
For that reason, if you want to see the full output, add a failing last line to the test, like:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">Assert.Fail(\"Fail on purpose to show output\");<\/code><\/pre>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture5.png\"><img decoding=\"async\" class=\"alignnone wp-image-2654 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture5.png\" alt=\"Image Picture5\" width=\"623\" height=\"123\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture5.png 623w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture5-300x59.png 300w\" sizes=\"(max-width: 623px) 100vw, 623px\" \/><\/a><\/p>\n<p><strong>Visual Studio<\/strong><\/p>\n<ul>\n<li>Open the <strong>SK-dotnet.sln<\/strong> solution file inside &lt;repository root folder&gt;\/dotnet. This will open the solution in Visual Studio.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture6.png\"><img decoding=\"async\" class=\"alignnone wp-image-2655 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture6.png\" alt=\"Image Picture6\" width=\"276\" height=\"154\" \/><\/a><\/p>\n<ul>\n<li>The <strong>GettingStarted<\/strong> samples are in the samples folder of the solution<\/li>\n<\/ul>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture8.png\"><img decoding=\"async\" class=\"alignnone wp-image-2656 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture8.png\" alt=\"Image Picture8\" width=\"389\" height=\"368\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture8.png 389w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture8-300x284.png 300w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture8-24x24.png 24w\" sizes=\"(max-width: 389px) 100vw, 389px\" \/><\/a><\/p>\n<ul>\n<li>On the Test menu, open the Test Explorer and navigate to <strong>GettingStarted<\/strong><\/li>\n<\/ul>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture9.png\"><img decoding=\"async\" class=\"alignnone wp-image-2657 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture9.png\" alt=\"Image Picture9\" width=\"439\" height=\"742\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture9.png 439w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture9-177x300.png 177w\" sizes=\"(max-width: 439px) 100vw, 439px\" \/><\/a><\/p>\n<ul>\n<li>Right-click the test to Run or Debug it, and wait until it completes; the output result appears in the right panel.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture10.png\"><img decoding=\"async\" class=\"alignnone wp-image-2658 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture10.png\" alt=\"Image Picture10\" width=\"623\" height=\"232\" srcset=\"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture10.png 623w, https:\/\/devblogs.microsoft.com\/agent-framework\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture10-300x112.png 300w\" sizes=\"(max-width: 623px) 100vw, 623px\" \/><\/a><\/p>\n<p><strong>Steps Details<\/strong><\/p>\n<p>In this section we walk through the code of each step, explaining what is being done on each line.<\/p>\n<p><strong>1. Creating and using the Kernel<\/strong><\/p>\n<p><strong>Creating<\/strong><\/p>\n<p>The Kernel is one of the main components of Semantic Kernel and works as the glue that combines the other components to produce a great result. 
In this sample we are going through the most common scenarios for using this component.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">Kernel kernel = Kernel.CreateBuilder()\r\n    .AddOpenAIChatCompletion(\r\n        modelId: TestConfiguration.OpenAI.ChatModelId,\r\n        apiKey: TestConfiguration.OpenAI.ApiKey)\r\n    .Build();\r\n<\/code><\/pre>\n<p>Here we use the kernel builder to configure and build a kernel with OpenAI chat completion connector support.<\/p>\n<p>Depending on the connector packages you have installed, IntelliSense will show the different configuration options you can use while configuring your Kernel.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture11.png\"><img decoding=\"async\" class=\"alignnone wp-image-2659 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture11.png\" alt=\"Image Picture11\" width=\"277\" height=\"183\" \/><\/a><\/p>\n<p>To simplify the examples, all secrets created with the user-secrets tool are available in code as properties of the static <code>TestConfiguration<\/code> class, which is loaded automatically by the BaseTest class.<\/p>\n<p><strong>1.1 Using InvokePrompt<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">Console.WriteLine(await kernel.InvokePromptAsync(\"What color is the sky?\"));<\/code><\/pre>\n<p>Once you have a Kernel built, this is the simplest way to define a prompt and get a result from an AI model.<\/p>\n<p>The Kernel provides many different invocation options, each serving its own purpose; <code>InvokePromptAsync<\/code> is a dedicated function where you can specify a prompt and easily get the result from the AI model. 
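<\/p>\n<p>As a small illustrative sketch (not part of the sample itself), the same call can capture the result explicitly before printing it:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">\/\/ Capture the FunctionResult instead of printing it directly\r\nFunctionResult result = await kernel.InvokePromptAsync(\"What color is the sky?\");\r\nConsole.WriteLine(result); \/\/ Console.WriteLine uses the FunctionResult.ToString() override<\/code><\/pre>\n<p>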
The result from <code>InvokePromptAsync<\/code> is a <code>FunctionResult<\/code>, whose <code>ToString()<\/code> override is implicitly called by <code>Console.WriteLine<\/code>.<\/p>\n<p><strong>1.2 Using InvokePrompt with arguments<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">KernelArguments arguments = new() { { \"topic\", \"sea\" } };\r\nConsole.WriteLine(await kernel.InvokePromptAsync(\"What color is the {{$topic}}?\", arguments));<\/code><\/pre>\n<p>Another very interesting aspect of prompts in Semantic Kernel is templating. In this example the prompt contains a <code>{{$topic}}<\/code> template placeholder, which signals that invoking this prompt requires a dynamic topic argument. You provide it at invocation time through the KernelArguments dictionary, and the matching argument is rendered into the prompt before the AI model is called.<\/p>\n<p><strong>1.3 Using InvokePromptStreaming with arguments<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">await foreach (var update in kernel.InvokePromptStreamingAsync(\"What color is the {{$topic}}? Provide a detailed explanation.\", arguments))\r\n{\r\n     Write(update);\r\n}<\/code><\/pre>\n<p>Streaming is one of the most interesting invocation options the Kernel offers: it allows you to update your UI as information arrives from the model, reducing the perceived delay between the request and the result. 
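<\/p>\n<p>As a small illustrative sketch (not part of the sample itself, and assuming <code>System.Text<\/code> for StringBuilder), the streaming updates can also be accumulated into a single string once the stream completes:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">var builder = new StringBuilder();\r\nawait foreach (var update in kernel.InvokePromptStreamingAsync(\"What color is the {{$topic}}?\", arguments))\r\n{\r\n    builder.Append(update); \/\/ relies on the StreamingKernelContent.ToString() override\r\n}\r\nConsole.WriteLine(builder.ToString()); \/\/ the complete response, assembled from the chunks<\/code><\/pre>\n<p>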
This example uses the same strategy as the second example, providing a parameterized prompt template and the argument during invocation.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">await foreach (var update in kernel.InvokePromptStreamingAsync(...))<\/code><\/pre>\n<p>The streaming function above returns an IAsyncEnumerable&lt;StreamingKernelContent&gt;, which makes it simple to iterate over each update chunk with an await foreach loop.<\/p>\n<p>Each StreamingKernelContent chunk overrides ToString() to return its most significant representation depending on the modality (text generation, chat completion, \u2026); in this sample the modality is chat completion, so it returns the <code>Content<\/code> of a message.<\/p>\n<p><strong>1.4 Invoking with Custom Settings<\/strong><\/p>\n<p>When invoking, you can also define AI-specific settings such as MaxTokens, Temperature, and many more.<\/p>\n<p>These settings vary depending on the type of model, the modality, and the connector used.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">arguments = new(new OpenAIPromptExecutionSettings { MaxTokens = 500, Temperature = 0.5 }) { { \"topic\", \"dogs\" } };\r\nConsole.WriteLine(await kernel.InvokePromptAsync(\"Tell me a story about {{$topic}}\", arguments));<\/code><\/pre>\n<p>When creating a KernelArguments instance, you can pass the execution settings you want to use to its constructor.<\/p>\n<p>Note that we are using an OpenAI-specific execution settings class; other connectors have their own settings classes with different options.<\/p>\n<p><strong>1.5 Getting JSON with Custom Settings<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">arguments = new(new OpenAIPromptExecutionSettings { ResponseFormat = \"json_object\" }) { { \"topic\", \"chocolate\" } 
};\r\nConsole.WriteLine(await kernel.InvokePromptAsync(\"Create a recipe for a {{$topic}} cake in JSON format\", arguments));<\/code><\/pre>\n<p>In the usage above, the ResponseFormat setting of the OpenAI connector instructs the model to return its answer in JSON format.<\/p>\n<p><strong>2. Using Plugins with Kernel<\/strong><\/p>\n<p>Plugins are one of the most powerful features of Semantic Kernel. A plugin is a set of functions that can be either native C# functions (method functions) or prompt functions that, as the name suggests, use a prompt definition to call the AI model.<\/p>\n<p><strong>C# Method Functions<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">public class TimeInformation\r\n{\r\n    [KernelFunction, Description(\"Retrieves the current time in UTC.\")]\r\n    public string GetCurrentUtcTime() =&gt; DateTime.UtcNow.ToString(\"R\");\r\n}<\/code><\/pre>\n<p>C# methods that can be imported into plugins are marked with the KernelFunction attribute, which tells the Kernel which methods to import. The Description attribute controls how each function is described to the AI model, giving the model a better understanding of when to call your functions.<\/p>\n<p>When implementing functions in your plugins you can use many different signatures, including asynchronous tasks and different parameter and return types.<\/p>\n<p>Tip: for best results, it is very important to provide descriptions for your functions and their arguments when exposing them to the AI.<\/p>\n<p><strong>C# Method Functions with Complex Objects<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">public class WidgetFactory\r\n{\r\n    [KernelFunction, Description(\"Creates a new widget of the specified type and colors\")]\r\n    public WidgetDetails CreateWidget(\r\n        [Description(\"The type of widget to be created\")]\r\n        WidgetType widgetType,\r\n        [Description(\"The colors of the widget to be created\")]\r\n        WidgetColor[] widgetColors)\r\n    {\r\n        var colors = string.Join('-', widgetColors.Select(c =&gt; c.GetDisplayName()).ToArray());\r\n        return new()\r\n        {\r\n            SerialNumber = $\"{widgetType}-{colors}-{Guid.NewGuid()}\",\r\n            Type = widgetType,\r\n            Colors = widgetColors\r\n        };\r\n    }\r\n}<\/code><\/pre>\n<p>You can also use complex objects and enum types as parameters of your method functions.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">[JsonConverter(typeof(JsonStringEnumConverter))]\r\npublic enum WidgetType\r\n{\r\n    [Description(\"A widget that is useful.\")]\r\n    Useful,\r\n\r\n    [Description(\"A widget that is decorative.\")]\r\n    Decorative\r\n}<\/code><\/pre>\n<p>To fully use enums with an AI model, annotate them with the JsonStringEnumConverter so they serialize as strings. Note again the importance of a description for each enum value, so the AI picks the best one when calling your function.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">public class WidgetDetails\r\n{\r\n    public string SerialNumber { get; init; }\r\n    public WidgetType Type { get; init; }\r\n    public WidgetColor[] Colors { get; init; }\r\n}<\/code><\/pre>\n<p>WidgetDetails is the complex object returned by the CreateWidget function. 
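<\/p>\n<p>As an illustrative sketch (assuming a kernel with the WidgetFactory plugin registered, as covered in the Adding Plugins to Kernel section, and the WidgetColor enum defined in the sample), a method function can also be invoked directly by name, without involving the AI model:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">\/\/ Hypothetical direct call, useful for testing the plugin function without the model\r\nFunctionResult result = await kernel.InvokeAsync(\"WidgetFactory\", \"CreateWidget\",\r\n    new() { [\"widgetType\"] = WidgetType.Useful, [\"widgetColors\"] = new[] { WidgetColor.Red } });\r\nConsole.WriteLine(result.GetValue&lt;WidgetDetails&gt;()?.SerialNumber);<\/code><\/pre>\n<p>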
Since this object is produced and returned by native C# code, descriptions for its properties are not necessary.<\/p>\n<p><strong>Adding Plugins to Kernel<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">IKernelBuilder kernelBuilder = Kernel.CreateBuilder();\r\nkernelBuilder.AddOpenAIChatCompletion(\r\n        modelId: TestConfiguration.OpenAI.ChatModelId,\r\n        apiKey: TestConfiguration.OpenAI.ApiKey);\r\nkernelBuilder.Plugins.AddFromType&lt;TimeInformation&gt;();\r\nkernelBuilder.Plugins.AddFromType&lt;WidgetFactory&gt;();\r\nKernel kernel = kernelBuilder.Build();<\/code><\/pre>\n<p>In the lines above, pay attention to Plugins.AddFromType&lt;T&gt;(). Plugins can be added from different sources, such as functions, objects, types and prompt directories.<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture12.png\"><img decoding=\"async\" class=\"alignnone wp-image-2660 size-full\" src=\"https:\/\/devblogs.microsoft.com\/semantic-kernel\/wp-content\/uploads\/sites\/78\/2024\/05\/Picture12.png\" alt=\"Image Picture12\" width=\"190\" height=\"87\" \/><\/a><\/p>\n<p>After configuring plugins and building your Kernel, it is aware of the functions that can be used during invocation.<\/p>\n<p><strong>2.1 Asking a Question that No Plugin Supports<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">Console.WriteLine(await kernel.InvokePromptAsync(\"How many days until Christmas?\"));<\/code><\/pre>\n<p>If you invoke the kernel with a prompt that asks the AI for information that no plugin previously added to the kernel can provide, the AI Model may hallucinate an answer or refuse to respond.<\/p>\n<p><strong>2.2 Using a Template to Invoke Plugin Functions<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">Console.WriteLine(await 
kernel.InvokePromptAsync(\"The current time is {{TimeInformation.GetCurrentUtcTime}}. How many days until Christmas?\"));<\/code><\/pre>\n<p>With Semantic Kernel templates you can also reference a function whose result you want appended to the prompt. While executing this invocation, the template is rendered by calling GetCurrentUtcTime, the current time is appended to the prompt, and the final rendered prompt is sent to the AI Model.<\/p>\n<p><strong>2.3 Auto Function Invoking &#8211; Making the AI Model call your Plugin Functions<\/strong><\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">OpenAIPromptExecutionSettings settings = new() { ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions };\r\nConsole.WriteLine(await kernel.InvokePromptAsync(\"How many days until Christmas? Explain your thinking.\", new(settings)));<\/code><\/pre>\n<p>To allow the AI Model to call your functions you need to specify a ToolCallBehavior in the settings; without it, the AI won\u2019t have visibility into your functions.<\/p>\n<p>Note that this functionality only works with AI models and connectors that support tool\/function calling, like OpenAI\u2019s latest chat completion models.<\/p>\n<p>In this example the AI Model has visibility into your plugins and will call the functions it deems necessary to fulfil the request.<\/p>\n<p><strong>2.4 Auto Function Invoking with your Complex Object Plugin Functions<\/strong><\/p>\n<p>One interesting aspect of complex objects is that the AI Model knows the structure of your functions\u2019 parameters and provides arguments following their descriptions and deserialization representation.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">Console.WriteLine(await kernel.InvokePromptAsync(\"Create a handy lime colored widget for 
me.\", new(settings)));\r\nConsole.WriteLine(await kernel.InvokePromptAsync(\"Create a beautiful scarlet colored widget for me.\", new(settings)));\r\nConsole.WriteLine(await kernel.InvokePromptAsync(\"Create an attractive maroon and navy colored widget for me.\", new(settings)));<\/code><\/pre>\n<p>The examples above send prompts that influence the AI to choose different parameter combinations when calling the CreateWidget function.<\/p>\n<p><strong>3. Using YAML Prompt Files<\/strong><\/p>\n<p>While you can provide templates in code, you can also import them from YAML files. This sample shows how you can create a Prompt Function that way.<\/p>\n<p><strong>Semantic Kernel Format Configuration<\/strong><\/p>\n<pre class=\"prettyprint language-default\">\r\n<code class=\"language-default\">name: GenerateStory\r\ntemplate: |\r\n  Tell a story about {{$topic}} that is {{$length}} sentences long.\r\n\r\ntemplate_format: semantic-kernel\r\ndescription: A function that generates a story about a topic.\r\ninput_variables:\r\n  - name: topic\r\n    description: The topic of the story.\r\n    is_required: true\r\n  - name: length\r\n    description: The number of sentences in the story.\r\n    is_required: true\r\noutput_variable:\r\n  description: The generated story.\r\nexecution_settings:\r\n  default:\r\n    temperature: 0.6<\/code><\/pre>\n<p><strong>Handlebars Format Configuration<\/strong><\/p>\n<pre class=\"prettyprint language-default\">\r\n<code class=\"language-default\">name: GenerateStory\r\ntemplate: |\r\n  Tell a story about {{topic}} that is {{length}} sentences long.\r\ntemplate_format: handlebars\r\ndescription: A function that generates a story about a topic.\r\ninput_variables:\r\n  - name: topic\r\n    description: The topic of the story.\r\n    is_required: true\r\n  - name: 
length\r\n    description: The number of sentences in the story.\r\n    is_required: true\r\noutput_variable:\r\n  description: The generated story.\r\nexecution_settings:\r\n  service1:\r\n    model_id: gpt-4\r\n    temperature: 0.6\r\n  service2:\r\n    model_id: gpt-3\r\n    temperature: 0.4\r\n  default:\r\n    temperature: 0.5<\/code><\/pre>\n<p>The YAML structures above provide important details about the prompt and how it is described to the AI Model, especially:<\/p>\n<ul>\n<li>Template: the prompt itself, with its template variables {{$topic}} and {{$length}}.<\/li>\n<li>Template Format: Semantic Kernel provides abstractions to use different template engines, like Handlebars.<\/li>\n<\/ul>\n<p>Note that template variables, according to the Handlebars spec, don\u2019t have the \u201c$\u201d before the variable name.<\/p>\n<ul>\n<li>Template variable details are defined in the \u201cinput_variables\u201d list.<\/li>\n<li>Execution settings can be provided at a per-function level for improved results.<\/li>\n<\/ul>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li>ServiceId: settings that will be used for the matching service Id.<\/li>\n<li>When creating an AI Service (i.e. 
builder.<strong>AddOpenAIChatCompletion<\/strong>) you can specify the service Id.<\/li>\n<li>ModelId: settings are used only when the model id matches the provided one; otherwise the defaults apply.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<pre class=\"prettyprint language-default\">\r\n<code class=\"language-default\">var generateStoryYaml = EmbeddedResource.Read(\"GenerateStory.yaml\");\r\nvar function = kernel.CreateFunctionFromPromptYaml(generateStoryYaml);<\/code><\/pre>\n<p>The YAML file content above is loaded into the generateStoryYaml variable, and <strong>CreateFunctionFromPromptYaml<\/strong>, when no formatter is specified, uses the default \u201csemantic-kernel\u201d template format to create a function instance expecting the two arguments defined in the configuration.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">Console.WriteLine(await kernel.InvokeAsync(function, arguments: new()\r\n{\r\n    { \"topic\", \"Dog\" },\r\n    { \"length\", \"3\" },\r\n}));<\/code><\/pre>\n<p>Invoke the created function, providing the arguments that will be used to render the prompt sent to the AI Model.<\/p>\n<p>Important: parameters must match the names defined in the template and must be provided if required.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">using Microsoft.SemanticKernel.PromptTemplates.Handlebars;<\/code><\/pre>\n<p>This namespace, from the Microsoft.SemanticKernel.PromptTemplates.Handlebars package, allows you to use Handlebars templates for prompt rendering in the following snippets.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">var generateStoryHandlebarsYaml = EmbeddedResource.Read(\"GenerateStoryHandlebars.yaml\");\r\nfunction = kernel.CreateFunctionFromPromptYaml(generateStoryHandlebarsYaml, 
new HandlebarsPromptTemplateFactory());<\/code><\/pre>\n<p>Here we use a different template format, passing a <strong>HandlebarsPromptTemplateFactory<\/strong> template formatter, available in the Microsoft.SemanticKernel.PromptTemplates.Handlebars package, to <strong>CreateFunctionFromPromptYaml<\/strong>.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">Console.WriteLine(await kernel.InvokeAsync(function, arguments: new()\r\n{\r\n    { \"topic\", \"Cat\" },\r\n    { \"length\", \"3\" },\r\n}));<\/code><\/pre>\n<p>As both templates use the same input variables, calling the created function with a new template formatter doesn\u2019t require any change to the calling code.<\/p>\n<p><strong>4. Applying Dependency Injection<\/strong><\/p>\n<p>Semantic Kernel is fully compatible with .NET Dependency Injection abstractions and supports the IServiceCollection pattern.<\/p>\n<p>In this step we show how you can leverage DI in your code.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">var collection = new ServiceCollection();\r\ncollection.AddSingleton&lt;ILoggerFactory&gt;(new XunitLogger(this.Output));\r\n\r\nvar kernelBuilder = collection.AddKernel();\r\nkernelBuilder.Services.AddOpenAITextGeneration(TestConfiguration.OpenAI.ModelId, TestConfiguration.OpenAI.ApiKey);\r\nkernelBuilder.Plugins.AddFromType&lt;TimeInformation&gt;();\r\n\r\nvar serviceProvider = collection.BuildServiceProvider();\r\nvar kernel = serviceProvider.GetRequiredService&lt;Kernel&gt;();<\/code><\/pre>\n<p>The Semantic Kernel package provides an AddKernel() extension method for the IServiceCollection interface, which allows you to configure and provide the Kernel straight from your DI container.<\/p>\n<p>The main focus of this step is how you can configure your service collection and get an instance of the Kernel with a plugin already working with 
all injected dependencies automatically.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">KernelArguments arguments = new() { { \"topic\", \"earth when viewed from space\" } };\r\nawait foreach (var update in kernel.InvokePromptStreamingAsync(\"What color is the {{$topic}}? Provide a detailed explanation.\", arguments))\r\n{\r\n    Console.Write(update);\r\n}<\/code><\/pre>\n<p>With the kernel created, you can use it normally to get your streaming text completion result.<\/p>\n<p>Note that this example uses the Text Generation service instead of Chat Completion; if using OpenAI, ensure you use the \u201cgpt-3.5-turbo-instruct\u201d model.<\/p>\n<p><strong>5. Creating a Chat Prompt<\/strong><\/p>\n<p>We also have a feature that allows you to represent a chat history in a template using special `&lt;message role=\"\u2026\"&gt;message&lt;\/message&gt;` XML-like tags.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">string chatPrompt = \"\"\"\r\n    &lt;message role=\"user\"&gt;What is Seattle?&lt;\/message&gt;\r\n    &lt;message role=\"system\"&gt;Respond with JSON.&lt;\/message&gt;\r\n    \"\"\";\r\nConsole.WriteLine(await kernel.InvokePromptAsync(chatPrompt));<\/code><\/pre>\n<p>The code above uses the chat prompt feature; Semantic Kernel internally transforms it into a ChatHistory to get an AI Chat Completion result.<\/p>\n<p><strong>6. 
Responsible AI Filtering<\/strong><\/p>\n<p>Filters are one of the most advanced features of Semantic Kernel, allowing full control over how AI and plugin invocations are used, as well as over template rendering.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">\/\/ Create a kernel with OpenAI chat completion\r\nvar builder = Kernel.CreateBuilder()\r\n    .AddOpenAIChatCompletion(\r\n        modelId: TestConfiguration.OpenAI.ChatModelId,\r\n        apiKey: TestConfiguration.OpenAI.ApiKey);\r\n\r\nbuilder.Services.AddSingleton&lt;ITestOutputHelper&gt;(this.Output);\r\n\r\n\/\/ Add prompt filter to the kernel\r\nbuilder.Services.AddSingleton&lt;IPromptRenderFilter, PromptFilter&gt;();\r\n\r\nvar kernel = builder.Build();<\/code><\/pre>\n<p>The code above initializes a Kernel and registers both ITestOutputHelper and IPromptRenderFilter implementations with the services to get a dependency-injection-ready Kernel.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">private sealed class PromptFilter(ITestOutputHelper output) : IPromptRenderFilter\r\n{\r\n    private readonly ITestOutputHelper _output = output;\r\n\r\n    public async Task OnPromptRenderAsync(PromptRenderContext context, Func&lt;PromptRenderContext, Task&gt; next)\r\n    {\r\n        if (context.Arguments.ContainsName(\"card_number\"))\r\n        {\r\n            context.Arguments[\"card_number\"] = \"**** **** **** ****\";\r\n        }\r\n\r\n        await next(context);\r\n\r\n        
context.RenderedPrompt += \" NO SEXISM, RACISM OR OTHER BIAS\/BIGOTRY\";\r\n        this._output.WriteLine(context.RenderedPrompt);\r\n    }\r\n}<\/code><\/pre>\n<p>Every prompt rendering filter needs to implement the IPromptRenderFilter interface, which exposes the OnPromptRenderAsync method with a context and a delegate for the next filter in the pipeline.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">if (context.Arguments.ContainsName(\"card_number\"))\r\n{\r\n    context.Arguments[\"card_number\"] = \"**** **** **** ****\";\r\n}<\/code><\/pre>\n<p>Changes made before calling `await next(context);` happen before the rendering process takes place and allow you to modify the arguments available to the template render engine.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">await next(context);<\/code><\/pre>\n<p>This executes the next filter in the pipeline and starts the rendering process.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">context.RenderedPrompt += \" NO SEXISM, RACISM OR OTHER BIAS\/BIGOTRY\";<\/code><\/pre>\n<p>Any modification after the `next(context)` call happens after the template has been rendered; from this point forward you are able to evaluate the rendered result and change it accordingly. In the example code, a suffix is appended to the rendered prompt that will be used when calling the AI Model.<\/p>\n<p><strong>7. 
Components Observability<\/strong><\/p>\n<p>This example uses a new type of filter, IFunctionInvocationFilter, that allows you to intercept all function invocations triggered by the Kernel.<\/p>\n<p>The lines of focus in this example are:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">private class MyFunctionFilter : IFunctionInvocationFilter\r\n{\r\n    public async Task OnFunctionInvocationAsync(FunctionInvocationContext context, Func&lt;FunctionInvocationContext, Task&gt; next)\r\n    {\r\n        ...         \r\n    }\r\n}<\/code><\/pre>\n<p>Every function invocation filter needs to implement the IFunctionInvocationFilter interface, which exposes the OnFunctionInvocationAsync method with a context and a delegate for the next filter in the pipeline.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">kernelBuilder.Services.AddSingleton&lt;IFunctionInvocationFilter, MyFunctionFilter&gt;();<\/code><\/pre>\n<p>This adds the custom function invocation filter to the services DI container, which will be used by the kernel once it is built.<\/p>\n<p>Exploring the filter implementation below:<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">this._output.WriteLine($\"Invoking {context.Function.Name}\");\r\n\r\nawait next(context);\r\n\r\nvar metadata = context.Result?.Metadata;\r\nif (metadata is not null &amp;&amp; metadata.ContainsKey(\"Usage\"))\r\n{\r\n    this._output.WriteLine($\"Token usage: {metadata[\"Usage\"]?.AsJson()}\");\r\n}<\/code><\/pre>\n<p>Everything before calling `next(context);` happens before calling the function, and you are able to change, filter or abort the execution. In the code above we simply use the test output to display the name of the function being invoked. 
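The before/after structure around `next` can be sketched without Semantic Kernel as a plain delegate chain. The names below (SketchContext, LoggingFilter) are illustrative, not part of the SK API:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical minimal model of a filter pipeline: the filter runs code
// before and after awaiting the `next` delegate, mirroring what
// OnFunctionInvocationAsync does with FunctionInvocationContext.
public sealed class SketchContext
{
    public string FunctionName { get; init; } = "";
    public List<string> Log { get; } = new();
}

public static class FilterSketch
{
    public static async Task LoggingFilter(SketchContext context, Func<SketchContext, Task> next)
    {
        context.Log.Add($"Invoking {context.FunctionName}"); // before: inspect, modify or abort
        await next(context);                                 // run the function (or the next filter)
        context.Log.Add("Completed");                        // after: read results and metadata
    }
}
```

Chaining several such delegates, each wrapping the next, is the same composition pattern the Kernel applies to registered IFunctionInvocationFilter instances.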
After the function invocation (after the next() call), we read the usage metadata returned by OpenAI to display the token usage in JSON format.<\/p>\n<p><strong>8. Using Function Pipelining<\/strong><\/p>\n<p>In the prerelease days of Semantic Kernel there was an option to provide a list of functions to be invoked in sequence, where the result of one function was fed to the next, limited to string results.<\/p>\n<p>This example explores how you can achieve similar behavior using the latest, more powerful version of Semantic Kernel.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\">\r\n<code class=\"language-cs language-csharp\">KernelFunction parseInt32 = KernelFunctionFactory.CreateFromMethod((string s) =&gt; double.Parse(s, CultureInfo.InvariantCulture), \"parseInt32\");\r\nKernelFunction multiplyByN = KernelFunctionFactory.CreateFromMethod((double i, double n) =&gt; i * n, \"multiplyByN\");\r\nKernelFunction truncate = KernelFunctionFactory.CreateFromMethod((double d) =&gt; (int)d, \"truncate\");\r\nKernelFunction humanize = KernelFunctionFactory.CreateFromPrompt(new PromptTemplateConfig()\r\n{\r\n    Template = \"Spell out this number in English: {{$number}}\",\r\n    InputVariables = [new() { Name = \"number\" }],\r\n});<\/code><\/pre>\n<p>As a use case we create four different kernel functions that we want to pipe in sequence.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">KernelFunction pipeline = KernelFunctionCombinators.Pipe([parseInt32, multiplyByN, truncate, humanize], \"pipeline\");<\/code><\/pre>\n<p>Using the example `KernelFunctionCombinators` utility, we create a fifth kernel function that is responsible for the pipelining behavior.<\/p>\n<pre class=\"prettyprint language-cs language-csharp\"><code class=\"language-cs language-csharp\">KernelArguments args = new()\r\n{\r\n    [\"s\"] = \"123.456\",\r\n    [\"n\"] = 
(double)78.90,\r\n};\r\nConsole.WriteLine(await pipeline.InvokeAsync(kernel, args));<\/code><\/pre>\n<p>Once the pipeline function is created, calling it works like any other kernel function. In this example, the pipeline function will:<\/p>\n<ul>\n<li>Invoke the first function, parseInt32, which reads the string 123.456 from the arguments and parses it into (double)123.456.<\/li>\n<li>Invoke multiplyByN with the `i` parameter set to the first function\u2019s result and `n` set to the provided double 78.90, returning the double result 9740.6784.<\/li>\n<li>Pass the multiplyByN result as the first argument to the truncate function, which removes the decimal digits and returns 9740.<\/li>\n<li>Finally, pass the truncated integer to the last function in the pipeline, humanize, which calls the AI Model asking it to spell out the number.<\/li>\n<\/ul>\n<p><strong>Dive Deeper<\/strong><\/p>\n<p>Please reach out if you have any questions or feedback through our <a href=\"https:\/\/github.com\/microsoft\/semantic-kernel\/discussions\/categories\/general\">Semantic Kernel GitHub Discussion Channel<\/a>. We look forward to hearing from you!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hi all, With Microsoft Build approaching, we wanted to share some walk throughs of using Semantic Kernel to get started if you haven&#8217;t already. Today we&#8217;re going to dive into the Getting Started guide we have in the main Semantic Kernel GitHub repository for .NET. 
Getting Started with Semantic Kernel\u00a0 We are excited to announce [&hellip;]<\/p>\n","protected":false},"author":149071,"featured_media":2370,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"quote","meta":{"_acf_changed":false,"footnotes":""},"categories":[2,1],"tags":[76,48,75,63,9],"class_list":["post-2647","post","type-post","status-publish","format-quote","has-post-thumbnail","hentry","category-samples","category-semantic-kernel","tag-net-getting-started","tag-ai","tag-getting-started","tag-microsoft-semantic-kernel","tag-semantic-kernel","post_format-post-format-quote"],"acf":[],"blog_post_summary":"<p>Hi all, With Microsoft Build approaching, we wanted to share some walk throughs of using Semantic Kernel to get started if you haven&#8217;t already. Today we&#8217;re going to dive into the Getting Started guide we have in the main Semantic Kernel GitHub repository for .NET. Getting Started with Semantic Kernel\u00a0 We are excited to announce 
[&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/2647","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/users\/149071"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/comments?post=2647"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/posts\/2647\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media\/2370"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/media?parent=2647"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/categories?post=2647"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/agent-framework\/wp-json\/wp\/v2\/tags?post=2647"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}