{"id":996,"date":"2021-02-23T13:57:29","date_gmt":"2021-02-23T21:57:29","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/azure-sdk\/?p=996"},"modified":"2021-02-23T21:13:33","modified_gmt":"2021-02-24T05:13:33","slug":"ai-on-iot-edge","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/azure-sdk\/ai-on-iot-edge\/","title":{"rendered":"AI on IoT Edge with NVIDIA\u00ae Jetson Nano\u2122 and the new Azure SDKs"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>In this post we&#8217;ll demonstrate how to use the NVIDIA\u00ae Jetson Nano\u2122 device running AI on the IoT edge, combined with the power of the Azure platform, to create an end-to-end AI on edge solution. We are going to use a custom AI model developed on the NVIDIA\u00ae Jetson Nano\u2122 device, but you can use any AI model that fits your needs. We will see how we can leverage the new Azure SDKs to create a complete Azure solution.<\/p>\n<p>This post has been divided into three sections. The <strong>Architecture overview<\/strong> section discusses the overall architecture at a high level. The <strong>Authentication Front-end<\/strong> section discusses the starting and ending points of the system flow. The <strong>Running AI on the Edge<\/strong> section details how the NVIDIA\u00ae Jetson Nano\u2122, as an IoT Edge device, can run AI and leverage the Azure SDK to communicate with the Azure platform.<\/p>\n<h2>Architecture overview<\/h2>\n<p>There are two main components of the architecture:<\/p>\n<ul>\n<li>\n<p>The Authentication Front-end, which is responsible for creating a request that is added to an Azure Storage Queue.<\/p>\n<\/li>\n<li>\n<p>The AI on the edge, run by device side code: Python code that is constantly listening to the Azure Storage Queue for new requests. It picks up the requests and runs AI according to each request. 
Once the device side code detects the object, it captures an image of the detected object and posts it to Azure Storage Blob.<\/p>\n<\/li>\n<\/ul>\n<p>The underlying core of the architecture is the use of the new Azure SDKs by the Authentication Front-end and the AI running on the edge: the Authentication Front-end adds requests to the Azure Storage Queue for AI processing, and the device side Python code updates Azure Storage Blob with the captured image.<\/p>\n<h3>Control flow<\/h3>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-content\/uploads\/sites\/58\/2021\/02\/2021-02-09-two-factor-auth-jetson-azure-sdk-controlflow.png\" alt=\"Control flow\" title=\"Control flow\" \/><\/p>\n<p>At a high level, the following actions take place:<\/p>\n<ol>\n<li>The Authentication Front-end initiates the flow by completing the first-factor authentication. Once the first factor is complete, the flow is passed to the NVIDIA\u00ae Jetson Nano\u2122 device.<\/li>\n<li>The NVIDIA\u00ae Jetson Nano\u2122 device runs the custom AI model using the code described in the following sections. The result of this step is the completion of the second factor of authentication.<\/li>\n<li>Control is passed back to the Authentication Front-end, which validates the results from the NVIDIA\u00ae Jetson Nano\u2122 device.<\/li>\n<\/ol>\n<h2>Authentication Front-end<\/h2>\n<p>The role of the Authentication Front-end is to initiate the two-factor flow and interact with Azure using the new Azure SDKs.<\/p>\n<h3>Code running on Authentication Front-end<\/h3>\n<p>The code running on the Authentication Front-end is mainly composed of two controllers.<\/p>\n<p>The following describes the code for each of those controllers.<\/p>\n<h4>SendMessageController.cs<\/h4>\n<p>SendMessageController.cs&#8217;s main job is to complete the first factor of the authentication. 
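<\/p>\n<p>The request that SendMessageController.cs places on the queue is a plain pipe-delimited string of the form requestId|className|thresholdPercentage. As a quick, illustrative sketch of composing such a request (shown in Python for brevity; the helper name is ours, and the field names simply mirror the C# model used below):<\/p>\n<pre><code class=\"python\">import uuid\n\ndef build_request(class_name, threshold_percentage=70):\n    # Mirrors the C# interpolation of RequestId, ClassName and ThresholdPercentage\n    return f'{uuid.uuid4()}|{class_name}|{threshold_percentage}'\n<\/code><\/pre>\n<p>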
The code simulates the completion of the first factor by simply ensuring that the username and password are the same. In a real-world implementation, this should be done by a proper secure authentication mechanism; see the article <a href=\"https:\/\/docs.microsoft.com\/azure\/app-service\/overview-authentication-authorization\">Authentication and authorization in Azure App Service and Azure Functions<\/a> for an example. SendMessageController.cs&#8217;s second task is to queue the message up for the second factor. This is done using the new Azure SDKs.<\/p>\n<p>Here is the code snippet for SendMessageController.cs:<\/p>\n<pre><code class=\"csharp\">        [HttpPost(\"sendmessage\")]\n        public IActionResult Index()\n        {\n            string userName = string.Empty;\n            string password = string.Empty;\n\n            if (!string.IsNullOrEmpty(Request.Form[\"userName\"]))\n            {\n                userName = Request.Form[\"userName\"];\n            }\n\n            if (!string.IsNullOrEmpty(Request.Form[\"password\"]))\n            {\n                password = Request.Form[\"password\"];\n            }\n\n            \/\/ Simulation of first factor authentication presented here.\n            \/\/ For real world example visit: https:\/\/docs.microsoft.com\/azure\/app-service\/overview-authentication-authorization\n            if(!userName.Equals(password, StringComparison.InvariantCultureIgnoreCase))\n            {\n                return View(null);\n            }\n\n            var objectClassificationModel = new ObjectClassificationModel()\n            {\n                ClassName = userName,\n                RequestId = Guid.NewGuid(),\n                ThresholdPercentage = 70\n            };\n\n            \/\/ Fire-and-forget for simplicity; a production app should await this call.\n            _ = QueueMessageAsync(objectClassificationModel, storageConnectionString);\n\n            return View(objectClassificationModel);\n        }\n\n        public static async Task QueueMessageAsync(ObjectClassificationModel objectClassificationModel, string storageConnectionString)\n        {\n            string requestContent = $\"{objectClassificationModel.RequestId}|{objectClassificationModel.ClassName}|{objectClassificationModel.ThresholdPercentage}\";\n\n            \/\/ Instantiate a QueueClient which will be used to create and manipulate the queue\n            QueueClient queueClient = new QueueClient(storageConnectionString, queueName);\n\n            \/\/ Create the queue if it does not already exist\n            var createdResponse = await queueClient.CreateIfNotExistsAsync();\n            if (createdResponse != null)\n            {\n                Console.WriteLine($\"Queue created: '{queueClient.Name}'\");\n            }\n\n            await queueClient.SendMessageAsync(requestContent);\n        }\n\n<\/code><\/pre>\n<p>The code snippet above simulates the first factor by comparing the username and password. After the simulated first factor completes, the code sends a message to an Azure Storage Queue using the new Azure SDK.<\/p>\n<h4>ObjectClassificationController.cs<\/h4>\n<p>ObjectClassificationController.cs is invoked after the custom AI code on the edge has completed. 
The code validates that the request has been completed by the NVIDIA\u00ae Jetson Nano\u2122 device and then shows the captured image of the detected object.<\/p>\n<p>Here is the code snippet:<\/p>\n<pre><code class=\"csharp\">        public IActionResult Index(string requestId, string className)\n        {\n            string imageUri = string.Empty;\n            Guid requestGuid = default(Guid);\n            if (Guid.TryParse(requestId, out requestGuid))\n            {\n                BlobContainerClient blobContainerClient = new BlobContainerClient(storageConnectionString, containerName);\n                foreach (BlobItem blobItem in blobContainerClient.GetBlobs(BlobTraits.All))\n                {\n                    if (string.Equals(blobItem?.Name, $\"{requestId}\/{imageWithDetection}\", StringComparison.InvariantCultureIgnoreCase))\n                    {\n                        imageUri = $\"{blobContainerClient.Uri.AbsoluteUri}\/{blobItem.Name}\";\n                    }\n                }\n\n                \/\/ Guard against the image not having been uploaded yet.\n                if (string.IsNullOrEmpty(imageUri))\n                {\n                    return View(null);\n                }\n\n                ObjectClassificationModel objectClassificationModel = new ObjectClassificationModel()\n                {\n                    ImageUri = new Uri(imageUri),\n                    RequestId = requestGuid,\n                    ClassName = className\n                };\n\n                return View(objectClassificationModel);\n            }\n\n            return View(null);\n        }\n\n        [HttpGet(\"HasImageUploaded\")]\n        [Route(\"objectclassification\/{imageContainerGuid}\/hasimageuploaded\")]\n        public async Task&lt;IActionResult&gt; HasImageUploaded(string imageContainerGuid)\n        {\n            BlobContainerClient blobContainerClient = new BlobContainerClient(storageConnectionString, \"jetson-nano-object-classification-responses\");\n            await foreach(BlobItem blobItem in blobContainerClient.GetBlobsAsync(BlobTraits.All))\n            {\n                if (string.Equals(blobItem?.Name, $\"{imageContainerGuid}\/{imageWithDetection}\", StringComparison.InvariantCultureIgnoreCase))\n                {\n                    return new JsonResult($\"{blobContainerClient.Uri.AbsoluteUri}\/{blobItem.Name}\");\n                }\n            }\n            return new JsonResult(string.Empty);\n        }\n\n<\/code><\/pre>\n<p>The code snippet above shows two methods that use the new Azure SDK. The <strong>HasImageUploaded<\/strong> method queries Azure Blob Storage to determine whether the image has been uploaded. The <strong>Index<\/strong> method simply gets the image reference from Azure Blob Storage. For more information on how to read Azure Blob Storage using the new Azure SDK, visit <a href=\"https:\/\/docs.microsoft.com\/azure\/storage\/blobs\/storage-quickstart-blobs-dotnet\">Quickstart: Azure Blob Storage client library v12 for .NET<\/a>.<\/p>\n<p>The following steps are taken on the Authentication Front-end:<\/p>\n<ol>\n<li>The user initiates login by supplying a username and password.<\/li>\n<li>The user is authenticated on the first factor using the username and password combination.<\/li>\n<li>On successful completion of the first factor, the web interface creates a request and sends it to Azure Storage as shown below: <img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-content\/uploads\/sites\/58\/2021\/02\/2021-02-09-two-factor-auth-jetson-azure-sdk-queue-message.png\" alt=\"Azure Storage Queue view\" title=\"Azure Storage Queue detail view\" \/> <img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-content\/uploads\/sites\/58\/2021\/02\/2021-02-09-two-factor-auth-jetson-azure-sdk-queue-message-detail.png\" alt=\"Azure Storage Queue view\" title=\"Azure Storage Queue detail view\" \/><\/li>\n<li>The NVIDIA\u00ae Jetson Nano\u2122 device, which is listening to the Azure Storage Queue, initiates and completes the second factor.<\/li>\n<li>Once the second factor is completed, the 
NVIDIA\u00ae Jetson Nano\u2122 device posts the captured image for the second factor to Azure Storage Blob as shown below: <img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-content\/uploads\/sites\/58\/2021\/02\/2021-02-09-two-factor-auth-jetson-azure-sdk-storage-view.png\" alt=\"Azure Blob storage view\" title=\"Azure Blob storage view\" \/><\/li>\n<li>The web interface shows the captured image, completing the flow as shown below: <img decoding=\"async\" src=\".\/images\/posts\/2021-02-09-two-factor-auth-jetson-azure-sdk-gil-detected-image.PNG\" alt=\"Second factor\" title=\"Second factor\" \/><\/li>\n<\/ol>\n<h2>Running AI on the Edge<\/h2>\n<h3>Device pre-requisites<\/h3>\n<ol>\n<li>NVIDIA\u00ae Jetson Nano\u2122 device with a camera attached to capture video.<\/li>\n<li>A custom pre-trained model deployed on the device.<\/li>\n<li>The path to the custom model file (.onnx file). This is passed as the --model parameter to the device side command. For this tutorial we have prepared a custom model and saved it as &#8220;~\/gil_background_hulk\/resenet18.onnx&#8221;.<\/li>\n<li>The path to the classification labels file (labels.txt). This is passed as the --labels parameter to the device side command.<\/li>\n<li>The class name of the target object that needs to be detected. This is passed as the --classNameForTargetObject parameter.<\/li>\n<li>Azure IoT Hub libraries for Python. Install the azure-iot-device package for IoTHubDeviceClient.<\/li>\n<\/ol>\n<pre><code class=\"bash\">pip install azure-iot-device\n<\/code><\/pre>\n<h3>Code running AI on the Edge<\/h3>\n<p>If we look at the technical specifications for the NVIDIA\u00ae Jetson Nano\u2122 device, we will notice that it is based on the ARM architecture running Ubuntu (in my case, release 18.04 LTS). With that knowledge it became clear that Python would be a good choice of language on the device side. 
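<\/p>\n<p>Before walking through the full listing, note the contract between the two sides: the device receives the pipe-delimited request string that the Authentication Front-end queued and splits it back into its parts. Here is a minimal, illustrative sketch of that parsing step (the helper name is ours, not part of any SDK):<\/p>\n<pre><code class=\"python\">def parse_request(content):\n    # Request format produced by the Authentication Front-end:\n    # requestId|className|thresholdPercentage\n    correlation_id, class_name, threshold = content.split('|')\n    return correlation_id, class_name, int(threshold)\n<\/code><\/pre>\n<p>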
The device side code is shown below:<\/p>\n<pre><code class=\"python\">#!\/usr\/bin\/python\n\nimport jetson.inference\nimport jetson.utils\n\nimport argparse\nimport sys\n\nimport os\nimport asyncio\nfrom azure.iot.device.aio import IoTHubDeviceClient\nfrom azure.storage.queue.aio import QueueClient\nfrom azure.storage.blob.aio import BlobServiceClient, BlobClient, ContainerClient\n\n# A helper class to support async blob and queue actions.\nclass StorageHelperAsync:\n    async def block_blob_upload_async(self, upload_path, savedFile):\n        blob_service_client = BlobServiceClient.from_connection_string(\n            os.getenv(\"STORAGE_CONNECTION_STRING\")\n        )\n        container_name = \"jetson-nano-object-classification-responses\"\n\n        async with blob_service_client:\n            # Instantiate a new ContainerClient\n            container_client = blob_service_client.get_container_client(container_name)\n\n            # Instantiate a new BlobClient\n            blob_client = container_client.get_blob_client(blob=upload_path)\n\n            # Upload content to block blob\n            with open(savedFile, \"rb\") as data:\n                await blob_client.upload_blob(data)\n                # [END upload_a_blob]\n\n    # Code for listening to Storage queue\n    async def queue_receive_message_async(self):\n        # from azure.storage.queue.aio import QueueClient\n        queue_client = QueueClient.from_connection_string(\n            os.getenv(\"STORAGE_CONNECTION_STRING\"),\n            \"jetson-nano-object-classification-requests\",\n        )\n\n        async with queue_client:\n            response = queue_client.receive_messages(messages_per_page=1)\n            async for message in response:\n                queue_message = message\n                await queue_client.delete_message(message)\n                return queue_message\n\n\nasync def main():\n\n    # Code for object detection\n    # parse the command line\n    parser = 
argparse.ArgumentParser(\n        description=\"Classify an object from a live camera feed; once it is successfully classified, a message is sent to Azure IoT Hub\",\n        formatter_class=argparse.RawTextHelpFormatter,\n        epilog=jetson.inference.imageNet.Usage(),\n    )\n    parser.add_argument(\n        \"input_URI\", type=str, default=\"\", nargs=\"?\", help=\"URI of the input stream\"\n    )\n    parser.add_argument(\n        \"output_URI\", type=str, default=\"\", nargs=\"?\", help=\"URI of the output stream\"\n    )\n    parser.add_argument(\n        \"--network\",\n        type=str,\n        default=\"googlenet\",\n        help=\"Pre-trained model to load (see below for options)\",\n    )\n    parser.add_argument(\n        \"--camera\",\n        type=str,\n        default=\"0\",\n        help=\"Index of the MIPI CSI camera to use (e.g. CSI camera 0)\\nor for V4L2 cameras, the \/dev\/video device to use.\\nby default, MIPI CSI camera 0 will be used.\",\n    )\n    parser.add_argument(\n        \"--width\",\n        type=int,\n        default=1280,\n        help=\"Desired width of camera stream (default is 1280 pixels)\",\n    )\n    parser.add_argument(\n        \"--height\",\n        type=int,\n        default=720,\n        help=\"Desired height of camera stream (default is 720 pixels)\",\n    )\n    parser.add_argument(\n        \"--classNameForTargetObject\",\n        type=str,\n        default=\"\",\n        help=\"Class name of the object that is required to be detected. 
Once the object is detected and the threshold has been crossed, a message is sent to Azure IoT Hub\",\n    )\n    parser.add_argument(\n        \"--detectionThreshold\",\n        type=int,\n        default=90,\n        help=\"The threshold value 'in percentage' for object detection\",\n    )\n\n    try:\n        opt = parser.parse_known_args()[0]\n    except:\n        parser.print_help()\n        sys.exit(0)\n\n    # load the recognition network\n    net = jetson.inference.imageNet(opt.network, sys.argv)\n\n    # create the camera and display\n    font = jetson.utils.cudaFont()\n    camera = jetson.utils.gstCamera(opt.width, opt.height, opt.camera)\n    display = jetson.utils.glDisplay()\n    input = jetson.utils.videoSource(opt.input_URI, argv=sys.argv)\n\n    # Fetch the connection string from an environment variable\n    conn_str = os.getenv(\"IOTHUB_DEVICE_CONNECTION_STRING\")\n\n    device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)\n    await device_client.connect()\n\n    still_looking = True\n    # process frames until user exits\n    while still_looking:\n        storage_helper = StorageHelperAsync()\n        queue_message = await storage_helper.queue_receive_message_async()\n\n        print(\"Waiting for request queue messages\")\n        print(queue_message)\n        if queue_message:\n            has_new_message = True\n            queue_message_array = queue_message.content.split(\"|\")\n            request_content = queue_message.content\n            correlation_id = queue_message_array[0]\n            class_for_object_detection = queue_message_array[1]\n            threshold_for_object_detection = int(queue_message_array[2])\n\n            while has_new_message:\n                # capture the image\n                # img, width, height = camera.CaptureRGBA()\n                img = input.Capture()\n\n                # classify the image\n                class_idx, confidence = net.Classify(img)\n\n            
    # find the object description\n                class_desc = net.GetClassDesc(class_idx)\n\n                # overlay the result on the image\n                font.OverlayText(\n                    img,\n                    img.width,\n                    img.height,\n                    \"{:05.2f}% {:s}\".format(confidence * 100, class_desc),\n                    15,\n                    50,\n                    font.Green,\n                    font.Gray40,\n                )\n\n                # render the image\n                display.RenderOnce(img, img.width, img.height)\n\n                # update the title bar\n                display.SetTitle(\n                    \"{:s} | Network {:.0f} FPS | Looking for {:s}\".format(\n                        net.GetNetworkName(),\n                        net.GetNetworkFPS(),\n                        opt.classNameForTargetObject,\n                    )\n                )\n\n                # print out performance info\n                net.PrintProfilerTimes()\n                if (\n                    class_desc == class_for_object_detection\n                    and (confidence * 100) &gt;= threshold_for_object_detection\n                ):\n                    message = request_content + \"|\" + str(confidence * 100)\n                    font.OverlayText(\n                        img,\n                        img.width,\n                        img.height,\n                        \"Found {:s} at {:05.2f}% confidence\".format(\n                            class_desc, confidence * 100\n                        ),\n                        775,\n                        50,\n                        font.Blue,\n                        font.Gray40,\n                    )\n                    display.RenderOnce(img, img.width, img.height)\n                    savedFile = \"imageWithDetection.jpg\"\n                    jetson.utils.saveImageRGBA(savedFile, img, img.width, img.height)\n\n                    # Create the 
upload path for the blob, using the correlation id as the folder name\n                    upload_path = \"\/\".join([correlation_id, savedFile])\n\n                    # Upload the captured image, then notify Azure IoT Hub\n                    await storage_helper.block_blob_upload_async(upload_path, savedFile)\n\n                    await device_client.send_message(message)\n                    still_looking = True\n                    has_new_message = False\n\n    await device_client.disconnect()\n\n\nif __name__ == \"__main__\":\n    # asyncio.run(main())\n    loop = asyncio.get_event_loop()\n    loop.run_until_complete(main())\n    loop.close()\n<\/code><\/pre>\n<p>Here is the link to the full code to try out: <a href=\"https:\/\/gist.github.com\/nabeelmsft\/f079065d98d39f271b205b71bc8c48bc\">https:\/\/gist.github.com\/nabeelmsft\/f079065d98d39f271b205b71bc8c48bc<\/a><\/p>\n<h4>Code flow<\/h4>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-content\/uploads\/sites\/58\/2021\/02\/2021-02-09-two-factor-auth-jetson-azure-sdk-detectobjectflow.png\" alt=\"Device side code flow\" title=\"Device side code flow\" \/><\/p>\n<p>The following actions take place in the Python code running on the device side:<\/p>\n<ol>\n<li>The device code constantly reads requests coming to the Azure Storage Queue.<\/li>\n<li>Once a request is received, the code extracts which object to detect and what threshold to use for object detection. The example mentioned in the diagram shows the message as: 0000-0000-0000-0000|hulk|80. 
The code will extract &#8220;hulk&#8221; as the object that needs to be detected and &#8220;80&#8221; as the threshold value. This format is just an example used to provide input values to the device side code.<\/li>\n<li>Using the custom AI model (example: ~\/gil_background_hulk\/resenet18.onnx) running on the Jetson Nano device, the code searches for the object specified in the request.<\/li>\n<li>As soon as the object is detected, the Python code running on the Jetson Nano device posts the captured image to Azure Blob storage.<\/li>\n<li>In addition, the code running on the Jetson Nano device sends a message to Azure IoT Hub reporting a correct match for the request.<\/li>\n<\/ol>\n<p>Once the device side code completes the flow, the image of the detected object has been posted to Azure Blob storage and a message has been sent to Azure IoT Hub. The web interface then takes control and completes the rest of the steps.<\/p>\n<h2>Conclusion<\/h2>\n<p>In this post we have seen how simple it is to run AI on the edge using the NVIDIA\u00ae Jetson Nano\u2122 device and the Azure platform. The Azure SDKs are designed to work great with Python on Linux-based IoT devices. We have also seen how the Azure SDKs play the role of stitching different components together into a complete end-to-end solution.<\/p>\n<p><!-- TIPS:\n- Use `SDK` when talking about all of the client libraries.\n- Use `Client libraries\/ry` when talking about individual libraries.\n- Make sure all links do not have Locale, i.e remove `en-us` from all links.\n- All image links need to start with `.\/images\/posts\/*.png` and need to match exact case of the file.\n- Avoid using `here` for link text. 
Use the title of the link\/file.\n- Please include summary at the end.\n--><\/p>\n<p><!-- FOOTER: DO NOT EDIT OR REMOVE --><\/p>\n<p><div  class=\"d-flex justify-content-center\"><a class=\"cta_button_link btn-primary mb-24\" href=\"https:\/\/aka.ms\/azsdk\/releases\" target=\"_blank\">Azure SDK Releases<\/a><\/div><\/p>\n<h2>Azure SDK Blog Contributions<\/h2>\n<p>Thank you for reading this Azure SDK blog post! We hope that you learned something new and welcome you to share this post. We are open to Azure SDK blog contributions. Please contact us at <a href=\"&#109;&#x61;&#105;&#x6c;&#116;&#x6f;&#58;&#x61;z&#115;&#x64;&#107;&#x62;&#108;&#x6f;&#103;&#x40;&#109;&#105;&#x63;&#114;&#x6f;&#115;&#x6f;&#102;&#x74;&#46;&#x63;o&#109;\">&#x61;z&#115;&#x64;&#107;&#x62;&#108;&#x6f;&#103;&#x40;&#109;&#105;&#x63;&#114;&#x6f;&#115;&#x6f;&#102;&#x74;&#46;&#x63;o&#109;<\/a> with your topic and we&#8217;ll get you set up as a guest blogger.<\/p>\n<h2>Azure SDK Links<\/h2>\n<ul>\n<li>Azure SDK Website: <a href=\"https:\/\/aka.ms\/azsdk\">aka.ms\/azsdk<\/a><\/li>\n<li>Azure SDK Intro (3 minute video): <a href=\"https:\/\/aka.ms\/azsdk\/intro\">aka.ms\/azsdk\/intro<\/a><\/li>\n<li>Azure SDK Intro Deck (PowerPoint deck): <a href=\"https:\/\/aka.ms\/azsdk\/intro\/deck\">aka.ms\/azsdk\/intro\/deck<\/a><\/li>\n<li>Azure SDK Releases: <a href=\"https:\/\/aka.ms\/azsdk\/releases\">aka.ms\/azsdk\/releases<\/a><\/li>\n<li>Azure SDK Blog: <a href=\"https:\/\/aka.ms\/azsdk\/blog\">aka.ms\/azsdk\/blog<\/a><\/li>\n<li>Azure SDK Twitter: <a href=\"https:\/\/twitter.com\/AzureSDK\">twitter.com\/AzureSDK<\/a><\/li>\n<li>Azure SDK Design Guidelines: <a href=\"https:\/\/aka.ms\/azsdk\/guide\">aka.ms\/azsdk\/guide<\/a><\/li>\n<li>Azure SDKs &amp; Tools: <a href=\"https:\/\/azure.microsoft.com\/downloads\">azure.microsoft.com\/downloads<\/a><\/li>\n<li>Azure SDK Central Repository: <a href=\"https:\/\/github.com\/azure\/azure-sdk#azure-sdk\">github.com\/azure\/azure-sdk<\/a><\/li>\n<li>Azure SDK for 
.NET: <a href=\"https:\/\/github.com\/azure\/azure-sdk-for-net\">github.com\/azure\/azure-sdk-for-net<\/a><\/li>\n<li>Azure SDK for Java: <a href=\"https:\/\/github.com\/azure\/azure-sdk-for-java\">github.com\/azure\/azure-sdk-for-java<\/a><\/li>\n<li>Azure SDK for Python: <a href=\"https:\/\/github.com\/azure\/azure-sdk-for-python\">github.com\/azure\/azure-sdk-for-python<\/a><\/li>\n<li>Azure SDK for JavaScript\/TypeScript: <a href=\"https:\/\/github.com\/azure\/azure-sdk-for-js\">github.com\/azure\/azure-sdk-for-js<\/a><\/li>\n<li>Azure SDK for Android: <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-android\">github.com\/Azure\/azure-sdk-for-android<\/a><\/li>\n<li>Azure SDK for iOS: <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-ios\">github.com\/Azure\/azure-sdk-for-ios<\/a><\/li>\n<li>Azure SDK for Go: <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-go\">github.com\/Azure\/azure-sdk-for-go<\/a><\/li>\n<li>Azure SDK for C: <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-c\">github.com\/Azure\/azure-sdk-for-c<\/a><\/li>\n<li>Azure SDK for C++: <a href=\"https:\/\/github.com\/Azure\/azure-sdk-for-cpp\">github.com\/Azure\/azure-sdk-for-cpp<\/a><\/li>\n<\/ul>\n<p><!-- FOOTER: DO NOT EDIT OR REMOVE --><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Using NVIDIA\u00ae Jetson Nano\u2122 devices running AI on edge combined with the power of Azure, we can create an end-to-end AI on edge solution. The new Azure SDK provides the communication mechanism between the edge device and Azure Storage. 
<\/p>\n","protected":false},"author":11172,"featured_media":999,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[786,750,788,787],"class_list":["post-996","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-azure-sdk","tag-ai","tag-azure-sdk","tag-edge-computing","tag-iot"],"acf":[],"blog_post_summary":"<p>Using NVIDIA\u00ae Jetson Nano\u2122 devices running AI on edge combined with the power of Azure, we can create an end-to-end AI on edge solution. The new Azure SDK provides the communication mechanism between the edge device and Azure Storage. <\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/posts\/996","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/users\/11172"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/comments?post=996"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/posts\/996\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/media\/999"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/media?parent=996"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/categories?post=996"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/azure-sdk\/wp-json\/wp\/v2\/tags?post=996"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}