{"id":5217,"date":"2017-10-24T14:23:13","date_gmt":"2017-10-24T21:23:13","guid":{"rendered":"\/developerblog\/?p=5217"},"modified":"2020-03-14T19:49:09","modified_gmt":"2020-03-15T02:49:09","slug":"bird-detection-with-azure-ml-workbench","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/ise\/bird-detection-with-azure-ml-workbench\/","title":{"rendered":"Bird Detection with Azure ML Workbench"},"content":{"rendered":"<h2>Introduction<\/h2>\n<p>Estimation of population trends, detection of rare species, and impact assessments\u00a0are important tasks for biologists. Recently, our team had the pleasure of working with <a href=\"http:\/\/conservationmetrics.com\">Conservation Metrics<\/a>, a services provider for automated wildlife monitoring, on a project to identify red-legged kittiwakes in photos from game cameras. Our work\u00a0included labeling data, model training on the\u00a0<a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/diving-deep-into-what-s-new-with-azure-machine-learning\/\">Azure Machine Learning Workbench<\/a> platform using Microsoft Cognitive Toolkit (CNTK) and Tensorflow, and deploying a prediction web service.<\/p>\n<p>In this code story, we&#8217;ll discuss different aspects of our solution, including:<\/p>\n<ol>\n<li><a href=\"#data_link\">Data<\/a> used in the project and how we <a href=\"#imglabel_link\">labeled<\/a> it<\/li>\n<li><a href=\"#objdetintro_link\">Object detection<\/a>\u00a0and <a href=\"#amlwbintro_link\">Azure ML Workbench<\/a><\/li>\n<li>Training the Birds Detection Model with <a href=\"#traincntk_link\">CNTK<\/a> and <a href=\"#traintf_link\">Tensorflow<\/a><\/li>\n<li><a href=\"#depl_link\">Deployment<\/a> of web services<\/li>\n<li><a href=\"#demo_link\">Demo<\/a> app setup<\/li>\n<\/ol>\n<h2><a id=\"data_link\"><\/a><b>Data<\/b><\/h2>\n<p>Below is a\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=nt__1605oTM&amp;feature=youtu.be\">video<\/a>\u00a0(provided by Abram Fleishman of San Jose State 
University and <a href=\"http:\/\/conservationmetrics.com\">Conservation Metrics, Inc<\/a>) capturing the native habitat of red-legged kittiwakes, the species we worked on detecting. Biologists use various pieces of equipment, including climbing gear, to install cameras on these cliffs and take daytime and nighttime photos of birds.<\/p>\n<p><iframe title=\"Red-legged Kittiwakes, St. George Island, Alaska\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/nt__1605oTM?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>We used these photos to train the model, and the <a href=\"https:\/\/github.com\/CatalystCode\/VoTT\">Visual Object Tagging Tool (VOTT)<\/a> was helpful for labeling images. It took about 20 hours to label the data, and close to 12,000 bounding boxes were marked.<\/p>\n<p><img decoding=\"async\" class=\"size-full wp-image-5248 aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/kittywake_data.png\" alt=\"\" width=\"854\" height=\"411\" \/><\/p>\n<p>The labeled data can be found in <a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/tree\/master\/assets\">this GitHub repo<\/a>.<\/p>\n<h3>Data Credit<\/h3>\n<p>This data was collected by <a href=\"https:\/\/rachaelorben.dunked.com\/red-legged-kittiwake-incubation\">Dr. 
Rachael Orben<\/a> of Oregon State University and Abram Fleishman of San Jose State University and <a href=\"http:\/\/conservationmetrics.com\">Conservation Metrics, Inc.<\/a> as part of a large project investigating early breeding season responses of red-legged kittiwakes to changes in prey availability and linkages to the non-breeding stage in the Bering Sea, Alaska.<\/p>\n<h2><a id=\"objdetintro_link\"><\/a><b>Introduction to Object Detection<\/b><\/h2>\n<p>For a walkthrough of object detection techniques, please refer to this <a href=\"https:\/\/blog.athelas.com\/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4\">blog post on CNNs<\/a>. Faster R-CNNs (<b>R<\/b>egion proposals with <b>C<\/b>onvolutional <b>N<\/b>eural <b>N<\/b>etworks) are a relatively new approach (the first <a href=\"https:\/\/arxiv.org\/abs\/1506.01497\">paper<\/a> on this approach was published in 2015). They have been widely adopted in the machine learning community and now have implementations in most popular deep neural network (DNN) frameworks, including PyTorch, CNTK, Tensorflow, Caffe, and others.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-5249 size-full\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Faster-RCNN-e1507269450696.png\" alt=\"Faster R-CNN\" width=\"400\" height=\"394\" \/><\/p>\n<p>In this post, we will cover the Faster R-CNN object detection APIs provided by CNTK and Tensorflow.<\/p>\n<h2><a id=\"amlwbintro_link\"><\/a><b>Azure Machine Learning Workbench<\/b><\/h2>\n<p>For model training and the creation of prediction web services, we explored the recently announced <a href=\"https:\/\/azure.microsoft.com\/en-us\/blog\/diving-deep-into-what-s-new-with-azure-machine-learning\/\">Azure Machine Learning Workbench<\/a>, which is an analytics toolset enabling data scientists to prepare data, run machine learning training experiments and deploy 
models at cloud scale (see\u00a0<a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/machine-learning\/preview\/quickstart-installation\">setup and installation<\/a> documentation).<\/p>\n<p>As we&#8217;ll be working with an image-related scenario, we&#8217;ve used the CNTK and Tensorflow <a href=\"https:\/\/en.wikipedia.org\/wiki\/MNIST_database\">MNIST<\/a> handwritten digit classification templates that ship with the tool as a starting point for our experimentation.\u00a0<img decoding=\"async\" class=\"aligncenter wp-image-5282 size-full\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/mnist_template-e1507269373689.png\" alt=\"mnist template\" width=\"400\" height=\"464\" \/><\/p>\n<p>DNN training usually benefits from running on GPUs, which allow the required matrix operations to run much faster. We provisioned Data Science VMs with GPUs and used the remote Docker execution environment that Azure ML Workbench provides (see <a href=\"https:\/\/docs.microsoft.com\/fi-fi\/azure\/machine-learning\/preview\/how-to-use-gpu\">details<\/a>\u00a0and more information on <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/machine-learning\/preview\/experiment-execution-configuration\">execution targets<\/a>) for training models.<\/p>\n<p>Azure ML logs the results of jobs (experiments) in a <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/machine-learning\/preview\/how-to-use-run-history-model-metrics\">run history<\/a>. This capability was quite useful as we experimented with various model parameter settings, since it gives a visual means to select the model with the best performance. 
Note that you need to add instrumentation using the <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/machine-learning\/preview\/reference-logging-api\">Azure ML Logging API<\/a> to your training\/evaluation code to track metrics of interest (for example, classification accuracy).<\/p>\n<h2><a id=\"imglabel_link\"><\/a>Image Labeling and Exporting<\/h2>\n<p>We used the <a href=\"https:\/\/github.com\/CatalystCode\/VoTT\/releases\">VOTT<\/a> utility (available for Windows and MacOS) to label data and export it to CNTK and Tensorflow (Pascal VOC) formats.<\/p>\n<p>The tool provides a friendly interface for identifying and tagging regions of interest in images and videos. Using it is simple: collect the images in a folder, launch VOTT, point it to the image dataset, and proceed to label the regions of interest.<\/p>\n<h2><b><img decoding=\"async\" class=\"wp-image-5268 size-large aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/vott2-1024x667.jpg\" alt=\"\" width=\"780\" height=\"508\" \/><\/b><\/h2>\n<p>When finished, click on Object Detection, then Export Tags to export to CNTK and Tensorflow.<\/p>\n<p> <img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/vott3-1024x668-1.jpg\" alt=\"Exporting tags in VOTT\" width=\"1024\" height=\"668\" class=\"aligncenter size-full wp-image-10881\" srcset=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2017\/10\/vott3-1024x668-1.jpg 1024w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2017\/10\/vott3-1024x668-1-300x196.jpg 300w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2017\/10\/vott3-1024x668-1-768x501.jpg 768w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p>For Tensorflow, Pascal VOC is the export format, so we converted the data to TFRecords to be able to use it in training and evaluation. 
For details, see the section on Tensorflow below.<\/p>\n<h2><a id=\"traincntk_link\"><\/a>Training Birds Detection Model with CNTK<\/h2>\n<p>As mentioned in the previous section, we used the popular Faster R-CNN approach in our Birds Detection model. You may wish to read CNTK&#8217;s <a href=\"https:\/\/docs.microsoft.com\/en-us\/cognitive-toolkit\/object-detection-using-faster-r-cnn\">documentation about object detection using Faster R-CNN<\/a> for more information. In this section, we are going to focus on two parts of our approach:<\/p>\n<ol>\n<li>Using Azure ML Workbench to start training on a remote VM<\/li>\n<li>Hyperparameter tuning through Azure ML Workbench<\/li>\n<\/ol>\n<p><strong>Using Azure ML Workbench for Training on a Remote VM<\/strong><\/p>\n<p>Hyperparameter tuning is a major part of the effort to build production-ready machine learning (or deep learning) models once a first draft model shows promising results. Our challenge here is to accomplish this tuning efficiently, using Azure ML Workbench to facilitate the process.<\/p>\n<p>Parameter tuning requires a large number of training experiments, typically a time-consuming process. One approach is to train on a powerful local machine or cluster. Our approach, however, is to take training into the cloud by using Docker containers on remote (virtual) machines. The key advantage is that we can now spin off as many containers as we want to tune parameters in parallel. Based on <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/machine-learning\/preview\/tutorial-classifying-iris-part-2\">this documentation for Azure ML<\/a>, we register each VM as a compute target for the experiment. 
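With each VM registered as a compute target, a parameter sweep then amounts to submitting one run per configuration. Below is a minimal Python sketch of enumerating such a grid; the parameter names and values are illustrative, not the exact ones we swept.

```python
from itertools import product

# Hypothetical hyperparameter grid; each combination would get its own
# `az ml experiment submit -c <target> ...` run in its own container.
grid = {
    "learning_rate": [0.001, 0.0001],
    "bias_lr_mult": [1.0, 2.0],
}

# One config dict per point in the Cartesian product of the value lists.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
for i, cfg in enumerate(configs):
    print(i, cfg)
```

Fanning runs out this way is what makes the run-history comparison described later worthwhile: every configuration ends up as a separately logged experiment.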
Note that there are constraints on password characters; for example, using &#8220;*&#8221; in the password will produce an error.<\/p>\n<pre class=\"lang:sh decode:true\">az ml computetarget attach --name \"my_dsvm\" --address \"my_dsvm_ip_address\" --username \"my_name\" --password \"my_password\" --type remotedocker\r\n<\/pre>\n<p class=\"\">After the command is executed, <em>myvm.compute<\/em> and <em>myvm.runconfig<\/em> are created in the <em>aml_config<\/em> folder. Since our task is better suited to a GPU machine, we have to make the following modifications:<\/p>\n<h4>In myvm.compute<\/h4>\n<pre class=\"lang:sh decode:true\">baseDockerImage: microsoft\/mmlspark:plus-gpu-0.7.91\r\nnvidiaDocker: true\r\n<\/pre>\n<h4>In myvm.runconfig<\/h4>\n<pre class=\"lang:sh decode:true\">EnvironmentVariables:\r\n    \"STORAGE_ACCOUNT_NAME\": \r\n    \"STORAGE_ACCOUNT_KEY\": \r\nFramework: Python\r\nPrepareEnvironment: true\r\n<\/pre>\n<p>We used Azure storage for storing training data, pre-trained models and model checkpoints. The storage account credentials are provided as <em>EnvironmentVariables<\/em>. 
Be sure to include the necessary packages in <em>conda_dependencies.yml<\/em>.<\/p>\n<p>Now we can execute the command to start preparing the machine:<\/p>\n<pre class=\"lang:sh decode:true\">az ml experiment prepare -c myvm\r\n<\/pre>\n<p>followed by training our object detection model:<\/p>\n<pre class=\"lang:sh decode:true\">az ml experiment submit -c myvm Detection\/FasterRCNN\/run_faster_rcnn.py\r\n..\r\n..\r\n..\r\nEvaluating Faster R-CNN model for 53 images.\r\nNumber of rois before non-maximum suppression: 8099\r\nNumber of rois  after non-maximum suppression: 1871\r\nAP for       Kittiwake = 0.7544\r\nMean AP = 0.7544\r\n<\/pre>\n<p><strong>Hyperparameter Tuning with Azure ML Workbench<\/strong><\/p>\n<p>With the help of Azure ML Workbench, it\u2019s easy to log hyperparameters and different performance metrics while spinning up several containers to run in parallel (read more about <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/machine-learning\/preview\/how-to-use-run-history-model-metrics\">logging detailed information<\/a> in the documentation).<\/p>\n<p>The first thing to try is different pre-trained base models. At the time of writing this post, CNTK\u2019s Faster R-CNN API supported two base models: <a href=\"https:\/\/papers.nips.cc\/paper\/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf\"><strong>AlexNet<\/strong><\/a> and <a href=\"https:\/\/arxiv.org\/pdf\/1409.1556.pdf\"><strong>VGG16<\/strong><\/a>. We can use these pre-trained models to extract image features. Though these base models are trained on different datasets, like <a href=\"http:\/\/www.image-net.org\/\">ImageNet<\/a>, low- and mid-level image features are common across applications and are therefore shareable. This phenomenon is called &#8220;<strong>transfer learning<\/strong>.&#8221; AlexNet has 5 CONV (convolution) layers while VGG16 has 13 CONV layers. 
In terms of the number of trainable parameters, VGG16 has about 138 million, more than twice as many as AlexNet; we used VGG16 as our base model here. The following are the VGG16 hyperparameters we optimized to achieve the best performance on our evaluation set.<\/p>\n<p>In Detection\/FasterRCNN\/FasterRCNN_config.py:<\/p>\n<pre class=\"lang:default decode:true\"># Learning parameters\r\n__C.CNTK.L2_REG_WEIGHT = 0.0005\r\n__C.CNTK.MOMENTUM_PER_MB = 0.9\r\n# The learning rate multiplier for all bias weights\r\n__C.CNTK.BIAS_LR_MULT = 2.0\r\n<\/pre>\n<p>In Detection\/utils\/configs\/VGG16_config.py:<\/p>\n<pre class=\"lang:default decode:true\">__C.MODEL.E2E_LR_FACTOR = 1.0\r\n__C.MODEL.RPN_LR_FACTOR = 1.0\r\n__C.MODEL.FRCN_LR_FACTOR = 1.0\r\n<\/pre>\n<p>Azure ML Workbench definitely makes visualizing and comparing different parameter configurations easier.<\/p>\n<p><img decoding=\"async\" class=\"size-large wp-image-5287 aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Screen-Shot-2017-10-07-at-12.26.46-AM.png\" alt=\"\" width=\"780\" height=\"424\" \/><\/p>\n<h3>mAP using VGG16 base model<\/h3>\n<pre class=\"lang:default decode:true\">Evaluating Faster R-CNN model for 53 images.\r\nNumber of rois before non-maximum suppression: 6998\r\nNumber of rois  after non-maximum suppression: 2240\r\nAP for       Kittiwake = 0.8204\r\nMean AP = 0.8204<\/pre>\n<p>Please see the <a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/tree\/master\/CNTK_faster-rcnn\">GitHub repo<\/a> for the implementation.<\/p>\n<h2><a id=\"traintf_link\"><\/a>Training Birds Detection Model with Tensorflow<\/h2>\n<p>Google recently released a powerful set of <a href=\"https:\/\/research.googleblog.com\/2017\/06\/supercharge-your-computer-vision-models.html\">object detection APIs<\/a>. 
We used their documentation on how to train a pet detector with Google&#8217;s <a href=\"https:\/\/cloud.google.com\/blog\/big-data\/2017\/06\/training-an-object-detector-using-cloud-machine-learning-engine\">Cloud Machine Learning Engine<\/a> as inspiration for our project to train our kittiwake bird detection model on Azure ML Workbench. The Tensorflow Object Detection API has a variety of <a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/object_detection\/g3doc\/detection_model_zoo.md\">models pre-trained<\/a> on the <a href=\"http:\/\/mscoco.org\/\">COCO dataset<\/a>. In our experiments, we used ResNet-101 (a <a href=\"https:\/\/arxiv.org\/abs\/1512.03385\">Deep Residual Network<\/a> with 101 layers) as a base model and used the <a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/object_detection\/samples\/configs\/faster_rcnn_resnet101_pets.config\">pets detection sample config<\/a> as a starting point for our object detection training configuration.<\/p>\n<p><a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/tree\/master\/Tensorflow-Object-Detection\">This repository<\/a> contains the scripts we used for training object detection models with Azure ML Workbench and Tensorflow.<\/p>\n<h4>Setting Up Training<\/h4>\n<p><strong>Step 1: <\/strong>Prepare data in the TFRecords format required by Tensorflow&#8217;s Object Detection API. This approach requires that we convert the default output provided by the <a href=\"https:\/\/github.com\/olgaliak\/VoTT\/releases\">VOTT tool<\/a>. 
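Since VOTT's Pascal VOC export produces one XML annotation file per image, it can be useful to sanity-check the labeled boxes before conversion. Here is a minimal sketch using only the Python standard library; the annotation content below is illustrative, not taken from our dataset:

```python
import xml.etree.ElementTree as ET

# A minimal VOC-style annotation, as produced per image by a Pascal VOC export.
annotation = """
<annotation>
  <filename>img_0001.jpg</filename>
  <object>
    <name>Kittiwake</name>
    <bndbox><xmin>246</xmin><ymin>414</ymin><xmax>285</xmax><ymax>466</ymax></bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(annotation)
boxes = []
for obj in root.iter("object"):
    bb = obj.find("bndbox")
    # Collect (label, [xmin, ymin, xmax, ymax]) per tagged object.
    boxes.append((obj.findtext("name"),
                  [int(bb.findtext(tag)) for tag in ("xmin", "ymin", "xmax", "ymax")]))
print(boxes)  # → [('Kittiwake', [246, 414, 285, 466])]
```

A quick pass like this over the export folder catches empty or out-of-bounds boxes before they reach the TFRecord conversion step.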
See details in the generic converter, <a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/blob\/master\/Tensorflow-Object-Detection\/create_pascal_tf_record.py\">create_pascal_tf_record.py<\/a>.<\/p>\n<pre class=\"lang:sh decode:true\">python create_pascal_tf_record.py \r\n    --label_map_path=\/data\/pascal_label_map.pbtxt \r\n    --data_dir=\/data\/ \r\n    --output_path=\/data\/out\/pascal_train.record \r\n    --set=train\r\n\r\npython create_pascal_tf_record.py \r\n    --label_map_path=\/data\/pascal_label_map.pbtxt \r\n    --data_dir=\/data\/ \r\n    --output_path=\/data\/out\/pascal_val.record \r\n    --set=val\r\n<\/pre>\n<p><strong>Step 2:<\/strong> Package the Tensorflow Object Detection and Slim code so they can be installed later in the Docker image used for experimentation. Here are the steps from Tensorflow&#8217;s Object Detection <a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/object_detection\/g3doc\/running_pets.md\">documentation<\/a>:<\/p>\n<pre class=\"\"># From tensorflow\/models\/research\/\r\npython setup.py sdist\r\n(cd slim &amp;&amp; python setup.py sdist)<\/pre>\n<p>Next, move the produced tar files to a location available to your experimentation job (for example, blob storage) and put a link in the conda_dependencies.yaml for your experiment.<\/p>\n<pre class=\"\">dependencies:\r\n-python=3.5.2\r\n-tensorflow-gpu\r\n-pip:\r\n  #... 
More dependencies here\u2026\r\n  #TF Object Detection\r\n  -<a href=\"https:\/\/olgalidata.blob.core.windows.net\/tfobj\/object_detection-0.1_3.tar.gz\">https:\/\/olgalidata.blob.core.windows.net\/tfobj\/object_detection-0.1_3.tar.gz<\/a>\r\n  -<a href=\"https:\/\/olgalidata.blob.core.windows.net\/tfobj\/slim-0.1.tar.gz\">https:\/\/olgalidata.blob.core.windows.net\/tfobj\/slim-0.1.tar.gz<\/a><\/pre>\n<p><strong>Step 3<\/strong>: In your experiment drive script, add the import:<\/p>\n<pre>from object_detection.train import main as training_module<\/pre>\n<p>Then you can invoke the training routine in your code with <em>training_module(_)<\/em>.<\/p>\n<h3>Training and Evaluation Flow<\/h3>\n<p>The Tensorflow Object Detection API assumes that you will run training and evaluation (verification of how well the model is performing so far) as <a href=\"https:\/\/github.com\/tensorflow\/models\/blob\/master\/research\/object_detection\/g3doc\/running_locally.md\">separate calls<\/a> from the command line. 
When running multiple experiments, it may be beneficial to run evaluation periodically (for example, every 100 iterations) to get insight into how well the model can detect objects in unseen data.<\/p>\n<p>In this fork of the Tensorflow Object Detection API, we\u2019ve added <a href=\"https:\/\/github.com\/olgaliak\/models\/blob\/master\/research\/object_detection\/train_eval.py\">train_eval.py<\/a>, which demonstrates this continuous train-and-evaluate approach.<\/p>\n<pre class=\"\">print(\"Total number of training steps {}\".format(train_config.num_steps))\r\nprint(\"Evaluation will run every {} steps\".format(FLAGS.eval_every_n_steps))\r\ntrain_config.num_steps = current_step\r\nwhile current_step &lt;= total_num_steps:\r\n    print(\"Training steps # {0}\".format(current_step))\r\n    trainer.train(create_input_dict_fn, model_fn, train_config, master, task,\r\n                  FLAGS.num_clones, worker_replicas, FLAGS.clone_on_cpu, ps_tasks,\r\n                  worker_job_name, is_chief, FLAGS.train_dir)\r\n    tf.reset_default_graph()\r\n    evaluate_step()\r\n    tf.reset_default_graph()\r\n    current_step = current_step + FLAGS.eval_every_n_steps\r\n    train_config.num_steps = current_step<\/pre>\n<p>To explore several hyperparameters and estimate their effect on the model, we split the data into <a href=\"https:\/\/en.wikipedia.org\/wiki\/Training,_test_and_validation_sets\">train, validation (or development) and test sets<\/a> of 160, 54, and 55 images, respectively.<\/p>\n<h3>Comparing Runs<\/h3>\n<p>The Tensorflow Object Detection framework provides the user with many parameter options to explore to see what works best on a given dataset.<\/p>\n<p>In this exercise, we will do several runs and see which one gives us the best model performance. We will use object detection precision, usually referred to as mAP (Mean Average Precision), as our target metric. 
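As a back-of-the-envelope refresher on the metric itself: single-class average precision summarizes the precision-recall trade-off over detections ranked by score. The sketch below is a simplified, uninterpolated version (real evaluators such as the Pascal VOC protocol add interpolation), and the sample detections are made up:

```python
def average_precision(scored_hits, num_positives):
    """Simplified AP: scored_hits is a list of (score, is_true_positive)."""
    hits = sorted(scored_hits, key=lambda d: -d[0])  # rank by descending score
    tp = fp = 0
    ap = 0.0
    for _score, is_tp in hits:
        if is_tp:
            tp += 1
            # Accumulate area under the precision-recall curve,
            # one recall step (1 / num_positives) at a time.
            ap += (tp / (tp + fp)) * (1.0 / num_positives)
        else:
            fp += 1
    return ap

# Three detections against two ground-truth birds: TP, FP, TP in score order.
print(average_precision([(0.9, True), (0.8, False), (0.7, True)], num_positives=2))
```

Mean AP (mAP) is simply this value averaged over classes; with a single Kittiwake class, the two coincide, which is why the evaluation logs above report identical "AP" and "Mean AP" numbers.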
For each run, we will use <em>azureml.logging<\/em> to report the maximum value of mAP and the training iteration it was observed on. In addition, we will plot a chart of &#8220;mAP vs Iterations&#8221;, then save it to the output folder to be displayed in Azure ML Workbench.<\/p>\n<h4>Integration of TensorBoard events with Azure ML Workbench<\/h4>\n<p><a href=\"https:\/\/www.tensorflow.org\/get_started\/summaries_and_tensorboard\">TensorBoard<\/a> is a powerful tool for debugging and visualizing DNNs. The Tensorflow Object Detection API already emits summary metrics for precision. In this project, we integrated the Tensorflow summary events, which TensorBoard uses for its visualizations, with Azure ML Workbench.<\/p>\n<pre class=\"\"><span class=\"pl-k\">from<\/span> tensorboard.backend.event_processing <span class=\"pl-k\">import<\/span> event_accumulator\r\n<span class=\"pl-k\">from<\/span> azureml.logging <span class=\"pl-k\">import<\/span> get_azureml_logger\r\n\r\nea = event_accumulator.EventAccumulator(eval_path, ...)\r\ndf = pd.DataFrame(ea.Scalars('Precision\/mAP@0.5IOU'))\r\nmax_vals = df.loc[df[\"value\"].idxmax()]\r\n\r\n# Plot chart of how mAP changes as training progresses\r\nfig = plt.figure(figsize=(6, 5), dpi=75)\r\nplt.plot(df[\"step\"], df[\"value\"])\r\nplt.plot(max_vals[\"step\"], max_vals[\"value\"], \"g+\", mew=2, ms=10)\r\nfig.savefig(\".\/outputs\/mAP.png\", bbox_inches='tight')\r\n\r\n# Log to AML Workbench best mAP of the run with corresponding iteration N\r\nrun_logger = get_azureml_logger()\r\nrun_logger.log(\"max_mAP\", max_vals[\"value\"])\r\nrun_logger.log(\"max_mAP_iteration#\", max_vals[\"step\"])<\/pre>\n<p>For more details, see the code in <a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/blob\/master\/Tensorflow-Object-Detection\/misc\/results_logger.py\">results_logger.py<\/a>.<\/p>\n<p>Here is an analysis of several training runs we&#8217;ve done using the Azure ML Workbench experimentation 
infrastructure.<\/p>\n<p><strong>Run #1<\/strong> uses Stochastic Gradient Descent, and data augmentation is turned off. (See <a href=\"http:\/\/ruder.io\/optimizing-gradient-descent\/index.html#stochasticgradientdescent\">this blog post<\/a> for an overview of gradient optimization options.)<\/p>\n<p>From the run history in Azure ML Workbench, we can see details on every run:<\/p>\n<p><img decoding=\"async\" class=\"wp-image-5300 aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/run1_sgd.png\" alt=\"\" width=\"400\" height=\"477\" \/><\/p>\n<p>Here we see that we achieved a maximum mAP of 93.37% around iteration #3500. Thereafter the model started overfitting to the training data, and performance on the test set started to drop.<\/p>\n<p><strong>Run #2<\/strong> uses the more advanced Adam optimizer. All other parameters are the same.<\/p>\n<p><img decoding=\"async\" class=\"wp-image-5301 aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/run2_adam.png\" alt=\"\" width=\"400\" height=\"337\" \/><\/p>\n<p>Here we reached an mAP of 93.6% much faster than in Run #1. It also seems that the model starts to overfit much sooner, as precision on the evaluation set drops quite rapidly.<\/p>\n<p><strong>Run #3<\/strong> adds data augmentation to the training configuration. We will stick with the Adam optimizer for all subsequent runs.<\/p>\n<pre class=\"\">data_augmentation_options{\r\n  random_horizontal_flip{}\r\n}<\/pre>\n<h2><img decoding=\"async\" class=\"wp-image-5303 aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/run3_adam_dataug.png\" alt=\"\" width=\"400\" height=\"393\" \/><\/h2>\n<p>Random horizontal flipping of the images helped to improve mAP from 93.6% in Run #2 to 94.2%. 
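Conceptually, random_horizontal_flip has to mirror the ground-truth boxes along with the pixels; otherwise the labels would no longer match the augmented images. For the normalized [ymin, xmin, ymax, xmax] box layout used by the Tensorflow Object Detection API, the box transform reduces to the following sketch (the sample coordinates are illustrative):

```python
def flip_box_horizontally(box):
    """Flip a normalized [ymin, xmin, ymax, xmax] box left-to-right.

    Mirroring maps x to 1 - x, so xmin and xmax swap roles.
    """
    ymin, xmin, ymax, xmax = box
    return [ymin, 1.0 - xmax, ymax, 1.0 - xmin]

print(flip_box_horizontally([0.2, 0.1, 0.5, 0.4]))  # → approximately [0.2, 0.6, 0.5, 0.9]
```

The vertical extent is untouched, which is why horizontal flips are a cheap, label-preserving way to double the effective variety of the training set.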
It also takes more iterations for the model to start overfitting.<\/p>\n<p><strong>Run #4<\/strong> involves the addition of even more data augmentation options.<\/p>\n<pre class=\"\">data_augmentation_options{\r\n  random_horizontal_flip{}\r\n  random_pixel_value_scale{}\r\n  random_crop_image{}\r\n}<\/pre>\n<p>Interesting results are shown below:<\/p>\n<p><img decoding=\"async\" class=\"wp-image-5305 aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/run3_adam_more_dataug.png\" alt=\"\" width=\"400\" height=\"399\" \/><\/p>\n<p>Though the mAP we are seeing here is not the greatest (91.1%), the model also did not overfit even after 7,000 iterations. A natural follow-up could be to train this model even longer and see whether it can reach a higher mAP.<\/p>\n<p>Here is an at-a-glance view of our training progress from Azure ML Workbench:<\/p>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-5306\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Runs_comparison.png\" alt=\"\" width=\"2200\" height=\"571\" \/><\/p>\n<p>Azure ML Workbench also allows users to compare runs side by side (below are Runs 1, 3 and 4):<\/p>\n<h2><img decoding=\"async\" class=\"alignnone size-full wp-image-5307\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Runs_comparison_side.png\" alt=\"\" width=\"2521\" height=\"796\" \/><\/h2>\n<p>We can also plot evaluation results on image(s) of interest and use those, too, when comparing results. 
TensorBoard events already have all the required data.<\/p>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-5340\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Runs_comparison_with_images.png\" alt=\"\" width=\"2258\" height=\"805\" \/><\/p>\n<p>In summary, ResNet-powered object detection allowed us to achieve great results even on smaller datasets. Azure ML Workbench provided a useful infrastructure, giving us a central place for all experiment execution and results comparison.<\/p>\n<h2><a id=\"depl_link\"><\/a>Deploying Scoring Web Service<\/h2>\n<p>After developing an object detection and classification model with satisfactory performance, we proceeded to deploy the model as a hosted web service so that it could be consumed by the bird-monitoring application in the solution pipeline. We&#8217;ll show how that can be done using the built-in capabilities provided by Azure ML, and also as a custom deployment.<\/p>\n<h2>Web Service Using Azure ML CLI<\/h2>\n<p>Azure ML provides extensive support for model operationalization on local machines or in the Azure cloud.<\/p>\n<h3>Installing Azure ML CLI<\/h3>\n<p>To get started deploying your model as a web service, we first need to SSH into the VM we are using:<\/p>\n<pre class=\"lang:sh decode:true\">ssh &lt;username&gt;@&lt;vm-ip-address&gt;<\/pre>\n<p>In this example, we will be using an Azure Data Science VM that already has Azure CLI installed. 
If you are using another VM, you can install Azure CLI using:<\/p>\n<pre class=\"lang:sh decode:true\">pip install azure-cli\r\npip install azure-cli-ml\r\n<\/pre>\n<p>Log in using:<\/p>\n<pre class=\"lang:sh decode:true\">az login<\/pre>\n<h3>Setting Up the Environment<\/h3>\n<p>Let&#8217;s start by registering the environment provider by using:<\/p>\n<pre class=\"lang:sh decode:true\">az provider register -n Microsoft.MachineLearningCompute<\/pre>\n<p>For deploying the web service on a local machine, we&#8217;ll have to set up the environment first:<\/p>\n<pre class=\"lang:sh decode:true\">az ml env setup -l [Azure region, e.g. eastus2] -n [environment name] -g [resource group]<\/pre>\n<p>This step will create the resource group, storage account, Azure Container Registry (ACR), and Application Insights account.<\/p>\n<p>Set the environment to the one we just created:<\/p>\n<pre class=\"lang:sh decode:true\">az ml env set -n [environment name] -g [resource group]<\/pre>\n<p>Create a Model Management account:<\/p>\n<pre class=\"lang:sh decode:true\">az ml account modelmanagement create -l [Azure region, e.g. eastus2] -n [your account name] -g [resource group name] --sku-instances [number of instances, e.g. 1] --sku-name [Pricing tier for example S1]<\/pre>\n<p>We are now ready to deploy the model! We can create the service using:<\/p>\n<pre class=\"lang:sh decode:true\">az ml service create realtime --model-file [model file\/folder path] -f [scoring file e.g. score.py] -n [your service name] -r [runtime for the Docker container e.g. spark-py or python] -c [conda dependencies file for additional python packages]<\/pre>\n<p>Please note that <code>nvidia-docker<\/code> is not supported for prediction right now. 
Make sure to edit your Conda dependencies to remove any GPU-specific references like <code>tensorflow-gpu<\/code>.<\/p>\n<p>After deploying the service, you can access information on how to use the web service with:<\/p>\n<pre class=\"lang:sh decode:true\">az ml service usage realtime -i [your service name]<\/pre>\n<p>For example, you can test the service using <code>curl<\/code> with:<\/p>\n<pre class=\"lang:sh decode:true\">curl -X POST -H \"Content-Type:application\/json\" --data !! YOUR DATA HERE !! http:\/\/127.0.0.1:32769\/score<\/pre>\n<h2>Alternative Option to Deploy Scoring Web Service<\/h2>\n<p>Another way to deploy a web service to serve predictions is to create your own instance of <a href=\"https:\/\/sanic.readthedocs.io\/en\/latest\/index.html\">Sanic<\/a> web server.\u00a0Sanic is a Flask-like Python 3.5+ web server\u00a0that helps us create and run web applications. We can use\u00a0the model we have trained in the previous section using CNTK and Faster R-CNN to serve predictions to identify the location of a bird within an image.<\/p>\n<p>First, we need to create a Sanic web application. You can follow the code snippets below (and <a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/blob\/master\/CNTK_faster-rcnn\/Detection\/app.py\">app.py<\/a>) to create a web application and define where it should run on the server. 
For each API you want to support, you can also define routes, HTTP methods, and ways to handle each request.<\/p>\n<pre class=\"lang:python decode:true\" title=\"app.py\">app = Sanic(__name__)\r\nConfig.KEEP_ALIVE = False\r\n\r\nserver = Server()\r\nserver.set_model()\r\n\r\n@app.route('\/')\r\nasync def test(request):\r\n    return text(server.server_running())\r\n\r\n@app.route('\/predict', methods=[\"POST\"])\r\ndef post_json(request):\r\n    return json(server.predict(request))\r\n\r\napp.run(host='0.0.0.0', port=80)\r\nprint('exiting...')\r\nsys.exit(0)<\/pre>\n<p>Once we have the web application defined, we need to build our logic to take the path of an image and return the predicted results to the user.<\/p>\n<p>In <a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/blob\/master\/CNTK_faster-rcnn\/Detection\/predict.py\">predict.py<\/a>, we first download the image to be scored, then evaluate it against the previously trained model to return the predicted results as JSON.<\/p>\n<pre class=\"lang:python decode:true \" title=\"predict.py\">regressed_rois, cls_probs = evaluate_single_image(eval_model, img_path, cfg)\r\nbboxes, labels, scores = filter_results(regressed_rois, cls_probs, cfg)\r\n<\/pre>\n<p>The returned JSON is an array of predicted labels and bounding boxes for each bird detected in the image:<\/p>\n<pre class=\"lang:js decode:true\" title=\"predicted results\">[{\"label\": \"Kittiwake\", \"score\": \"0.963\", \"box\": [246, 414, 285, 466]},...]<\/pre>\n<p>Now that we have implemented the prediction logic and the web service, we can host this application on a server. 
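<\/p>
<p>To make the shape of that JSON concrete, here is a hedged sketch of how the arrays returned by <code>filter_results<\/code> could be packed into it; the helper name and class-name list are illustrative, not taken from the project&#8217;s code.<\/p>

```python
# Illustrative helper (not from predict.py): convert filter_results()
# outputs into the JSON-friendly list of detections shown above.
def detections_to_json(bboxes, labels, scores, class_names):
    results = []
    for box, label, score in zip(bboxes, labels, scores):
        results.append({
            "label": class_names[int(label)],   # numeric class id -> name
            "score": "%.3f" % float(score),     # e.g. "0.963"
            "box": [int(v) for v in box],       # [x1, y1, x2, y2]
        })
    return results
```

<p>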
We will use Docker to make sure our deployment dependencies and process are both easy and repeatable.<\/p>\n<pre class=\"lang:sh decode:true\">cd CNTK_faster-rcnn\/Detection<\/pre>\n<p>Create a Docker image with a Dockerfile so we can run the application as a Docker container:<\/p>\n<pre class=\"lang:sh decode:true\" title=\"Dockerfile\">FROM hsienting\/dl_az\r\n\r\nCOPY .\/ \/app\r\nADD run.sh \/app\/\r\nRUN chmod +x \/app\/run.sh\r\n\r\nENV STORAGE_ACCOUNT_NAME \r\nENV STORAGE_ACCOUNT_KEY \r\nENV AZUREML_NATIVE_SHARE_DIRECTORY \/cmcntk\r\nENV TESTIMAGESCONTAINER data\r\n\r\nEXPOSE 80\r\n\r\nENTRYPOINT [\"\/app\/run.sh\"]\r\n<\/pre>\n<p>Now we are ready to build the Docker image by running:<\/p>\n<pre class=\"lang:sh decode:true\">docker build -t cmcntk .<\/pre>\n<p>Once we have the Docker image <em>cmcntk<\/em> locally, we can run instances of it as containers. With the command below, we mount a host volume at <em>\/cmcntk<\/em> in the container to persist the data (this is where we kept the model trained in the previous step), map host port 80 to container port 80, and run the latest <em>cmcntk<\/em> image.<\/p>\n<pre class=\"lang:sh decode:true \">docker run -v \/:\/cmcntk -p 80:80 -it cmcntk:latest<\/pre>\n<p>Now we can test the web service using a curl command:<\/p>\n<pre class=\"lang:default decode:true\">curl -X POST http:\/\/localhost\/predict -H 'content-type: application\/json' \r\n-d '{\"filename\": \"\"}'<\/pre>\n<h2><a id=\"demo_link\"><\/a>Exposing Your Services<\/h2>\n<p>At this point our services are running, but now what? How does a client interact with them? How do we unify them under a single API? During our work with Conservation Metrics, we created a proof-of-concept application to execute the entire classification pipeline.<\/p>\n<h3>Problem<\/h3>\n<p>We know what we want to do with our application, but there are a few limitations in our current services that restrict how we can communicate with them. 
These include:<\/p>\n<ul>\n<li>Multiple services that share the same purpose (outputting label data) but do not share the same endpoint<\/li>\n<li>Exposing these services directly to the client requires CORS permissions, which would have to be managed across all servers\/load-balancers<\/li>\n<\/ul>\n<h3>Solution<\/h3>\n<p>To create a unified API endpoint, we\u2019ll set up an <a href=\"https:\/\/azure.microsoft.com\/en-us\/services\/api-management\/\">Azure API Management Service<\/a>. With this, we can set up and expose CORS-enabled endpoints for our APIs.<\/p>\n<h4>Azure API Management Services<\/h4>\n<h4>Getting Started<\/h4>\n<ol>\n<li>Go to <a href=\"https:\/\/ms.portal.azure.com\/#create\/hub\">https:\/\/ms.portal.azure.com\/#create\/hub<\/a><\/li>\n<li>Search for and select \u2018API Management\u2019<\/li>\n<li>Create and configure your instance to your liking<\/li>\n<\/ol>\n<h5>Configuration<\/h5>\n<p>Create a new API<\/p>\n<p><img decoding=\"async\" class=\"wp-image-5323\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Azure-API-Management-Inital-Setup.png\" alt=\"\" width=\"1018\" height=\"500\" \/><\/p>\n<p>Initial configuration of your API<\/p>\n<p><img decoding=\"async\" class=\"wp-image-5325\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Azure-API-Management-Config.png\" alt=\"\" width=\"756\" height=\"500\" \/><\/p>\n<p>Once provisioned, go into your API. Click the &#8220;+Add API&#8221; button, then the \u201cBlank API\u201d option. 
Set up the API to your liking, with\u00a0<code>Web service URL<\/code> being the API on the external server,\u00a0<code>API URL suffix<\/code>\u00a0being the suffix that will be appended to your API Management URL, and <code>Products<\/code> being the API product type you want to register this endpoint under.<\/p>\n<p>With the newly created API, configure the \u201cInbound processing\u201d using the \u201cCode View\u201d and (to enable CORS for all external URLs)\u00a0set your inbound policy to allow all origins, methods, and headers, similar to the following:<\/p>\n<pre class=\"lang:default decode:true \">&lt;inbound&gt;\r\n    &lt;cors&gt;\r\n        &lt;allowed-origins&gt;\r\n            &lt;origin&gt;*&lt;\/origin&gt;\r\n        &lt;\/allowed-origins&gt;\r\n        &lt;allowed-methods&gt;\r\n            &lt;method&gt;*&lt;\/method&gt;\r\n        &lt;\/allowed-methods&gt;\r\n        &lt;allowed-headers&gt;\r\n            &lt;header&gt;*&lt;\/header&gt;\r\n        &lt;\/allowed-headers&gt;\r\n    &lt;\/cors&gt;\r\n    &lt;base \/&gt;\r\n&lt;\/inbound&gt;<\/pre>\n<p>You can now treat the <code>API URL Suffix<\/code> you set up on your API Management Service just as you would ping your API directly.<\/p>\n<p>Although there are examples from the <a href=\"https:\/\/manage.windowsazure.com\/\">old Azure Management Portal<\/a>, you may also want to explore a more in-depth <a href=\"https:\/\/github.com\/Azure-Readiness\/hol-azure-machine-learning\/blob\/master\/009-lab-monetization.md#93-create-azure-management-api-service\">getting started guide<\/a> to API Management services and CORS.<\/p>\n<h3>A Sample App<\/h3>\n<p>Our sample client web application executes the following steps:<\/p>\n<ol>\n<li>Reads containers\/blobs directly from Azure Blob Storage<\/li>\n<li>Pings our new Azure API Management Service with images from our blob storage<\/li>\n<li>Displays the returned prediction (bounding boxes for birds) on the image<\/li>\n<\/ol>\n<p>You can find <a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/tree\/master\/sample-visualization\">the code<\/a> on GitHub.<\/p>\n<h4>Pinging the API Management Service<\/h4>\n<p>The only difference in pinging your API Management services, as opposed to your native API, is that you\u2019ll have to include an additional <code>Ocp-Apim-Subscription-Key<\/code> header in all your requests. 
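<\/p>
<p>For example, such a request could be assembled with Python&#8217;s standard library as follows; the gateway URL and subscription key below are placeholders, and only the <code>Ocp-Apim-Subscription-Key<\/code> header is specific to API Management.<\/p>

```python
import json
from urllib import request

def build_scoring_request(filename, apim_url, subscription_key):
    # apim_url and subscription_key are placeholders for your own values.
    body = json.dumps({"filename": filename}).encode("utf-8")
    return request.Request(
        apim_url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": subscription_key,  # APIM-specific
        },
    )

# Send with: urllib.request.urlopen(build_scoring_request(...)).read()
```

<p>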
This subscription key is bound to your API Management Product, which is an accumulation of the API endpoints you release under it.<\/p>\n<p>To get your Subscription Key:<\/p>\n<ol>\n<li>Go to your API\u2019s \u201cPublisher Portal\u201d<\/li>\n<li>Select your desired user under the Users Menu item<\/li>\n<li>Note the subscription key you wish to use<\/li>\n<\/ol>\n<table>\n<tbody>\n<tr>\n<td>\n<p><figure id=\"attachment_5329\" aria-labelledby=\"figcaption_attachment_5329\" class=\"wp-caption aligncenter\" ><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Azure-API-Management-Publisher-Portal-768x394-1.png\" alt=\"Image Azure API Management Publisher Portal 768 215 394\" width=\"768\" height=\"394\" class=\"aligncenter size-full wp-image-10869\" srcset=\"https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2017\/10\/Azure-API-Management-Publisher-Portal-768x394-1.png 768w, https:\/\/devblogs.microsoft.com\/ise\/wp-content\/uploads\/sites\/55\/2017\/10\/Azure-API-Management-Publisher-Portal-768x394-1-300x154.png 300w\" sizes=\"(max-width: 768px) 100vw, 768px\" \/><figcaption id=\"figcaption_attachment_5329\" class=\"wp-caption-text\">Go to the publisher portal<\/figcaption><\/figure><\/td>\n<td>\n<p><figure id=\"attachment_5331\" aria-labelledby=\"figcaption_attachment_5331\" class=\"wp-caption aligncenter\" ><img decoding=\"async\" class=\"size-medium wp-image-5331\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Azure-API-Management-Subscription-Key.png\" alt=\"\" width=\"300\" height=\"138\" \/><figcaption id=\"figcaption_attachment_5331\" class=\"wp-caption-text\">Get your subscription key<\/figcaption><\/figure><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>In this sample application, we now append this subscription key as the value for the now-attached <code>Ocp-Apim-Subscription-Key<\/code> header:<\/p>\n<pre><code 
class=\"language-javascript\"><span class=\"hljs-keyword\">export<\/span> <span class=\"hljs-keyword\">async<\/span> <span class=\"hljs-function\"><span class=\"hljs-keyword\">function<\/span> <span class=\"hljs-title\">cntk<\/span>(<span class=\"hljs-params\">filename<\/span>) <\/span>{\r\n  <span class=\"hljs-keyword\">return<\/span> fetch(<span class=\"hljs-string\">'\/tensorflow\/'<\/span>, {\r\n    method: <span class=\"hljs-string\">'post'<\/span>,\r\n    headers: {\r\n      Accept: <span class=\"hljs-string\">'application\/json'<\/span>,\r\n      <span class=\"hljs-string\">'Content-Type'<\/span>: <span class=\"hljs-string\">'application\/json'<\/span>,\r\n      <span class=\"hljs-string\">'Cache-Control'<\/span>: <span class=\"hljs-string\">'no-cache'<\/span>,\r\n      <span class=\"hljs-string\">'Ocp-Apim-Trace'<\/span>: <span class=\"hljs-string\">'true'<\/span>,\r\n      <span class=\"hljs-string\">'Ocp-Apim-Subscription-Key'<\/span>: ,\r\n    },\r\n    body: <span class=\"hljs-built_in\">JSON<\/span>.stringify({\r\n      filename,\r\n    }),\r\n  })\r\n}\r\n<\/code><\/pre>\n<h4>Using The Data<\/h4>\n<p>You now have the ability to ping your services with an image and get a list of bounding boxes returned.<\/p>\n<p>The basic use case is to just draw the boxes onto the image. 
One simple way to do this on a web client is to display the image and overlay the boxes using the <code>&lt;canvas&gt;<\/code> element.<\/p>\n<pre><code class=\"language-javascript\">\r\n<span class=\"xml\"><span class=\"hljs-tag\">&lt;<span class=\"hljs-title\">body<\/span>&gt;<\/span>\r\n  <span class=\"hljs-tag\">&lt;<span class=\"hljs-title\">canvas<\/span> <span class=\"hljs-attribute\">id<\/span>=<span class=\"hljs-value\">'myCanvas'<\/span>&gt;<\/span><span class=\"hljs-tag\">&lt;\/<span class=\"hljs-title\">canvas<\/span>&gt;<\/span>\r\n  <span class=\"hljs-tag\">&lt;<span class=\"hljs-title\">script<\/span>&gt;<\/span><span class=\"javascript\">\r\n    <span class=\"hljs-keyword\">const<\/span> imageUrl = <span class=\"hljs-string\">\"some image URL\"<\/span>;\r\n    cntk(imageUrl).then(labels =&gt; {\r\n      <span class=\"hljs-keyword\">const<\/span> canvas = <span class=\"hljs-built_in\">document<\/span>.getElementById(<span class=\"hljs-string\">'myCanvas'<\/span>)\r\n      <span class=\"hljs-keyword\">const<\/span> image = <span class=\"hljs-built_in\">document<\/span>.createElement(<span class=\"hljs-string\">'img'<\/span>);\r\n      image.setAttribute(<span class=\"hljs-string\">'crossOrigin'<\/span>, <span class=\"hljs-string\">'Anonymous'<\/span>);\r\n      image.onload = () =&gt; {\r\n        <span class=\"hljs-keyword\">if<\/span> (canvas) {\r\n          <span class=\"hljs-keyword\">const<\/span> canvasWidth = <span class=\"hljs-number\">850<\/span>;\r\n          <span class=\"hljs-keyword\">const<\/span> scale = canvasWidth \/ image.width;\r\n          <span class=\"hljs-keyword\">const<\/span> canvasHeight = image.height * scale;\r\n          canvas.width = canvasWidth;\r\n          canvas.height = canvasHeight;\r\n          <span class=\"hljs-keyword\">const<\/span> ctx = canvas.getContext(<span class=\"hljs-string\">'2d'<\/span>);\r\n\r\n          <span class=\"hljs-comment\">\/\/ render image on canvas and draw the square labels<\/span>\r\n          
ctx.drawImage(image, <span class=\"hljs-number\">0<\/span>, <span class=\"hljs-number\">0<\/span>, canvasWidth, canvasHeight);\r\n          ctx.lineWidth = <span class=\"hljs-number\">5<\/span>;\r\n          labels.forEach((label) =&gt; {\r\n            ctx.strokeStyle = label.color || <span class=\"hljs-string\">'black'<\/span>;\r\n            ctx.strokeRect(label.x, label.y, label.width, label.height);\r\n          });\r\n        }\r\n      };\r\n      image.src = imageUrl;\r\n    });\r\n  <\/span><span class=\"hljs-tag\">&lt;\/<span class=\"hljs-title\">script<\/span>&gt;<\/span>\r\n<span class=\"hljs-tag\">&lt;\/<span class=\"hljs-title\">body<\/span>&gt;<\/span>\r\n<span class=\"hljs-tag\">&lt;\/<span class=\"hljs-title\">html<\/span>&gt;<\/span>\r\n<\/span><\/code><\/pre>\n<p>This code will populate your canvas with something like the image below:<\/p>\n<p><img decoding=\"async\" class=\"wp-image-5334\" src=\"https:\/\/devblogs.microsoft.com\/cse\/wp-content\/uploads\/sites\/55\/2017\/10\/Kittiwakes-Labelled-300x231.jpg\" alt=\"\" width=\"400\" height=\"308\" \/><\/p>\n<p>Now you can interact with your trained model and demo prediction results! 
For code details, please refer to <a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\/tree\/master\/sample-visualization\">this GitHub repo<\/a>.<\/p>\n<h2>Summary<\/h2>\n<p>In this post, we have covered our end-to-end flow for object detection, including:<\/p>\n<ul>\n<li>Labeling data<\/li>\n<li>Training CNTK\/Tensorflow object detection models using Azure ML Workbench<\/li>\n<li>Comparing experiment runs in Azure ML Workbench<\/li>\n<li>Model operationalization and deployment of a prediction web service<\/li>\n<li>Demo application for making predictions<\/li>\n<\/ul>\n<h2>Resources<\/h2>\n<ul>\n<li><a href=\"https:\/\/github.com\/olgaliak\/detection-amlworkbench\">Project&#8217;s GitHub repo<\/a><\/li>\n<li>Azure Machine Learning Workbench <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/machine-learning\/preview\/quickstart-installation\">documentation<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/Microsoft\/CNTK\/tree\/master\/Examples\/Image\/Detection\">Microsoft Cognitive Toolkit Object Detection<\/a> GitHub repo<\/li>\n<li><a href=\"https:\/\/github.com\/tensorflow\/models\/tree\/master\/research\/object_detection\">Tensorflow Object Detection API<\/a> GitHub repo<\/li>\n<\/ul>\n<hr \/>\n<p>Cover image provided by Conservation Metrics.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We demonstrate how to train Object Detection models using CNTK and Tensorflow DNN frameworks. 
Azure ML Workbench is used as the main training  and model hosting infrastructure.<\/p>\n","protected":false},"author":21373,"featured_media":10871,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[15,16,19],"tags":[88,123,279,350],"class_list":["post-5217","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-containers","category-devops","category-machine-learning","tag-azure-ml-workbench","tag-cntk","tag-object-detection","tag-tensorflow"],"acf":[],"blog_post_summary":"<p>We demonstrate how to train Object Detection models using CNTK and Tensoflow DNN frameworks. Azure ML Workbench is used as the main training  and model hosting infrastructure.<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/5217","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/users\/21373"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/comments?post=5217"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/posts\/5217\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media\/10871"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/media?parent=5217"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/categories?post=5217"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/ise\/wp-json\/wp\/v2\/tags?post=5217"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{re
l}","templated":true}]}}