{"id":25195,"date":"2019-11-06T10:01:16","date_gmt":"2019-11-06T17:01:16","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/dotnet\/?p=25195"},"modified":"2019-11-13T18:06:56","modified_gmt":"2019-11-14T01:06:56","slug":"announcing-ml-net-1-4-global-availability-machine-learning-for-net","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/dotnet\/announcing-ml-net-1-4-global-availability-machine-learning-for-net\/","title":{"rendered":"Announcing ML.NET 1.4 general availability (Machine Learning for .NET)"},"content":{"rendered":"<p>Coinciding with the <strong>Microsoft Ignite 2019<\/strong> conference, we are thrilled to announce the GA release of <strong><a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> 1.4<\/strong> and updates to <strong>Model Builder<\/strong> in Visual Studio, with exciting new machine learning features that will allow you to innovate your .NET applications.<\/p>\n<p><a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> is an open-source and cross-platform machine learning framework for .NET developers. 
<a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> also includes Model Builder (easy to use UI tool in Visual Studio) and CLI (Command-Line Interface) to make it super easy to build custom Machine Learning (ML) models using Automated Machine Learning (AutoML).<\/p>\n<p>Using <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a>, developers can leverage their existing tools and skillsets to develop and infuse custom ML into their applications by creating custom machine learning models for common scenarios like <em>Sentiment Analysis, Price Prediction, Sales Forecast prediction, Customer segmentation, Image Classification<\/em> and more!<\/p>\n<p>Following are some of the key highlights in this update:<\/p>\n<h2>ML.NET Updates<\/h2>\n<p>In <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> 1.4 GA we have released many exciting improvements and new features that are described in the following sections.<\/p>\n<h2>Image classification based on deep neural network retraining with GPU support (GA release)<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68146323-9e1dc980-feec-11e9-80aa-4055a9b00461.png\" alt=\"ML.NET, TensorFlow, NVIDIA-CUDA\" \/><\/p>\n<p>This feature enables native DNN (Deep Neural Network) transfer learning with ML.NET targeting image classification.<\/p>\n<p>For instance, with this feature you can create your own custom image classifier model by natively training a TensorFlow model from <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> API with your own images.<\/p>\n<p><em>Image classifier scenario \u2013 Train your own custom deep learning model with ML.NET<\/em><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2019\/08\/image-classifier-scenario.png\" alt=\"Image Classification Training 
diagram\" \/><\/p>\n<p><a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> uses <a href=\"https:\/\/www.tensorflow.org\/\">TensorFlow<\/a> through the low-level bindings provided by the <a href=\"https:\/\/github.com\/SciSharp\/TensorFlow.NET\">Tensorflow.NET library<\/a>. The advantage provided by <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> is that you use a high level API very simple to use so with just a couple of lines of C# code you define and train an image classification model. A comparable action when using the low level <a href=\"https:\/\/github.com\/SciSharp\/TensorFlow.NET\">Tensorflow.NET library<\/a> would need hundreds of lines of code.<\/p>\n<p>The <a href=\"https:\/\/github.com\/SciSharp\/TensorFlow.NET\">Tensorflow.NET library<\/a> is an open source and low-level API library that provides the .NET Standard bindings for TensorFlow. That library is part of the open source <a href=\"https:\/\/github.com\/SciSharp\">SciSharp stack libraries<\/a>.<\/p>\n<p>The below stack diagram shows how <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> is implementing these new features on DNN training.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68147601-3c129380-feef-11e9-901c-1338ad7d04e8.png\" alt=\"DNN stack diagram\" \/><\/p>\n<p>As the first main scenario for high level APIs, we are currently providing <strong>image classification<\/strong>, but the goal in the future for this new API is to allow easy to use DNN training for additional scenarios such as <strong>object detection<\/strong> and other DNN scenarios in addition to image classification, by providing a powerful yet simple API very easy to use.<\/p>\n<p>This Image-Classification feature was initially released in v1.4-preview. 
Now, we\u2019re releasing it as a <strong>GA release<\/strong>, plus we\u2019ve <strong>added the following new capabilities<\/strong>:<\/p>\n<h3>Improvements in v1.4 GA for Image Classification<\/h3>\n<p>The main new capabilities added since v1.4-preview are:<\/p>\n<ul>\n<li>\n<p><strong>GPU support on Windows and Linux.<\/strong> GPU support is based on <a href=\"https:\/\/developer.nvidia.com\/cuda-zone\">NVIDIA CUDA<\/a>. Check the hardware\/software requirements and the <a href=\"https:\/\/github.com\/dotnet\/machinelearning\/blob\/master\/docs\/api-reference\/tensorflow-usage.md\">GPU requirements installation procedure here<\/a>. You can also train on CPU if you cannot meet the requirements for GPU.<\/p>\n<ul>\n<li><em>SciSharp TensorFlow redistributable supported for CPU or GPU<\/em>: <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> is compatible with <code>SciSharp.TensorFlow.Redist<\/code> (CPU training), <code>SciSharp.TensorFlow.Redist-Windows-GPU<\/code> (GPU training on Windows) and <code>SciSharp.TensorFlow.Redist-Linux-GPU<\/code> (GPU training on Linux). <\/li>\n<\/ul>\n<\/li>\n<li>\n<p><strong>Predictions on in-memory images:<\/strong> You can make predictions with in-memory images instead of file paths, giving you more flexibility in your app. See this <a href=\"https:\/\/github.com\/dotnet\/machinelearning-samples\/tree\/master\/samples\/csharp\/getting-started\/DeepLearning_ImageClassification_Training\/WebApp.Predict\">sample web app using in-memory images here<\/a>.<\/p>\n<\/li>\n<li>\n<p><strong>Training early stopping:<\/strong> Training stops when optimal accuracy is reached and additional training cycles (<em>epochs<\/em>) no longer improve it.<\/p>\n<\/li>\n<li>\n<p><strong>Learning rate scheduling:<\/strong> Learning rate is an integral and potentially difficult part of deep learning. 
By providing learning rate schedulers, we give users a way to start with a high initial learning rate that decays over time. A high initial learning rate helps introduce randomness into the system, allowing the loss function to get closer to the global minimum, while the decayed learning rate helps stabilize the loss over time. We have implemented the <a href=\"https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/compat\/v1\/train\/exponential_decay\">Exponential Decay learning rate scheduler<\/a> and the <a href=\"https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/compat\/v1\/train\/polynomial_decay\">Polynomial Decay learning rate scheduler<\/a>.<\/p>\n<\/li>\n<li>\n<p><strong>Added additional supported DNN architectures to the Image Classifier<\/strong>: The supported DNN architectures (pre-trained TensorFlow models) used internally as the base for \u2018transfer learning\u2019 have grown to the following list:<\/p>\n<ul>\n<li><strong>Inception V3<\/strong> (Was available in Preview)<\/li>\n<li><strong>ResNet V2 101<\/strong> (Was available in Preview)<\/li>\n<li><strong>Resnet V2 50<\/strong> (Added in GA)<\/li>\n<li><strong>Mobilenet V2<\/strong> (Added in GA) <\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>Those pre-trained TensorFlow models (DNN architectures) are widely used image recognition models trained on very large image sets such as the <a href=\"http:\/\/www.image-net.org\/\">ImageNet dataset<\/a> and are the culmination of many ideas developed by multiple researchers over the years. 
You can now take advantage of them by using our easy-to-use API in .NET.<\/p>\n<h3>Example code using the new ImageClassification trainer<\/h3>\n<p>The API code example below shows how easily you can train a new TensorFlow model.<\/p>\n<p><em>Image classifier high level API code example:<\/em><\/p>\n<pre><code class=\"cs\">\/\/ Define the model's pipeline with ImageClassification defaults (simplest way)\nvar pipeline = mlContext.MulticlassClassification.Trainers\n      .ImageClassification(featureColumnName: \"Image\",\n                            labelColumnName: \"LabelAsKey\",\n                            validationSet: testDataView)\n   .Append(mlContext.Transforms.Conversion.MapKeyToValue(outputColumnName: \"PredictedLabel\",\n                                                         inputColumnName: \"PredictedLabel\"));\n\n\/\/ Train the model\nITransformer trainedModel = pipeline.Fit(trainDataView);\n<\/code><\/pre>\n<p>The key line in the code above is the one using the <code>ImageClassification<\/code> classifier trainer. It is a high-level API where you just provide the column that has the images, the column with the labels (the column to predict), and a validation dataset used to calculate quality metrics while training so the model can tune itself (change internal <em>hyper-parameters<\/em>).<\/p>\n<p>There\u2019s another overloaded method for advanced users where you can also specify optional hyper-parameters such as <em>epochs<\/em>, <em>batchSize<\/em>, <em>learningRate<\/em> and other typical DNN parameters, but most users can get started with the simplified API.<\/p>\n<p>Under the covers, this model training is based on native <em>TensorFlow DNN transfer learning<\/em> from a default architecture (pre-trained model) such as <em>Resnet V2 50<\/em>. 
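That advanced overload takes an options object. The sketch below assumes the `ImageClassificationTrainer.Options` member names from the ML.NET 1.4 API reference; the values are illustrative, and `mlContext` / `testDataView` are reused from the sample above.

```cs
// Sketch: configuring the advanced ImageClassification overload via an options object.
// Member names follow the ML.NET 1.4 ImageClassificationTrainer.Options API;
// values are illustrative, not recommendations.
var options = new ImageClassificationTrainer.Options
{
    FeatureColumnName = "Image",
    LabelColumnName = "LabelAsKey",
    Arch = ImageClassificationTrainer.Architecture.ResnetV250, // base DNN architecture to derive from
    Epoch = 50,
    BatchSize = 10,
    LearningRate = 0.01f,
    ValidationSet = testDataView
};

var pipeline = mlContext.MulticlassClassification.Trainers.ImageClassification(options)
    .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));
```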
You can also select the one you want to derive from by configuring the optional hyper-parameters.<\/p>\n<p>For further learning, read the following resources:<\/p>\n<ul>\n<li>\n<p><strong>Sample app:<\/strong> <a href=\"https:\/\/github.com\/dotnet\/machinelearning-samples\/blob\/master\/samples\/csharp\/getting-started\/DeepLearning_ImageClassification_Training\">end-to-end sample app training a TensorFlow model with custom images<\/a>, including a web app for predicting with in-memory images coming through HTTP. Supports GPU or CPU based training.<\/p>\n<\/li>\n<li>\n<p><strong>Tutorial:<\/strong> <a href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/machine-learning\/tutorials\/image-classification-api-transfer-learning\">Image Classification training API Tutorial<\/a><\/p>\n<\/li>\n<li>\n<p><strong>Detailed Blog Post:<\/strong> <a href=\"https:\/\/devblogs.microsoft.com\/cesardelatorre\/training-image-classification-recognition-models-based-on-deep-learning-transfer-learning-with-ml-net\/\">Training Image Classification\/Recognition models based on Deep Learning &amp; Transfer Learning with ML.NET<\/a><\/p>\n<\/li>\n<\/ul>\n<h2>Database Loader (GA Release)<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2019\/08\/database-loader-illustration-300x181.png\" alt=\"Database Loader diagram\" \/><\/p>\n<p>This feature was previously introduced as a preview and is now released as generally available in v1.4.<\/p>\n<p>The database loader enables you to load data from databases into an IDataView, which in turn enables model training directly against relational databases. 
This loader supports any relational database provider supported by <code>System.Data<\/code> in .NET Core or .NET Framework, meaning that you can use any RDBMS such as <strong>SQL Server<\/strong>, <strong>Azure SQL Database<\/strong>, <strong>Oracle<\/strong>, <strong>SQLite<\/strong>, <strong>PostgreSQL<\/strong>, <strong>MySQL<\/strong>, <strong>Progress<\/strong>, etc.<\/p>\n<p>In previous <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> releases, you could also train against a relational database by providing data through an <code>IEnumerable<\/code> collection using the <code>LoadFromEnumerable()<\/code> API, where the data could come from a relational database or any other source. With that approach, however, you as a developer are responsible for the code that reads from the relational database (using Entity Framework or any other approach), and it must be implemented properly so that data is streamed while training the ML model, as in this <a href=\"https:\/\/github.com\/dotnet\/machinelearning-samples\/tree\/master\/samples\/csharp\/getting-started\/DatabaseIntegration\">previous sample using LoadFromEnumerable()<\/a>.<\/p>\n<p>The new Database Loader provides a much simpler implementation because reading from the database and exposing the data through an IDataView is provided out-of-the-box by the <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> framework; you just specify your database connection string, the SQL statement that selects the dataset columns, and the data class to use when loading the data. 
It is that simple!<\/p>\n<p>Here\u2019s example code showing how easily you can load data directly from a relational database into an IDataView to be used later when training your model.<\/p>\n<pre><code class=\"cs\">\/\/ Load data from a database into an IDataView for later model training\n\/\/...\nstring connectionString = @\"Data Source=YOUR_SERVER;Initial Catalog=YOUR_DATABASE;Integrated Security=True\";\n\nstring commandText = \"SELECT * FROM SentimentDataset\";\n\nDatabaseLoader loader = mlContext.Data.CreateDatabaseLoader&lt;SentimentData&gt;();\n\n\/\/ On .NET Core, register the provider factory once at startup so it can be resolved by name:\n\/\/ DbProviderFactories.RegisterFactory(\"System.Data.SqlClient\", SqlClientFactory.Instance);\nDbProviderFactory providerFactory = DbProviderFactories.GetFactory(\"System.Data.SqlClient\");\nDatabaseSource dbSource = new DatabaseSource(providerFactory, connectionString, commandText);\n\nIDataView trainingDataView = loader.Load(dbSource);\n\n\/\/ ML.NET model training code using the training IDataView\n\/\/...\n\npublic class SentimentData\n{\n    public string FeedbackText;\n    public string Label;\n}\n<\/code><\/pre>\n<p>It is important to highlight that, just as when training from files, training from a database also <strong>supports data streaming<\/strong>: the whole database doesn&#8217;t need to fit into memory because ML.NET reads from the database as needed, so you can handle very large databases (e.g. 
50GB, 100GB or larger).<\/p>\n<p>Resources for the DatabaseLoader:<\/p>\n<ul>\n<li>\n<p><strong>Sample app:<\/strong> For further learning see this <a href=\"https:\/\/github.com\/dotnet\/machinelearning-samples\/tree\/master\/samples\/csharp\/getting-started\/DatabaseLoader\">complete sample app using the new DatabaseLoader<\/a>.<\/p>\n<\/li>\n<li>\n<p><strong>&#8216;How to&#8217; doc:<\/strong> For a step by step explanation, follow the <a href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/machine-learning\/how-to-guides\/load-data-ml-net#load-data-from-a-relational-database\">Load data from a relational database &#8216;How to&#8217; document<\/a><\/p>\n<\/li>\n<\/ul>\n<h3>PredictionEnginePool for scalable deployments released as GA<\/h3>\n<p><img decoding=\"async\" src=\"https:\/\/raw.githubusercontent.com\/dotnet\/machinelearning-samples\/master\/images\/web.png\" alt=\"WebApp icon\" \/> <img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/dotnet\/wp-content\/uploads\/sites\/10\/2019\/08\/icon-azure-functions.png\" alt=\"Azure Function icon\" \/><\/p>\n<p>When deploying an ML model into multithreaded and scalable .NET Core web applications and services (such as <a href=\"https:\/\/docs.microsoft.com\/en-us\/aspnet\/core\/?view=aspnetcore-3.0\">ASP.NET Core<\/a> web apps, <strong>WebAPIs<\/strong> or an <strong>Azure Function<\/strong>) it is recommended to use the <code>PredictionEnginePool<\/code> instead of directly creating the <code>PredictionEngine<\/code> object on every request due to performance and scalability reasons.<\/p>\n<p>The <code>PredictionEnginePool<\/code> comes as part of the <code>Microsoft.Extensions.ML<\/code> NuGet package which is being released as GA as part of the ML.NET 1.4 release.<\/p>\n<p>For further details on how to deploy a model with the PredictionEnginePool, read the following resources:<\/p>\n<ul>\n<li><strong>Tutorials:<\/strong>\n<ul>\n<li><a 
href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/machine-learning\/how-to-guides\/serve-model-web-api-ml-net\">Deploy a model into an ASP.NET Core Web API<\/a>. <\/li>\n<li><a href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/machine-learning\/how-to-guides\/serve-model-serverless-azure-functions-ml-net\">Deploy a model into an Azure Function<\/a>. <\/li>\n<\/ul>\n<\/li>\n<li><strong>Sample App:<\/strong> \n<ul>\n<li><a href=\"https:\/\/github.com\/dotnet\/machinelearning-samples\/tree\/master\/samples\/csharp\/end-to-end-apps\/ScalableMLModelOnWebAPI-IntegrationPkg\">ASP.NET Core Web API using an ML.NET model<\/a>. <\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>For further background information on why the <code>PredictionEnginePool<\/code> is recommended, read this <a href=\"https:\/\/devblogs.microsoft.com\/cesardelatorre\/how-to-optimize-and-run-ml-net-models-on-scalable-asp-net-core-webapis-or-web-apps\/\">blog post<\/a>.<\/p>\n<h3>Enhanced for .NET Core 3.0 \u2013 Released as GA<\/h3>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68167643-05eb0900-ff1b-11e9-82c1-ea5447d175d2.png\" alt=\".NET Core 3.0 icon\" \/><\/p>\n<p><a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> is now building for .NET Core 3.0 (optional). <em>This feature was previosly released as preview but it is <strong>now released as GA<\/strong>.<\/em><\/p>\n<p>This means <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> can take advantage of the new features when running in a .NET Core 3.0 application. 
The first new feature we are using is the new hardware intrinsics feature, which allows .NET code to accelerate math operations by using processor specific instructions.<\/p>\n<p>Of course, you can still run <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> on older versions, but when running on .NET Framework, or .NET Core 2.2 and below, <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> uses C++ code that is hard-coded to x86-based SSE instructions. SSE instructions allow for four 32-bit floating-point numbers to be processed in a single instruction.<\/p>\n<p>Modern x86-based processors also support AVX instructions, which allow for processing eight 32-bit floating-point numbers in one instruction. <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a>\u2019s C# hardware intrinsics code supports both AVX and SSE instructions and will use the best one available. This means when training on a modern processor, <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> will now train faster because it can do more concurrent floating-point operations than it could with the existing C++ code that only supported SSE instructions.<\/p>\n<p>Another advantage the C# hardware intrinsics code brings is that when neither SSE nor AVX are supported by the processor, for example on an ARM chip, <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> will fall back to doing the math operations one number at a time. This means more processor architectures are now supported by the core <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> components. (Note: There are still some components that don\u2019t work on ARM processors, for example <em>FastTree<\/em>, <em>LightGBM<\/em>, and <em>OnnxTransformer<\/em>. 
These components are written in C++ code that is not currently compiled for ARM processors).<\/p>\n<p>For more information on how <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> uses the new hardware intrinsics APIs in .NET Core 3.0, please check out <em>Brian Lui\u2019s blog post<\/em> <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/using-net-hardware-intrinsics-api-to-accelerate-machine-learning-scenarios\/\">Using .NET Hardware Intrinsics API to accelerate machine learning scenarios<\/a>.<\/p>\n<h2>Use ML.NET in Jupyter notebooks<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68157050-be0bb800-ff01-11e9-9726-58d0814857a1.png\" alt=\"Jupyter and MLNET logos\" \/><\/p>\n<p>Coinciding with <em>Microsoft Ignite 2019<\/em> Microsoft is also <strong><a href=\"https:\/\/aka.ms\/jupyterdotnetblogpost\">announcing the new .NET support on Jupyter notebooks<\/a><\/strong>, so you can now run <em>any<\/em> .NET code (C# \/ F#) in Jupyter notebooks and therefore run <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> code in it as well! 
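For instance, a first notebook cell might create an MLContext, load a dataset, and peek at it with the IDataView <code>Preview<\/code> method. This is a minimal sketch; the CSV file name and column layout are hypothetical.

```cs
using Microsoft.ML;
using Microsoft.ML.Data;

// Hypothetical input schema for an illustrative CSV file
public class TaxiTrip
{
    [LoadColumn(0)] public float TripDistance;
    [LoadColumn(1)] public float FareAmount;
}

var mlContext = new MLContext(seed: 1);

// Load the data lazily into an IDataView (streamed, not read up front)
IDataView data = mlContext.Data.LoadFromTextFile<TaxiTrip>(
    "taxi-fare.csv", hasHeader: true, separatorChar: ',');

// Preview materializes a few rows so you can inspect them in the notebook
var preview = data.Preview(maxRows: 5);
```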
&#8211; Under the covers, this is enabled by the new <em>.NET kernel for Jupyter<\/em>.<\/p>\n<p>The <strong><a href=\"https:\/\/jupyter.org\/\">Jupyter Notebook<\/a><\/strong> is an open-source web application that allows you to create and share documents that contain live code, visualizations and narrative text.<\/p>\n<p>In terms of <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> this is awesome for many scenarios like <em>exploring and documenting model training experiments, data distribution exploration, data cleaning, plotting data charts, learning scenarios<\/em> such as <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> <em>courses, hands-on-labs and quizzes, etc.<\/em><\/p>\n<p>You can simply start <strong>exploring<\/strong> what kind of <strong>data<\/strong> is loaded in an <code>IDataView<\/code>:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68157750-2018ed00-ff03-11e9-8e22-1b485417e3d8.png\" alt=\"Exploring data in Jupyter\" \/><\/p>\n<p>Then you can continue by <strong>plotting<\/strong> data distribution in the Jupyter notebook following an <em>Exploratory Data Analysis (EDA)<\/em> approach:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68157932-80a82a00-ff03-11e9-99dc-71465627f0bd.png\" alt=\"Plotting in Jupyter\" \/><\/p>\n<p>You can also <strong>train<\/strong> an ML.NET model and have its training time documented:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68158040-b64d1300-ff03-11e9-8c7e-0cda9ebd0d63.png\" alt=\"Training in Jupyter\" \/><\/p>\n<p>Right afterwards you can see the model\u2019s <strong>quality metrics<\/strong> in the notebook, and have it documented for later review:<\/p>\n<p><img decoding=\"async\" 
src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68158121-d8469580-ff03-11e9-822b-6bc4af156262.png\" alt=\"Metrics in Jupyter\" \/><\/p>\n<p>Additional examples are <strong>\u2018plotting the results of predictions vs. actual data\u2019<\/strong> and <strong>\u2018plotting a regression line along with the predictions vs. actual data\u2019<\/strong> for a better and visual analysis:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68158235-19d74080-ff04-11e9-9281-2dbc7dcbdaae.png\" alt=\"Jupyter additional\" \/><\/p>\n<p>For additional explanation details, check out this detailed blog post:<\/p>\n<ul>\n<li><strong>Detailed Blog Post:<\/strong> <a href=\"http:\/\/devblogs.microsoft.com\/cesardelatorre\/using-ml-net-in-jupyter-notebooks\/\">Using ML.NET on Jupyter notebooks &#8211; Blog Post<\/a> <\/li>\n<\/ul>\n<p>For a direct \u201ctry it out experience\u201d, please go to this Jupyter notebook hosted at MyBinder and simply run the ML.NET code:<\/p>\n<ul>\n<li><a href=\"https:\/\/aka.ms\/mlnetonjupytersamplebinder\"><img decoding=\"async\" src=\"https:\/\/mybinder.org\/badge_logo.svg\" alt=\"Binder\" \/><\/a> <a href=\"https:\/\/aka.ms\/mlnetonjupytersamplebinder\"><strong>Live Jupyter Notebook with ML.NET<\/strong><\/a>. 
This will launch a ready-to-use Jupyter environment on the web for trying the experience without needing to install anything.<\/li>\n<\/ul>\n<h2>Updates for Model Builder in Visual Studio<\/h2>\n<p>The Model Builder tool for Visual Studio has been updated to use the latest <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> GA version (1.4 GA), plus it includes exciting new features such as the <strong>visual experience in Visual Studio<\/strong> for <strong>local Image Classification model training<\/strong>.<\/p>\n<h2>Model Builder updated to latest <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> GA version<\/h2>\n<p>Model Builder was updated to use the latest GA version of <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> (1.4) and therefore the generated C# code also references <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> 1.4 NuGet packages.<\/p>\n<h2>Visual and local Image Classification model training in VS<\/h2>\n<p>As introduced at the beginning of this blog post, you can locally train an Image Classification model with the ML.NET API. However, when dealing with image files and image folders, the easiest way to do it is with a visual interface like the one provided by Model Builder in Visual Studio, as you can see in the image below:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68260820-01921f00-fff3-11e9-886e-831a49dfbd95.png\" alt=\"Model Builder in VS\" \/><\/p>\n<p>When using Model Builder to train an Image Classifier model, you simply select the folder (with a structure based on one sub-folder per image class) containing the images to use for training and evaluation, and then start training the model. 
Then, when training finishes, you get generated C# code for inference\/predictions, and even for training if you want to train with C# code from other environments like CI pipelines. It\u2019s that easy!<\/p>\n<ul>\n<li>\n<p>Find further details about Model Builder and Image Classification in this blog post: <a href=\"https:\/\/devblogs.microsoft.com\/dotnet\/model-builder-updates-mlnet\/\">ML.NET Model Builder Updates<\/a><\/p>\n<\/li>\n<li>\n<p><a href=\"https:\/\/marketplace.visualstudio.com\/items?itemName=MLNET.07\">Download ML.NET Model Builder here<\/a><\/p>\n<\/li>\n<\/ul>\n<h2>Try <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> and Model Builder today!<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/user-images.githubusercontent.com\/1712635\/68166828-75132e00-ff18-11e9-9686-f70202f7c040.png\" alt=\"ML.NET logo\" \/><\/p>\n<ul>\n<li>Get started with <a href=\"https:\/\/www.microsoft.com\/net\/learn\/apps\/machine-learning-and-ai\/ml-dotnet\/get-started\">ML.NET here<\/a>.<\/li>\n<li>Get started with <a href=\"https:\/\/aka.ms\/modelbuilder\">Model Builder here<\/a>.<\/li>\n<li>Refer to <a href=\"https:\/\/docs.microsoft.com\/dotnet\/machine-learning\/\">documentation<\/a> for tutorials and more resources.<\/li>\n<li>Learn from <a href=\"https:\/\/github.com\/dotnet\/machinelearning-samples\">sample apps<\/a> targeting a variety of scenarios.<\/li>\n<li>Watch free <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> videos at the <a href=\"https:\/\/aka.ms\/mlnetyoutube\">ML.NET YouTube playlist<\/a>. <\/li>\n<\/ul>\n<p>We are excited to release these updates for you, and we look forward to seeing what you will build with <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a>. 
If you have any questions or feedback, you can ask here at this blog post or at the <a href=\"https:\/\/github.com\/dotnet\/machinelearning\/issues\">ML.NET repo at GitHub<\/a>.<\/p>\n<p>Happy coding!<\/p>\n<p>The <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> team.<\/p>\n<p><em>This blog was authored by Cesar de la Torre plus additional contributions of the <a href=\"https:\/\/dotnet.microsoft.com\/learn\/ml-dotnet\/what-is-mldotnet\">ML.NET<\/a> team.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Coinciding with the Microsoft Ignite 2019 conference, we are thrilled to announce the GA release of ML.NET 1.4 and updates to Model Builder in Visual Studio, with exciting new machine learning features that will allow you to innovate your .NET applications. ML.NET is an open-source and cross-platform machine learning framework for .NET developers. ML.NET also [&hellip;]<\/p>\n","protected":false},"author":362,"featured_media":58792,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[685,196,195,328,688,691],"tags":[],"class_list":["post-25195","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-dotnet","category-dotnet-core","category-dotnet-framework","category-aiml","category-machine-learning","category-ml-dotnet"],"acf":[],"blog_post_summary":"<p>Coinciding with the Microsoft Ignite 2019 conference, we are thrilled to announce the GA release of ML.NET 1.4 and updates to Model Builder in Visual Studio, with exciting new machine learning features that will allow you to innovate your .NET applications. ML.NET is an open-source and cross-platform machine learning framework for .NET developers. 
ML.NET also [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts\/25195","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/users\/362"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/comments?post=25195"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/posts\/25195\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/media\/58792"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/media?parent=25195"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/categories?post=25195"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/dotnet\/wp-json\/wp\/v2\/tags?post=25195"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}