{"id":3666,"date":"2023-12-21T09:07:05","date_gmt":"2023-12-21T17:07:05","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/surface-duo\/?p=3666"},"modified":"2024-01-03T11:17:10","modified_gmt":"2024-01-03T19:17:10","slug":"flutter-onnx-runtime","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/surface-duo\/flutter-onnx-runtime\/","title":{"rendered":"Use ONNX Runtime in Flutter"},"content":{"rendered":"<p>Hello Flutter developers!<\/p>\n<p>\n  After recently reading about how <a href=\"https:\/\/cloudblogs.microsoft.com\/opensource\/2023\/02\/08\/performant-on-device-inferencing-with-onnx-runtime\/\">Pieces.app uses ONNX runtime inside a Flutter app<\/a>, I was determined to try it myself. This article shows a summary of the journey I took and provides a few tips for you if you want to do the same.\n<\/p>\n<p>\n  Since we have <a href=\"https:\/\/dart.dev\/interop\/c-interop\">FFI in Dart for calling C code<\/a> and <a href=\"https:\/\/onnxruntime.ai\/docs\/get-started\/with-c.html\">ONNX Runtime offers a C library<\/a>, this is the best way to integrate across most platforms. Before I walk down that path, I decide to have a look at pub.dev to see if anyone did this before me. My thinking here is that anything running ONNX Runtime is a good starting point, even if I must contribute to the project to make it do what I need. In the past, if a plugin lacked functionality, I would fork it, write what was missing and then <a href=\"https:\/\/docs.flutter.dev\/packages-and-plugins\/using-packages#dependencies-on-unpublished-packages\">use the fork as a git dependency<\/a>. 
When appropriate, I would also open a PR to upstream the changes.\n<\/p>\n<p><div class=\"alert alert-success\">Tip: The easiest way to contribute to OSS is to solve your own issues and upstream the changes.<\/div><\/p>\n<p>\n  <img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-a-pub-dev-search-for-the-term-onnx.png\" class=\"wp-image-3667\" alt=\"A screenshot of a pub.dev search for the term ONNX. Four results show up. Each has a small number of likes and low popularity.\" width=\"600\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-a-pub-dev-search-for-the-term-onnx.png 1362w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-a-pub-dev-search-for-the-term-onnx-300x293.png 300w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-a-pub-dev-search-for-the-term-onnx-1024x1001.png 1024w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-a-pub-dev-search-for-the-term-onnx-768x751.png 768w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-a-pub-dev-search-for-the-term-onnx-24x24.png 24w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-a-pub-dev-search-for-the-term-onnx-48x48.png 48w\" sizes=\"(max-width: 1362px) 100vw, 1362px\" \/><br\/><em>Figure 1: Searching for ONNX on pub.dev<\/em>\n<\/p>\n<p>\n  When I\u2019m <a href=\"https:\/\/pub.dev\/packages?q=onnx\">searching for ONNX<\/a>, four packages show up. As sometimes happens on pub.dev, some packages get started and published but never finished. 
After looking at the code, I concluded that only <a href=\"https:\/\/pub.dev\/packages\/onnxruntime\">onnxruntime<\/a> has enough work put into it that it\u2019s worth giving it a shot. At first glance, it seems to only run on Android and iOS, but digging into the source shows it is based on the ONNX Runtime C Library and uses Dart FFI, which means I can make it run on other platforms down the line. Off I go with a brand-new Flutter project <code>flutter create onnxflutterplay<\/code> and then <code>flutter pub add onnxruntime<\/code>.\n<\/p>\n<p><div class=\"alert alert-success\">Tip: Whenever you decide which library to use, have a look at the code and the issues raised on GitHub. This gives you a better picture of overall quality and completeness.<\/div><\/p>\n<p>\n  The library comes with an <a href=\"https:\/\/github.com\/gtbluesky\/onnxruntime_flutter\/tree\/main\/example\">example<\/a>. It seems to be an audio processing sample, which is far too complicated for where I am right now. I want to understand the basics and run the simplest ONNX model I can think of. This will also prove to me that the plugin works. I start searching and end up with the model from the <a href=\"https:\/\/github.com\/microsoft\/onnxruntime-inference-examples\/tree\/main\/mobile\/examples\/basic_usage\/model\">ONNX Runtime basic usage example<\/a>. It takes two float numbers as input and outputs their sum. I follow the instructions and generate my first-ever ORT model. This is how the model looks in <a href=\"https:\/\/github.com\/lutzroeder\/netron\">Netron<\/a>.\n<\/p>\n<p>\n  <img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-simple-model.png\" class=\"wp-image-3668\" alt=\"Screenshot of the Netron app, with a simple model open that requires two float inputs. 
The model outputs a single float value.\" width=\"600\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-simple-model.png 868w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-simple-model-279x300.png 279w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-simple-model-768x826.png 768w\" sizes=\"(max-width: 868px) 100vw, 868px\" \/><br\/><em>Figure 2: Netron app showing a simple model<\/em>\n<\/p>\n<p>\n  To figure out how to use the model, I have a few resources at my disposal. First, I have the <a href=\"https:\/\/github.com\/microsoft\/onnxruntime-inference-examples\/blob\/main\/mobile\/examples\/basic_usage\/ios\/OrtBasicUsage\/SwiftOrtBasicUsage.swift\">sample code from the model repo<\/a>, which is Swift code and might be intimidating, but is well documented and quite similar to Kotlin and Dart. I need to be comfortable looking at other languages anyway, since most AI researchers use Python. I see the names \u201cA\u201d, \u201cB\u201d and \u201cC\u201d and the float type being used explicitly. The other resource I have is <a href=\"https:\/\/github.com\/gtbluesky\/onnxruntime_flutter\/blob\/main\/example\/lib\/model_type_test.dart\">a test from the flutter plugin<\/a>. It uses simple data types for input and output, which shows me how to pack \u201cA\u201d and \u201cB\u201d inputs properly. You can see the <a href=\"https:\/\/github.com\/andreidiaconu\/onnxflutterplay\/blob\/main\/lib\/main.dart\">complete code on GitHub<\/a>. This is what I end up with:\n<\/p>\n<p>\n  <img decoding=\"async\" width=\"1005\" height=\"690\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-vo.png\" class=\"wp-image-3669\" alt=\"A screenshot of code. 
The code is as follows:\n  void _inferSingleAdd() async {\n    OrtEnv.instance.init();\n    final sessionOptions = OrtSessionOptions();\n    final rawAssetFile = await rootBundle.load(&quot;assets\/models\/single_add.ort&quot;);\n    final bytes = rawAssetFile.buffer.asUint8List();\n    final session = OrtSession.fromBuffer(bytes, sessionOptions);\n    final runOptions = OrtRunOptions();\n    final inputOrt = OrtValueTensor.createTensorWithDataList(\n        Float32List.fromList([5.9]),\n    );\n    final inputs = {'A':inputOrt, 'B': inputOrt};\n    final outputs = session.run(runOptions, inputs);\n    inputOrt.release();\n    runOptions.release();\n    sessionOptions.release();\n    \/\/ session.release();\n    OrtEnv.instance.release();\n    List c = outputs[0]?.value as List;\n    print(c[0] ?? &quot;none&quot;);\n  }\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-vo.png 1005w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-vo-300x206.png 300w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-vo-768x527.png 768w\" sizes=\"(max-width: 1005px) 100vw, 1005px\" \/><br\/><em>Figure 3: Code for inferring the simple model<\/em>\n<\/p>\n<p>\n  I run into some exceptions with the <code>session.release()<\/code> call. From my investigations, this library might expect to be called from an isolate, and I am not doing that yet. To move past the errors, I simply commented out that line \u2013 but if I were doing this for a production app, I would give the isolate a try and investigate further. For now, this will do.\n<\/p>\n<p><div class=\"alert alert-success\">Tip: When setting up ONNX Runtime, use a simple model. 
It eliminates issues that stem from processing, model complexity, supported operators, and so on.<\/div><\/p>\n<p>\n  The next step in my journey is to try a larger model. My end goal here is to work with images, and I feel prepared to start using the simplest model I can find. The perfect model to continue with is one that takes an image input and only applies a color filter or some other easy-to-debug operation. I start looking for such a model but can\u2019t find one. I land on a style transfer model from the <a href=\"https:\/\/github.com\/onnx\/models\/tree\/main\/validated\/vision\/style_transfer\/fast_neural_style\">ONNX Model Zoo archive<\/a>. I pick the pretrained mosaic model and immediately open it in <a href=\"https:\/\/github.com\/lutzroeder\/netron\">Netron<\/a>.\n<\/p>\n<p>\n  <img decoding=\"async\" width=\"1372\" height=\"913\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-model-open-th.png\" class=\"wp-image-3670\" alt=\"Screenshot of the Netron app, with a model open that requires large float matrices as input and output. The model is complex and large.\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-model-open-th.png 1372w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-model-open-th-300x200.png 300w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-model-open-th-1024x681.png 1024w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/screenshot-of-the-netron-app-with-a-model-open-th-768x511.png 768w\" sizes=\"(max-width: 1372px) 100vw, 1372px\" \/><br\/><em>Figure 4: Netron showing a complex model<\/em>\n<\/p>\n<p>\n  You can clearly see the input and output there: float32[1,3,224,224]. 
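Those numbers are easier to internalize with a throwaway snippet. Here is a quick sketch in Python with NumPy, purely for illustration (the app itself is written in Dart; nothing here is part of the plugin API):

```python
import numpy as np

# An all-red 224x224 image laid out the way this model wants it:
# [batch, channels, height, width], the NCHW layout.
tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)
tensor[0, 0, :, :] = 255.0  # channel 0 is red; green and blue stay 0

print(tensor.shape)  # (1, 3, 224, 224)
print(tensor.size)   # 150528 floats in total, i.e. 1 * 3 * 224 * 224
```

The same arithmetic gives the 50176 (224x224) red values per channel that I rely on later when debugging the tensor data.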
The numbers in brackets represent the shape of the tensor. The shape is important because we process our input and output to match it. Whenever I did not respect that shape, I got a runtime error telling me the model expected something else. You can feed some models raw PNG or JPEG files, but not this one; it requires a bit of processing.\n<\/p>\n<p><div class=\"alert alert-success\">Tip: Install Netron so you can view a model on your PC with a simple double-click. Always check the inputs and outputs this way to avoid confusion.<\/div><\/p>\n<p>\n  I did not know about tensor shapes before this work, so maybe it\u2019s worth pausing a bit to discuss what they mean. If you have a simple matrix with 10 rows of 100 elements each, the shape is [10, 100]. The shape is the number of elements along each axis of the tensor. For an experienced computer vision developer, I expect that something like [1, 3, 224, 224] immediately screams \u201cone image with 3 channels per pixel (Red, Green, Blue) of size 224 by 224 pixels\u201d.\n<\/p>\n<p>\n  I first convert the ONNX file into ORT format and then add it to the app. I also prepare an image. I do not want to fiddle with resizing and transforming the input or output yet, so I fire up mspaint and make a 224-by-224-pixel, completely red image. During debugging, I also make a half red, half green image.\n<\/p>\n<p>  <img decoding=\"async\" width=\"224\" height=\"224\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-red-square-the-red-used-here-is-the-purest-red.png\" class=\"wp-image-3671\" alt=\"A red square. The red used here is the purest red available to a computer, meaning a 255 value for red and 0 for green and blue. 
This is typically represented as #FF0000\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-red-square-the-red-used-here-is-the-purest-red.png 224w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-red-square-the-red-used-here-is-the-purest-red-150x150.png 150w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-red-square-the-red-used-here-is-the-purest-red-24x24.png 24w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-red-square-the-red-used-here-is-the-purest-red-48x48.png 48w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-red-square-the-red-used-here-is-the-purest-red-96x96.png 96w\" sizes=\"(max-width: 224px) 100vw, 224px\" \/><br\/><em>Figure 5: Red square<\/em>\n<\/p>\n<p>\n  <img decoding=\"async\" width=\"224\" height=\"224\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-square-split-down-the-middle-vertically-the-lef.png\" class=\"wp-image-3672\" alt=\"A square split down the middle vertically. 
The left side is red and the right side is green.\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-square-split-down-the-middle-vertically-the-lef.png 224w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-square-split-down-the-middle-vertically-the-lef-150x150.png 150w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-square-split-down-the-middle-vertically-the-lef-24x24.png 24w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-square-split-down-the-middle-vertically-the-lef-48x48.png 48w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-square-split-down-the-middle-vertically-the-lef-96x96.png 96w\" sizes=\"(max-width: 224px) 100vw, 224px\" \/><br\/><em>Figure 6: Half red, half green square<\/em>\n<\/p>\n<p>\n  A red image of the exact size I need gives me an easy-to-debug input. Working with ONNX Runtime, or machine learning in general, turns out to involve a lot of pre- and post-processing. \n<\/p>\n<p><div class=\"alert alert-success\">Tip: Working with images means very large arrays, which are hard to follow. Whenever some input or output is hard to debug, ask yourself what image you can manufacture to help find the problem.<\/div><\/p>\n<p>\n  For example, colors for each pixel are represented differently in Flutter or Android compared to these ONNX models. To drive this point home, let\u2019s consider an unusual 1&#215;10 image. We have 10 pixels in total. Each has 4 color components. Let\u2019s number each pixel 1 to 10 and each color component R (Red), G (Green), B (Blue) and A (Alpha). 
In the sample below, Flutter stores the image as:\n<\/p>\n<pre>\r\n  R1 G1 B1 A1 R2 G2 B2 A2 R3 G3 B3 A3 [\u2026] R10 G10 B10 A10\r\n<\/pre>\n<p>\n  From what I see, due to how tensor reshaping works, the image data must look like this to produce the right ONNX Runtime tensor:\n<\/p>\n<pre>\r\n  R1 R2 R3 [\u2026] R10 G1 G2 G3 [\u2026] G10 B1 B2 B3 [\u2026] B10\r\n<\/pre>\n<p>\n  Reordering the colors and dropping the Alpha component to fit this format is our pre-processing, and the code looks like this:\n<\/p>\n<p>\n  <img decoding=\"async\" width=\"1176\" height=\"427\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu.png\" class=\"wp-image-3673\" alt=\"A screenshot of code. The code is as follows:\n  Future&lt;List&lt;double&gt;&gt; imageToFloatTensor(ui.Image image) async {\n    final imageAsFloatBytes = (await image.toByteData(format: ui.ImageByteFormat.rawRgba))!;\n    final rgbaUints = Uint8List.view(imageAsFloatBytes.buffer);\n\n    final indexed = rgbaUints.indexed;\n    return [\n    ...indexed.where((e) =&gt; e.$1 % 4 == 0).map((e) =&gt; e.$2.toDouble()),\n    ...indexed.where((e) =&gt; e.$1 % 4 == 1).map((e) =&gt; e.$2.toDouble()),\n    ...indexed.where((e) =&gt; e.$1 % 4 == 2).map((e) =&gt; e.$2.toDouble()),\n    ];\n  }\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu.png 1176w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu-300x109.png 300w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu-1024x372.png 1024w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu-768x279.png 768w\" sizes=\"(max-width: 1176px) 100vw, 
1176px\" \/><br\/><em>Figure 7: Code for converting image to tensor<\/em>\n<\/p>\n<p>\n  Working with a red image here helps me debug the actual numbers I see in the tensor data. I expect to see 50176 (224&#215;224) occurrences of the value 255 (maximum for red), followed by all zeros (green and blue).  The result I get back from the model output also needs to be processed back to a Flutter image. This does the exact opposite of the input processing. Notice that I added the alpha back and set it to 255:\n<\/p>\n<p>\n  <img decoding=\"async\" width=\"1234\" height=\"630\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu-1.png\" class=\"wp-image-3674\" alt=\"A screenshot of code. The code is as follows:\n  Future&lt;ui.Image&gt; floatTensorToImage(List tensorData) {\n    final outRgbaFloats = Uint8List(4 * 224 * 224);\n    for (int x = 0; x &lt; 224; x++) {\n      for (int y = 0; y &lt; 224; y++) {\n        final index = x * 224 * 4 + y * 4;\n        outRgbaFloats[index + 0] = tensorData[0][0][x][y].clamp(0, 255).toInt(); \/\/ r\n        outRgbaFloats[index + 1] = tensorData[0][1][x][y].clamp(0, 255).toInt(); \/\/ g\n        outRgbaFloats[index + 2] = tensorData[0][2][x][y].clamp(0, 255).toInt(); \/\/ b\n        outRgbaFloats[index + 3] = 255; \/\/ a\n      }\n    }\n    final completer = Completer&lt;ui.Image&gt;();\n    ui.decodeImageFromPixels(outRgbaFloats, 224, 224, ui.PixelFormat.rgba8888, (ui.Image image) {\n      completer.complete(image);\n    });\n\n    return completer.future;\n  }\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu-1.png 1234w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu-1-300x153.png 300w, 
https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu-1-1024x523.png 1024w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/a-screenshot-of-code-the-code-is-as-follows-fu-1-768x392.png 768w\" sizes=\"(max-width: 1234px) 100vw, 1234px\" \/><br\/><em>Figure 8: Code for converting tensor to image<\/em>\n<\/p>\n<p>\n  When working with images, input and output are usually formatted the same way, and post-processing mirrors what you do in pre-processing. You can feed the pre-processing output into the post-processing directly, without running the model, and then render the result to validate that the two are symmetrical. This does not mean that the model will work well with the data, but it can surface issues with your processing.\n<\/p>\n<p><div class=\"alert alert-success\">Tip: Working with images? Feed your pre-processing to your post-processing and display it on the screen. This makes many issues easy to spot.<\/div><\/p>\n<p><a name=\"bluetit\"><\/a>\n  And here is the result, using a photo of a <a href=\"#bluetit\" onclick=\"window.open('https:\/\/en.wikipedia.org\/wiki\/File:Eurasian_blue_tit_Lancashire.jpg')\">Eurasian blue tit<\/a> by <a href=\"https:\/\/commons.wikimedia.org\/wiki\/User:Baresi_franco\">Francis Franklin<\/a>:\n<\/p>\n<p><a href=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/flutter-onnx.jpg\"><img decoding=\"async\" src=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/flutter-onnx.jpg\" alt=\"Two images of birds: the left is a photo of a Eurasian blue tit, the right is a stylized interpretation generated by the sample code\" width=\"1380\" height=\"710\" class=\"alignnone size-full wp-image-3684\" srcset=\"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/flutter-onnx.jpg 1380w, 
https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/flutter-onnx-300x154.jpg 300w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/flutter-onnx-1024x527.jpg 1024w, https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/12\/flutter-onnx-768x395.jpg 768w\" sizes=\"(max-width: 1380px) 100vw, 1380px\" \/><\/a><br\/><em>Figure 9: Bird image before and after stylizing as mosaic<\/em><\/p>\n<p>\n  Throughout this journey, I learned that taking small steps is the way to go. Working with ORT can feel like using a black box, and baby steps are essential for understanding the input and output at every stage.\n<\/p>\n<p><div class=\"alert alert-success\">Tip: Take the smallest step you can think of. There is a lot that can go wrong when processing large tensors such as those for images. Creating bespoke images to use as input is also a skill you need to learn.<\/div><\/p>\n<h2>\n  Call to action\n<\/h2>\n<ul>\n<li><a href=\"https:\/\/github.com\/andreidiaconu\/onnxflutterplay\">Clone the project from GitHub<\/a> and continue from there, or follow along with the article and build your own project from scratch.\n  <\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Hello Flutter developers! After recently reading about how Pieces.app uses ONNX runtime inside a Flutter app, I was determined to try it myself. This article shows a summary of the journey I took and provides a few tips for you if you want to do the same. 
Since we have FFI in Dart for calling [&hellip;]<\/p>\n","protected":false},"author":54297,"featured_media":3683,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[740],"tags":[729,728],"class_list":["post-3666","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-machine-learning","tag-machine-learning","tag-onnx"],"acf":[],"blog_post_summary":"<p>Hello Flutter developers! After recently reading about how Pieces.app uses ONNX runtime inside a Flutter app, I was determined to try it myself. This article shows a summary of the journey I took and provides a few tips for you if you want to do the same. Since we have FFI in Dart for calling [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/posts\/3666","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/users\/54297"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/comments?post=3666"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/posts\/3666\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/media\/3683"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/media?parent=3666"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp\/v2\/categories?post=3666"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-json\/wp
\/v2\/tags?post=3666"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}