February 2nd, 2023

On-device machine learning with ONNX

Craig Dunn
Principal SW Engineer

Hello Android developers,

This week we’re going to get started with on-device machine learning using the ONNX Runtime, and check out an Android sample that identifies objects in the camera video stream.

What is ONNX?

ONNX stands for Open Neural Network Exchange and is an open-source format for AI models. ONNX enables interoperability between machine learning frameworks, along with optimization and acceleration options on each supported platform.

The ONNX Runtime is available on a wide variety of platforms, and gives developers the tools to run machine learning models locally, on-device. Pre-trained models can be exported to ONNX for distribution on many platforms, including Android. The ONNX community includes many companies that embed the runtime in their products, provide models, and contribute to the ecosystem.
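To give a feel for the API, here is a minimal Kotlin sketch of running a single inference with the ONNX Runtime Android package (assuming the com.microsoft.onnxruntime:onnxruntime-android Gradle dependency; the input name "input" and the 1x3x224x224 shape are assumptions for a MobileNet V2-style model, not taken from the sample):

import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession
import java.nio.FloatBuffer

// Sketch: create a session from model bytes and run one inference.
// In a real app, create the environment and session once and reuse them.
fun runInference(modelBytes: ByteArray, pixels: FloatArray): FloatArray {
    val env = OrtEnvironment.getEnvironment()
    val session = env.createSession(modelBytes, OrtSession.SessionOptions())
    // Assumed input: a 1x3x224x224 float tensor (NCHW) named "input" --
    // check your model's actual input metadata before reusing this.
    val shape = longArrayOf(1, 3, 224, 224)
    val tensor = OnnxTensor.createTensor(env, FloatBuffer.wrap(pixels), shape)
    val results = session.run(mapOf("input" to tensor))
    val scores = (results[0].value as Array<FloatArray>)[0]
    results.close()
    tensor.close()
    session.close()
    return scores
}

In practice you would keep the session alive and call run once per camera frame rather than rebuilding it each time.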

Get started with machine learning on Android

To see a machine learning model in action, follow the instructions for the mobile image recognition example on Android on the ONNX Runtime website. The example uses an existing pre-trained model – MobileNet V2 – which is converted from ONNX into the runtime’s optimized ORT format for use in the app. While there are instructions for doing the conversion yourself, it’s easier to download the converted files directly if you just want to see the sample run.

Once you have the ONNX model and classification files, clone the Android source code from GitHub.

Before you build, copy the model and label files into the app’s raw resources directory and ensure the filenames match the resource references in the Kotlin code (MainActivity.kt lines 144-154):

// Read MobileNet V2 classification labels
private fun readLabels(): List<String> {
    return resources.openRawResource(R.raw.imagenet_classes).bufferedReader().readLines()
}
// Read ort model into a ByteArray, run in background
private suspend fun readModel(): ByteArray = withContext(Dispatchers.IO) {
    val modelID =
        if (enableQuantizedModel) R.raw.mobilenetv2_uint8 else R.raw.mobilenetv2_float
    resources.openRawResource(modelID).readBytes()
}
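
For context, here’s a rough sketch of how these helpers might be wired together when the activity starts (simplified and hypothetical; the sample’s actual MainActivity differs in detail):

import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.launch

// Inside the Activity: read the model off the main thread, then create
// the ONNX Runtime session once and keep it around for later frames.
private var ortSession: OrtSession? = null
private var labels: List<String> = emptyList()

private fun createOrtSession() {
    lifecycleScope.launch {
        labels = readLabels()
        val modelBytes = readModel()   // suspends on Dispatchers.IO
        val env = OrtEnvironment.getEnvironment()
        ortSession = env.createSession(modelBytes, OrtSession.SessionOptions())
    }
}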

Note that the download includes an older version and a v5 version of the models – I found that I needed the v5 files (which I renamed to match the resource names in code).

When you run the sample and grant camera permission, the app displays the top three matches for the subject in the video frame:

Figure 2: Inference percentages when detecting a dog breed (screenshot shows an example classification of a toy terrier)
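
The top three matches are just a top-k over the model’s output scores. As a reference point, here’s one way to compute it in Kotlin against the labels from readLabels() (the topK function is my own sketch, not code from the sample):

// Return the k highest-scoring (label, score) pairs.
// Assumes scores[i] lines up with labels[i].
fun topK(scores: FloatArray, labels: List<String>, k: Int = 3): List<Pair<String, Float>> =
    scores.withIndex()
        .sortedByDescending { it.value }
        .take(k)
        .map { labels[it.index] to it.value }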

This ONNX Android sample only performs image classification, but there are other mobile samples covering basic usage, speech recognition, object detection, and other vision models. Using these samples as a foundation, you can incorporate other trained models into your apps!

Feedback and resources

More information about the ONNX Runtime is available at onnxruntime.ai and also on YouTube.

If you have any questions about applying machine learning, or would like to tell us about your apps, use the feedback forum or message us on Twitter @surfaceduodev.

There won’t be a livestream this week, but check out the archives on YouTube. We’ll see you online again soon!

Author

Craig Dunn
Principal SW Engineer

Craig works on the Surface Duo Developer Experience team, where he enjoys writing cross-platform code for Android using a variety of tools including the web, React Native, Flutter, Unity, and Xamarin.
