Building image recognition React app using ONNX.js

January 15, 2019


Maybe these are the berries for our goblet of wine that we predicted with >97% accuracy?

The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose combinations that are best for them. ONNX is developed and supported by a community of partners including AWS, Facebook OpenSource, Microsoft, AMD, IBM, Intel AI, etc.

On November 29th, 2018, Microsoft entered the arena of AI in the browser with the announcement of ONNX.js, its open-source library for running ONNX models in browsers. This gives web developers yet another option for running ML models in the browser and building amazing user experiences on the web.

With Keras.js and TensorFlow.js already available, Microsoft had to come up with a solution that delivers better results along with a good developer experience. IMHO, Microsoft has succeeded in the performance arena to a large extent. Here are a few things that make ONNX.js stand out.

  • ONNX.js can run on both CPU and GPU.
  • For running on the CPU, WebAssembly is adopted to execute models at near-native speed. Furthermore, ONNX.js utilizes Web Workers to provide a “multi-threaded” environment for parallelizing data processing. This is a really great feature, as Keras.js and TensorFlow.js don't support WebAssembly on any browser.
  • For running on the GPU, ONNX.js adopts WebGL, a popular standard for accessing GPU capabilities.

Here are the results of benchmarking done by Microsoft. Read more about it here.

From ONNX.js GitHub repo

Despite having such outstanding performance attributes, ONNX.js lacks some basic utility functions that are available in TensorFlow.js, such as converting an image to a tensor. Since it is an open-source library, we can expect the community to add such utilities for developers soon.


We tried developing a simple React app that labels an image using a SqueezeNet model.

Let’s build a React App with TypeScript.

npx create-react-app onnx-hearbeat --typescript
yarn add onnxjs blueimp-load-image ndarray ndarray-ops lodash

We’ll use blueimp-load-image for drawing images and ndarray & ndarray-ops for processing images later in this tutorial.

Import Tensor and InferenceSession from ONNX.js:

import {Tensor, InferenceSession} from 'onnxjs';

In order to execute any model, we have to create an InferenceSession in ONNX.js. It encapsulates the environment that ONNX.js operations need in order to execute, and it loads and runs ONNX models with the desired configuration.

const session = new InferenceSession({ backendHint: 'webgl' });

As seen above, the InferenceSession constructor takes a configuration object; both the backend hint and the profiler configuration are optional.

backendHint: Specifies a preferred backend for model execution. Currently, the available backend hints are:

  • cpu: CPU backend
  • wasm: WebAssembly backend
  • webgl: WebGL backend

If not set, the backend will be determined by the platform and environment.

profiler: An object (Config.Profiler) specifying the profiler configuration used in an InferenceSession. If not set, the profiler runs with the default configuration.

Now our session is ready to load our model. This operation may take a little longer depending upon your Internet connectivity and the size of your model. The SqueezeNet 1.1 model used in our demo application is 5.9 MB.

const url = './models/squeezenet1_1.onnx';
await session.loadModel(url);
Image from Flickr

Let us pick an image of a goblet of wine and prepare it for our SqueezeNet model.

In order to label an image, we have to convert our image into tensors that will work as the input for our ONNX model. To process an image, we have to load it into the DOM inside a canvas. As discussed above, we'll use the blueimp-load-image package for loading images from URLs onto a canvas.

The 2D context of this canvas element can now be used to convert the images into tensors. Here is our utility method for converting images into tensors:
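The original snippet is not reproduced here, so the following is a minimal sketch of what such a utility could look like. It assumes `data` is the RGBA pixel array obtained from the canvas via `ctx.getImageData(0, 0, width, height).data`, and it produces the flat Float32Array for an NCHW tensor of shape `[1, 3, height, width]`; the function name and signature are illustrative, not the article's original code:

```typescript
// Sketch: convert RGBA canvas pixel data into planar (channel-major) NCHW
// float data suitable for building an ONNX.js Tensor. Assumes the image has
// already been drawn at the model's expected resolution (e.g. 224x224).
function imageDataToTensorData(
  data: Uint8ClampedArray | number[],
  width: number,
  height: number
): Float32Array {
  const out = new Float32Array(3 * width * height);
  const plane = width * height;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const px = (y * width + x) * 4; // RGBA stride of 4; alpha is dropped
      for (let c = 0; c < 3; c++) {
        // planar layout: full R plane, then G plane, then B plane
        out[c * plane + y * width + x] = data[px + c];
      }
    }
  }
  return out;
}
```

The result can then be wrapped as `new Tensor(tensorData, 'float32', [1, 3, height, width])`. Note that depending on how the SqueezeNet model was exported, it may also expect mean/standard-deviation normalization of the pixel values; that step is omitted here.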

Since we've now loaded our model into an inference session and converted our image into a tensor, we're all set to utilize the power of ONNX.js. Here's our function that will return the model output and the time consumed during inference:
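The original function is not reproduced here; as a sketch, the timing part can be separated into a small generic helper, with the ONNX.js-specific call shown as hedged usage (assuming `session` is the loaded InferenceSession and `inputTensor` is the tensor built from our image):

```typescript
// Sketch: run an async operation and report both its result and elapsed time.
async function runTimed<T>(fn: () => Promise<T>): Promise<[T, number]> {
  const start = Date.now(); // performance.now() gives finer resolution in the browser
  const result = await fn();
  return [result, Date.now() - start];
}

// Hedged usage with ONNX.js (not runnable outside the browser app):
//
//   const [outputMap, inferenceTime] = await runTimed(() => session.run([inputTensor]));
//   const outputData = outputMap.values().next().value.data; // scores for 1000 classes
```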

Now we can map the model's output onto the ImageNet classes using this function:
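The original mapping function is not reproduced here; the sketch below shows the usual approach of applying softmax to the raw scores and picking the top-k entries. It assumes an `imagenetClasses` array of 1000 label strings (such a list ships with the ONNX.js demos) that is passed in as `labels`:

```typescript
// Sketch: turn raw class scores into probabilities via softmax.
function softmax(scores: ArrayLike<number>): number[] {
  const arr = Array.from(scores);
  const max = Math.max(...arr); // subtract max for numerical stability
  const exps = arr.map(s => Math.exp(s - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Sketch: pair each probability with its label and keep the k most likely.
function topK(
  scores: ArrayLike<number>,
  k: number,
  labels: string[]
): { label: string; probability: number }[] {
  return softmax(scores)
    .map((probability, i) => ({ label: labels[i], probability }))
    .sort((a, b) => b.probability - a.probability)
    .slice(0, k);
}
```

With the inference output in hand, something like `topK(outputData, 5, imagenetClasses)` would yield the five most likely labels for our goblet image.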

Performed on Google Chrome, MacBook Pro 2015. Goblet FTW!

