Fetching Models

Where It All Begins

The very first step in using ML in your app is fetching a model. NatML supports fetching models from several sources, and this functionality is encapsulated in the MLModelData class.

Fetching from Hub

NatML Hub is a platform for managing and deploying ML models.
[Image: The NatML predictor catalog.]
NatML Hub provides a predictor catalog from which models can be fetched:
// Fetch model data from NatML
var modelData = await MLModelData.FromHub("@natsuite/yolox");
// Create a model from model data
var model = modelData.Deserialize();
When you upload your model to NatML Hub, we automatically convert it to CoreML, ONNX, and TensorFlow Lite, making it cross-platform.
Model data fetched from NatML Hub is cached on-device, so your users only ever have to download a model once.
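Putting this together, a typical fetch-and-predict flow looks like the sketch below. The namespaces, the `@natsuite/mobilenet-v2` tag, and the `Predict` return signature are assumptions for illustration; check the predictor's own documentation for its exact API:

```csharp
using UnityEngine;
using NatSuite.ML;          // namespaces are assumptions; check your NatML version
using NatSuite.ML.Features;
using NatSuite.ML.Vision;

public class Classifier : MonoBehaviour {

    [SerializeField] Texture2D image;

    async void Start () {
        // Fetch model data from NatML Hub (cached on-device after the first download)
        var modelData = await MLModelData.FromHub("@natsuite/mobilenet-v2"); // tag is an assumption
        // Create the model from model data
        var model = modelData.Deserialize();
        // Create a predictor with the model's class labels
        var predictor = new MobileNetv2Predictor(model, modelData.labels);
        // Create an input feature and make a prediction
        var feature = new MLImageFeature(image);
        var (label, confidence) = predictor.Predict(feature); // return shape is an assumption
        Debug.Log($"Predicted '{label}' with confidence {confidence}");
    }
}
```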

Retrieving your Access Key

In order to fetch models from NatML Hub, you must have a valid access key, which you can get from NatML Hub.
Once you have an access key, you can add it to your Unity project in Project Settings > NatML:
[Image: Specifying your access key in a Unity project.]
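If you prefer not to rely on Project Settings (for example, in editor tooling or tests), the access key can typically be supplied in code. A minimal sketch, assuming `FromHub` accepts an optional access key argument; check the NatML API reference for the exact overloads:

```csharp
using NatSuite.ML; // namespace is an assumption; check your NatML version

// Fetch model data, passing the access key explicitly.
// NOTE: the optional `accessKey` argument is an assumption.
var modelData = await MLModelData.FromHub(
    "@natsuite/yolox",
    accessKey: "<YOUR_ACCESS_KEY>"
);
```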

Using Model Files

NatML supports using CoreML (.mlmodel), ONNX (.onnx), and TensorFlow Lite (.tflite) models. Simply drag and drop the model file into your Unity project. The model file is imported as an MLModelData instance.
[Image: Dropping a CoreML model into Unity.]
There are restrictions on what platforms support a given ML model format:
  • CoreML models can only be used on iOS and macOS.
  • ONNX models can only be used on Windows.
  • TensorFlow Lite models can only be used on Android.
To use your ML model in cross-platform apps, upload the model to NatML Hub.
Model data created from raw model files does not contain any supplementary data.
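Because the imported file becomes an MLModelData asset, it can be referenced directly from a component and deserialized at runtime. A minimal sketch, assuming the asset is assigned in the Inspector:

```csharp
using UnityEngine;
using NatSuite.ML; // namespace is an assumption; check your NatML version

public class LocalModel : MonoBehaviour {

    // Drag the imported .mlmodel / .onnx / .tflite asset here in the Inspector
    [SerializeField] MLModelData modelData;

    void Start () {
        // Create the model from the bundled model data
        var model = modelData.Deserialize();
        Debug.Log("Loaded model from local model data");
    }
}
```

Remember the platform restrictions above: a raw CoreML asset referenced this way will only deserialize on iOS and macOS, and likewise for ONNX and TensorFlow Lite on their respective platforms.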

Using Supplemental Data

In addition to the model itself, the MLModelData class provides supplementary data needed to make predictions with the model. For example, a classification model requires a list of class labels corresponding to its output probabilities. MLModelData encapsulates everything needed to both load a model and make predictions with it.

Class Labels

Classification and detection models output probabilities corresponding to a set of predefined class labels. The MLModelData class provides these class labels:
// Create a classifier model and get its corresponding labels
MLModel model = modelData.Deserialize();
string[] labels = modelData.labels;
// Create a MobileNet predictor using the provided class labels
var predictor = new MobileNetv2Predictor(model, labels);

Image Normalization

Some vision models require input images to be normalized, a mathematical transformation that maps pixel values into the range the model expects. This normalization can be applied with an MLImageFeature when creating input data for a model:
// Create an image feature from an image
Texture2D image = ...;
var feature = new MLImageFeature(image);
// Apply the required normalization expected by the model
(feature.mean, feature.std) = modelData.normalization;

Image Aspect Mode

Certain vision models, like object detectors, require input images to be scaled or cropped in a specific way so that information is preserved when making a prediction. This aspect mode can be applied to an MLImageFeature when creating an input feature for prediction:
// Create an image feature from an image
Texture2D image = ...;
var feature = new MLImageFeature(image);
// Apply the required aspect mode expected by the model
feature.aspectMode = modelData.aspectMode;

Audio Format

Certain audio models that make predictions on raw audio data require the audio to have a specific format, namely a sample rate and channel count. The audio format can be applied to an MLAudioFeature when creating an input feature for prediction:
// Create an audio feature from an audio clip
AudioClip audioClip = ...;
var feature = new MLAudioFeature(audioClip);
// Apply the required audio format expected by the model
(feature.sampleRate, feature.channelCount) = modelData.audioFormat;