MLEdgeModel

class NatML.MLEdgeModel : MLModel


Last updated 2 years ago


The MLEdgeModel represents an ML model that makes predictions on the local device. As such, it forms the basis for implementing edge predictors in code.

Creating the Model

The edge model can be created from a NatML Hub predictor, from a file, or from model data:

From NatML Hub

/// <summary>
/// Create an edge ML model.
/// </summary>
/// <param name="tagOrPath">Predictor tag or path to model file.</param>
/// <param name="configuration">Optional model configuration.</param>
/// <param name="accessKey">NatML access key.</param>
static Task<MLEdgeModel> Create (string tagOrPath, Configuration configuration = null, string accessKey = null);

The model can be created from a predictor on NatML Hub. The model creation process can be configured using the `configuration` argument:

The predictor MUST have an active graph for your current platform.

When loading Edge predictors, NatML caches the model graph on device. This means that a user only has to download the model graph once.
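As an illustrative sketch, fetching a model from Hub might look like the following. The predictor tag `@natsuite/mobilenet-v2` is a hypothetical example; substitute a predictor tag that actually exists on NatML Hub:

```csharp
using NatML;
using UnityEngine;

class HubModelExample : MonoBehaviour {

    MLEdgeModel model;

    async void Start () {
        // Fetch the edge model from NatML Hub.
        // The graph is downloaded once, then served from the on-device cache.
        // NOTE: `@natsuite/mobilenet-v2` is a hypothetical predictor tag.
        model = await MLEdgeModel.Create("@natsuite/mobilenet-v2");
        Debug.Log($"Created model with {model.inputs.Length} input(s)");
    }

    void OnDisable () => model?.Dispose();
}
```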

From a Model File

The model can also be created from an ML model file. Simply call the `Create` method and pass in a file path:

// Given the path to an ML graph
var modelPath = "/Users/developer/Desktop/yolov5.onnx";
// Create an edge model
var model = await MLEdgeModel.Create(modelPath);

There are restrictions on what platforms support a given ML model format:

  • CoreML models can only be used on iOS and macOS.

  • ONNX models can only be used on Windows and Web.

  • TensorFlow Lite models can only be used on Android.

Edge models created from model files do not contain any supplementary data, such as class labels or normalization constants. To use your ML model in cross-platform apps, upload the model to NatML Hub.

From Model Data

/// <summary>
/// Create an edge ML model.
/// </summary>
/// <param name="modelData">ML model data.</param>
/// <param name="configuration">Optional model configuration.</param>
static Task<MLEdgeModel> Create (MLModelData modelData, Configuration configuration = null);

The model can be created from MLModelData instances, which are opaque representations of an ML model file in your Unity project.
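A minimal sketch, assuming the model file has been imported into the Unity project as an `MLModelData` asset and assigned in the Inspector:

```csharp
using NatML;
using UnityEngine;

class ModelDataExample : MonoBehaviour {

    // Assign the imported model asset in the Inspector
    [SerializeField] MLModelData modelData;

    MLEdgeModel model;

    async void Start () {
        // Create the edge model from the model data asset
        model = await MLEdgeModel.Create(modelData);
    }

    void OnDisable () => model?.Dispose();
}
```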

Inspecting Feature Types

Edge models provide information about their expected input and output feature types. This type information is crucial for writing Edge predictors.

Input Features

/// <summary>
/// Model input feature types.
/// </summary>
MLFeatureType[] inputs { get; }

Refer to the Input Features section of the MLModel class for more information.

Output Features

/// <summary>
/// Model output feature types.
/// </summary>
MLFeatureType[] outputs { get; }

Refer to the Output Features section of the MLModel class for more information.
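As a sketch, the input and output types can be enumerated at runtime to drive predictor logic. This assumes a `model` variable holding a created `MLEdgeModel`:

```csharp
// Log the model's input and output feature types
foreach (var type in model.inputs)
    Debug.Log($"Input: {type}");
foreach (var type in model.outputs)
    Debug.Log($"Output: {type}");

// For vision models, inputs can often be downcast to an image type
if (model.inputs[0] is MLImageType imageType)
    Debug.Log($"Model expects {imageType.width}x{imageType.height} images");
```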

Inspecting Metadata

/// <summary>
/// Get the model metadata dictionary.
/// </summary>
IReadOnlyDictionary<string, string> metadata { get; }

Refer to the Inspecting Metadata section of the MLModel class for more information.
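For illustration, the metadata dictionary can be enumerated like any other `IReadOnlyDictionary`, assuming a created `model`:

```csharp
// Enumerate the model metadata
foreach (var pair in model.metadata)
    Debug.Log($"{pair.Key} = {pair.Value}");
```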

Inspecting Classification Labels

/// <summary>
/// Model classification labels.
/// This is `null` if the predictor does not use classification labels.
/// </summary>
string[] labels { get; }

For classification and detection models, this field contains the list of class labels associated with each class in the output distribution. If class labels don't apply to the model, it will return null.
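A hedged sketch of looking up a label for a predicted class, where `classIdx` stands in for the argmax of the model's output distribution:

```csharp
// Look up the label for a predicted class index.
// `classIdx` is a hypothetical argmax of the output distribution.
var classIdx = 42;
if (model.labels != null)
    Debug.Log($"Predicted class: {model.labels[classIdx]}");
```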

Edge models created from files will never have labels. Use NatML Hub instead.

Inspecting Feature Normalization

/// <summary>
/// Expected feature normalization for predictions with this model.
/// </summary>
Normalization normalization { get; }

Vision models often require that images be normalized to a specific mean and standard deviation. As such, MLEdgeModel defines a Normalization struct:

Normalization

struct Normalization {
    /// <summary>
    /// Per-channel normalization means.
    /// </summary>
    float[] mean { get; }
    /// <summary>
    /// Per-channel normalization standard deviations.
    /// </summary>
    float[] std { get; }
}

When working with image features, the Normalization struct can be easily deconstructed:

// Get the model's preferred image normalization
Vector4 mean, std;
(mean, std) = model.normalization;

Inspecting the Aspect Mode

/// <summary>
/// Expected image aspect mode for predictions with this model.
/// </summary>
MLImageFeature.AspectMode aspectMode { get; }

Vision models might require that input image features be scaled a certain way when they are resized to fit the model's input size. The aspectMode can be passed directly to an MLImageFeature.
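A short sketch of applying the model's aspect mode to an image feature, assuming `texture` is a `Texture2D` and `model` is a created `MLEdgeModel`:

```csharp
// Create a managed image feature and apply the model's preferred aspect mode
var imageFeature = new MLImageFeature(texture);
imageFeature.aspectMode = model.aspectMode;
```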

Inspecting the Audio Format

/// <summary>
/// Expected audio format for predictions with this model.
/// </summary>
AudioFormat audioFormat { get; }

Audio and speech models often require or produce audio data with a specific sample rate and channel count. As such, MLEdgeModel defines an AudioFormat struct:

Audio Format

struct AudioFormat {
    /// <summary>
    /// Sample rate.
    /// </summary>
    int sampleRate { get; }
    /// <summary>
    /// Channel count.
    /// </summary>
    int channelCount { get; }
}

When working with audio features, the AudioFormat struct can be easily deconstructed like so:

// Get the model's audio format
int sampleRate, channelCount;
(sampleRate, channelCount) = model.audioFormat;

Making Predictions

/// <summary>
/// Make a prediction on one or more Edge ML features.
/// </summary>
/// <param name="inputs">Input edge ML features.</param>
/// <returns>Output edge ML features.</returns>
MLFeatureCollection<MLEdgeFeature> Predict (params MLEdgeFeature[] inputs);

The MLEdgeModel exposes a Predict method which makes predictions on one or more MLEdgeFeature instances. Instead of using MLEdgeFeature instances directly, we highly recommend using the managed feature classes (MLImageFeature, MLArrayFeature, MLAudioFeature, and so on).

The input and output features MUST be disposed when they are no longer needed. Call Dispose on the individual features, or on the returned feature collection, to do so.

All calls to Predict MUST be serialized on a single thread. Calling Predict across several threads is undefined behaviour and can result in a crash. As such, you must not make predictions within a ThreadPool. Use MLPredictorExtensions.ToAsync to make predictions on a dedicated worker thread instead.
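A hedged sketch of a full prediction, assuming a vision model with a single image input and a single array output, and following the common NatML predictor pattern of creating an edge feature from a managed feature via `IMLEdgeFeature`:

```csharp
// Create a managed image feature from a texture
var imageFeature = new MLImageFeature(texture);
// Apply the model's preferred preprocessing
(imageFeature.mean, imageFeature.std) = model.normalization;
imageFeature.aspectMode = model.aspectMode;
// Create an edge feature matching the model's input type
using var inputFeature = (imageFeature as IMLEdgeFeature).Create(model.inputs[0]);
// Make a prediction; disposing the collection disposes the output features
using var outputFeatures = model.Predict(inputFeature);
// Read the first output as a managed array feature
var scores = new MLArrayFeature<float>(outputFeatures[0]);
```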

Disposing the Model

/// <summary>
/// Dispose the model and release resources.
/// </summary>
void Dispose ();

Refer to the Disposing the Model section of the MLModel class for more information.

Embedding the Model

/// <summary>
/// Embed the edge model into the app at build time.
/// </summary>
/// <param name="tag">Predictor tag.</param>
/// <param name="accessKey">NatML access key. If `null` the project access key will be used.</param>
class EmbedAttribute (string tag, string accessKey = null) : Attribute;

When fetching edge models from NatML Hub, the model graph must first be downloaded to the device before it is cached and loaded. For larger ML models or for users with poor internet, this download can take a long time. As such, the MLEdgeModel class defines the EmbedAttribute to embed the ML graph into the app binary at build time.

Note that the build size of the application will increase as a result of the embedded model data.

The attribute can be placed on any class or struct definition. At build time, NatML will find all such attributes, and embed the corresponding model data in the build. The attribute can be used like so:

ObjectDetector.cs
// An example script that embeds the `ssd-lite` model data from NatML Hub
[MLEdgeModel.Embed("@natsuite/ssd-lite")]
class ObjectDetector : MonoBehaviour {
    
    async void Start () {
        // Fetch the edge model from memory
        // In a build, this will complete immediately
        var model = await MLEdgeModel.Create("@natsuite/ssd-lite");
        // Use the edge model
        ...
    }
}
