
MLModelData

class NatML.MLModelData
The MLModelData class is a self-contained archive containing an ML model graph along with supplemental data that is useful for making predictions with the model.

Fetching Model Data

Model data can be fetched in several ways:

From NatML Hub

/// <summary>
/// Fetch ML model data from NatML.
/// </summary>
/// <param name="tag">Model tag.</param>
/// <param name="accessKey">NatML access key.</param>
/// <returns>ML model data.</returns>
static Task<MLModelData> FromHub (string tag, string accessKey = null);
NatML provides a model hosting, delivery, and analytics service called NatML Hub. NatML Hub provides model data for both Edge (on-device) and Cloud (server-side) predictors.
You can get your access key from your profile page on NatML Hub.
When loading Edge predictors, NatML caches the model graph on device. This means that a user only has to download the model graph once.
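For example, model data can be fetched inside an async method on a MonoBehaviour. This is a minimal sketch; the predictor tag and access key below are placeholders, and it assumes using NatML; and using UnityEngine;:
async void Start () {
    // Fetch the model data from NatML Hub (tag and access key are placeholders)
    var modelData = await MLModelData.FromHub("@author/example-predictor", "<YOUR ACCESS KEY>");
    // ...use the model data to create a model and make predictions...
}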

From File

/// <summary>
/// Fetch ML model data from a local file.
/// Note that the model data will not contain any supplementary data.
/// </summary>
/// <param name="path">Path to ML model file.</param>
/// <returns>ML model data.</returns>
static Task<MLModelData> FromFile (string path);
Model data can also be created from a model file on the local file system. Note that model data created this way will not contain any supplemental data, such as classification labels or feature normalization.
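A minimal sketch, written inside an async method and assuming a hypothetical model file in the app's persistent data directory (requires using NatML;, using UnityEngine;, and using System.IO;):
// The file name below is a placeholder
var path = Path.Combine(Application.persistentDataPath, "model.onnx");
var modelData = await MLModelData.FromFile(path);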

Inspecting Model Data

/// <summary>
/// NatML Hub predictor tag.
/// </summary>
string tag { get; }
The model data exposes the NatML Hub predictor tag that it corresponds to.
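For instance, the tag can be logged for diagnostics. A trivial sketch, assuming a fetched modelData instance:
// Log the Hub predictor tag that this model data corresponds to
Debug.Log($"Loaded model data for predictor: {modelData.tag}");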

Specifying the Compute Target

/// <summary>
/// Specify the compute target used for model predictions.
/// </summary>
ComputeTarget computeTarget { get; set; }
The model data allows you to specify the compute target that will be used to accelerate model predictions:

Compute Target

/// <summary>
/// Compute target used for model predictions.
/// </summary>
enum ComputeTarget : int {
    /// <summary>
    /// Use all available compute targets including the CPU, GPU, and neural processing units.
    /// </summary>
    All = 0,
    /// <summary>
    /// Use only the CPU.
    /// </summary>
    CPUOnly = 1,
}
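For example, prediction can be restricted to the CPU. This is a sketch assuming a fetched modelData instance and that the ComputeTarget enum is nested within MLModelData:
// Restrict predictions to the CPU, leaving the GPU free for rendering
modelData.computeTarget = MLModelData.ComputeTarget.CPUOnly;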

Specifying the Compute Device

/// <summary>
/// Specify the compute device used for model predictions.
/// The native type of this pointer is platform-specific.
/// </summary>
IntPtr computeDevice { get; set; }
On systems with multiple GPUs, the model data allows you to specify the preferred compute device for accelerating model predictions.
The computeDevice is exposed as an opaque pointer to a platform-specific type.
Set this field to IntPtr.Zero to use the default compute device.
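A minimal sketch, explicitly selecting the default compute device (assumes using System; for IntPtr):
// Use the default compute device for predictions
modelData.computeDevice = IntPtr.Zero;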

Creating the Model

/// <summary>
/// Deserialize the model data to create an ML model that can be used for prediction.
/// You MUST dispose the model once you are done with it.
/// </summary>
/// <returns>ML model.</returns>
MLModel Deserialize ();
An MLModel is created from model data. The model can then be used with a predictor to make predictions.
You must Dispose the model when you are done with it. Failing to do so will result in severe resource leaks.
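A minimal sketch of the model lifecycle, assuming a fetched modelData instance:
// Create the model from the model data
var model = modelData.Deserialize();
// ...use the model with a predictor to make predictions...
// Dispose the model when done to release native resources
model.Dispose();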

Using Supplemental Data

Model data contains additional information needed to make a prediction with a model.

Classification Labels

/// <summary>
/// Model classification labels.
/// This is `null` if the predictor does not use classification labels.
/// </summary>
string[] labels { get; }
For classification and detection models, this property contains the class labels corresponding to each index in the model's output distribution. If class labels do not apply to the model, this property will be null.
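A sketch of mapping a predicted class index to its label; classIdx is a placeholder for the argmax over the model's output distribution:
// Map the predicted class index to its human-readable label
var label = modelData.labels[classIdx];
Debug.Log($"Predicted class: {label}");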

Feature Normalization

/// <summary>
/// Expected feature normalization for predictions with this model.
/// </summary>
Normalization normalization { get; }
Vision models often require that images be normalized to a specific mean and standard deviation. As such, MLModelData includes a Normalization struct:
struct Normalization {
    /// <summary>
    /// Per-channel normalization means.
    /// </summary>
    float[] mean { get; }
    /// <summary>
    /// Per-channel normalization standard deviations.
    /// </summary>
    float[] std { get; }
}
When working with image features, the Normalization struct can be easily deconstructed like so:
// Get the model's preferred image normalization
Vector4 mean, std;
(mean, std) = modelData.normalization;
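The deconstructed values can then be applied to an image feature before prediction. This is a sketch assuming that MLImageFeature can be constructed from a Texture2D and exposes mean and std properties; texture is a placeholder:
// Apply the model's preferred normalization to an image feature
var imageFeature = new MLImageFeature(texture);
imageFeature.mean = mean;
imageFeature.std = std;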

Image Aspect Mode

/// <summary>
/// Expected image aspect mode for predictions with this model.
/// </summary>
MLImageFeature.AspectMode aspectMode { get; }
Vision models might require that input image features be scaled a certain way when they are resized to fit the model's input size. The aspectMode can be passed directly to an MLImageFeature.
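A sketch, continuing the image feature example above and assuming MLImageFeature exposes a matching aspectMode property:
// Use the model's preferred aspect mode when the image is resized for prediction
imageFeature.aspectMode = modelData.aspectMode;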

Audio Format

/// <summary>
/// Expected audio format for predictions with this model.
/// </summary>
AudioFormat audioFormat { get; }
Audio and speech models often require or produce audio data with a specific sample rate and channel count. As such, MLModelData provides an audio format struct:
struct AudioFormat {
    /// <summary>
    /// Sample rate.
    /// </summary>
    int sampleRate { get; }
    /// <summary>
    /// Channel count.
    /// </summary>
    int channelCount { get; }
}
When working with audio features, the AudioFormat struct can be easily deconstructed like so:
// Get the model's audio format
int sampleRate, channelCount;
(sampleRate, channelCount) = modelData.audioFormat;
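These values can then be checked against incoming audio, for example a Unity AudioClip. This is a sketch reusing the deconstructed values above; clip is a placeholder AudioClip:
// Verify that an audio clip matches the model's expected format before prediction
if (clip.frequency != sampleRate || clip.channels != channelCount)
    Debug.LogWarning("Audio clip does not match the model's expected audio format");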