MLEdgeModel
class NatML.MLEdgeModel : MLModel
The MLEdgeModel class represents an ML model that makes predictions on the local device. As such, it forms the basis for implementing edge predictors in code.
The edge model can be created from a NatML Hub predictor, from a file, or from model data.
The model can be created from a predictor on NatML Hub:
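For illustration, the following sketch assumes the awaitable Create overload that accepts a predictor tag; the tag itself is purely illustrative:

```csharp
// Fetch, cache, and load the model graph for a predictor on NatML Hub.
// The predictor tag below is illustrative; substitute your own.
var model = await MLEdgeModel.Create("@natsuite/mobilenet-v2");
```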
The predictor MUST have an active graph for your current platform.
When loading Edge predictors, NatML caches the model graph on device. This means that a user only has to download the model graph once.
The model creation process can be configured using the Configuration argument.
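As a sketch, a configuration instance could be passed alongside the predictor tag; the overload and the default-constructed configuration shown here are assumptions for illustration:

```csharp
// Assumed overload: pass a Configuration alongside the predictor tag.
var configuration = new MLEdgeModel.Configuration();
var model = await MLEdgeModel.Create("@natsuite/mobilenet-v2", configuration);
```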
The model can be created from an ML model file. Simply call the Create method and pass in a file path:
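A minimal sketch, assuming a Create overload that accepts a path to a model file on device; the file path is purely illustrative:

```csharp
// Load an edge model from an ML model file on device.
// The file path below is purely illustrative.
var model = await MLEdgeModel.Create("/path/to/mobilenet-v2.onnx");
```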
There are restrictions on what platforms support a given ML model format:
CoreML models can only be used on iOS and macOS.
ONNX models can only be used on Windows and Web.
TensorFlow Lite models can only be used on Android.
To use your ML model in cross-platform apps, upload the model to NatML Hub.
Edge models created from model files do not contain any supplementary data, such as class labels and normalization.
The model can be created from MLModelData instances, which are opaque representations of an ML model file in your Unity project.
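A sketch of loading from a serialized model data asset, assuming a Create overload that accepts MLModelData; the component and field names are illustrative:

```csharp
using NatML;
using UnityEngine;

public class EdgeModelLoader : MonoBehaviour {

    [SerializeField] MLModelData modelData; // assign the model data asset in the Inspector

    async void Start () {
        // Create the edge model from the serialized model data.
        var model = await MLEdgeModel.Create(modelData);
    }
}
```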
Edge models provide information about their expected input and output feature types. This type information is crucial for writing Edge predictors.
Refer to the Input Features section of the MLModel class for more information.
Refer to the Output Features section of the MLModel class for more information.
Refer to the Inspecting Metadata section of the MLModel class for more information.
For classification and detection models, the labels field contains the list of class labels associated with each class in the output distribution. If class labels don't apply to the model, labels will be null.
Edge models created from files will never have labels. Use NatML Hub instead.
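For instance, a classifier's predicted class index can be mapped back to a readable name; the classIdx variable is illustrative:

```csharp
// Map a predicted class index to its label, guarding against models without labels.
var label = model.labels != null ? model.labels[classIdx] : "unknown";
```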
Vision models often require that images be normalized to a specific mean and standard deviation. As such, MLEdgeModel defines a Normalization struct:
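A paraphrased sketch of its shape, with per-channel mean and standard deviation; the field types here are an assumption and the actual definition may differ:

```csharp
// Paraphrased shape of the MLEdgeModel.Normalization struct.
public struct Normalization {
    public float[] mean;    // per-channel mean
    public float[] std;     // per-channel standard deviation
}
```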
When working with image features, the Normalization struct can be easily deconstructed:
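A sketch, assuming the model exposes a normalization property and that the image feature exposes matching mean and std properties:

```csharp
// Deconstruct the model's normalization and apply it to an image feature.
var (mean, std) = model.normalization;
imageFeature.mean = mean;
imageFeature.std = std;
```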
Vision models might require that input image features be scaled a certain way when they are resized to fit the model's input size. The aspectMode can be passed directly to an MLImageFeature.
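A sketch, assuming the model exposes an aspectMode property mirrored by MLImageFeature:

```csharp
// Apply the model's preferred aspect mode to an image feature before prediction.
imageFeature.aspectMode = model.aspectMode;
```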
Audio and speech models often require or produce audio data with a specific sample rate and channel count. As such, MLEdgeModel defines an AudioFormat struct:
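A paraphrased sketch of its shape; the field names follow the deconstruction shown below and the actual definition may differ:

```csharp
// Paraphrased shape of the MLEdgeModel.AudioFormat struct.
public struct AudioFormat {
    public int sampleRate;      // sample rate the model requires or produces
    public int channelCount;    // channel count the model requires or produces
}
```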
When working with audio features, the AudioFormat struct can be easily deconstructed like so:
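A sketch, assuming the model exposes an audioFormat property:

```csharp
// Deconstruct the model's audio format.
var (sampleRate, channelCount) = model.audioFormat;
// Use these values to configure or resample an MLAudioFeature before prediction.
```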
The MLEdgeModel exposes a Predict method which makes predictions on one or more MLEdgeFeature instances.
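A minimal sketch of the prediction flow, assuming the common pattern of creating an edge feature from a managed feature and the model's input type; the texture variable and the IMLEdgeFeature cast are illustrative assumptions:

```csharp
// Create a managed input feature (here, an image feature from a texture).
var imageFeature = new MLImageFeature(texture);
// Create the corresponding edge feature for the model's first input type.
using var inputFeature = (imageFeature as IMLEdgeFeature).Create(model.inputs[0]);
// Run the prediction. The returned features must be disposed when no longer needed.
using var outputFeatures = model.Predict(inputFeature);
// Marshal the first output into a managed array feature for reading.
var logits = new MLArrayFeature<float>(outputFeatures[0]);
```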
Instead of working with MLEdgeFeature instances directly, we highly recommend using the managed feature classes (MLArrayFeature, MLAudioFeature, and so on).
The input and output features MUST be disposed when they are no longer needed. Call Dispose on the individual features, or on the returned feature collection to do so.
All calls to Predict MUST be serialized on a single thread. Calling Predict across several threads is undefined behaviour and can result in a crash. As such, you must not make predictions within a ThreadPool. Use MLPredictorExtensions.ToAsync to make predictions on a dedicated worker thread instead.
Refer to the Disposing the Model section of the MLModel class for more information.
When fetching edge models from NatML Hub, the model graph must first be downloaded to the device before it is cached and loaded. For larger ML models, or for users with poor internet connections, this download can take a long time. As such, the MLEdgeModel class defines the EmbedAttribute to embed the ML graph into the app binary at build time.
Note that the build size of the application will increase as a result of the embedded model data.
The attribute can be placed on any class or struct definition. At build time, NatML will find all such attributes and embed the corresponding model data in the build. The attribute can be used like so:
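A sketch of annotating a predictor class; the class name and predictor tag are illustrative, and the attribute is assumed to accept the tag of the predictor to embed:

```csharp
// Embed the model graph for this predictor tag into the app binary at build time.
// The tag and class below are illustrative.
[MLEdgeModel.Embed("@natsuite/mobilenet-v2")]
public sealed class MobileNetv2Predictor {
    // ...
}
```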