Creating Predictors
As you might have noticed above, MLEdgeModel instances typically won't be used directly. Instead, they are used through Edge predictors: lightweight classes that transform input data into the model's expected input features, and transform the model's output features into easily usable types. Below are the general steps in implementing an Edge predictor:

All Edge predictors must implement the IMLPredictor<TOutput> interface. The interface has a single generic type argument, TOutput, which is a developer-friendly type that is returned when a prediction is made. For example, the MobileNetv2Predictor class, which classifies an image, uses a tuple for its output type:
// The MobileNetv2 classification predictor returns a tuple
class MobileNetv2Predictor : IMLPredictor<(string label, float confidence)> { ... }
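This makes the predictor's output easy to consume. As a usage sketch, assuming a predictor instance and an MLImageFeature named imageFeature already exist, a caller can deconstruct the tuple directly:
// Make a prediction and deconstruct the result tuple
var (label, confidence) = predictor.Predict(imageFeature);
Debug.Log($"Classified image as {label} with confidence {confidence}");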
Edge predictors should define a static Create method which creates an MLEdgeModel instance by loading the model, either from a local file or from NatML Hub. Once the model has been created, the predictor is constructed from it:
/// <summary>
/// Create a custom predictor.
/// </summary>
public static async Task<MobileNetv2Predictor> Create () {
// Load edge model
var model = await MLEdgeModel.Create(...);
// Create predictor
var predictor = new MobileNetv2Predictor(model);
// Return predictor
return predictor;
}
This pattern relies on a constructor that accepts an MLEdgeModel instance:
/// <summary>
/// Create an instance of our predictor
/// </summary>
private MobileNetv2Predictor (MLEdgeModel model) {
...
}
It is highly recommended to keep the constructor private so that consumers can only create the predictor using the Create method.

Here is a full example of our predictor implementation thus far:
MobileNetv2Predictor.cs
public class MobileNetv2Predictor : IMLPredictor<(string label, float confidence)> {
#region --Client API--
/// <summary>
/// Create a custom predictor.
/// </summary>
public static async Task<MobileNetv2Predictor> Create () {
// Load edge model
var model = await MLEdgeModel.Create(...);
// Create predictor
var predictor = new MobileNetv2Predictor(model);
// Return predictor
return predictor;
}
#endregion
#region --Implementation--
private readonly MLEdgeModel model;
private MobileNetv2Predictor (MLEdgeModel model) {
this.model = model;
}
#endregion
}
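With this skeleton in place, a consumer can already construct the predictor asynchronously, even though it cannot make predictions yet. A usage sketch from an async method:
// Create the predictor
var predictor = await MobileNetv2Predictor.Create();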
All Edge predictors must implement a public Predict method which accepts a params MLFeature[] array and returns a TOutput. In our case, we have:
/// <summary>
/// Make a prediction with the model.
/// </summary>
/// <param name="inputs">Input feature.</param>
/// <returns>Output label with unnormalized confidence value.</returns>
public (string label, float confidence) Predict (params MLFeature[] inputs);
Within the Predict method, the predictor should do three things: check the input features, make an Edge prediction, and marshal the output data into the developer-friendly TOutput type.

First, the predictor should check that the client has provided the correct number of input features, and that the features have the model's expected types. In our case, we will check that the user passes in an image feature:
/// <summary>
/// Make a prediction with the model.
/// </summary>
/// <param name="inputs">Input feature.</param>
/// <returns>Output label with unnormalized confidence value.</returns>
public (string label, float confidence) Predict (params MLFeature[] inputs) {
// Check that the input is an image feature
if (!(inputs[0] is MLImageFeature imageFeature))
throw new ArgumentException(@"Predictor makes predictions on image features");
// ...
}
If these checks fail, an appropriate exception should be thrown instead of returning an uninitialized output.
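The same pattern can be extended to validate the feature count. A minimal sketch, assuming the model expects exactly one input feature:
// Check that exactly one input feature has been provided
if (inputs.Length != 1)
    throw new ArgumentException(@"Predictor expects a single feature", nameof(inputs));
// Check that the input is an image feature
if (!(inputs[0] is MLImageFeature imageFeature))
    throw new ArgumentException(@"Predictor makes predictions on image features", nameof(inputs));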
To make predictions, the predictor must create MLEdgeFeature instances from the input features. Creating an MLEdgeFeature typically requires a corresponding MLFeatureType, which dictates any required pre-processing when creating the edge feature. You will typically use the model's input feature types for this purpose:
// Get or create the native feature type which the model expects
MLFeatureType inputType = model.inputs[0];
// Create an Edge feature from the input feature
using MLEdgeFeature edgeFeature = (inputFeature as IMLEdgeFeature).Create(inputType);
To check if a feature can be used for Edge predictions, cast it to an IMLEdgeFeature and check that the result of the cast is not null.
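As a sketch, such a check might look like this, assuming the feature arrives as inputs[0]:
// Check that the input feature supports Edge predictions
if (!(inputs[0] is IMLEdgeFeature))
    throw new ArgumentException(@"Feature cannot be used for Edge predictions", nameof(inputs));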
Once you have created all the required Edge features, you can then make predictions with the MLEdgeModel:
// Make a prediction with one or more native input features
using var outputFeatures = model.Predict(edgeFeature);
Once you have output Edge features from the model, you can then marshal the feature data into a more developer-friendly type. This is where most of the heavy-lifting happens in a predictor:
// Marshal the output feature data into a developer-friendly type
var arrayFeature = new MLArrayFeature<float>(outputFeatures[0]);
// Do stuff with this data...
...
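For the MobileNetv2 example, this step could be an argmax over the class scores. A minimal sketch, assuming a hypothetical labels string array with the class names, and reading the scores with a ToArray call whose exact name may differ in the actual MLArrayFeature API:
// Copy the class scores into a managed array
float[] scores = arrayFeature.ToArray();
// Find the index of the highest score
int bestIdx = 0;
for (int i = 1; i < scores.Length; i++)
    if (scores[i] > scores[bestIdx])
        bestIdx = i;
// Map the index to a label using the hypothetical `labels` array
var label = labels[bestIdx];
var confidence = scores[bestIdx];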
Finally, return your predictor's output:
// Create the prediction result from the output data
TOutput result = ...;
// Return it
return result;
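In the MobileNetv2 example, this is simply the label and confidence computed above:
// Return the label and its confidence as the prediction result
return (label, confidence);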
All Edge predictors must define a Dispose method, because IMLPredictor implements the IDisposable interface. This method should be used to dispose any explicitly-managed resources used by the predictor. If a predictor does not have any explicitly-managed resources to dispose, then it should hide the Dispose method with an explicit interface implementation:
// Hide the `Dispose` method so that clients cannot use it directly
void IDisposable.Dispose () { }
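Conversely, a predictor that does own explicitly-managed resources should release them in Dispose. A minimal sketch, assuming a hypothetical predictor that allocated a NativeArray<byte> named pixelBuffer for pre-processing:
/// <summary>
/// Dispose the predictor and release resources it allocated.
/// </summary>
void IDisposable.Dispose () => pixelBuffer.Dispose();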
The predictor must not Dispose any models provided to it. This is the responsibility of the client.