IMLPredictor
interface IMLPredictor<TOutput>
In NatML, predictors are lightweight primitives that make edge (on-device) predictions with one or more MLModel instances. Predictors play a crucial role in using ML models because they serve two primary purposes:
Predictors provide models with the exact input data they need.
Predictors convert model outputs to a form that is usable by developers.
You will typically never have to implement IMLPredictor
yourself. Instead, discover existing edge predictors on NatML Hub.
Defining the Predictor
All edge predictors must implement this interface. The predictor has a single generic type argument, TOutput
, which is a developer-friendly type that is returned when a prediction is made. For example, a MobileNetv2Predictor
for the MobileNet v2 image classifier model will use a tuple for its output type:
// The MobileNetv2 classification predictor returns a (label, score) tuple
class MobileNetv2Predictor implements IMLPredictor<[string, number]> { ... }
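Client code then receives this tuple directly from predict. A brief usage sketch, assuming a constructed predictor instance and an input image feature (both names are hypothetical here):
// Usage sketch: `predictor` and `imageFeature` are assumed to already exist
const [label, score] = predictor.predict(imageFeature);
console.log(`Predicted '${label}' with score ${score}`);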
Writing the Constructor
All edge predictors must define one or more constructors that accept one or more MLModel
instances, along with any other predictor data needed to make predictions with the model(s). For example:
/**
* Create a predictor
* @param model ML model used to make predictions.
*/
public constructor (model: MLModel) { ... }
Within the constructor, the predictor should store a readonly reference to the model(s). The type of this reference should be MLEdgeModel:
// Define the `model` member
private readonly model: MLEdgeModel;
// And in the constructor...
constructor (model: MLModel) {
    this.model = model as MLEdgeModel;
}
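As a sketch, a predictor that needs extra configuration data, such as classification labels, can accept it alongside the model. The labels parameter below is purely illustrative; IMLPredictor only requires that the constructor receives the model(s) the predictor uses:
// A sketch of a classifier predictor whose constructor also takes class labels
class MobileNetv2Predictor implements IMLPredictor<[string, number]> {

    private readonly model: MLEdgeModel;
    // Hypothetical extra predictor data: one label per class index
    private readonly labels: string[];

    public constructor (model: MLModel, labels: string[]) {
        this.model = model as MLEdgeModel;
        this.labels = labels;
    }

    public predict (...inputs: MLFeature[]): [string, number] {
        // Prediction is covered in the `Making Predictions` section below
        throw new Error(`Not implemented in this sketch`);
    }
}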
Making Predictions
All edge predictors must implement a public predict method that accepts a variable number of MLFeature inputs and returns a TOutput:
/**
* Make a prediction on one or more input features.
* @param inputs Input features.
* @returns Prediction output.
*/
public predict (...inputs: MLFeature[]): TOutput;
Within the predict
method, the predictor should do three things:
Input Checking
The predictor should check that the client has provided the correct number of input features, and that the features have the model's expected types.
If these checks fail, an appropriate exception should be thrown instead of returning an uninitialized output.
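A minimal sketch of such checks, assuming a predictor that expects a single array feature (the checkInputs helper and the specific feature type are illustrative, not part of the IMLPredictor interface):
/**
 * Validate input features before making a prediction.
 * This helper is a hypothetical illustration.
 */
private checkInputs (inputs: MLFeature[]): void {
    // Check that exactly one input feature was provided
    if (inputs.length !== 1)
        throw new Error(`Predictor expects 1 input feature, but got ${inputs.length}`);
    // Check that the feature has the type the model expects
    // (an array feature is used here purely for illustration)
    if (!(inputs[0] instanceof MLArrayFeature))
        throw new Error(`Predictor expects an array feature as input`);
}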
Prediction
NatML for NodeJS currently does not support making predictions with edge predictors.
Marshaling
Once you have raw output features from the model, you can marshal the feature data into a more developer-friendly type. This is where most of the heavy lifting happens in a predictor:
// Marshal the output feature data into a developer-friendly type
const outputArrayFeature = new MLArrayFeature<Float32Array>(rawOutputFeatures[0]);
// Do stuff with this data...
...
Finally, return your predictor's output:
// Create the prediction result from the output data
const result: TOutput = ...;
// Return it
return result;
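For the MobileNetv2 example above, marshaling might boil down to picking the highest-scoring class. The helper below is a hypothetical sketch: scores would be read from the marshaled output feature, and labels is an assumed list of class names:
// Hypothetical helper that converts class scores into a (label, score) tuple
function marshalClassification (scores: Float32Array, labels: string[]): [string, number] {
    // Find the class with the highest score
    let bestIndex = 0;
    for (let i = 1; i < scores.length; i++)
        if (scores[i] > scores[bestIndex])
            bestIndex = i;
    // Create the (label, score) prediction result
    return [labels[bestIndex], scores[bestIndex]];
}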
Disposing the Predictor
Edge predictors may define a dispose
method. This method should be used to dispose any explicitly-managed resources used by the predictor, like recurrent state for recurrent models.
The predictor must not dispose
any models provided to it. This is the responsibility of the client.
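A minimal sketch of a dispose method, assuming the predictor holds recurrent state in a hypothetical state member that exposes its own dispose method; the model itself is left untouched:
/**
 * Dispose the predictor and any explicitly-managed resources it owns.
 */
public dispose (): void {
    // Dispose predictor-owned resources, e.g. recurrent state
    // (`this.state` is a hypothetical member used for illustration)
    this.state?.dispose();
    // Do NOT dispose the model; that is the client's responsibility
}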