IMLPredictor
interface IMLPredictor<TOutput>
In NatML, predictors are lightweight primitives that make edge (on-device) predictions with one or more MLModel instances. Predictors play a crucial role in using ML models because they serve two primary purposes:
Predictors provide models with the exact input data they need.
Predictors convert model outputs to a form that is usable by developers.
You will typically never have to implement IMLPredictor yourself. Instead, discover existing edge predictors on NatML Hub.
Defining the Predictor
All edge predictors must implement this interface. The predictor has a single generic type argument, TOutput, which is a developer-friendly type that is returned when a prediction is made. For example, a MobileNetv2Predictor for the MobileNet v2 image classifier model will use a tuple for its output type:
// The MobileNetv2 classification predictor returns a (label, score) tuple
class MobileNetv2Predictor implements IMLPredictor<[string, number]> { ... }
Writing the Constructor
All edge predictors must define one or more constructors that accept one or more MLModel instances, along with any other data the predictor needs to make predictions with the model(s). For example:
/**
* Create a predictor
* @param model ML model used to make predictions.
*/
public constructor (model: MLModel) { ... }
Within the constructor, the predictor should store a readonly reference to the model(s). The type of this reference should be MLEdgeModel:
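A minimal sketch of what this might look like for the MobileNet v2 predictor (the cast to MLEdgeModel is illustrative; how you obtain the edge model reference may differ in your predictor):
class MobileNetv2Predictor implements IMLPredictor<[string, number]> {

    // Readonly reference to the edge model used to make predictions
    private readonly model: MLEdgeModel;

    public constructor (model: MLModel) {
        // Assumes the provided model is an edge model
        this.model = model as MLEdgeModel;
    }
}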
Making Predictions
All edge predictors must implement a public predict method which accepts a variadic MLFeature[] and returns a TOutput:
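For the MobileNet v2 example above, the predict method's signature would look something like the following sketch (the doc comment is illustrative):
/**
 * Classify an image.
 * @param inputs Input image feature.
 * @returns Output (label, score) tuple.
 */
public predict (...inputs: MLFeature[]): [string, number] { ... }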
Within the predict method, the predictor should do three things:
Input Checking
The predictor should check that the client has provided the correct number of input features, and that the features have the model's expected types.
If these checks fail, the predictor should throw an appropriate exception rather than return an uninitialized output.
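For example, a single-input classifier like MobileNet v2 might perform checks along these lines (a sketch; MLImageFeature is assumed here as the expected input feature type and may differ for your model):
// Check input count
if (inputs.length !== 1)
    throw new Error(`MobileNetv2 predictor expects 1 input feature, but got ${inputs.length}`);
// Check input type (assumed feature type, for illustration only)
if (!(inputs[0] instanceof MLImageFeature))
    throw new Error(`MobileNetv2 predictor expects an image feature`);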
Prediction
NatML for NodeJS currently does not support making predictions with edge predictors.
Marshaling
Once you have raw output features from the model, you can marshal the feature data into a more developer-friendly type. This is where most of the heavy lifting happens in a predictor:
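As a rough sketch for the MobileNet v2 example, marshaling could extract the best class from the model's output logits. The outputFeature variable, its data accessor, and the labels table below are illustrative assumptions, not part of the NatML API:
// Sketch: find the highest-scoring class in the output logits
const logits = outputFeature.data as Float32Array;  // hypothetical accessor
let bestIndex = 0;
for (let i = 1; i < logits.length; i++)
    if (logits[i] > logits[bestIndex])
        bestIndex = i;
const label = this.labels[bestIndex];   // hypothetical label table
const score = logits[bestIndex];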
Finally, return your predictor's output:
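Continuing the sketch above, this is simply the tuple promised by the predictor's output type:
// Return the (label, score) tuple
return [label, score];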
Disposing the Predictor
Edge predictors may define a dispose method. This method should be used to dispose any explicitly-managed resources used by the predictor, like recurrent state for recurrent models.
The predictor must not dispose any models provided to it. This is the responsibility of the client.
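A minimal sketch of a dispose method; the state field below stands in for any predictor-owned resource and is purely illustrative:
/**
 * Dispose the predictor and release any predictor-owned resources.
 * The model itself is not disposed here; that is the client's responsibility.
 */
public dispose () {
    // Release explicitly-managed resources owned by the predictor,
    // e.g. recurrent state for recurrent models (hypothetical field)
    this.state?.dispose();
}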