# Creating Predictors

As you might have noticed above, [`MLEdgeModel`](https://docs.natml.ai/unity/api/mlmodel/mledgemodel) instances typically aren't used directly. Instead, they are used through Edge predictors: lightweight classes that transform input data into the model's expected input features, and transform the model's output features into easily usable types. Below are the general steps for implementing an Edge predictor:

## Defining the Predictor

**All Edge predictors must** implement the [`IMLPredictor<TOutput>`](https://docs.natml.ai/unity/api/imlpredictor) interface. The interface has a single generic type argument, `TOutput`, which is a developer-friendly type returned when a prediction is made. For example, the `MobileNetv2Predictor` class, which classifies an image, uses a tuple as its output type:

```csharp
// The MobileNetv2 classification predictor returns a tuple
class MobileNetv2Predictor : IMLPredictor<(string label, float confidence)> { ... }
```

## Defining Constructors

**Edge predictors should** define a static `Create` method which creates an `MLEdgeModel` instance by loading the model either from a local file or from NatML Hub. Once the model has been loaded, the predictor itself is instantiated with a constructor.

```csharp
/// <summary>
/// Create a custom predictor.
/// </summary>
public static async Task<MobileNetv2Predictor> Create () {
    // Load edge model
    var model = await MLEdgeModel.Create(...);
    // Create predictor
    var predictor = new MobileNetv2Predictor(model);
    // Return predictor
    return predictor;
}
```

This pattern relies on a constructor that accepts an `MLEdgeModel` instance:

```csharp
/// <summary>
/// Create an instance of our predictor
/// </summary>
private MobileNetv2Predictor (MLEdgeModel model) {
    ...
}
```

{% hint style="success" %}
It is highly recommended to keep the constructor `private` so that consumers can only create the predictor using the `Create` method.
{% endhint %}

Here is a full example of our predictor implementation thus far:

{% code title="MobileNetv2Predictor.cs" %}

```csharp
public class MobileNetv2Predictor : IMLPredictor<(string label, float confidence)> {
    
    #region --Client API--
    /// <summary>
    /// Create a custom predictor.
    /// </summary>
    public static async Task<MobileNetv2Predictor> Create () {
        // Load edge model
        var model = await MLEdgeModel.Create(...);
        // Create predictor
        var predictor = new MobileNetv2Predictor(model);
        // Return predictor
        return predictor;
    }
    #endregion
    
    
    #region --Implementation--
    private readonly MLEdgeModel model;
   
    private MobileNetv2Predictor (MLEdgeModel model) {
        this.model = model;
    }
    #endregion
}
```

{% endcode %}
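With the `Create` method and private constructor in place, client code can only obtain the predictor asynchronously. A usage sketch, assuming `imageFeature` is an `MLImageFeature` created elsewhere (making predictions is covered in the next section):

```csharp
// Create the predictor
var predictor = await MobileNetv2Predictor.Create();
// Make a prediction on an image feature (`imageFeature` is assumed to be created elsewhere)
var (label, confidence) = predictor.Predict(imageFeature);
// Log the result
Debug.Log($"Detected {label} with confidence {confidence}");
```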

## Making Predictions

**All Edge predictors must** implement a public `Predict` method which accepts a `params MLFeature[]` and returns a `TOutput`. In our case, we have:

```csharp
/// <summary>
/// Make a prediction with the model.
/// </summary>
/// <param name="inputs">Input feature.</param>
/// <returns>Output label with unnormalized confidence value.</returns>
public (string label, float confidence) Predict (params MLFeature[] inputs);
```

Within the `Predict` method, the predictor should do three things:

### Input Checking

The predictor should check that the client has provided the correct number of input features, and that the features have the model's [expected types](https://docs.natml.ai/unity/api/mlmodel#input-features). In our case, we will check that the user passes in an image feature:

```csharp
/// <summary>
/// Make a prediction with the model.
/// </summary>
/// <param name="inputs">Input feature.</param>
/// <returns>Output label with unnormalized confidence value.</returns>
public (string label, float confidence) Predict (params MLFeature[] inputs) {
    // Check that the input is an image feature
    if (!(inputs[0] is MLImageFeature imageFeature))
        throw new ArgumentException(@"Predictor makes predictions on image features");
    // ...
}
```

{% hint style="warning" %}
If these checks fail, throw an appropriate exception instead of returning an uninitialized output.
{% endhint %}
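The count check mentioned above can be a simple guard at the top of `Predict`. A minimal sketch for a single-input model:

```csharp
// Check that exactly one input feature has been provided
if (inputs.Length != 1)
    throw new ArgumentException(@"Predictor expects a single input feature", nameof(inputs));
```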

### Prediction

To make predictions, the predictor must create [`MLEdgeFeature`](https://docs.natml.ai/unity/authoring/broken-reference) instances from the input features. Creating an `MLEdgeFeature` typically requires a corresponding `MLFeatureType`, which dictates any pre-processing required when creating the Edge feature. You will typically use the model's input feature types for this purpose:

```csharp
// Get or create the native feature type which the model expects
MLFeatureType inputType = model.inputs[0];
// Create an Edge feature from the input feature
using MLEdgeFeature edgeFeature = (inputFeature as IMLEdgeFeature).Create(inputType);
```

{% hint style="info" %}
To check if a feature can be used for Edge predictions, cast it to an [`IMLEdgeFeature`](https://docs.natml.ai/unity/authoring/broken-reference) and check that the result of the cast is not `null`.
{% endhint %}
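Such a check might look like this:

```csharp
// Cast the input feature to an Edge feature
var edgeCapable = inputs[0] as IMLEdgeFeature;
// A `null` result means the feature cannot be used for Edge predictions
if (edgeCapable == null)
    throw new ArgumentException(@"Feature cannot be used for Edge predictions");
```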

Once you have created all the required Edge features, you can then make predictions with the `MLEdgeModel`:

```csharp
// Make a prediction with one or more native input features
using var outputFeatures = model.Predict(edgeFeature);
```

### Marshaling

Once you have output Edge features from the model, you can then marshal the feature data into a more developer-friendly type. This is where most of the heavy-lifting happens in a predictor:

```csharp
// Marshal the output feature data into a developer-friendly type
var arrayFeature = new MLArrayFeature<float>(outputFeatures[0]);
// Do stuff with this data...
...
```
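For a classifier like MobileNet v2, this marshaling typically amounts to finding the index with the highest score and looking up its label. Below is a sketch; it assumes the predictor holds a `labels` string array (not shown in the examples above) and that `MLArrayFeature` exposes an indexer and an `elementCount` property:

```csharp
// Wrap the raw output feature in an array feature
var arrayFeature = new MLArrayFeature<float>(outputFeatures[0]);
// Find the index of the highest score
var argMax = 0;
for (int i = 1, len = arrayFeature.elementCount; i < len; ++i)
    if (arrayFeature[i] > arrayFeature[argMax])
        argMax = i;
// Pair the winning label with its unnormalized confidence
var result = (label: labels[argMax], confidence: arrayFeature[argMax]);
```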

Finally, return your predictor's output:

```csharp
// Create the prediction result from the output data
TOutput result = ...;
// Return it
return result;
```

## Disposing the Predictor

**All Edge predictors must** define a `Dispose` method, because `IMLPredictor` implements the [`IDisposable`](https://docs.microsoft.com/en-us/dotnet/api/system.idisposable?view=net-5.0) interface. This method should dispose any explicitly-managed resources used by the predictor. If a predictor has no such resources to dispose, then it should implement `Dispose` explicitly on the interface so that clients cannot call it directly:

```csharp
// Hide the `Dispose` method so that clients cannot use it directly
void IDisposable.Dispose () { }
```
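Conversely, a predictor that does own native resources should release them in a public `Dispose` method. A hypothetical sketch, where `buffer` is an assumed resource owned by the predictor:

```csharp
/// <summary>
/// Dispose the predictor and release any resources it owns.
/// </summary>
public void Dispose () {
    // `buffer` is a hypothetical explicitly-managed resource owned by this predictor
    buffer.Dispose();
}
```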

{% hint style="danger" %}
The predictor **must not** `Dispose` any models provided to it. This is the responsibility of the client.
{% endhint %}
