NatML

MLPredictorExtensions

class NatML.MLPredictorExtensions
This class contains extension methods for working with predictors.

Async Predictions

/// <summary>
/// Create an async predictor from a predictor.
/// This typically results in significant performance improvements,
/// as predictions are run on a worker thread.
/// </summary>
/// <param name="predictor">Backing predictor to create an async predictor with.</param>
/// <returns>Async predictor which runs predictions on a worker thread.</returns>
static MLAsyncPredictor<TOutput> ToAsync<TOutput> (this IMLPredictor<TOutput> predictor);
Some models might not be able to run in realtime. This doesn't mean they can't be used; in fact, many interactive applications use models that run slower than realtime. In this situation, it becomes beneficial to run predictions asynchronously. NatML provides the MLAsyncPredictor, a wrapper around any existing predictor, for this purpose:
// Create a predictor
var predictor = new MobileNetv2Predictor(...);
// Then make it async!
var asyncPredictor = predictor.ToAsync();
The async predictor spins up a dedicated worker thread for making predictions, completely freeing up your app to perform other processing:
// Before, we used to make predictions on the main thread:
var (label, confidence) = predictor.Predict(...);
// Now, we can make predictions on a dedicated worker thread:
var (label, confidence) = await asyncPredictor.Predict(...);
When making predictions in streaming applications (like camera apps), you can check whether the async predictor is ready to make more predictions, so as not to back up the processing queue:
// If the predictor is ready, queue more work
if (asyncPredictor.readyForPrediction)
var output = await asyncPredictor.Predict(...);
Finally, you must Dispose the async predictor when you are done with it, so as not to leave the worker thread and other resources dangling.
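As a sketch of the full lifecycle in Unity (the component and predictor names here are illustrative, not part of the NatML API), creation can go in Start and disposal in OnDestroy:
```csharp
using UnityEngine;
using NatML;

public class Classifier : MonoBehaviour {

    MLAsyncPredictor<(string label, float confidence)> asyncPredictor;

    void Start () {
        // Create the backing predictor, then wrap it for async predictions.
        // `MobileNetv2Predictor` is illustrative; substitute your own predictor.
        var predictor = new MobileNetv2Predictor(...);
        asyncPredictor = predictor.ToAsync();
    }

    void OnDestroy () {
        // Dispose the async predictor to stop its worker thread
        // and release the inner predictor's resources.
        asyncPredictor?.Dispose();
    }
}
```
Tying disposal to OnDestroy ensures the worker thread is torn down when the owning component is destroyed, rather than lingering past the scene's lifetime.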
Do not use predictors from multiple threads. Once you create an MLAsyncPredictor from an inner predictor, do not use the inner predictor directly.