MLAsyncPredictor
class NatSuite.ML.Extensions.MLAsyncPredictor<TOutput> : IMLPredictor<Task<TOutput>>
This predictor wraps an existing predictor and makes predictions on a dedicated worker thread. As a result, it frees up the main thread to perform other work, and returns the prediction results asynchronously.
All public predictors on NatML Hub can be converted to async predictors.
Because this predictor makes predictions on a background thread, there are restrictions on what kinds of methods can be used in the backing predictor. Specifically, Unity does not allow most of its APIs to be used from a background thread.
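A predictor is typically wrapped by calling a `ToAsync` extension method on it. The sketch below assumes a hypothetical `SomePredictor` implementing `IMLPredictor<string>`; `ToAsync` is assumed from common NatML usage rather than confirmed by this page:

```csharp
using NatSuite.ML;
using NatSuite.ML.Extensions;

// `SomePredictor` is a placeholder for any predictor implementing IMLPredictor<TOutput>.
IMLPredictor<string> predictor = new SomePredictor(model);
// Wrap it so that predictions run on a dedicated worker thread.
MLAsyncPredictor<string> asyncPredictor = predictor.ToAsync();
```

Once wrapped, interact only with the async predictor; the original reference should not be used directly.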

Inspecting the Predictor

/// <summary>
/// Backing predictor used by the async predictor.
/// </summary>
IMLPredictor<TOutput> predictor { get; }
The predictor exposes the backing predictor which it uses to make predictions.
Do not use the backing predictor directly once it has been converted to an async predictor.

Making Predictions

/// <summary>
/// Make a prediction on one or more input features.
/// </summary>
/// <param name="inputs">Input features.</param>
/// <returns>Prediction output.</returns>
Task<TOutput> Predict (params MLFeature[] inputs);
The predictor accepts one or more input features and returns a task that completes with the prediction output once the worker thread has finished the prediction.
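A minimal usage sketch inside a Unity behaviour, assuming an existing `MLAsyncPredictor<string>` named `asyncPredictor` and an input feature `feature` (both hypothetical):

```csharp
using UnityEngine;

async void Update () {
    // Skip this frame if the worker thread is still busy with a previous request.
    if (!asyncPredictor.readyForPrediction)
        return;
    // The prediction runs on the worker thread; awaiting does not block the main thread.
    var result = await asyncPredictor.Predict(feature);
    Debug.Log(result);
}
```

Checking `readyForPrediction` before each request prevents prediction requests from queuing up faster than the worker thread can service them.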
/// <summary>
/// Whether the predictor is ready to process new requests immediately.
/// </summary>
bool readyForPrediction { get; }
The predictor reports whether its worker thread is idle and can service a new prediction request immediately.

Disposing the Predictor

/// <summary>
/// Dispose the predictor and release resources.
/// When this is called, all outstanding prediction requests are cancelled.
/// </summary>
void Dispose ();
Disposing the predictor cancels all outstanding prediction requests and releases the resources held by the predictor. The predictor must not be used for predictions after it has been disposed.
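In a Unity behaviour, disposal usually belongs in a lifecycle method. A sketch, assuming the hypothetical `asyncPredictor` field from the examples above:

```csharp
void OnDisable () {
    // Cancels any in-flight prediction tasks and releases resources.
    asyncPredictor?.Dispose();
}
```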