Core Concepts

All You Need to Know


You do not need to be an expert in AI research to develop or deploy ML models. NatML focuses on making ML deployment as painless as possible for interactive media developers. Before jumping in, it is crucial to understand a few core concepts and how they interact with one another:

Models

An ML model is a 'black box' which consumes one or more input features and predicts one or more output features. For example, a vision classifier receives an image feature and produces a probability distribution feature, telling you how likely the image is one label or another.

Under the hood, an ML model is a computation graph. This graph is what empowers ML models with the ability to learn and make predictions on new data.

At NatML, we prefer to refer to ML models as graphs. It's pedantic, sure, but avoids any ambiguity.

Edge Models

These are models that run predictions on the local device. They are exposed with the MLEdgeModel class.

NatML supports working with CoreML (.mlmodel), ONNX (.onnx), and TensorFlow Lite (.tflite) edge graphs.
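As a rough sketch, fetching and creating an edge model might look like the following. The `MLEdgeModel.Create` call and the `@natsuite/mobilenet-v2` tag are illustrative assumptions; check the Fetching Models page for the exact API in your NatML version:

```csharp
using NatML;
using UnityEngine;

public class EdgeModelExample : MonoBehaviour {

    async void Start () {
        // Fetch and create an edge model from NatML (tag is illustrative)
        var model = await MLEdgeModel.Create("@natsuite/mobilenet-v2");
        Debug.Log($"Created edge model with {model.inputs.Count} input(s)");
        // Dispose the model when done to release native resources
        model.Dispose();
    }
}
```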

Features

A feature is any data that can be consumed or produced by an MLModel. For example, you will use a lot of image features when working with vision models. NatML has built-in support for common features that might be used with ML models, including Texture2D and WebCamTexture instances:

// Some common features include:
float[] arrayFeature = ...;
Texture2D imageFeature = ...;
WebCamTexture webCamFeature = ...;
AudioClip audioFeature = ...;
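Before prediction, these Unity objects are wrapped in the corresponding MLFeature types from the API reference above. The snippet below is a sketch; exact constructor signatures may vary across NatML versions:

```csharp
using NatML.Features;
using UnityEngine;

public static class FeatureExamples {

    // Wrap common Unity data in NatML feature types (sketch; constructor
    // signatures may differ across NatML versions)
    public static void CreateFeatures (Texture2D texture, AudioClip clip, float[] data) {
        var imageFeature = new MLImageFeature(texture);
        var audioFeature = new MLAudioFeature(clip);
        var arrayFeature = new MLArrayFeature<float>(data);
    }
}
```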

Predictors

Predictors are lightweight primitives that use one or more models to make predictions on features. They are self-contained units that know how to transform inputs into a format that a model expects. But more importantly, they are able to transform outputs of a model into a usable format. For example, you might have a predictor that uses the MobileNet ML model to classify images:

// Create the MobileNet v2 predictor
var predictor = await MobileNetv2Predictor.Create(model);

Whereas a raw classification model outputs a probability distribution, the classification predictor can transform this distribution into a form which is much more usable by developers. It simply returns a class label (string) along with a classification score (float):

var (label, confidence) = predictor.Predict(...);
Debug.Log($"Model predicted {label} with confidence {confidence}");

Every MLFeature has a corresponding MLFeatureType. This type describes the feature and the data contained within it. Similarly, every MLModel has a set of input and output feature types, describing what data the model can consume and produce, respectively.
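Concretely, you can inspect a model's feature types before constructing inputs. This is a sketch; property and type names follow the API reference above, but their exact shapes may differ by version:

```csharp
using NatML;
using NatML.Types;
using UnityEngine;

public static class ModelInspection {

    // Log a model's input and output feature types (sketch)
    public static void Inspect (MLModel model) {
        foreach (var input in model.inputs) {
            // For vision models, the input is typically an image type
            if (input is MLImageType imageType)
                Debug.Log($"Model expects a {imageType.width}x{imageType.height} image");
        }
        foreach (var output in model.outputs)
            Debug.Log($"Model produces: {output}");
    }
}
```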

You can create custom predictors for different models and share them on NatML!
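As a minimal skeleton, a custom predictor implements the IMLPredictor interface, wrapping a model and transforming its raw outputs into a usable form. The interface shape shown here is an assumption based on the API reference above, and the class is hypothetical; see the Creating Predictors page for the authoritative pattern:

```csharp
using System;
using NatML;
using NatML.Features;

// Hypothetical custom predictor skeleton (interface details may
// differ across NatML versions)
public sealed class ThresholdPredictor : IMLPredictor<bool> {

    private readonly MLEdgeModel model;

    public ThresholdPredictor (MLEdgeModel model) => this.model = model;

    public bool Predict (params MLFeature[] inputs) {
        // Create edge features, run the model, and decode its raw
        // outputs here; the low-level plumbing is omitted in this sketch.
        throw new NotImplementedException();
    }

    // Predictors dispose their model to release native resources
    public void Dispose () => model.Dispose();
}
```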
