
Core Concepts

All You Need to Know 🤓
You do not need to be an expert in AI research to deploy ML models. NatML focuses on making ML deployment as painless as possible for developers. Before diving in, it is crucial to understand a few core concepts, and how they interact with one another. These concepts are:
  • Models
  • Features
  • Predictors
  • Predictor Graphs
  • Predictor Endpoints

Models

An ML model is a 'black box' which receives one or more features and outputs one or more features. For example, a vision classifier model receives an image feature and produces a probability distribution feature, telling you how likely it is that the image belongs to each label.
Under the hood, an ML model is a computation graph. This graph is what empowers ML models with the ability to learn and make predictions on new data.
At NatML, we refer to a raw ML model as a "Graph" for clarity.
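To make the 'black box' idea concrete, here is a minimal sketch in plain Python. The two-label "graph" below is a hand-written stand-in (a fixed linear layer followed by softmax), not a real NatML graph; real graphs are learned and run by an ML runtime.

```python
import math

# Stand-in 'graph' for a 2-label vision classifier: features in, features out.
# The weights are made up for illustration; real graphs learn theirs from data.
WEIGHTS = [[0.2, -0.1, 0.4], [-0.3, 0.5, 0.1]]  # 2 labels x 3 input values

def predict(image_feature):
    """Map an input feature (3 floats) to a probability distribution feature."""
    logits = [sum(w * x for w, x in zip(row, image_feature)) for row in WEIGHTS]
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]  # softmax: probabilities sum to 1

probs = predict([0.9, 0.1, 0.5])
```

The caller never needs to know what happens inside `predict`; that opacity is exactly what makes a model a 'black box'.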

Features

A feature is any data that can be consumed or produced by a model. For example, you will use a lot of image features when working with vision models.
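As a rough illustration (the values and shapes below are made up, and real NatML features are typed objects rather than raw Python values), features are simply the data flowing in and out of a model:

```python
# A feature is any data a model can consume or produce.
image_feature = [[[255, 0, 0]] * 4] * 4   # a 4x4 RGB image, as pixel values
probability_feature = [0.92, 0.05, 0.03]  # a classifier's output distribution
text_feature = "a photo of a cat"         # text is a feature too
```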

Predictors

On NatML, a predictor fully describes an ML model, along with any data and code required to use that model. Predictors are therefore the most fundamental unit on NatML. NatML supports the two kinds of predictors you will encounter in the wild:
  1. Edge Predictors. These are predictors that run the prediction on the local device (e.g. a user's iPhone or their browser tab). NatML enables edge predictors with Predictor Graphs.
  2. Cloud Predictors. These are predictors that run the prediction server-side (e.g. the OpenAI API). NatML enables cloud predictors with Predictor Endpoints.
NatML does not force you to choose between edge and cloud predictors. Instead, we provide all the infrastructure needed to create both kinds of predictors.
Though rare, a NatML predictor can be both an edge and a cloud predictor at the same time.
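One way to picture the two kinds of predictors is that they share the same prediction interface and differ only in where the graph runs. The classes and method names below are hypothetical, purely to illustrate the distinction; they are not the NatML client API.

```python
# Hypothetical sketch: both predictor kinds expose the same interface.
class EdgePredictor:
    """Runs the prediction on the local device using a fetched graph."""
    def predict(self, feature):
        # In a real client, an ML runtime would execute the graph here.
        return f"edge prediction for {feature!r}"

class CloudPredictor:
    """Runs the prediction server-side via a predictor endpoint."""
    def predict(self, feature):
        # In a real client, the feature would be sent to an endpoint URL.
        return f"cloud prediction for {feature!r}"
```

From the application's point of view, calling `predict` looks the same either way.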

Predictor Graphs

On NatML, a Graph represents a machine learning model in a specific format. At runtime, edge ML clients fetch a graph from NatML and use a machine learning runtime (e.g. NatML, ONNXRuntime, CoreML) to run the graph on the local device. NatML supports the following graph formats:
Format            Platforms              NatML Graph Format
CoreML            iOS, macOS             COREML
TensorFlow Lite   Android, others        TFLITE
ONNX              Windows, Web, others   ONNX
NatML provides a high-performance, cross-platform ML runtime. Check it out on GitHub.
NatML provides infrastructure for converting between graph formats, optimizing graphs, and encrypting graphs for security.
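The format table above can be read as a simple lookup from target platform to graph format. The helper below is a sketch of that mapping, not a NatML API; the fallback to ONNX is an illustrative choice, since the table lists ONNX for "others" alongside Windows and Web.

```python
# Platform -> graph format, mirroring the table above (helper is hypothetical).
GRAPH_FORMAT_FOR_PLATFORM = {
    "ios": "COREML", "macos": "COREML",
    "android": "TFLITE",
    "windows": "ONNX", "web": "ONNX",
}

def graph_format(platform):
    """Return the NatML graph format to fetch for a given platform."""
    # Default to ONNX for unlisted platforms (illustrative choice only).
    return GRAPH_FORMAT_FOR_PLATFORM.get(platform.lower(), "ONNX")

print(graph_format("iOS"))  # COREML
```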

Cloud Predictors

Cloud predictors are predictors that execute the ML graph server-side, on NatML's servers. They are suited to models that are too large to run on local devices (e.g. LLMs), or to applications that require higher-accuracy predictions. Cloud predictors serve predictions through their Endpoints.

Predictor Endpoints

On NatML, an Endpoint defines the server, code, GPUs, networking infrastructure, and all other resources that are used to host a cloud predictor at a given URL. NatML is designed to drastically simplify the process of creating and managing these cloud resources.
Endpoints are created from Jupyter notebooks that define a predict function:
Defining a predictor endpoint from a Jupyter Notebook.
Predictor endpoints allow you to use machine learning from anything that has an internet connection!
NatML handles provisioning servers, GPUs, feature serialization and deserialization, and much more. All you have to do is bring your notebook!
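As a rough sketch of what such a notebook cell might contain, the function below shows the general shape of a `predict` function: input features in, output features out. The signature, feature types, and logic here are all made up for illustration; the real requirements are defined by NatML.

```python
# Hypothetical notebook cell defining an endpoint's predict function.
def predict(prompt: str) -> dict:
    """Receive input features and return output features."""
    # A real notebook would load a model once and run it here;
    # this toy version just counts words to produce an output feature.
    word_count = len(prompt.split())
    return {"label": "long" if word_count > 5 else "short", "words": word_count}

result = predict("a quick test prompt")
```

NatML would then wrap a function like this in servers, GPUs, and serialization so it is reachable at an endpoint URL.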