MLImageFeature
class NatML.Features.MLImageFeature : MLFeature, IMLEdgeFeature
This feature contains a pixel buffer. Because computer vision models have similar pre-processing requirements, the image feature is able to perform these operations when predictions are made with it.

Creating the Feature

The image feature can be created from several common image inputs:

From a Texture2D

/// <summary>
/// Create an image feature.
/// </summary>
/// <param name="texture">Input texture.</param>
MLImageFeature (Texture2D texture);
The image feature can be created from a Texture2D.
The input texture MUST be readable.
This constructor allocates a pixel buffer every time it is used, so prefer one of the other constructors that accept a pixel buffer instead.
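For example, a minimal sketch (`texture` is assumed to be a readable texture, e.g. assigned in the Inspector):
```csharp
// Get a readable texture
Texture2D texture = ...;
// Create the image feature
var feature = new MLImageFeature(texture);
```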

From a Color Buffer

/// <summary>
/// Create an image feature.
/// </summary>
/// <param name="pixelBuffer">Pixel buffer to create image feature from.</param>
/// <param name="width">Pixel buffer width.</param>
/// <param name="height">Pixel buffer height.</param>
MLImageFeature (Color32[] pixelBuffer, int width, int height);
The image feature can be created from a color buffer.
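For example, using pixel data read from a readable texture (a sketch; `texture` is an assumed variable):
```csharp
// Read the texture's pixel data into a color buffer
Color32[] pixelBuffer = texture.GetPixels32();
// Create the image feature
var feature = new MLImageFeature(pixelBuffer, texture.width, texture.height);
```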

From a Pixel Buffer

/// <summary>
/// Create an image feature.
/// </summary>
/// <param name="pixelBuffer">Pixel buffer to create image feature from. MUST have an RGBA8888 layout.</param>
/// <param name="width">Pixel buffer width.</param>
/// <param name="height">Pixel buffer height.</param>
MLImageFeature (byte[] pixelBuffer, int width, int height);
The image feature can be created from a raw pixel buffer.
The pixel buffer must have an RGBA8888 layout.
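For example, with raw bytes from some external source (a sketch; the buffer and its dimensions are assumptions):
```csharp
// Raw RGBA8888 bytes, e.g. loaded from a file or received over the network
byte[] pixelBuffer = ...;
// Create the image feature with the buffer's dimensions
var feature = new MLImageFeature(pixelBuffer, 1280, 720);
```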

From a Native Array

/// <summary>
/// Create an image feature from a pixel buffer.
/// </summary>
/// <param name="pixelBuffer">Pixel buffer.</param>
/// <param name="width">Pixel buffer width.</param>
/// <param name="height">Pixel buffer height.</param>
MLImageFeature (NativeArray<byte> pixelBuffer, int width, int height);
The image feature can be created from a NativeArray&lt;byte&gt;. This is useful when making predictions with pixel data from Unity's Texture2D APIs.
The native array must remain valid for the lifetime of the image feature.
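For example, wrapping a texture's raw data without a managed copy (a sketch; `texture` is assumed to be a readable texture in RGBA32 format):
```csharp
// Get the raw RGBA8888 pixel data from the texture
NativeArray<byte> pixelBuffer = texture.GetRawTextureData<byte>();
// Create the image feature
var feature = new MLImageFeature(pixelBuffer, texture.width, texture.height);
```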

From a Native Buffer

/// <summary>
/// Create an image feature from a pixel buffer.
/// </summary>
/// <param name="pixelBuffer">Pixel buffer.</param>
/// <param name="width">Pixel buffer width.</param>
/// <param name="height">Pixel buffer height.</param>
MLImageFeature (void* pixelBuffer, int width, int height);
The image feature can be created from a native pixel buffer. This is useful when making predictions with data from native plugins or external libraries like OpenCV.
The pixel buffer must have an RGBA8888 layout.
The pixel buffer must remain valid for the lifetime of the image feature.
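For example, within an `unsafe` context (a sketch; the pointer and dimensions are assumed to come from a native plugin):
```csharp
// Pixel data owned by native code, e.g. the data pointer of an OpenCV Mat
void* pixelBuffer = ...;
// Create the image feature; no copy is made, so the buffer must outlive the feature
var feature = new MLImageFeature(pixelBuffer, width, height);
```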

Inspecting the Feature

The image feature exposes its underlying type, along with convenience properties for inspecting the aforementioned type.

Feature Type

/// <summary>
/// Feature type.
/// </summary>
MLFeatureType type { get; }
Refer to the Inspecting the Feature section of the MLFeature class for more information.
The type is always an MLImageType.

Image Width

/// <summary>
/// Image width.
/// </summary>
int width { get; }
The image feature provides this convenience property for accessing the width of the feature type.

Image Height

/// <summary>
/// Image height.
/// </summary>
int height { get; }
The image feature provides this convenience property for accessing the height of the feature type.
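For example, these convenience properties are equivalent to inspecting the feature type directly (a sketch; `feature` is an assumed MLImageFeature):
```csharp
// The feature type is always an MLImageType
var imageType = feature.type as MLImageType;
// Equivalent to the `width` and `height` convenience properties
Debug.Log($"Feature size: {imageType.width}x{imageType.height}");
```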

Image Preprocessing

The image feature supports preprocessing when creating an MLEdgeFeature for edge predictions.

Normalization

When making Edge predictions on image features, some models might require that input data is normalized to be within some range. The image feature provides these properties as an easy way to perform any required normalization.
The default range for image features is [0.0, 1.0].
When using NatML Hub, the normalization coefficients can be specified when creating a predictor:
Specifying normalization coefficients on NatML Hub.
The specified normalization coefficients can then be used like so:
// Fetch model data from NatML Hub
var modelData = await MLModelData.FromHub("@author/some-model");
// Create image feature
var imageFeature = new MLImageFeature(...);
// Apply normalization
(imageFeature.mean, imageFeature.std) = modelData.normalization;

Mean

/// <summary>
/// Normalization mean.
/// </summary>
Vector4 mean { get; set; }
The image feature supports specifying a per-channel normalization mean when creating an MLEdgeFeature.

Standard Deviation

/// <summary>
/// Normalization standard deviation.
/// </summary>
Vector4 std { get; set; }
The image feature supports specifying a per-channel normalization standard deviation when creating an MLEdgeFeature.

Aspect Mode

/// <summary>
/// Aspect mode.
/// </summary>
AspectMode aspectMode { get; set; }
The image feature supports specifying an aspect mode when creating an MLEdgeFeature with a different aspect ratio than the image feature. The aspectMode specifies how the difference in aspect ratio should be handled:
AspectMode.ScaleToFit: scale the image to exactly fit the feature size, ignoring the aspect ratio.
AspectMode.AspectFill: scale the image to fill the feature size, preserving the aspect ratio and cropping any excess.
AspectMode.AspectFit: scale the image to fit within the feature size, preserving the aspect ratio and padding the remainder.
When the aspectMode is AspectMode.AspectFit, the edge feature will be padded with transparent pixels, (0.0, 0.0, 0.0, 0.0).
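For example, letterboxing can be requested before prediction (a sketch; `imageFeature` is an assumed MLImageFeature):
```csharp
// Letterbox the image when the model input has a different aspect ratio
imageFeature.aspectMode = AspectMode.AspectFit;
```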

Accessing Feature Data

The image feature provides methods for copying image data:

Copying Pixel Data

/// <summary>
/// Copy the image data in this feature into a pixel buffer.
/// </summary>
/// <param name="pixelBuffer">Destination pixel buffer.</param>
virtual void CopyTo<T> (T[] pixelBuffer) where T : unmanaged;
This method copies the feature's image data into the provided managed array. The destination buffer must be large enough to hold the image data.
/// <summary>
/// Copy the image data in this feature into a pixel buffer.
/// </summary>
/// <param name="pixelBuffer">Destination pixel buffer.</param>
virtual void CopyTo<T> (NativeArray<T> pixelBuffer) where T : unmanaged;
This overload copies the feature's image data into the provided native array. The destination buffer must be large enough to hold the image data.
/// <summary>
/// Copy the image data in this feature into the provided pixel buffer.
/// </summary>
/// <param name="pixelBuffer">Destination pixel buffer.</param>
virtual void CopyTo (void* pixelBuffer);
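For example, copying the feature's pixels into a managed color buffer (a sketch; `feature` is an assumed MLImageFeature):
```csharp
// Allocate a destination buffer for the feature's pixel data
var pixelBuffer = new Color32[feature.width * feature.height];
// Copy the image data into the buffer
feature.CopyTo(pixelBuffer);
```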

Converting to Texture

/// <summary>
/// Convert the image feature to a texture.
/// This method MUST only be used from the Unity main thread.
/// </summary>
/// <param name="result">Optional. Result texture to copy data into.</param>
/// <returns>Result texture.</returns>
virtual Texture2D ToTexture (Texture2D result = default);
This method converts the image feature into a Texture2D. When a result texture is provided, the image data is copied into it; otherwise, a new texture is created.
This method MUST only be used from the Unity main thread.
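For example (a sketch; `feature` is an assumed MLImageFeature, and this must run on the Unity main thread):
```csharp
// Convert the feature to a new texture
Texture2D texture = feature.ToTexture();
// Or reuse an existing texture to avoid allocations
feature.ToTexture(texture);
```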

Region of Interest

/// <summary>
/// Get a region-of-interest in the image feature.
/// </summary>
/// <param name="rect">ROI rectangle in normalized coordinates.</param>
/// <param name="rotation">Rectangle clockwise rotation in degrees.</param>
/// <param name="background">Background color for unmapped pixels.</param>
/// <returns>Region-of-interest image feature.</returns>
MLImageFeature RegionOfInterest (Rect rect, float rotation = 0f, Color32 background = default);
This method extracts a region-of-interest from the image feature as a new image feature. Any pixels in the region that fall outside the image are filled with the background color.
/// <summary>
/// Get a region-of-interest in the image feature.
/// </summary>
/// <param name="rect">ROI rectangle in pixel coordinates.</param>
/// <param name="rotation">Rectangle clockwise rotation in degrees.</param>
/// <param name="background">Background color for unmapped pixels.</param>
/// <returns>Region-of-interest image feature.</returns>
MLImageFeature RegionOfInterest (RectInt rect, float rotation = 0f, Color32 background = default);
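For example, cropping the center of the image (a sketch; `feature` is an assumed MLImageFeature):
```csharp
// Extract the center half of the image in normalized coordinates
var roi = new Rect(0.25f, 0.25f, 0.5f, 0.5f);
MLImageFeature cropped = feature.RegionOfInterest(roi);
```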

Coordinate Transformations

Image features expose methods for converting points and rectangles from arbitrary feature space into the image space.
This is useful for correcting for aspect ratio differences during prediction.

Transforming Points

/// <summary>
/// Transform a normalized point from feature space into image space.
/// </summary>
/// <param name="point">Input point.</param>
/// <param name="featureType">Feature type that defines the input space.</param>
/// <returns>Normalized point in image space.</returns>
virtual Vector2 TransformPoint (Vector2 point, MLImageType featureType);
This method transforms a normalized point from the feature's coordinate space into the image's coordinate space, accounting for any aspect ratio differences between the two.

Transforming Rectangles

/// <summary>
/// Transform a normalized region-of-interest rectangle from feature space into image space.
/// This method is used by detection models to correct for aspect ratio padding when making predictions.
/// </summary>
/// <param name="rect">Input rectangle.</param>
/// <param name="featureType">Feature type that defines the input space.</param>
/// <returns>Normalized rectangle in image space.</returns>
virtual Rect TransformRect (Rect rect, MLImageType featureType);
This method transforms a normalized rectangle from the feature's coordinate space into the image's coordinate space, undoing any aspect ratio padding applied when the prediction was made.
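For example, in a detection workflow (a sketch; `model`, `imageFeature`, and `detectionRect` are assumed variables, and the model's input type is taken from its first input):
```csharp
// Get the image type that defines the model's input space
var inputType = MLImageType.FromType(model.inputs[0]);
// Map a predicted rect back into the original image's space
Rect imageRect = imageFeature.TransformRect(detectionRect, inputType);
```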

Vision Operations

The image feature class defines routines for common vision operations:

Non Maximum Suppression

/// <summary>
/// Perform non-max suppression on a set of candidate boxes.
/// </summary>
/// <param name="rects">Candidate boxes.</param>
/// <param name="scores">Candidate scores.</param>
/// <param name="maxIoU">Maximum IoU for preserving overlapping boxes.</param>
/// <returns>Indices of boxes to keep.</returns>
static int[] NonMaxSuppression (IReadOnlyList<Rect> rects, IReadOnlyList<float> scores, float maxIoU);
This method performs non-maximum suppression on a set of candidate boxes, discarding any box that overlaps a higher-scoring box with an IoU greater than maxIoU. It returns the indices of the boxes to keep.
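For example, filtering raw detections before presenting them (a sketch; `rects` and `scores` are assumed candidate lists):
```csharp
// Keep only the best of heavily-overlapping candidate boxes
int[] keep = MLImageFeature.NonMaxSuppression(rects, scores, 0.5f);
foreach (var idx in keep)
    Debug.Log($"Keeping box {rects[idx]} with score {scores[idx]}");
```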

Intersection-over-Union

/// <summary>
/// Calculate the intersection-over-union (IoU) of two rectangles.
/// </summary>
static float IntersectionOverUnion (Rect a, Rect b);
This method computes the intersection-over-union of two rectangles, i.e. the ratio of the area of their intersection to the area of their union.
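For example, two unit squares offset by half their size in each axis overlap in a quarter of a square, giving an IoU of 0.25 / 1.75 = 1/7:
```csharp
var a = new Rect(0f, 0f, 1f, 1f);
var b = new Rect(0.5f, 0.5f, 1f, 1f);
// Intersection area is 0.25; union area is 1.75
float iou = MLImageFeature.IntersectionOverUnion(a, b);
```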

Creating Edge Features

/// <summary>
/// Create an Edge ML feature that is ready for prediction with Edge ML models.
/// </summary>
/// <param name="type">Feature type used to create the Edge ML feature.</param>
/// <returns>Edge ML feature.</returns>
MLEdgeFeature IMLEdgeFeature.Create (in MLFeatureType type);
This method creates an Edge ML feature from the image feature, applying any specified normalization and aspect mode. It is used by predictors when making Edge predictions and is not typically called directly.