MLAudioFeature
class NatML.Features.MLAudioFeature : MLFeature, IMLEdgeFeature, IEnumerable<(MLAudioFeature feature, long timestamp)>
This feature contains raw audio data. Currently, NatML only supports floating-point linear PCM audio data.
Creating the Feature
The audio feature can be created from several different audio inputs:
From an AudioClip
/// <summary>
/// Create an audio feature from an audio clip.
/// </summary>
/// <param name="clip">Audio clip.</param>
/// <param name="duration">Optional duration to extract in seconds.</param>
MLAudioFeature (AudioClip clip, float duration = ...);
The audio feature can be created from an AudioClip, with the optional ability to specify the duration of the clip to extract.
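For example, a minimal sketch (assuming `clip` is an AudioClip loaded or assigned elsewhere in your script):
// Create a feature over the entire clip
var feature = new MLAudioFeature(clip);
// Or create a feature over only the first two seconds of the clip
var shortFeature = new MLAudioFeature(clip, duration: 2f);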
From a Sample Buffer
/// <summary>
/// Create an audio feature from a sample buffer.
/// </summary>
/// <param name="sampleBuffer">Linear PCM sample buffer.</param>
/// <param name="sampleRate">Sample rate.</param>
/// <param name="channelCount">Channel count.</param>
MLAudioFeature (float[] sampleBuffer, int sampleRate, int channelCount);
The audio feature can be created from a sample buffer in managed memory, along with audio format information.
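For example, a minimal sketch wrapping interleaved samples that are already in managed memory:
// One second of interleaved stereo audio at 44.1KHz
var sampleBuffer = new float[44100 * 2];
// ...fill the buffer with audio samples...
// Create the audio feature
var feature = new MLAudioFeature(sampleBuffer, 44100, 2);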
From a Native Array
/// <summary>
/// Create an audio feature from a sample buffer.
/// </summary>
/// <param name="sampleBuffer">Linear PCM sample buffer.</param>
/// <param name="sampleRate">Sample rate.</param>
/// <param name="channelCount">Channel count.</param>
MLAudioFeature (NativeArray<float> sampleBuffer, int sampleRate, int channelCount);
The audio feature can be created from a NativeArray<float> sample buffer, along with audio format information. The sampleBuffer MUST remain valid for the lifetime of the audio feature.
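A minimal sketch, assuming the native array is allocated and filled by the application:
// Allocate one second of mono audio at 16KHz
var sampleBuffer = new NativeArray<float>(16000, Allocator.Persistent);
// ...fill the array with audio samples...
// Create the audio feature
var feature = new MLAudioFeature(sampleBuffer, 16000, 1);
// ...make predictions with the feature...
// Dispose the native array only once the feature is no longer needed
sampleBuffer.Dispose();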
From a Native Buffer
/// <summary>
/// Create an audio feature from a sample buffer.
/// </summary>
/// <param name="sampleBuffer">Linear PCM sample buffer.</param>
/// <param name="sampleRate">Sample rate.</param>
/// <param name="channelCount">Channel count.</param>
/// <param name="sampleCount">Total sample count.</param>
MLAudioFeature (float* sampleBuffer, int sampleRate, int channelCount, int sampleCount);
The audio feature can be created from a sample buffer in unmanaged memory, along with audio format information. The sampleBuffer MUST remain valid for the lifetime of the audio feature.
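A minimal sketch, assuming an unsafe context and samples pinned from a managed array:
var samples = new float[48000];
unsafe {
    fixed (float* sampleBuffer = samples) {
        // Create the audio feature over the pinned samples
        var feature = new MLAudioFeature(sampleBuffer, 48000, 1, samples.Length);
        // Use the feature within this scope, while the pointer remains valid
    }
}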
From a Buffer List
/// <summary>
/// Create an audio feature from a sample buffer list.
/// </summary>
/// <param name="sampleBuffer">List of linear PCM sample buffers.</param>
/// <param name="sampleRate">Sample rate.</param>
/// <param name="channelCount">Channel count.</param>
MLAudioFeature (IEnumerable<float[]> bufferList, int sampleRate, int channelCount);
The audio feature can be created from an audio buffer list. This is useful for audio-based predictors that make predictions on longer segments of audio data, like speech-to-text models.
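A minimal sketch, assuming sample buffers are accumulated from a streaming audio source (the callback below is hypothetical):
// Accumulate sample buffers as they are reported by the audio source
var bufferList = new List<float[]>();
void OnAudioBuffer (float[] sampleBuffer) => bufferList.Add(sampleBuffer);
// Once enough audio has been collected, create a single feature over all the buffers
var feature = new MLAudioFeature(bufferList, 16000, 1);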
Inspecting the Feature
/// <summary>
/// Feature type.
/// </summary>
MLFeatureType type { get; }
Refer to the Inspecting the Feature section of the MLFeature class for more information.
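For example, the feature type can be inspected to read back the audio format. This sketch assumes the type is reported as an audio feature type (MLAudioType) with sample rate and channel count properties:
// Inspect the audio format of the feature
var audioType = feature.type as MLAudioType;
Debug.Log($"Audio feature has {audioType.channelCount} channel(s) at {audioType.sampleRate}Hz");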
Audio Preprocessing
The audio feature supports preprocessing when creating an MLEdgeFeature for edge predictions that use raw waveform data:
Sample Rate
/// <summary>
/// Desired sample rate for Edge predictions.
/// </summary>
int sampleRate { get; set; }
For Edge predictors that make predictions on raw audio waveform data, the audio feature can resample audio data to the specified sampleRate.
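For example, to request resampling to 16KHz before prediction (a common rate for speech models):
// Request resampling to 16KHz when the Edge feature is created
audioFeature.sampleRate = 16000;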
Channel Count
/// <summary>
/// Desired channel count for Edge predictions.
/// </summary>
int channelCount { get; set; }
For Edge predictors that make predictions on raw audio waveform data, the audio feature can multiplex or demultiplex audio data to the specified channelCount.
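For example, to downmix to mono before prediction:
// Request a single channel when the Edge feature is created
audioFeature.channelCount = 1;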
Normalization
When making Edge predictions on audio features, some models might require that input data is normalized to be within some range. The audio feature provides these properties as an easy way to perform any required normalization.
When using NatML Hub, the normalization coefficients can be specified when creating a predictor.
The specified normalization coefficients can then be used like so:
// Fetch model data from NatML Hub
var modelData = await MLModelData.FromHub("@author/some-model");
// Create audio feature
var audioFeature = new MLAudioFeature(...);
// Apply normalization
audioFeature.mean = modelData.normalization.mean[0];
audioFeature.std = modelData.normalization.std[0];
Mean
/// <summary>
/// Normalization mean.
/// </summary>
float mean { get; set; } = 0;
The audio feature supports specifying a normalization mean when creating an MLEdgeFeature.
Standard Deviation
/// <summary>
/// Normalization standard deviation.
/// </summary>
float std { get; set; } = 1f;
The audio feature supports specifying a normalization standard deviation when creating an MLEdgeFeature.
Creating an Edge Feature
/// <summary>
/// Create an Edge ML feature that is ready for prediction with Edge ML models.
/// </summary>
/// <param name="featureType">Feature type used to create the Edge ML feature.</param>
/// <returns>Edge ML feature.</returns>
MLEdgeFeature IMLEdgeFeature.Create (in MLFeatureType type);
The audio feature implements the IMLEdgeFeature interface, allowing it to be used for predictions with Edge ML models. When the Edge feature is created, the audio data is resampled, remixed, and normalized according to the preprocessing properties described above.
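This method is typically invoked by predictors rather than called directly. A hypothetical sketch, assuming a model whose first input type describes the expected audio:
// Convert the audio feature into an Edge feature for the model's first input type
var inputType = model.inputs[0];
var edgeFeature = (audioFeature as IMLEdgeFeature).Create(inputType);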