NatCorder
IMediaRecorder
interface NatSuite.Recorders.IMediaRecorder
This interface defines all common functionality for media recorders. NatCorder includes several implementations of this interface, each of which can record video, audio, or both. Below are the members defined by this interface.
All recorder methods are thread safe, so they can be used from any thread.

Frame Size

NatCorder is mainly designed for recording video. As a result, most recorders are created with a frame size (width and height) which defines the pixel dimensions of the output video. With this in mind, the IMediaRecorder interface exposes the frameSize property:

/// <summary>
/// Recording frame size.
/// </summary>
(int width, int height) frameSize { get; }
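For example, creating an MP4 recorder and reading back its frame size might look like this (a sketch; the MP4Recorder constructor may take additional arguments in your version of NatCorder):

```csharp
using NatSuite.Recorders;
using UnityEngine;

// Create a recorder that encodes 1280x720 video at 30 FPS.
var recorder = new MP4Recorder (1280, 720, 30);
// The recorder reports the frame size it was created with.
Debug.Log ($"Recording at {recorder.frameSize.width}x{recorder.frameSize.height}");
```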

Committing Media Samples

All recorders are designed with a push architecture. This means that the client (you) pushes frames to the recorder whenever it chooses. This is different from, say, a screen recorder, which automatically pulls frames for you, whether from the screen or from the microphone. Currently, the client can commit video frames or audio frames.

Committing Video Frames

/// <summary>
/// Commit a video pixel buffer for encoding.
/// The pixel buffer MUST have an RGBA8888 pixel layout.
/// </summary>
/// <param name="pixelBuffer">Pixel buffer to commit.</param>
/// <param name="timestamp">Pixel buffer timestamp in nanoseconds.</param>
void CommitFrame<T> (T[] pixelBuffer, long timestamp) where T : unmanaged;
This method takes a managed RGBA8888 pixel buffer. This could be a Color32[] returned by Unity's Texture2D.GetPixels32 or WebCamTexture.GetPixels32 methods; a managed byte[] provided by the NatDevice camera preview or an OpenCV matrix; or any other managed numeric array whose contents can be interpreted as an RGBA8888 pixel buffer.
Note that the dimensions of the committed pixel buffer must match the frame size that was used to create the recorder. In other words, the byte size of the pixel buffer must be equal to frameSize.width * frameSize.height * 4.
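For example, committing a single frame from a readable Texture2D might look like this (a sketch; `recorder` and `texture` are assumed to exist, and the texture's dimensions are assumed to match the recorder's frame size):

```csharp
using NatSuite.Recorders.Clocks;
using UnityEngine;

// Use a realtime clock to generate nanosecond timestamps.
var clock = new RealtimeClock ();
// Read back the texture's pixels as an RGBA8888 buffer.
Color32[] pixelBuffer = texture.GetPixels32 ();
// Commit the frame with its timestamp.
recorder.CommitFrame (pixelBuffer, clock.timestamp);
```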
The CommitFrame method has an overload that takes a pointer to an RGBA8888 pixel buffer in native memory. This is useful for applications that want to avoid the garbage collection pressure caused by large pixel buffers in managed memory.
/// <summary>
/// Commit a video pixel buffer for encoding.
/// The pixel buffer MUST have an RGBA8888 pixel layout.
/// </summary>
/// <param name="nativeBuffer">Pixel buffer in native memory to commit.</param>
/// <param name="timestamp">Pixel buffer timestamp in nanoseconds.</param>
void CommitFrame (void* nativeBuffer, long timestamp);
Do not commit raw pointers unless you know what you are doing, as these are much more likely to result in a hard crash if something goes wrong.
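If you do need the pointer overload, a typical pattern is committing directly from a NativeArray so that no managed copy is made. A sketch, assuming an existing `recorder`, `clock`, and a `nativePixels` buffer of exactly `width * height * 4` bytes:

```csharp
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;

unsafe {
    // Get a raw pointer to the pixel data; no managed allocation occurs.
    void* ptr = NativeArrayUnsafeUtility.GetUnsafeReadOnlyPtr (nativePixels);
    recorder.CommitFrame (ptr, clock.timestamp);
}
```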

Committing Audio Frames

/// <summary>
/// Commit an audio sample buffer for encoding.
/// </summary>
/// <param name="sampleBuffer">Linear PCM audio sample buffer, interleaved by channel.</param>
/// <param name="timestamp">Sample buffer timestamp in nanoseconds.</param>
void CommitSamples (float[] sampleBuffer, long timestamp);
NatCorder expects that all audio frames are provided as floating-point linear PCM samples. When the audio frames contain more than one channel (for example, stereo audio), the samples are expected to be interleaved by channel.
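Because recorder methods are thread safe, Unity's audio thread can commit samples directly. A sketch using `OnAudioFilterRead`, assuming `recorder` and `clock` fields on the component:

```csharp
// Unity invokes this on the audio thread with an interleaved
// floating-point linear PCM buffer, exactly what CommitSamples expects.
void OnAudioFilterRead (float[] sampleBuffer, int channels) {
    recorder.CommitSamples (sampleBuffer, clock.timestamp);
}
```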
The CommitSamples method has an overload that takes a pointer to a float sample buffer in native memory. This is useful for applications that want to avoid garbage collection and extra allocations for high performance recording.
/// <summary>
/// Commit an audio sample buffer for encoding.
/// The sample buffer MUST be a linear PCM floating point buffer interleaved by channel.
/// </summary>
/// <param name="nativeBuffer">Sample buffer in native memory to commit.</param>
/// <param name="sampleCount">Total number of samples in the buffer.</param>
/// <param name="timestamp">Sample buffer timestamp in nanoseconds.</param>
void CommitSamples (float* nativeBuffer, int sampleCount, long timestamp);
The sampleCount parameter must account for all channels of audio present within the buffer. In other words, the byte size of the nativeBuffer must be equal to sampleCount * sizeof(float).
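For example, one buffer of stereo audio with 1024 sample frames per channel contains 2048 samples. A sketch, assuming an existing `recorder`, `clock`, and native `float*` buffer:

```csharp
const int frameCount = 1024;    // sample frames per channel
const int channelCount = 2;     // stereo
int sampleCount = frameCount * channelCount;    // 2048 samples
// nativeBuffer must hold sampleCount * sizeof(float) = 8192 bytes.
unsafe {
    recorder.CommitSamples (nativeBuffer, sampleCount, clock.timestamp);
}
```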

Frame Timestamps

All frames are committed with a corresponding timestamp, in nanoseconds. You can either compute the timestamps manually or use an IClock instance. All timestamps are expected to be zero-based, meaning that the very first timestamp, whether for an audio or a video frame, must be zero. Violating this requirement will not raise an exception or cause a crash, but it will likely cause drifting and other synchronization issues in the resulting media file.
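If you record at a fixed frame rate, you can compute zero-based timestamps manually instead of using an IClock. A sketch, assuming an existing `recorder` and a list of pixel buffers:

```csharp
const float frameRate = 30f;
for (int i = 0; i < pixelBuffers.Count; i++) {
    // Zero-based timestamp in nanoseconds: the first frame is at 0.
    long timestamp = (long)(i * (1e9 / frameRate));
    recorder.CommitFrame (pixelBuffers[i], timestamp);
}
```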

Finishing Recording

/// <summary>
/// Finish writing and return the path to the recorded media file.
/// </summary>
Task<string> FinishWriting ();
When you are done committing frames, end the recording session with this method. When it is called, the recorder completes its recording operations, finalizes the media file, and then releases any resources. The method returns the path to the recorded media file once the recorder has finished its cleanup. If recording fails for any reason, the task will raise an exception, though in practice a recorder will rarely fail to finish writing.
No further frames may be committed once FinishWriting has been called; doing so will typically result in a hard crash.
All recorders write the media file to the application's private documents directory. There is no way to change this behaviour, so if you want the video in a specific place, use the System.IO APIs to move the file where you want it.
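For example, to move the finished recording somewhere else (a sketch; the destination path is illustrative):

```csharp
using System.IO;
using UnityEngine;

// Finish the recording and get the path to the media file.
string path = await recorder.FinishWriting ();
// Move it out of the private documents directory.
string destination = Path.Combine (Application.persistentDataPath, Path.GetFileName (path));
File.Move (path, destination);
```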