NatCorder
Performance Considerations
Squeezing Out Every Last Drop
Video recording is a computationally intensive process: it moves hundreds of megabytes of pixel data every second. As a result, care must be taken to maintain maximum performance during recording.

Recording Resolution

It is highly recommended to start at 1280x720 before trying to increase the resolution. Finding a good recording resolution for your app's rendering characteristics is a highly empirical process, so you must profile your app in your development environments. In Unity, use the Profiler. On iOS, use the Xcode frame debugger and Instruments. On Android, use the Android profiler.
Never record at screen resolution: modern screens have such high pixel densities that the encoder might not be able to keep up.
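For instance, you might create the recorder at an explicit, conservative resolution instead of the screen size. This is a sketch; the `MP4Recorder` constructor arguments shown here (width, height, frame rate) are illustrative and may differ across NatCorder versions:

```csharp
// Record at a fixed 1280x720 at 30FPS, regardless of Screen.width and Screen.height
var recorder = new MP4Recorder(1280, 720, 30);
```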

Recording Frame Rate

It is recommended to record at 30FPS, even if your app renders at 60FPS. By default, a CameraInput commits a frame to the recorder on every Unity update, so if your app runs at 60FPS, the recording will also be 60FPS.
You can change the recording frame rate by reducing the frequency at which frames are committed to the recorder. The CameraInput class provides the frameSkip property for this purpose.
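For example, to record at 30FPS while the app renders at 60FPS, you can skip every other frame. This is a sketch: the namespaces, the `RealtimeClock` type, and the `CameraInput` constructor arguments are assumptions that may vary with your NatCorder version.

```csharp
using UnityEngine;
using NatSuite.Recorders;
using NatSuite.Recorders.Clocks;
using NatSuite.Recorders.Inputs;

// Create a recorder and a clock (illustrative values)
var recorder = new MP4Recorder(1280, 720, 30);
var clock = new RealtimeClock();
// Stream the main camera to the recorder
var cameraInput = new CameraInput(recorder, clock, Camera.main);
// Skip every other frame, so a 60FPS app records at 30FPS
cameraInput.frameSkip = 1;
```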

Multithreaded Rendering

On most platforms, Unity can be configured to issue graphics commands in a dedicated worker thread. This is called Multithreaded Rendering. It is highly recommended to enable this setting.
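Multithreaded rendering can be enabled in Project Settings > Player, or from an editor script. A minimal sketch, assuming the `PlayerSettings.MTRendering` editor API (newer Unity versions may instead expose `PlayerSettings.SetMobileMTRendering` per build target group):

```csharp
using UnityEditor;

// Enable multithreaded rendering from an editor script,
// equivalent to the checkbox in Project Settings > Player
PlayerSettings.MTRendering = true;
```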
Beyond that, the choice of graphics API will influence recording performance. On iOS and macOS, we highly recommend using Metal. On Android, we highly recommend using Vulkan. On Windows, DirectX 11 or 12 will provide the best performance.
Recording performance is greatly reduced when rendering with OpenGL ES on iOS or Android.

Multithreaded Recording

All recorder Commit* methods block until the frame has been consumed. How long this takes depends on whether the encoder is ready to accept more samples and on how much load the system is under.
Recorders never drop committed frames during recording. Instead, they block until the frame has been consumed before returning control.
To accommodate this behaviour, all recorder methods are thread safe: they can be invoked from different threads simultaneously, without the client needing to synchronize access. It is highly recommended to take advantage of this architecture by committing frames from worker threads wherever possible.
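For example, audio samples can be committed from a thread pool worker so the calling thread never waits on the encoder. A sketch, assuming a `recorder`, `clock`, and `sampleBuffer` already exist, and that a `CommitSamples(float[], long)` overload matches your NatCorder version:

```csharp
using System.Threading.Tasks;

// Capture the timestamp on the calling thread, so it reflects capture time
var timestamp = clock.timestamp;
// Commit the samples from a thread pool worker; if the encoder is busy,
// the worker blocks instead of the main thread
Task.Run(() => recorder.CommitSamples(sampleBuffer, timestamp));
```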

Memory Concerns

When recording video or audio frames, the source pixel buffer or sample buffer might be resident in native memory, for example:
```csharp
// Create a Texture2D
var texture = new Texture2D(...);
// Get the texture data
NativeArray<byte> textureData = texture.GetRawTextureData<byte>();
```
In these situations, copying the data into a managed array and committing that array can be inefficient. All recorders can instead accept a raw pointer into native memory:
```csharp
// Get the base address // This requires unsafe code
void* textureDataBaseAddress = NativeArrayUnsafeUtility.GetUnsafeReadOnlyPtr(textureData);
// Commit the native texture data directly
recorder.CommitFrame(textureDataBaseAddress);
```
In general, try to reduce the number of memory copies when committing frames to recorders. And if you have no choice, keep a persistent array that you copy into, rather than allocating a new array on every frame. So don't do this:
```csharp
void Update () {
    // BAD // We are allocating a new array each time
    recorder.CommitFrame(webCamTexture.GetPixels32());
}
```
Do this instead:
```csharp
Color32[] pixelArray; // initialize this somewhere

void Update () {
    // GOOD // Copy into the same array first
    webCamTexture.GetPixels32(pixelArray);
    // Commit that array
    recorder.CommitFrame(pixelArray);
}
```