Performance Considerations

Squeezing Out Every Last Drop

Video recording is a computationally intensive process: it involves moving hundreds of megabytes of pixel data every second. As a result, you must take care to maintain performance while recording.

Recording Resolution

It is highly recommended to start at 1280x720 before trying to increase the resolution. Finding a good recording resolution for your app's rendering characteristics is a highly empirical process, so you must profile your app in your development environment. In Unity, use the Profiler. On iOS, use the Xcode frame debugger and Instruments. On Android, use the Android profiler.

Never record at the screen resolution; modern screens have such high pixel densities that the encoder might not be able to keep up.
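
As a quick illustration, the recording resolution is typically fixed when you create the recorder. The MP4Recorder class and constructor arguments below are assumptions for illustration; use whichever recorder and settings apply to your setup:

// Create a recorder at the recommended starting resolution
// NOTE: `MP4Recorder` and its (width, height, frameRate) constructor are illustrative assumptions
var recorder = new MP4Recorder(1280, 720, 30);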

Recording Frame Rate

It is recommended to record at 30FPS, even if your app renders at 60FPS. By default, a CameraInput commits a frame to the recorder on every Unity update, so if your app runs at 60FPS, the recording will also be 60FPS.

You can change the recording frame rate by reducing the frequency at which frames are committed to the recorder. The CameraInput class provides the frameSkip property for this purpose.
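
As a minimal sketch, assuming an existing CameraInput driving the recorder, and assuming a frameSkip of 1 skips one source frame between commits, halving the recording frame rate looks like this:

// `cameraInput` is assumed to be the CameraInput committing frames to the recorder
// Skip every other frame, so a 60FPS app produces a 30FPS recording
cameraInput.frameSkip = 1;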

Multithreaded Rendering

On most platforms, Unity can be configured to issue graphics commands on a dedicated worker thread. This is called Multithreaded Rendering. It is highly recommended to enable this setting.

Beyond that, the choice of graphics API will influence recording performance. On iOS and macOS, we highly recommend using Metal. On Android, we highly recommend using Vulkan. On Windows, DirectX 11 or 12 will provide the best performance.

Recording performance is greatly reduced when rendering with OpenGL ES on Android.
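
Both of these settings live under Project Settings > Player. As a rough sketch, they can also be applied from an editor script; the menu item name below is illustrative, and only the Android settings are shown:

using UnityEditor;
using UnityEngine.Rendering;

public static class RecordingBuildSettings {

    [MenuItem("Tools/Configure Recording Performance")]
    static void Configure () {
        // Enable multithreaded rendering on Android
        PlayerSettings.SetMobileMTRendering(BuildTargetGroup.Android, true);
        // Prefer Vulkan over OpenGL ES on Android
        PlayerSettings.SetUseDefaultGraphicsAPIs(BuildTarget.Android, false);
        PlayerSettings.SetGraphicsAPIs(BuildTarget.Android, new[] { GraphicsDeviceType.Vulkan });
    }
}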

Multithreaded Recording

All recorder Commit* methods block until the frame has been consumed. How long this takes depends on whether the encoder is ready to accept more samples and on how much load the system is under.

Recorders will never drop committed frames during recording. Instead, they will block until the frame has been consumed before returning control.

To mitigate this, all recorder methods are thread-safe: they can be invoked from different threads simultaneously, without the client needing to synchronize access. It is highly recommended to take advantage of this by committing frames from worker threads wherever possible.
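
As a minimal sketch, assuming a recorder whose CommitFrame method is thread-safe (as described above) and a pre-allocated pixel buffer, the readback stays on the main thread while the commit moves to the thread pool:

using System.Threading.Tasks;

void Update () {
    // Pixel readback must happen on the main thread (Unity API restriction)
    webCamTexture.GetPixels32(pixelArray);
    // Commit on a worker thread so the main thread never blocks on the encoder
    // NOTE: in a real app, guard against overwriting `pixelArray` before the
    // commit completes, for example by double-buffering or pooling buffers
    Task.Run(() => recorder.CommitFrame(pixelArray));
}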

Memory Concerns

When recording video or audio frames, the source pixel buffer or sample buffer might be resident in native memory, for example:

// Create a Texture2D
var texture = new Texture2D(...);
// Get the texture data
NativeArray<byte> textureData = texture.GetRawTextureData<byte>();

In these situations, copying data into a managed array and committing that array can be inefficient. All recorders can accept a NativeArray<T> or a raw pointer into native memory:

// Commit the native texture data directly
recorder.CommitFrame(textureData);

In general, try to reduce the number of memory copies when committing frames to recorders. If a copy is unavoidable, keep a persistent array that you copy into, rather than allocating a new array on every frame. So don't do this:

void Update () {
    // BAD: this allocates a new Color32[] every frame
    recorder.CommitFrame(webCamTexture.GetPixels32());
}

Do this instead:

Color32[] pixelArray;

void Start () {
    // Allocate the pixel buffer once, sized to the camera preview
    pixelArray = new Color32[webCamTexture.width * webCamTexture.height];
}

void Update () {
    // GOOD: copy into the same persistent array...
    webCamTexture.GetPixels32(pixelArray);
    // ...then commit that array
    recorder.CommitFrame(pixelArray);
}
