Distributing Predictors

Sharing with the World

Predictors are designed to be shared. Whether you choose to open-source your predictor or sell it, here are some general guidelines:

Packaging Predictors

We highly recommend packaging a predictor with the following layout:

model-name/             // Package name should be lowercase and dasherized
├─ Runtime/
│  ├─ MLPackage.asmdef  // Assembly definition for your package scripts
│  ├─ Predictor.cs      // Model predictor
│  ├─ ...
├─ Sample/
│  ├─ example.unity     // Example scene demonstrating the model
│  ├─ ...
├─ README.md            // Readme explaining how the predictor is used
├─ LICENSE.md           // License if applicable

Your package assembly definition should reference NatML.ML for access to NatML classes and interfaces.

You can use NatML Hub to generate a template predictor package that already has this layout, saving you time.

Publishing on NatML

All public predictors on NatML Hub must pass a review process to ensure that they meet developer experience and performance standards. Below are the criteria used in the review process:

Developer Experience

The foundational principle in designing the developer experience is to reduce cognitive load. The developer should not have to learn many new concepts (ideally, none at all) in order to use your predictor.

Try to keep the number of public methods in your predictor to a minimum. Ideally, there should only be one public method: Predict.
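
As an illustration, here is a minimal sketch of such a predictor, whose only public members are a constructor, Predict, and Dispose. It assumes NatML's IMLPredictor<TOutput> interface and MLEdgeModel class; the class name, output type, and namespaces are placeholders to adapt to your own model.

using System;
using NatML;            // adjust namespaces to the NatML version you target
using NatML.Features;

public sealed class ExamplePredictor : IMLPredictor<float[]> { // hypothetical predictor

    private readonly MLEdgeModel model;

    // Create the predictor from a deserialized model
    public ExamplePredictor (MLEdgeModel model) => this.model = model;

    // The single public prediction method
    public float[] Predict (params MLFeature[] inputs) {
        // Pre-process `inputs`, run `model`, and decode the raw output here
        throw new NotImplementedException(); // prediction logic elided in this sketch
    }

    // Release the model when the predictor is disposed
    public void Dispose () => model.Dispose();
}

Any pre-processing and output decoding stays private, so the public surface remains just these three members.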

The README should be the entrypoint for developers. In line with the considerations above, it should show how the predictor is used as quickly as possible, with code snippets.

Most developers will only skim the README, so keeping it short and to the point increases the chances that they actually read it.

API Design

NatML predictors have a typical usage pattern:

  1. Create the predictor.

  2. Call Predict with one or more features.

  3. Use the output(s) directly, or call a post-processing method on the output(s).

Predictors must not deviate from this usage pattern. Specifically, the predictor must not have any public methods for feature pre- or post-processing.
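
In code, the pattern looks like the following sketch, where ExamplePredictor, model, and imageFeature are hypothetical placeholders:

// 1. Create the predictor
var predictor = new ExamplePredictor(model);
// 2. Call Predict with one or more input features
var output = predictor.Predict(imageFeature);
// 3. Use the output directly, or call a post-processing method on it
UnityEngine.Debug.Log(output[0]);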

Predictors should be thread-safe, and should support background processing. As a result, the Predict method should not use any Unity APIs which cannot be used from background threads. This includes familiar classes like Texture2D, RenderTexture, ComputeShader, and Job.
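
As a quick check, a developer should be able to offload prediction to a worker thread, for example with Task.Run. The snippet below is a sketch that assumes a predictor and an input feature have already been created, inside an async method with System.Threading.Tasks imported:

// Offload prediction to a background thread; this is only safe because
// Predict avoids Unity APIs that are restricted to the main thread
var output = await Task.Run(() => predictor.Predict(feature));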

If your predictor requires pre-processing on the main thread, you should instead create a custom feature class that derives from MLFeature and implements IMLEdgeFeature or IMLCloudFeature.

If your predictor requires further post-processing before the outputs can be used, then your predictor should return an instance of an inner class. This inner class should expose a method to perform the required post-processing. This is a common pattern for computer vision predictors that output an image:

// Predictor outputs an inner class
Predictor.Output output = predictor.Predict(...);
// Then developer performs post-processing on the output
RenderTexture result = ...;
output.PostProcessIntoRenderTexture(result);

One advantage of this pattern is that the developer can run your post-processing code on the main thread, giving it full access to Unity APIs.
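
For reference, below is a sketch of what such an inner class could look like. The Output layout and the assumption that the raw model output holds one float per destination pixel are illustrative only:

using UnityEngine;

public sealed partial class Predictor {

    public sealed class Output {

        private readonly float[] rawOutput; // raw model output captured by Predict

        internal Output (float[] rawOutput) => this.rawOutput = rawOutput;

        // Must be called on the main thread, which gives it full access to Unity graphics APIs
        public void PostProcessIntoRenderTexture (RenderTexture destination) {
            // Upload the raw data into a temporary texture, then blit it into the destination
            // Assumes `rawOutput` holds one float per destination pixel
            var texture = new Texture2D(destination.width, destination.height, TextureFormat.RFloat, false);
            texture.SetPixelData(rawOutput, 0);
            texture.Apply();
            Graphics.Blit(texture, destination);
            Object.Destroy(texture);
        }
    }
}

Because the Output constructor is internal, developers can only obtain an instance from Predict, which keeps the public surface small.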

Finally, all public methods must be annotated with XML documentation. This is critical for developers to understand how to use the different methods in your classes.

Most code editors provide IntelliSense, which automatically displays XML docs to the developer. This significantly increases developer productivity.
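
For example, a documented Predict method could look like the following, where RunModel stands in for whatever private prediction logic your predictor uses:

/// <summary>
/// Make a prediction on one or more input features.
/// </summary>
/// <param name="inputs">Input features for the model.</param>
/// <returns>The model output.</returns>
public float[] Predict (params MLFeature[] inputs) => RunModel(inputs); // `RunModel` is a hypothetical private helper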

Performance

Predictors should be written for maximum performance and minimal overhead. Predictors, along with any pre- or post-processors, must not use any performance-degrading APIs that might have significant adverse effects on the entire app.

Predictor packages that use GPU readbacks (Texture2D.ReadPixels, ComputeBuffer.GetData) or disk I/O will be immediately rejected.
