Unique customer identifier provided by SDK vendor.
A callback is fired when the SDK is ready for frame processing. Effects configuration should only be applied after this callback has been triggered.
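To illustrate the contract, here is a self-contained stub; the `onReady` registration name and the stub itself are illustrative assumptions, not the SDK's actual API:

```javascript
// Stub demonstrating the ready-callback contract described above:
// effects configuration must only run after the ready callback fires.
function createStubSdk() {
  const readyCallbacks = [];
  let ready = false;
  return {
    // Hypothetical registration name; check the real SDK for the actual one.
    onReady(cb) { ready ? cb() : readyCallbacks.push(cb); },
    _fireReady() { ready = true; readyCallbacks.forEach(cb => cb()); },
    config(opts) {
      if (!ready) throw new Error('apply effects config only after ready');
      return opts;
    }
  };
}

const sdk = createStubSdk();
const applied = [];
// Defer effects configuration until the SDK signals readiness.
sdk.onReady(() => applied.push(sdk.config({ effects: ['virtual_background'] })));
sdk._fireReady(); // frame processing and configuration become safe here
```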
Pass true to delete the cached models.
Set the cache_models configuration to true. This can also be configured using sdk.config({ cache_models: true | false }). The model is cached in local storage during the first load, which speeds up all subsequent initializations of effects that use ML models.
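Based on the description above, the caching option can be set like this; the name of the separate clear-cache function is not given in this document, so it appears only as a hedged comment:

```javascript
// Enable model caching in local storage (on by default).
sdk.config({ cache_models: true });

// A separate function clears the cache and accepts true to delete the
// cached models; its exact name is not documented here, so this is a sketch:
// sdk.clearCache(true); // hypothetical name
```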
Ability to configure the SDK execution environment.
configuration object
General configuration options
config = {
  api_url: 'url', // Custom URL for SDK authentication; suitable for on-premises solutions.
  model_url: 'url', // Custom URL for the segmentation model; in most cases this parameter does not need to be set.
  sdk_url: 'url', // URL of the SDK folder, for cases where you host the model files yourself.
  effects: ['virtual_background', 'smart_zoom', 'low_light', 'color_correction'], // List of effects to load at initialization.
  preset: 'balanced', // Default segmentation preset; one of: 'quality', 'balanced', 'speed', 'lightning'.
  proxy: true/false, // Whether segmentation runs in a separate worker thread (not in the main UI thread). Default: true.
  provider: 'wasm'/'webgpu'/'auto', // Where segmentation executes. In 'auto' mode the SDK checks GPU availability and automatically falls back to WASM if no GPU is available.
  stats: true/false, // Enable or disable the sending of statistics.
  cache_models: true/false, // Cache models locally to speed up load time. Default: true.
  test_inference: true/false, // Default: false. If true, the SDK tests inference consistency on the WebGPU backend.
  models: {
    'colorcorrector': 'url', // Custom model URL; if left empty, the feature is disabled.
    'facedetector': 'url', // Custom model URL; if left empty, the feature is disabled.
    'lowlighter': 'url', // Custom model URL; if left empty, the feature is disabled.
  },
  wasmPaths: { // By default, WASM files are loaded from the same directory as the SDK; custom URLs (for example, CDNs) are also supported.
    'ort-wasm.wasm': 'url',
    'ort-wasm-simd.wasm': 'url',
    'ort-wasm-threaded.wasm': 'url',
    'ort-wasm-simd-threaded.wasm': 'url'
  }
}
Example of how to change default segmentation preset
config = {
preset: 'lightning'
}
Example of how to disable color correction and face detection
config = {
effects: ['virtual_background']
}
Example of how to host models on a custom domain
config = {
  sdk_url: 'https://domain.com/sdk/' // this directory should contain a 'models' subfolder with all required models
}
Destroy SDK instance and cleanup all resources including WebGL contexts. This method should be called when you're done with the SDK instance to prevent memory leaks. After calling destroy(), you should create a new SDK instance if you need to use it again.
Initialize the frame processor for processing individual video frames. This method sets up the rendering pipeline and effect processor without requiring a MediaStream. It creates the necessary components (Renderer, EffectProcessor) and loads all configured effects. If the frame processor is already initialized, this method returns immediately.
Use this method when you want to process individual VideoFrame objects directly via processFrame(), rather than processing an entire MediaStream via useStream().
IMPORTANT - Mode Isolation: frame-processing mode (initFrameProcessor()/processFrame()) and stream mode (useStream()) are mutually exclusive on a single SDK instance; use one or the other.
A promise that resolves when initialization is complete and all effects are loaded.
Process a single VideoFrame through the effects pipeline. This method allows for frame-by-frame processing without requiring a MediaStream. The frame processor must be initialized via initFrameProcessor() before calling this method.
All enabled effects (virtual background, color correction, smart zoom, etc.) will be applied to the input frame, and a new processed VideoFrame will be returned.
IMPORTANT - Mode Isolation: processFrame() belongs to frame-processing mode, which is mutually exclusive with stream mode (useStream()) on a single SDK instance; use one or the other.
Important: The input VideoFrame is consumed by this method. You should not use it after passing it to processFrame(). The SDK automatically closes the input frame even if processing fails.
Backpressure Control: The SDK can handle up to 2 frames in-flight simultaneously:
It's recommended to track in-flight frames and drop incoming frames when the limit is reached to prevent memory buildup and maintain smooth performance.
The input VideoFrame to process. This frame will be consumed by the processing pipeline.
A promise that resolves to the processed VideoFrame with all effects applied.
Throws an error if the Frame Processor is not initialized. Call initFrameProcessor() first.
Basic usage:
await sdk.initFrameProcessor();
const processedFrame = await sdk.processFrame(inputFrame);
Recommended pattern with backpressure control:
let stats = { inFlight: 0, droppedFrames: 0 };
// In your per-frame callback (ctx is a 2D canvas context obtained elsewhere):
// Drop frames if too many are in flight (backpressure control)
if (stats.inFlight >= 2) {
  stats.droppedFrames++;
  inputFrame.close();
  return; // skip this frame
}
stats.inFlight++;
sdk.processFrame(inputFrame)
  .then(outputFrame => {
    // Draw the output frame to the canvas
    ctx.drawImage(outputFrame, 0, 0);
    // Close the output frame when done with it
    outputFrame.close();
  })
  .catch(err => {
    console.error('Frame processing failed:', err);
    // Note: inputFrame is already closed by the SDK even on error
  })
  .finally(() => {
    // Decrement the in-flight counter (inputFrame already closed by the SDK)
    stats.inFlight--;
  });
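The in-flight counting above can be factored into a small self-contained helper; the names here are illustrative and not part of the SDK:

```javascript
// Minimal in-flight frame gate implementing the backpressure rule above:
// at most `limit` frames may be in flight; extra frames are counted as dropped.
function createFrameGate(limit = 2) {
  const stats = { inFlight: 0, droppedFrames: 0 };
  return {
    stats,
    // Returns true and increments the counter if a slot is free;
    // otherwise counts a dropped frame and returns false.
    tryAcquire() {
      if (stats.inFlight >= limit) {
        stats.droppedFrames++;
        return false;
      }
      stats.inFlight++;
      return true;
    },
    // Call from the .finally() of each processFrame() promise.
    release() { stats.inFlight--; }
  };
}
```

In the callback above you would call `gate.tryAcquire()` before `sdk.processFrame()` (closing and skipping the frame when it returns false) and `gate.release()` in `.finally()`.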
Set the media source of the background. Video sources are played automatically from the beginning.
a URL to an image or video on a server, or one of the following objects: MediaStream, MediaStreamTrack, HTMLVideoElement, ImageBitmap, Canvas.
Set chroma key parameters.
Partial settings for chroma keying.
True if the settings were applied successfully.
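As a sketch only: the method name and field names below are assumptions for illustration, not the SDK's documented chroma-key type; consult the actual settings type before use.

```javascript
// All names here are hypothetical; the doc only states that partial
// chroma-key settings are accepted and that the call returns true on success.
const ok = sdk.setChromaKey({
  // color: '#00ff00',  // assumed field: the key color to remove
  // similarity: 0.4,   // assumed field: tolerance of the key match
});
// `ok` is true if the settings were applied successfully.
```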
Set the layout with custom params
object with the custom params
persent = {
  xOffset?: number, // horizontal offset relative to center; value can be a number from -1 to 1
  yOffset?: number, // vertical offset relative to center; value can be a number from -1 to 1
  size?: number // mask size percentage; value can be a number from 0 to 1
}
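Before calling the layout setter, the ranges above can be checked with a small helper; this validator is illustrative and not part of the SDK:

```javascript
// Validate custom layout params against the documented ranges:
// xOffset and yOffset in [-1, 1], size in [0, 1]. Missing fields
// fall back to neutral defaults (centered, full size).
function validateLayoutParams({ xOffset = 0, yOffset = 0, size = 1 } = {}) {
  const inRange = (v, min, max) => typeof v === 'number' && v >= min && v <= max;
  return inRange(xOffset, -1, 1) && inRange(yOffset, -1, 1) && inRange(size, 0, 1);
}
```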
Set the layout mode. You can disable segmentation and show full camera frame or hide camera frame.
could be one of the following: 'segmentation' | 'full' | 'hide' | 'transparent'
segmentation - Process the segmentation and display the selected background.
full - Show the full original frame without segmentation.
hide - Hide the original frame completely (only the background will be visible).
transparent - Process the segmentation and return the segmented person with a transparent background.
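A tiny guard for the four mode strings; the helper name is illustrative and not part of the SDK:

```javascript
// The four documented layout modes.
const LAYOUT_MODES = ['segmentation', 'full', 'hide', 'transparent'];

// Returns true only for a valid layout mode string.
function isLayoutMode(mode) {
  return LAYOUT_MODES.includes(mode);
}
```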
Set portrait lighting options.
Portrait lighting configuration
Set the segmentation mode. The segmentation mode lets you choose a combination of segmentation quality and speed. Balanced mode is enabled by default.
a string; the value can be one of: quality, balanced, speed, lightning.
Set the MediaStream object which will be the source of the video frames for processing.
IMPORTANT - Mode Isolation: stream mode (useStream()) is mutually exclusive with frame-processing mode (initFrameProcessor()/processFrame()) on a single SDK instance; use one or the other.
the source MediaStream object.
resize?: ResizeSettings - optional resize settings to apply custom output dimensions.
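A usage sketch, assuming the stream comes from getUserMedia and that ResizeSettings takes width/height fields (those field names are an assumption, not confirmed by this document):

```javascript
// Feed a camera MediaStream into the SDK for processing.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
// Resize field names (width/height) are assumed here.
await sdk.useStream(stream, { width: 1280, height: 720 });
```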
Initialization of the main SDK instance.