Custom Video Plugins

This feature is still in Beta. To learn more or report any issues, feel free to reach out to us over Discord.

Custom video plugins allow you to hook into 100ms' video lifecycle and add your own video processing pipeline right before the video is sent to the other participants in the room. This enables things like building AR filters, adding a virtual background, emojifying the streams, and monitoring participant engagement. Check out our Virtual Background docs to see an example of such a plugin. Note that this page is about creating new custom plugins rather than using existing ones.


Prerequisites

  • Basic knowledge of the 100ms SDK; follow our quickstart guides.
  • Knowledge of HTML Canvas

Video Plugin Interface

The plugin needs to implement the interface below, after which it can be added to the processing pipeline for the local peer's video.

```ts
interface HMSVideoPlugin {
  /**
   * The name is meant to uniquely specify a plugin instance. This will be used to track the number of plugins
   * added to the track; the same name won't be allowed twice for the same video track.
   */
  getName(): string;

  /**
   * This function will be called before the call to init. It is used to check whether the plugin supports the
   * current OS and device or not. An error will be thrown back to the user if they try to use an unsupported plugin.
   */
  checkSupport(): HMSPluginSupportResult;

  /**
   * @deprecated Will be deleted in future updates. Use checkSupport instead.
   */
  isSupported(): boolean;

  /**
   * This function will be called in the beginning for initialization, which may include tasks like setting up
   * variables, loading ML models etc. This can be used by a plugin to ensure it's prepared by the time
   * processVideoFrame is called.
   */
  init(): Promise<void>;

  /**
   * @see HMSVideoPluginType below
   */
  getPluginType(): HMSVideoPluginType;

  /**
   * By default a 2d rendering context is used for the output canvas. If you want to change it to webgl, use this.
   * @see HMSVideoPluginCanvasContextType below
   */
  getContextType?(): HMSVideoPluginCanvasContextType;

  /**
   * This function will be called by the SDK for every video frame which the plugin needs to process.
   * PluginFrameRate - the rate at which the plugin is expected to process the video frames. This is not necessarily
   * equal to the capture frame rate. The user can specify this rate, and the SDK might also change it on the basis
   * of device type or CPU usage.
   * For an analyzing plugin, this function will be called at the plugin framerate.
   * For a transforming plugin, the SDK will pass in the input and output at the real frame rate, with an additional
   * boolean, skipProcessing. The expectation is that the plugin should use results of previous runs instead of
   * doing complex processing again when skipProcessing is set to true. This helps in maintaining the framerate of
   * the video as well as bringing down CPU usage in case of complex processing.
   * @param input input canvas containing the input frame
   * @param output the output canvas which should contain the output frame
   * @param skipProcessing use results from the previous run if true
   */
  processVideoFrame(
    input: HTMLCanvasElement,
    output?: HTMLCanvasElement,
    skipProcessing?: boolean
  ): Promise<void> | void;

  /**
   * The plugin can implement this function to dispose of its resources. It'll be called when the processor
   * instance is no longer needed and the plugin is removed.
   */
  stop(): void;
}

/**
 * Specifies the type of the plugin. A transforming plugin will get an output canvas to give the resulting
 * transformation, while an analyzing plugin will only be passed the input canvas.
 */
export enum HMSVideoPluginType {
  TRANSFORM = 'TRANSFORM',
  ANALYZE = 'ANALYZE'
}

/**
 * Specifies the type of canvas rendering context.
 */
export enum HMSVideoPluginCanvasContextType {
  '2D' = '2d',
  WEBGL = 'webgl',
  WEBGL2 = 'webgl2'
}

export interface HMSPluginSupportResult {
  isSupported: boolean;
  errType?: HMSPluginUnsupportedTypes;
  errMsg?: string;
}

export enum HMSPluginUnsupportedTypes {
  PLATFORM_NOT_SUPPORTED = 'PLATFORM_NOT_SUPPORTED',
  DEVICE_NOT_SUPPORTED = 'DEVICE_NOT_SUPPORTED'
}
```
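The grayscale example further down this page is a TRANSFORM plugin, so here is a minimal sketch of the other type: an ANALYZE plugin that estimates each frame's average brightness without producing an output frame. The class, the `onBrightness` callback, and the `CanvasLike` stand-in type are illustrative assumptions, not part of the 100ms SDK; the plugin-type string matches the `HMSVideoPluginType.ANALYZE` enum value above.

```typescript
// Minimal structural stand-in for the parts of HTMLCanvasElement this
// sketch touches, so it stays self-contained.
type CanvasLike = {
  width: number;
  height: number;
  getContext(id: '2d'): {
    getImageData(x: number, y: number, w: number, h: number): { data: Uint8ClampedArray };
  } | null;
};

// An ANALYZE plugin only reads the input frame; it never writes an output.
class BrightnessAnalyzerPlugin {
  constructor(private onBrightness: (avg: number) => void) {}

  getName() {
    return 'brightness-analyzer';
  }

  checkSupport() {
    // Canvas 2D is available in all modern browsers.
    return { isSupported: true };
  }

  async init() {
    // nothing heavy to set up for this plugin
  }

  getPluginType() {
    return 'ANALYZE'; // HMSVideoPluginType.ANALYZE
  }

  processVideoFrame(input: CanvasLike) {
    const ctx = input.getContext('2d');
    if (!ctx) return;
    const pixels = ctx.getImageData(0, 0, input.width, input.height).data;
    let sum = 0;
    // average the red channel as a cheap brightness proxy
    for (let i = 0; i < pixels.length; i += 4) {
      sum += pixels[i];
    }
    this.onBrightness(sum / (pixels.length / 4));
  }

  stop() {}
}
```

Because it is an ANALYZE plugin, the SDK calls `processVideoFrame` at the plugin framerate with only the input canvas, and the unmodified frame continues to the other participants.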

Adding and Removing Plugins

Once a plugin implementation is ready, it can be added to and removed from the local peer's video track as shown below:

```ts
import { selectIsLocalVideoPluginPresent } from '@100mslive/hms-video-store';
import { hmsActions, hmsStore } from './hms';

const myPlugin = new MyCustomPlugin();
const pluginSupport = hmsActions.validateVideoPluginSupport(myPlugin);

if (pluginSupport.isSupported) {
  // await myPlugin.init(); // optional, recommended if the plugin implements it; you can show a loader here in the UI
  const isPluginAdded = hmsStore.getState(
    selectIsLocalVideoPluginPresent(myPlugin.getName())
  );
  if (!isPluginAdded) {
    hmsActions.addPluginToVideoTrack(myPlugin);
  } else {
    hmsActions.removePluginFromVideoTrack(myPlugin);
  }
} else {
  const err = pluginSupport.errMsg;
  console.error(err);
}
```

Implementation Example - Grayscale Filter

Below is a sample implementation of the above interface which converts the local video to grayscale.

```ts
class GrayscalePlugin {
  getName() {
    return "grayscale-plugin";
  }

  checkSupport() {
    const result = {} as HMSPluginSupportResult;
    result.isSupported = true;
    return result;
  }

  async init() {}

  getPluginType() {
    return HMSVideoPluginType.TRANSFORM;
  }

  stop() {}

  /**
   * @param input {HTMLCanvasElement}
   * @param output {HTMLCanvasElement}
   */
  processVideoFrame(input, output) {
    const width = input.width;
    const height = input.height;
    output.width = width;
    output.height = height;
    const inputCtx = input.getContext("2d");
    const outputCtx = output.getContext("2d");
    const imgData = inputCtx.getImageData(0, 0, width, height);
    const pixels = imgData.data;
    for (let i = 0; i < pixels.length; i += 4) {
      const red = pixels[i];
      const green = pixels[i + 1];
      const blue = pixels[i + 2];
      // standard luma coefficients for converting RGB to grayscale
      const lightness = Math.floor(red * 0.299 + green * 0.587 + blue * 0.114);
      pixels[i] = pixels[i + 1] = pixels[i + 2] = lightness;
    }
    outputCtx.putImageData(imgData, 0, 0);
  }
}

// to add the plugin to local video
hmsActions.addPluginToVideoTrack(new GrayscalePlugin());
```

To see a complete example of the above plugin in use with our React quickstart, please check this repository or the CodeSandbox.

Plugin Guidelines

  • Feel free to implement more methods outside the interface, options passed in the plugin's constructor, etc., as required, to give the users of the plugin more interaction points. For example, our Virtual Background plugin exposes a method to change the background as required.
  • If your plugin involves a CPU-intensive task such as initialising an ML model, ensure that it's done as part of the init method. As a bonus, ensure that the plugin remembers the model initialisation across instances, so that if the user leaves and joins the room again without closing the tab, they don't see a delay the second time.
  • The addPluginToVideoTrack method takes a second parameter to control the plugin framerate. This can be used to reduce the amount of processing on low-end devices. CPU usage by the plugin would be proportional to the video resolution and frame rate.
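The last two guidelines can be sketched as follows. `HeavyPlugin` and `loadHeavyModel` are illustrative assumptions standing in for a real model-backed plugin, not 100ms APIs; the framerate argument to `addPluginToVideoTrack` is the second parameter mentioned above.

```typescript
// Module-level cache so the (hypothetical) model is loaded only once per
// tab, even if the plugin is re-instantiated after a leave/rejoin.
let modelPromise: Promise<object> | undefined;

// loadHeavyModel is a stand-in for e.g. a TensorFlow.js model download.
function loadHeavyModel(): Promise<object> {
  return Promise.resolve({ name: 'segmentation-model' });
}

class HeavyPlugin {
  private model?: object;

  getName() {
    return 'heavy-plugin';
  }

  checkSupport() {
    return { isSupported: true };
  }

  getPluginType() {
    return 'TRANSFORM'; // HMSVideoPluginType.TRANSFORM
  }

  async init() {
    // the first instance pays the download cost; later instances reuse it
    modelPromise = modelPromise ?? loadHeavyModel();
    this.model = await modelPromise;
  }

  processVideoFrame() {
    // use this.model on each frame here
  }

  stop() {}
}

// The second argument to addPluginToVideoTrack caps the plugin framerate,
// e.g. process at most 15 frames per second on low-end devices:
// hmsActions.addPluginToVideoTrack(new HeavyPlugin(), 15);
```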
