Custom Audio Plugins

This feature is still in Beta. To learn more or report any issues, feel free to reach out to us on Discord.

Custom audio plugins allow you to hook into 100ms' audio lifecycle and add your own processing pipeline right before the audio is sent to the other participants in the room. This enables effects like noise suppression, reverb, echo, delay, chorus, robotic voice etc. Note that this page is about creating new custom plugins rather than using existing ones.

Prerequisites

  • Basic knowledge of the 100ms SDK; follow our quickstart guides.
  • Familiarity with Web Audio API audio nodes (a short refresher follows below)
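
If audio nodes are new to you, here is a minimal sketch of the Web Audio pattern that plugins build on: an AudioContext hosts a graph in which a source node is connected through one or more processing nodes. This standalone snippet is only illustrative; inside a plugin the SDK supplies the context and the source node for you.

async function demoNodeGraph() {
  const ctx = new AudioContext();
  // capture the microphone and wrap it in a source node
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = ctx.createMediaStreamSource(stream);
  // a processing node - here a simple gain (volume) control
  const gain = ctx.createGain();
  gain.gain.value = 0.5; // halve the volume
  source.connect(gain); // source -> gain
  gain.connect(ctx.destination); // gain -> speakers
}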

Audio Plugin Interface

The plugin needs to implement the below interface, after which it can be added to the processing pipeline for the local peer's audio.

export interface HMSPluginSupportResult {
  isSupported: boolean;
  errType?: HMSPluginUnsupportedTypes;
  errMsg?: string;
}

export enum HMSPluginUnsupportedTypes {
  PLATFORM_NOT_SUPPORTED = 'PLATFORM_NOT_SUPPORTED', // when a particular OS or browser is not supported
  DEVICE_NOT_SUPPORTED = 'DEVICE_NOT_SUPPORTED' // when a particular device is not supported, for example Bluetooth headphones
}

interface HMSAudioPlugin {
  /**
   * The name is meant to uniquely specify a plugin instance. It is used to track the number of plugins
   * added to the track; the same name won't be allowed twice for the same audio track.
   */
  getName(): string;

  /**
   * This function is called before the call to init. It is used to check whether the plugin supports the current
   * OS, browser and audio device. An error object is thrown back to the user if they try to use an unsupported plugin.
   */
  checkSupport(ctx?: AudioContext): HMSPluginSupportResult;

  /**
   * This function is called in the beginning for initialization, which may include tasks like setting up
   * variables, loading ML models etc. A plugin can use it to ensure it's fully prepared by the time
   * processAudioTrack is called.
   */
  init(): Promise<void>;

  /**
   * @see HMSAudioPluginType below
   */
  getPluginType(): HMSAudioPluginType;

  /**
   * This function is called by the SDK with the audio track the plugin needs to process.
   * The audio context is also part of the interface because it's recommended to reuse one audio context
   * instead of creating a new one for every use - https://developer.mozilla.org/en-US/docs/Web/API/AudioContext
   */
  processAudioTrack(ctx: AudioContext, source: AudioNode): Promise<AudioNode>;

  /**
   * The plugin can implement this function to dispose of its resources. It'll be called when the plugin instance is
   * no longer needed and the plugin is removed.
   */
  stop(): void;
}

/**
 * Specifies the type of the plugin. A transforming plugin returns an output audio node carrying the resulting
 * transformation, while an analyzing plugin is only passed the input node.
 * An analyze plugin can return the source node passed to processAudioTrack to leave the audio unmodified.
 */
export enum HMSAudioPluginType {
  TRANSFORM = 'TRANSFORM',
  ANALYZE = 'ANALYZE'
}
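
The GainPlugin example further below shows a TRANSFORM plugin. For contrast, here is a minimal sketch of an ANALYZE plugin that taps the audio with an AnalyserNode to log the average input level and returns the source node unchanged. The class name and the one-second logging interval are illustrative choices, not part of the SDK.

class VolumeAnalyzerPlugin implements HMSAudioPlugin {
  private analyser?: AnalyserNode;
  private intervalId?: number;

  getName() {
    return 'volume-analyzer';
  }

  checkSupport(): HMSPluginSupportResult {
    // AnalyserNode is widely supported, so no platform checks are needed here
    return { isSupported: true };
  }

  async init() {}

  getPluginType() {
    return HMSAudioPluginType.ANALYZE;
  }

  async processAudioTrack(ctx: AudioContext, source: AudioNode) {
    this.analyser = ctx.createAnalyser();
    this.analyser.fftSize = 256;
    source.connect(this.analyser); // tap the audio without altering it
    const data = new Uint8Array(this.analyser.frequencyBinCount);
    this.intervalId = window.setInterval(() => {
      this.analyser?.getByteFrequencyData(data);
      const avg = data.reduce((sum, v) => sum + v, 0) / data.length;
      console.log('average input level', avg);
    }, 1000);
    // analyze plugins return the source node so the audio passes through unchanged
    return source;
  }

  stop() {
    window.clearInterval(this.intervalId);
    this.analyser?.disconnect();
    this.analyser = undefined;
  }
}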

Adding and Removing Plugins

Once a plugin implementation is ready, it can be added to and removed from the local peer's audio track as below -

const myPlugin = new MyCustomPlugin();
const checkSupport = myPlugin.checkSupport();
if (checkSupport.isSupported) {
  // myPlugin.init(); // optional but recommended if the plugin implements it; you can show a loader in the UI while it runs
  const isPluginAdded = hmsStore.getState(selectIsLocalAudioPluginPresent(myPlugin.getName()));
  if (!isPluginAdded) {
    hmsActions.addPluginToAudioTrack(myPlugin);
  } else {
    hmsActions.removePluginFromAudioTrack(myPlugin);
  }
} else {
  console.log('plugin not supported', checkSupport.errMsg, checkSupport.errType);
}

Implementation Example - Gain Effect Plugin

Below is a sample implementation of the above interface which can be used to control the overall gain (or volume) of the audio.

class GainPlugin implements HMSAudioPlugin {
  private gainNode?: GainNode;
  private gainValue = 0.25;
  private name = 'gain-plugin';

  constructor(gainValue?: number, name?: string) {
    if (gainValue !== undefined) {
      this.gainValue = gainValue;
    }
    if (name) {
      this.name = name;
    }
  }

  async processAudioTrack(ctx: AudioContext, source: AudioNode) {
    if (!ctx) {
      throw new Error('Audio context is not created');
    }
    if (!source) {
      throw new Error('source is not defined');
    }
    this.gainNode = ctx.createGain();
    this.gainNode.gain.value = this.gainValue;
    source.connect(this.gainNode);
    return this.gainNode;
  }

  checkSupport(): HMSPluginSupportResult {
    // this plugin is supported everywhere
    return {
      isSupported: true
    };
    /*
     * if the plugin is not supported on some platform, return an error instead -
     * return {
     *   isSupported: false,
     *   errType: HMSPluginUnsupportedTypes.PLATFORM_NOT_SUPPORTED,
     *   errMsg: 'Error message you want to share'
     * };
     */
  }

  async init() {}

  getName() {
    return this.name;
  }

  getPluginType() {
    return HMSAudioPluginType.TRANSFORM;
  }

  stop() {
    this.gainNode?.disconnect();
    this.gainNode = undefined;
  }
}

// to add the plugin to the local audio track
hmsActions.addPluginToAudioTrack(new GainPlugin());
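
The same pattern extends to the other effects mentioned at the top of this page. As a hedged sketch, an echo effect could be built from a DelayNode with a feedback loop; the delay time and feedback gain below are arbitrary values to tweak to taste.

class EchoPlugin implements HMSAudioPlugin {
  private nodes: AudioNode[] = [];

  getName() {
    return 'echo-plugin';
  }

  checkSupport(): HMSPluginSupportResult {
    return { isSupported: true };
  }

  async init() {}

  getPluginType() {
    return HMSAudioPluginType.TRANSFORM;
  }

  async processAudioTrack(ctx: AudioContext, source: AudioNode) {
    const delay = ctx.createDelay();
    delay.delayTime.value = 0.3; // 300ms between repeats
    const feedback = ctx.createGain();
    feedback.gain.value = 0.4; // each repeat is quieter than the last
    const output = ctx.createGain();
    // dry path: original audio straight to the output
    source.connect(output);
    // wet path with a feedback loop: source -> delay -> feedback -> delay
    source.connect(delay);
    delay.connect(feedback);
    feedback.connect(delay);
    delay.connect(output);
    this.nodes = [delay, feedback, output];
    return output;
  }

  stop() {
    this.nodes.forEach(node => node.disconnect());
    this.nodes = [];
  }
}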

Plugin Guidelines

  • If your plugin involves a CPU-intensive task such as initialising an ML model, ensure that it's done as part of the init method. As a bonus, have the plugin remember the model initialisation across instances, so if a user leaves and rejoins the room without closing the tab, they don't see the delay a second time (see the sketch below).
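
One way to do this is to cache the loaded model at module scope, so a rejoin in the same tab reuses it. In this sketch, Model and loadModel are hypothetical placeholders for your actual ML runtime and loader.

// hypothetical placeholders for your ML runtime
type Model = { process(samples: Float32Array): Float32Array };
declare function loadModel(url: string): Promise<Model>;

// module-scope cache survives plugin instances as long as the tab stays open
let cachedModel: Promise<Model> | undefined;

async function getModel(): Promise<Model> {
  if (!cachedModel) {
    cachedModel = loadModel('/models/noise-suppression.bin');
  }
  return cachedModel;
}

class NoiseSuppressionPlugin implements HMSAudioPlugin {
  private model?: Model;

  async init() {
    // first call downloads and initialises the model; later calls resolve instantly
    this.model = await getModel();
  }

  // ...rest of the HMSAudioPlugin implementation
}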
