
How to moderate Chat in 100ms? - Part 1

April 15, 2024 · 5 min read



When we create live audio-video experiences online, keeping the environment safe for everyone involved is important. Sometimes, personal info like email or address might accidentally get shared in the chat, or someone might use inappropriate language. While moderators are helpful, it's also good to have automatic moderation for larger groups or one-on-one calls to keep everyone safe.

In this article, we'll explore different ways to automatically manage text chats in video apps using 100ms. We'll start by adjusting the chat feature of 100ms Prebuilt to demonstrate each approach.

Part 2 of this blog series covers using cloud services and LLMs to do the job automatically. Find that here.

Remember, the methods we discuss are just one way to help with moderation. They're meant to give developers ideas, but they're not perfect.

Getting started

We'll begin by setting up 100ms Prebuilt for this tutorial as it comes with Chat configured.

  • Fork your copy of the web-sdks repository from here.
  • Clone it to your system and open it in a code editor of your choice.
  • Run the command yarn to validate and resolve packages using the Yarn package manager.
  • Navigate to web-sdks/packages/roomkit-react/src/Prebuilt/components/Chat/ChatFooter.tsx

In 100ms Prebuilt, the message is typed into the Chat Footer, and before it is sent, it is stored in the message variable as follows:

const message = inputRef?.current?.value;
  • Run the yarn build command to build the packages.
  • Next, navigate to prebuilt-react-integration under the examples directory and run the yarn dev command in the terminal to run the 100ms Prebuilt example locally.

Using Regular Expression

The first approach we look at is using a regular expression to catch personal information being shared in public chat. Before sending the message, we can run it through a regular expression to check for personal information such as email addresses.

A simple regex like the one below can be added to the ChatFooter component:

// Note: `message` must be declared with `let` rather than `const` for the reassignment below.
const emailRegex = /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/;
const containsEmail = emailRegex.test(message);

if (containsEmail) {
  ToastManager.addToast({
    title: 'Message contains personal information. Please do not share personal information!',
  });
  // Mask everything before the @ so the address is not exposed.
  message = message.replace(/([A-Za-z0-9._%+-]+)@/, 'xxxxx@');
}

The above checks whether the message contains an email address and shows a toast notifying the user that personal information should not be shared. As developers, we can either block the message entirely or still allow it to be sent with the characters replaced. In the code above, I have chosen to replace the characters before the @ in the email with xxxxx for demonstration.
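If you would rather block the message outright instead of masking it, a minimal sketch (reusing the same emailRegex and ToastManager from above) could look like this:

// Hypothetical alternative: drop the message entirely instead of masking it.
if (emailRegex.test(message)) {
  ToastManager.addToast({
    title: 'Message contains personal information. Please do not share personal information!',
  });
  return; // skip sending the message altogether
}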

Run the prebuilt-react-integration in the examples directory to view the changes. When trying to send a message containing an email address, the user will see the toast notification warning against sharing personal information.

Using an external package

We can do something similar for phone numbers using regular expressions. However, it can take extensive effort to write code that tells whether a number shared by a user is a phone number or just any other number. Instead of writing these additional checks ourselves, we can use an external package like phone to check for phone numbers.

In the package.json of the roomkit-react package, add the following under dependencies:

"phone": "3.1.42" //Please use the latest version

Run the yarn command again to validate and resolve the added packages. Return to the ChatFooter.tsx file to import the package:

import { phone } from 'phone';

Then, add the following to check for phone numbers and replace them accordingly:

// Note: phone() validates the entire string, so this matches only when the
// whole message is a phone number.
const { isValid } = phone(message);

if (isValid) {
  ToastManager.addToast({
    title: 'Message contains personal information. Please do not share personal information.',
  });
  // The whole message is a phone number, so mask it entirely.
  message = 'xxxxxxxxxx';
}
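Because phone() validates the entire string, a number buried inside a longer sentence will slip through. One rough workaround, a sketch of my own (the maskPhoneNumbers helper and its regex are assumptions, not part of the 100ms code), is to extract number-like substrings first and validate each candidate:

import { phone } from 'phone';

// Hypothetical helper: find phone-number-like substrings and mask the valid ones.
// The regex is a loose heuristic for digit runs with an optional leading +,
// spaces, dots, dashes, and parentheses.
const maskPhoneNumbers = (text: string): string => {
  const candidateRegex = /\+?\d[\d\s().-]{7,}\d/g;
  return text.replace(candidateRegex, candidate => {
    const { isValid } = phone(candidate.replace(/[\s().-]/g, ''));
    return isValid ? 'xxxxxxxxxx' : candidate;
  });
};

For example, maskPhoneNumbers('Call me at +1 650 253 0000') would mask only the number and keep the rest of the sentence.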

[Image: Phone number moderation in 100ms]

Using a pre-trained model

TensorFlow.js offers pre-trained models for common ML tasks that can be added to web and browser-based applications. One such text model, which enables NLP in a web app, is ‘Text toxicity detection’.

The toxicity model detects whether the text contains toxic content such as threatening language, insults, obscenities, identity-based hate, or sexually explicit language.

Installation

The following dependency is required in the package.json of the roomkit-react package.

"@tensorflow-models/toxicity":"^1.2.2"

It can be added using either of the following methods.

  • Using yarn:
yarn add @tensorflow/tfjs @tensorflow-models/toxicity
  • Using npm:
npm install @tensorflow/tfjs @tensorflow-models/toxicity

Usage

In the ChatFooter.tsx file, import the package as:

import * as toxicity from '@tensorflow-models/toxicity';

We need to load the model. The first argument is the prediction confidence threshold (here 0.9), and the empty array means predictions are returned for all toxicity labels:

const model = await toxicity.load(0.9, []);
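Loading the model takes a noticeable amount of time, so it should not be reloaded for every message. One way to handle this, as a minimal sketch (the getToxicityModel helper name is my own), is to cache the load promise at module level and reuse it:

import * as toxicity from '@tensorflow-models/toxicity';
import type { ToxicityClassifier } from '@tensorflow-models/toxicity';

// Cache the load promise so the model is fetched and initialized only once.
let modelPromise: Promise<ToxicityClassifier> | null = null;

const getToxicityModel = (): Promise<ToxicityClassifier> => {
  if (!modelPromise) {
    modelPromise = toxicity.load(0.9, []); // 0.9 threshold, all toxicity labels
  }
  return modelPromise;
};

A call site can then do const model = await getToxicityModel(); whenever a message needs to be classified.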

Add the following after the message is taken from the input field to run it through the model:

const predictions = await model.classify(message);
// `match` is true when the model's confidence for a label crosses the threshold.
const isToxic = predictions.some(prediction => prediction.results[0].match);
if (isToxic) {
  ToastManager.addToast({
    title: 'Message contains toxic content',
  });
  return;
}

Any toxic text typed into the message field will now not be shared, and the user will be notified with a toast. Further action can be taken by notifying a moderator or changing the user's role to prevent them from sending any further messages, as sketched below.
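As an example, here is a rough sketch of the moderator notification. This is my own illustration, not part of Prebuilt: it assumes a role named moderator exists in your room, and you should check the 100ms docs for the exact SDK signatures.

import { selectPeers, useHMSActions, useHMSStore } from '@100mslive/react-sdk';

// Hypothetical escalation: direct-message every peer with the (assumed)
// 'moderator' role whenever a toxic message is blocked.
const hmsActions = useHMSActions();
const peers = useHMSStore(selectPeers);

const notifyModerators = async (blockedMessage: string) => {
  const moderators = peers.filter(peer => peer.roleName === 'moderator');
  for (const moderator of moderators) {
    await hmsActions.sendDirectMessage(`Blocked a toxic chat message: "${blockedMessage}"`, moderator.id);
  }
};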

This is only scratching the surface of what is possible when it comes to chat moderation. In the next part, we will explore using a cloud service or a local LLM to moderate chat better.

Until then, if you have any questions regarding 100ms, feel free to contact the team on our Discord.
