Chat moderation at scale just got easier — with the release of Stream’s Advanced Moderation feature, human moderators can harness the power of machine learning to stay one step ahead of bad actors.
Advanced Moderation reviews messages as they’re sent, automatically detecting illicit content and taking action to either flag messages for human review or block them entirely before they ever display in the chat channel. The release also includes a sophisticated karma system, going beyond individual message review to identify users with patterns of unwanted behavior. Watch the quick demo video above to learn how Advanced Moderation works and see it flag and block messages in real time.
Advanced Moderation sets Stream apart from other in-app chat providers, whose APIs and SDKs would require the addition of a costly third-party moderation tool to achieve similar results.
Advanced Chat Moderation Use Cases
This new premium feature is especially helpful when you have a high volume of users active concurrently, with so many messages being sent that it would be hard for a team of moderators to keep up. We see this often in virtual event chat use cases, when thousands or even millions of users join a single chat channel that accompanies a multimedia stream of a concert, a keynote address, or a similar broadcast.
Outside of virtual events, Advanced Moderation can help increase the speed and accuracy of human moderation for any app that sees high chat volume and needs to uphold a set of community standards, from virtual classrooms to social communities, and beyond.
How Advanced Chat Moderation Works
The machine learning model behind Advanced Moderation was trained on thousands of datasets of text gathered from around the internet. Using context clues that go beyond word matches to include phrasing, perceived tone, and syntax, it assigns each new message a score from 0.001 to 1.0 in each of three categories: Spam, Explicit, and Toxic.
The Spam category covers unwanted sales and promotional content, Explicit covers outright profanity and adjacent vulgar terms, and Toxic includes harassment and similar rude or abusive language. The review process and associated actions all take place with imperceptible latency before the message posts to the channel.
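As a hypothetical illustration of what that output looks like (the field names below are assumptions made for this post, not Stream's actual response schema), a scored message might carry a result like this:

```ts
// Hypothetical shape of a per-message moderation result; field names are
// illustrative, not Stream's actual API schema.
interface ModerationScores {
  spam: number;     // 0.001–1.0: unwanted sales and promotional content
  explicit: number; // 0.001–1.0: profanity and adjacent vulgar terms
  toxic: number;    // 0.001–1.0: harassment and similar abusive language
}

const example: ModerationScores = {
  spam: 0.02,
  explicit: 0.11,
  toxic: 0.87, // a likely candidate for flagging or blocking
};
```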
Please note that although we’ve worked diligently to create a comprehensive base list, offensive content is contextual and constantly evolving, and we're not able to be the ultimate authority on what constitutes objectionable content.
Sensitivity & Calibration
As an admin, you can adjust the numerical thresholds that cause a message to be allowed, flagged, or blocked depending on your use case and intended audience. In an app used by adults, for example, you may want to allow most profanity but block toxic content like threats and bullying. In a marketplace app, you may want to allow content that would be inappropriately spammy elsewhere. The numerical ranges for allow, flag, and block are adjustable on a convenient UI slider as seen below.
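To make the allow/flag/block bands concrete, here is a minimal sketch of the mapping from score to action. The threshold values and the function itself are hypothetical; in practice you tune these bands on the dashboard slider rather than in code:

```ts
// Hypothetical per-category bands; in production these are configured on the
// Moderation Dashboard slider, not in application code.
const thresholds = {
  spam:     { flag: 0.5, block: 0.8 },
  explicit: { flag: 0.6, block: 0.9 }, // permissive, e.g. for an adult audience
  toxic:    { flag: 0.3, block: 0.7 }, // strict on threats and bullying
};

type Category = keyof typeof thresholds;
type Action = 'allow' | 'flag' | 'block';

// Maps one category score (0.001–1.0) to an action.
function actionFor(category: Category, score: number): Action {
  const { flag, block } = thresholds[category];
  if (score >= block) return 'block';
  if (score >= flag) return 'flag';
  return 'allow';
}
```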
Flagging vs. Blocking vs. Deleting a Message
All Stream Chat plans include access to our Moderation Dashboard, where flagged messages appear in an inbox-style UI from which moderators can manually review them and take action. App users can flag each other’s messages, causing them to appear in the dashboard, and messages automatically flagged by Advanced Moderation behave the same way.
A flagged message has necessarily already appeared in the chat channel, but can be deleted from the channel by a moderator. If a message is blocked by Advanced Moderation, on the other hand, the sender receives an error message and the message never posts to the channel in the first place.
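In application code, the distinction looks roughly like this with the Stream Chat JavaScript SDK; the IDs, token, and exact error handling are placeholders and assumptions:

```ts
import { StreamChat } from 'stream-chat';

const client = StreamChat.getInstance('YOUR_API_KEY');            // placeholder key
await client.connectUser({ id: 'some-user' }, 'USER_TOKEN');      // placeholder token

const channel = client.channel('livestream', 'keynote');
await channel.watch();

// Flagging: the message has already posted; this sends it to the
// Moderation Dashboard for human review.
await client.flagMessage('already-posted-message-id');

// Blocking: Advanced Moderation rejects the send, so the message never
// appears in the channel and the sender receives an error instead.
try {
  await channel.sendMessage({ text: 'a message that trips the block threshold' });
} catch (err) {
  console.error('Message blocked before posting:', err);
}
```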
User Karma System
The karma system expands Advanced Moderation’s capabilities, looking beyond individual messages to a user’s behavior over time. Users who have repeatedly been flagged or muted (bad karma) receive higher moderation scores, while users without any moderation incidents (good karma) receive lower ones. Scores also improve over time: the longer a user goes without a moderation incident, the better their standing, which can raise the overall trust and safety of your platform. The result is automatic message scoring that is more situationally aware and makes better moderation decisions.
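Stream doesn’t publish the karma formula, but conceptually it acts like a per-user adjustment on top of the raw model score. The sketch below is purely illustrative; every name and coefficient is an assumption, not Stream’s implementation:

```ts
// Purely illustrative: not Stream's actual karma implementation.
interface UserKarma {
  flags: number;                // times the user's messages were flagged
  mutes: number;                // times other users muted them
  daysSinceLastIncident: number;
}

// A bad track record raises effective scores (more likely to flag/block);
// a clean history gradually lowers them.
function karmaMultiplier(k: UserKarma): number {
  const penalty = 1 + 0.1 * (k.flags + k.mutes);
  const decay = Math.max(0.5, 1 - 0.01 * k.daysSinceLastIncident);
  return penalty * decay;
}

// Effective score = raw model score adjusted for the sender's history.
const effectiveScore = (raw: number, k: UserKarma): number =>
  Math.min(1.0, raw * karmaMultiplier(k));
```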
Additional Stream Chat Moderation Features
Advanced Moderation joins a substantial set of existing Stream Chat moderation tools designed to help app developers protect their users and maintain a positive brand identity. These tools include flag message, mute user, ban user, mute channel, block lists, and the pre-send message hook. You can explore all of these moderation tools and more in our chat moderation documentation.
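For a sense of how these tools are invoked, here are a few of them via the JavaScript SDK. The IDs and list contents are placeholders; mute and ban calls require moderator permissions, and block list management is typically done with a server-side client:

```ts
// Assumes an authenticated client, as in the earlier snippet.
await client.flagMessage('offending-message-id');     // flag message
await client.muteUser('noisy-user-id');               // mute user
await client.banUser('abusive-user-id', {             // ban user
  reason: 'harassment',
  timeout: 60, // minutes; omit for an indefinite ban
});
await client.channel('livestream', 'keynote').mute(); // mute channel
await client.createBlockList({                        // block list
  name: 'custom-blocklist',
  words: ['placeholder-word-1', 'placeholder-word-2'],
});
```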
Add Advanced Moderation to Your Stream Chat Plan
To activate Advanced Moderation for your existing Stream Chat app integration, please reach out to your Account Manager or simply fill out our contact form, and someone from our team will be in touch to talk through your specific use case and requirements.
Try the Stream Chat API, SDKs, & UI Kits Free
Built-in moderation features are just one reason more and more developers and product leaders are choosing to integrate component solutions from Stream instead of developing in-app chat from scratch. The Stream Chat API takes the guesswork out of building and scaling any type of chat functionality, with all of the engaging features today’s users expect and proven support for more than 5 million concurrent connections in a single chat channel. The API is easily integrated with your existing tech stack using Stream’s complete frontend chat SDKs, so you can launch a polished chat experience in days instead of months.
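To give a flavor of the integration, a minimal livestream chat connection with the JavaScript SDK might look like this (the API key, token, and IDs are placeholders):

```ts
import { StreamChat } from 'stream-chat';

// Placeholders: substitute your real API key, a server-generated user token,
// and your own channel type and ID.
const client = StreamChat.getInstance('YOUR_API_KEY');
await client.connectUser({ id: 'viewer-42', name: 'Ada' }, 'USER_TOKEN');

const channel = client.channel('livestream', 'product-keynote');
await channel.watch(); // join the channel and start receiving events

await channel.sendMessage({ text: 'Hello from the keynote chat!' });

channel.on('message.new', (event) => {
  console.log('New message:', event.message?.text);
});
```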
Ready to start prototyping your Stream Chat integration? Activate your 30-day free trial here.