AWS for M&E Blog

How to scale content moderation using AI from Spectrum Labs and Amazon IVS Chat

This blog post was co-authored by Hetal Bhatt, Writer/Researcher at Spectrum Labs, and Dave Matli, Chief Marketing Officer at Spectrum Labs.

Amazon Interactive Video Service Stream Chat (Amazon IVS Chat) enables live chat to accompany your live IVS video streams. It’s fully scalable and customizable.

Content moderation goes hand in hand with live chat implementation in video streams. While human moderators may suffice for small channels, AI is a more scalable way to monitor user behavior on large and fast-growing platforms.

Why use AI for content moderation?

As online communities grow, so does the scale of their content moderation needs. The volume of user-generated content from all corners of the globe rises exponentially as a platform expands worldwide. However, hiring more content moderators is not effective at that scale: moderation is a notoriously hazardous job for mental health, which leads to high turnover, and larger teams of human moderators require more resources to hire, train, and manage, slowing overall company growth.

Conversely, AI content moderation can grow with a platform and may do so with far less overhead.

AI may allow online platforms to automate most of the content moderation process. It is especially useful because the bulk of toxic behavior is repetitive (frequently used slurs, re-posts of the same harmful or illegal content, and so on), so AI can act on it automatically and escalate outlier cases to human moderators. Another benefit is speed: AI content moderation acts much faster than human moderators working through a queue, making for a more efficient operation.


What to look for in content moderation AI

Different types of AI solutions provide different levels of content moderation. Not every community needs the most advanced content moderation AI, but it’s beneficial to know some options as your platform expands and your requirements change:

  • Keyword-based moderation: With this approach, platform operators create a list of terms to keep off their platform, and a keyword filter prevents those words from being posted.

Despite their simplicity, keyword-based solutions work quite well for detecting profanity and hateful slurs. They can also be fine-tuned to include variants in different languages and slang. However, keyword-based moderation falls short in detecting more complex behavior.
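To make the idea concrete, here is a minimal sketch of keyword-based filtering in Python. The block list, normalization rules, and helper function are hypothetical illustrations, not part of any Spectrum Labs or Amazon IVS API:

```python
# Minimal, hypothetical sketch of keyword-based moderation.
# The block list and normalization rules are illustrative only.
import re

BLOCKED_TERMS = {"badword1", "badword2"}  # terms the platform wants to keep out


def contains_blocked_term(message: str) -> bool:
    # Normalize case and split on non-word characters to catch simple variants
    tokens = re.split(r"\W+", message.lower())
    return any(token in BLOCKED_TERMS for token in tokens)


if contains_blocked_term("you badword1!"):
    print("DENY")   # keyword match: block the message
else:
    print("ALLOW")
```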

  • Contextual solutions: To monitor for more complex behaviors, contextual solutions are a must. Spectrum Labs has created Contextual AI that can recognize hard-to-detect behaviors like child grooming, drug or sex solicitation, and online radicalization.

While these behaviors are generally less frequent than typical spam or profanity, they carry an outsized risk to your users, business partners, and even your own legal liability.

Content moderation AI needs to be effective and accurate—the latter is where Contextual AI has an especially big advantage.

Image describing Contextual AI

Instead of blindly blocking messages for using banned keywords, Contextual AI can parse nuances like account history, past conversations, and other attributes to find truly toxic content. This comes in handy for harmful behaviors that don’t necessarily use any flagged keywords.

Example phrase: “FINALLY schools over!!”

Example context: Male, 22 years old, profile less than 30 days old, messaging at 3 p.m. on a weekday; recipient is female, 9 years old; private chat with no prior chat history between the users.

Other toxic content and harmful behaviors that Contextual AI can detect include:

  • Child sexual abuse material (CSAM)
  • Bullying and sustained harassment
  • Hate speech
  • Self-harm or suicidal ideation
  • Extremism and calls to violence

Contextual AI provides comprehensive content moderation for your community. Even if simpler solutions are working for now, Contextual AI is something to keep in mind as your platform grows.


Using Contextual AI for content moderation in Amazon IVS Chat

Amazon IVS Chat is a great way for users on your platform to interact with each other.

For content moderation, Amazon IVS Chat offers a message review handler that allows you to review or modify messages before they are delivered to a room. This is done for each SendMessage request. The handler enforces your application’s business logic and determines whether to allow, deny, or modify a message.
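For reference, the review handler is attached to a chat room when the room is created. The sketch below uses the AWS SDK for Python (boto3); the room name, region, and Lambda ARN are placeholders for your own resources:

```python
# Sketch: create an Amazon IVS Chat room that routes messages through
# a review-handler Lambda function. The ARN below is a placeholder.
import boto3

ivschat = boto3.client("ivschat")

room = ivschat.create_room(
    name="my-live-stream-chat",
    messageReviewHandler={
        # Lambda function invoked for every SendMessage request
        "uri": "arn:aws:lambda:us-west-2:123456789012:function:chat-review-handler",
        # What to do if the handler fails or times out
        "fallbackResult": "ALLOW",
    },
)
print(room["arn"])
```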

Amazon IVS Chat interaction with AWS Lambda Message Handler

When a client sends a message, Amazon IVS Chat invokes the AWS Lambda function with a JSON payload that contains fields such as "Content", "MessageId", "RoomArn", and "UserID", which can be used to build business logic for content moderation.

Amazon IVS Chat then expects a response with the following syntax:

{
   "Content": "string",
   "ReviewResult": "string",
   "Attributes": {"string": "string"}
}

The value of the "ReviewResult" attribute indicates whether the message is delivered to users in the chat room. Valid values are "ALLOW" and "DENY".

If allowed, the message is delivered to all users connected to the room. If denied, the message is not delivered to any user.
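Putting this together, a review handler can be sketched as a small Lambda function. The field names follow the payload and response shapes described above; the is_toxic() check is a placeholder for your own moderation logic (for example, the Spectrum Labs API shown later in this post):

```python
# Sketch of an Amazon IVS Chat message review handler in AWS Lambda.
# The is_toxic() check is a placeholder for real moderation logic.
def is_toxic(text: str) -> bool:
    # Placeholder: swap in a keyword filter or a call to a moderation API
    return "badword" in text.lower()


def lambda_handler(event, context):
    content = event.get("Content", "")

    review_result = "DENY" if is_toxic(content) else "ALLOW"

    # Response shape expected by Amazon IVS Chat
    return {
        "Content": content,                # message text, optionally modified
        "ReviewResult": review_result,     # "ALLOW" or "DENY"
        "Attributes": event.get("Attributes", {}),
    }
```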

A sample implementation of the chat message review handler is available here.

Spectrum Labs claims that Contextual AI users may see up to 5x more content coverage and a 10x higher rate of detecting toxic behavior in real time. Not only does Contextual AI identify more complex and elusive toxic behaviors, it also lets you automate how you act on them; Spectrum Labs reports that among its customers, automated actions alone reduced content moderators' workload by 50%.

Implementing Contextual AI on your platform is a breeze. Once you complete the setup process, you can customize actions you’d like to take against different types of content and user behavior in your community:

How Spectrum Labs delivers Contextual AI

Enforcement Actions Possible with Spectrum Labs API

You can leverage Contextual AI by using an AWS Lambda function to call the Spectrum Labs API at the api.dev.getspectrum.io/analyze endpoint. It’s pretty straightforward, and implementation can be as simple as passing this sample request body with the API request:

Sample Request Body

{
  "timestamp": "2020-07-15T15:14:27.987Z",
  "content": {
    "id": "some-id-for-the-message",
    "text": "Hey, don't be a jerk!",
    "attributes": {
      "user-id": "some-id-for-the-user",
      "context-id": "some-id-for-the-context",
      "user-sign-up": "2020-03-15T15:14:27.987Z",
      "source": "some-name-for-the-source",
      "region": "na",
      "language": "en"
    }
  }
}

Architecture for interaction with Spectrum Labs API

A JSON response similar to the following is then returned. Based on the values in the "behaviors" attribute, you can decide whether to return an allow or deny response to Amazon IVS Chat via the "ReviewResult" attribute in the Lambda function.

Sample Response

{
  "contentId": "some-unique-content-id",
  "behaviors": {
    "profanity": false,
    "insult": true,
    "hate-speech": false,
    "sexual": false
  },
  "language": "eng",
  "confidences": {
    "profanity": "NotDetected",
    "insult": "High",
    "hate-speech": "NotDetected",
    "sexual": "NotDetected"
  }
}

In the response, you’ll see a list of behaviors, each accompanied by "true" or "false" to indicate whether it was detected. The response also includes confidence buckets of "NotDetected," "High," or "Low" for each behavior to indicate how sure the API is of its detection.

You’ll also notice that “language” is included in the response because the Spectrum Labs API operates on a multilingual model that can parse conversations across a multitude of languages and slang.
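As a rough sketch of how the pieces fit together, the Lambda function below sends the chat message to the Spectrum Labs /analyze endpoint and maps the returned behaviors to an Amazon IVS Chat review result. The authentication header, timeout, and denial rule (deny when any behavior is flagged) are assumptions to adapt to your own integration; event field names follow the payload described earlier in this post:

```python
# Sketch: Amazon IVS Chat review handler that calls the Spectrum Labs API.
# Header name, API key handling, and the deny rule are assumptions.
import datetime
import json
import os
import urllib.request

SPECTRUM_URL = "https://api.dev.getspectrum.io/analyze"


def analyze_with_spectrum(message_id: str, text: str, user_id: str) -> dict:
    # Build a request body shaped like the sample shown above
    body = {
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "content": {
            "id": message_id,
            "text": text,
            "attributes": {"user-id": user_id, "language": "en"},
        },
    }
    request = urllib.request.Request(
        SPECTRUM_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Authentication header name and value are placeholders
            "Authorization": os.environ.get("SPECTRUM_API_KEY", ""),
        },
    )
    with urllib.request.urlopen(request, timeout=2) as response:
        return json.load(response)


def lambda_handler(event, context):
    result = analyze_with_spectrum(
        event.get("MessageId", ""), event.get("Content", ""), event.get("UserID", "")
    )
    # Deny if any behavior was flagged; otherwise allow
    toxic = any(result.get("behaviors", {}).values())
    return {
        "Content": event.get("Content", ""),
        "ReviewResult": "DENY" if toxic else "ALLOW",
        "Attributes": {},
    }
```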


Use Contextual AI to get a 360° view of your community

Contextual AI doesn’t just detect toxic behavior; it can also recognize positive and healthy behaviors that make online communities a more enjoyable place for users. Healthy behavior is the secret sauce for boosting user retention and overall growth. Spectrum Labs claims users with a positive first-time experience on a platform are six times more likely to return.

By monitoring both toxic and healthy content, you can get a comprehensive view of user behavior on your platform via Contextual AI’s 360° analytics:

Spectrum Labs Contextual AI Analytics platform

User behaviors detected by Contextual AI are gauged on a scale ranging from ‘Health’ to ‘Risk’ scores. From there, users are assigned an overall reputation score that weighs their positive and negative conduct in the community. With these scores, you can facilitate positive interactions on your platform by connecting new or unengaged users to your most reputable users and rooms.

Health and Risk Score dashboard from Spectrum Labs

Along with recognizing and removing online toxicity, Contextual AI can identify and reward healthy behavior to shape better user experiences. By promoting your own custom-defined healthy behaviors, you can drive positive interactions on your platform that result in higher user engagement, retention, and revenue. Healthy behavior isn’t just good for the community—it’s also good for business.

To learn more about Contextual AI solutions from Spectrum Labs, visit its site here.

Tony Vu

Tony Vu is a Senior Partner Engineer at Twitch. He specializes in assessing partner technology for integration with Amazon Interactive Video Service (Amazon IVS), aiming to develop and deliver comprehensive joint solutions to Amazon IVS customers.

Parth Shah

Parth is a Sr. Startup Solutions Architect at Amazon Web Services. He enjoys working with startup customers in cloud adoption and business strategy as well as helping them design applications and services on AWS. Outside of work, he enjoys gaming, soccer, traveling, and spending time with his friends and family.