Posted On: Jul 26, 2023
Today, we are excited to announce Amazon Transcribe Toxicity Detection, an ML-powered, voice-based toxicity detection capability that uses both audio and text-based cues to identify and classify toxic content. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications. In addition to text, Transcribe Toxicity Detection uses speech cues, such as tone and pitch, to home in on toxic intent in speech. Toxic content is flagged and classified across seven categories: sexual harassment, hate speech, threat, abuse, profanity, insult, and graphic content. This allows moderators to take focused action rather than reviewing entire conversations.
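For batch jobs, toxicity detection is enabled when you start a transcription job. The following is a minimal sketch using the AWS SDK for Python (boto3); the bucket, key, and job names are placeholders, and the ToxicityDetection parameter shape reflects the Amazon Transcribe documentation for this feature.

```python
import boto3

# Client for the Amazon Transcribe batch API (choose a Region where Toxicity Detection is supported).
transcribe = boto3.client("transcribe", region_name="us-east-1")

# Start a batch transcription job with toxicity detection enabled.
# Bucket, key, and job name below are placeholders for illustration.
transcribe.start_transcription_job(
    TranscriptionJobName="voice-chat-review-001",
    LanguageCode="en-US",  # Toxicity Detection currently supports US English
    Media={"MediaFileUri": "s3://my-bucket/voice-chat/session-42.wav"},
    OutputBucketName="my-bucket",
    OutputKey="transcripts/session-42.json",
    ToxicityDetection=[{"ToxicityCategories": ["ALL"]}],  # flag all seven toxicity categories
)
```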
Toxicity detection is used across industries, primarily in online gaming and social media. For example, online gaming platforms use toxicity detection to monitor spoken conversations between players, especially when an incident is reported. Typically, human moderators review long recordings to pinpoint toxic content and take action. With Amazon Transcribe Toxicity Detection, human moderators can see exactly where in a conversation toxic content was spoken, with the flagged language categorized and assigned toxicity scores. This cuts the time spent listening to content by 95%, allowing moderators to cover more audio and take faster action when toxicity is detected.
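As an illustration of that moderator workflow, the sketch below scans a completed transcript for flagged segments so a reviewer can jump straight to them. The output field names (results.toxicity_detection, toxicity, categories, start_time, end_time) are assumptions based on the documented output format, and the 0.5 threshold is only an illustrative cutoff, not a recommended value.

```python
import json

# Load the transcript JSON produced by the batch job (path is a placeholder).
with open("session-42.json") as f:
    transcript = json.load(f)

# NOTE: field names below are assumptions drawn from the documented output format.
# Each entry covers one speech segment with an overall toxicity score (0-1)
# plus per-category scores.
THRESHOLD = 0.5  # illustrative cutoff; tune to your moderation policy

for segment in transcript["results"].get("toxicity_detection", []):
    if segment["toxicity"] >= THRESHOLD:
        # Pick the highest-scoring category for a quick summary line.
        top_category = max(segment["categories"], key=segment["categories"].get)
        print(
            f'{segment["start_time"]}s-{segment["end_time"]}s '
            f'[{top_category} {segment["categories"][top_category]:.2f}] '
            f'{segment["text"]}'
        )
```

A moderator can then seek directly to the printed timestamps instead of listening to the full recording.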
Amazon Transcribe Toxicity Detection is available now for US English with batch processing. This feature is supported in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Sydney), Europe (Ireland), and Europe (London). You will incur additional charges as described in toxicity detection pricing. To learn more, see the “Flag harmful language in spoken conversations with Amazon Transcribe Toxicity Detection” post and Amazon Transcribe documentation.