Amazon Rekognition Content Moderation
The daily volume of User Generated Content (UGC) and third-party content has been increasing substantially in industries like social media, e-commerce, online advertising, and photo sharing. Customers often want to review this content to ensure that their end users are not exposed to potentially inappropriate or offensive material, such as nudity, violence, drug use, adult products, or disturbing images. In addition, broadcast and Video-On-Demand (VOD) media companies may be required to ensure that the content they create or license carries appropriate ratings, in line with compliance guidelines for various geographies or target audiences. Today, many companies employ teams of human moderators to review content, while others simply react to user complaints to take down offensive images, ads, or videos. However, human moderators alone cannot scale to meet these needs at sufficient quality or speed, leading to a poor user experience, high scaling costs, or even loss of brand reputation.
Amazon Rekognition Content Moderation enables you to streamline or automate your image and video moderation workflows using machine learning. Using fully managed image and video moderation APIs, you can proactively detect inappropriate, unwanted, or offensive content containing nudity, suggestiveness, violence, and other such categories. Amazon Rekognition returns a hierarchical taxonomy of moderation-related labels that makes it easy for you to define granular business rules according to your own Standards and Practices (S&P), User Safety, or compliance guidelines - without requiring any machine learning experience. You can then use machine predictions to either automate certain moderation tasks completely, or to significantly reduce the review workload of trained human moderators, so that they can focus on higher-value work. In addition, Amazon Rekognition allows you to quickly review millions of images or thousands of videos using machine learning, and flag only a small subset of assets for further action. This ensures comprehensive yet cost-effective moderation coverage for all your content as your business scales, and reduces the burden on your moderators of viewing large volumes of disturbing content.
With Amazon Rekognition Content Moderation, you pay only for what you use. There are no minimum fees, licenses, or upfront commitments.
Image and Video moderation
With Amazon Rekognition, you can detect explicit adult or suggestive content, violence, weapons, drugs, tobacco, alcohol, hate symbols, gambling, disturbing content, and rude gestures in both images and videos, and get back a confidence score for each detected label. For videos, Rekognition also returns the timestamps for each detection. Moderation labels are organized in a hierarchical taxonomy that provides both top-level categories, such as 'Suggestive', and nuanced second-level categories that identify the specific type of content, such as 'Female Swimwear' or 'Partial Nudity'. Using this information, you can create granular business rules for different geographies, target audiences, times of day, and so on.
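The image and video flows above can be sketched with the AWS SDK for Python (boto3). This is a minimal sketch, not a production implementation: the bucket and object names are placeholders, credentials must be configured, and a real video pipeline would wait for the SNS completion notification rather than calling `GetContentModeration` immediately.

```python
def top_level_categories(moderation_labels):
    """Collapse Rekognition's hierarchical moderation labels into their
    top-level categories. Top-level labels have an empty ParentName;
    second-level labels name their parent category."""
    return sorted({lbl["ParentName"] or lbl["Name"] for lbl in moderation_labels})

def moderate_image(bucket, key, min_confidence=60.0):
    """Synchronous image moderation (requires AWS credentials;
    bucket/key are placeholders)."""
    import boto3
    resp = boto3.client("rekognition").detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return resp["ModerationLabels"]

def moderate_video(bucket, key, min_confidence=60.0):
    """Asynchronous video moderation: start a job, then fetch timestamped
    results. In production, subscribe to the job-completion SNS topic
    instead of reading results right away."""
    import boto3
    client = boto3.client("rekognition")
    job = client.start_content_moderation(
        Video={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    result = client.get_content_moderation(JobId=job["JobId"])
    # Each entry pairs a Timestamp (milliseconds into the video)
    # with the detected ModerationLabel.
    return [(d["Timestamp"], d["ModerationLabel"]["Name"])
            for d in result.get("ModerationLabels", [])]
```

Business rules can then be written against the top-level categories (for example, block everything under 'Explicit Nudity') while still logging the more specific second-level label.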
Text and Audio moderation
You can use Amazon Rekognition Text Detection for images and videos to read text, and then check it against your own list of prohibited words or phrases. To detect profanities or hate speech in videos, you can use Amazon Transcribe to first convert speech to text, and then check it against a similar list. If you want to further analyze text using Natural Language Processing (NLP), you can use Amazon Comprehend.
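A minimal sketch of the text-moderation step described above: read text from an image with Rekognition's `DetectText` API and check it against a customer-defined blocklist. The blocklist contents and S3 locations are illustrative assumptions; a Transcribe transcript could be checked with the same helper.

```python
import re

def contains_prohibited(text, blocklist):
    """Case-insensitive whole-word match of text against a list of
    prohibited words or phrases. Returns the terms that were found."""
    lowered = text.lower()
    return [term for term in blocklist
            if re.search(r"\b" + re.escape(term.lower()) + r"\b", lowered)]

def extract_image_text(bucket, key):
    """Read text from an image with Rekognition DetectText
    (requires AWS credentials; bucket/key are placeholders)."""
    import boto3
    resp = boto3.client("rekognition").detect_text(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    # LINE detections give whole lines; WORD detections give single words.
    return " ".join(d["DetectedText"] for d in resp["TextDetections"]
                    if d["Type"] == "LINE")
```

The word-boundary regex avoids false positives from substrings (e.g. a prohibited term embedded inside an innocent longer word), which a plain `in` check would flag.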
Custom moderation
For customers who have very specific or fast-changing moderation needs and their own training data, Amazon Rekognition Custom Labels makes it easy to train and deploy custom moderation models with a few clicks or API calls. For example, if an e-commerce platform needs to take action on a new product carrying an offensive or politically sensitive message, or a broadcast network needs to detect and blur the logo of a specific brand for legal reasons, it can quickly create and operationalize new models with Custom Labels to address these scenarios.
Human review of machine predictions
For nuanced situations, or scenarios where Rekognition returns low-confidence predictions, content moderation workflows still require human reviewers to audit results and make final judgments. You can use Amazon Augmented AI (Amazon A2I) to easily implement human review and improve the confidence of predictions. A2I is directly integrated with the Amazon Rekognition moderation APIs. Amazon A2I allows you to use in-house, private, or even third-party vendor workforces with a user-defined web interface that has instructions and tools to carry out review tasks.
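The direct integration works by passing a `HumanLoopConfig` to the image moderation call: when the activation conditions in your A2I flow definition are met (for example, low-confidence labels), Rekognition starts a human review loop automatically. A minimal sketch, assuming an existing flow definition whose ARN is a placeholder:

```python
def human_loop_config(name, flow_definition_arn,
                      classifiers=("FreeOfAdultContent",)):
    """Build the HumanLoopConfig parameter for DetectModerationLabels.
    ContentClassifiers declare what reviewers may be shown."""
    return {
        "HumanLoopName": name,
        "FlowDefinitionArn": flow_definition_arn,
        "DataAttributes": {"ContentClassifiers": list(classifiers)},
    }

def moderate_with_human_review(bucket, key, flow_definition_arn,
                               human_loop_name, min_confidence=50.0):
    """Image moderation with Amazon A2I attached (requires AWS credentials;
    bucket, key, and the flow definition ARN are placeholders)."""
    import boto3
    resp = boto3.client("rekognition").detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
        HumanLoopConfig=human_loop_config(human_loop_name, flow_definition_arn),
    )
    # HumanLoopActivationOutput is populated when a review loop was started.
    return resp["ModerationLabels"], resp.get("HumanLoopActivationOutput")
```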
Improves safety for users and brands
Using Amazon Rekognition content moderation APIs, you can review every image and video against a wide variety of pre-defined or custom unsafe categories at scale. This allows you to proactively ensure that your users and brand sponsors are not exposed to unwanted or inappropriate content.
Reduce human review effort
Reduce human moderation effort by up to 95% by using Amazon Rekognition to flag potentially unsafe content first; human reviewers then only need to examine a small subset of all images or videos. To seamlessly combine machine predictions with human review without the heavy lifting of building new tools and infrastructure, you can also use Amazon A2I.
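The "flag first, review a subset" pattern typically comes down to confidence thresholds. This is an illustrative sketch only: the thresholds and the three-way routing are assumptions to be tuned per category and per policy, not values the service prescribes.

```python
def route(moderation_labels, reject_at=90.0, review_at=50.0):
    """Three-way routing over Rekognition moderation labels:
    auto-reject high-confidence detections, send mid-confidence ones
    to human review, and auto-approve everything else.
    Thresholds are illustrative placeholders."""
    top = max((lbl["Confidence"] for lbl in moderation_labels), default=0.0)
    if top >= reject_at:
        return "reject"
    if top >= review_at:
        return "human_review"
    return "approve"
```

With thresholds like these, only content in the middle confidence band reaches human moderators, which is where the large reduction in review volume comes from.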
Reliable and cost-effective
Amazon Rekognition enables you to create reliable, scalable, and repeatable cloud-based content moderation workflows without upfront commitments or expensive licenses. You simply pay based on the number of images or the duration of videos that you choose to process.
Reviewing User Generated Content
Platforms with large volumes of UGC, such as social media, photo sharing, short video, online gaming, video streaming, and online matchmaking, can use Amazon Rekognition to proactively moderate user uploads at scale and keep users safe from inappropriate or disturbing content.
Compliance for media and e-commerce
You can use Amazon Rekognition and Amazon Transcribe to identify potentially unsafe image, video, text, and audio content, and leverage this metadata to assign the appropriate content ratings to media assets for different geographies or target audiences. Similarly, you can ensure that third-party product listings or classifieds do not violate the safety policies of your e-commerce or app platform.
Brands who advertise on your news, media, or e-commerce platforms may not want to be associated with certain types of content, such as alcohol or violence. You can identify and filter out such unwanted associations for each brand using the rich metadata from Amazon Rekognition content moderation.
SmugMug operates two very large online photo platforms, SmugMug and Flickr, enabling more than 100M members to safely store, search, share, and sell tens of billions of photos. Flickr is the world's largest photographer-focused community, empowering photographers around the world to find their inspiration, connect with each other, and share their passion with the world.
"As a large, global platform, unwanted content is extremely risky to the health of our community and can alienate photographers. We use Amazon Rekognition's content moderation feature to find and properly flag unwanted content, enabling a safe and welcoming experience for our community. At Flickr's huge scale, doing this without Amazon Rekognition is nearly impossible. Now, thanks to content moderation with Amazon Rekognition, our platform can automatically discover and highlight amazing photography that more closely matches our members' expectations, enabling our mission to inspire, connect, and share."
- Don MacAskill, Co-founder, CEO & Chief Geek
CBS Corporation is a mass media company that creates and distributes industry-leading content across a variety of platforms globally. CBS owns the most-watched television network in the U.S. and one of the world’s largest libraries of entertainment content, making its brand — “the Eye” — one of the most recognized in business.
"At CBS, we place significant efforts to ensure we moderate inappropriate content within our programming as to not offend our global viewers or violate government regulations. This is supported by investments in manual methods to execute near real-time screening and editing of hundreds of hours of content every month. To scale our internal processes, we are looking to Amazon Rekognition to automate the moderation of our video content while leveraging the new feature of Custom Labels to further refine moderation models. This will enable us to automate the tagging of sensitive content such as nudity, obscene gestures, and violence, and speed up processing from hours to minutes."
- Jamie Duemo, Senior Vice President, MultiPlatform Distribution - CBS Operations and Engineering
Mobisocial is a leading mobile software company, focused on building social networking and gaming apps. The company develops Omlet Arcade, a global community where tens of millions of mobile gaming live-streamers and esports players gather to share gameplay and meet new friends.
“In order to ensure that our gaming community is a safe environment to socialize and share entertaining content, we used machine learning to identify content that does not comply with our community standards. We created a workflow, leveraging Amazon Rekognition, to flag uploaded image and video content that contains non-compliant content. Amazon Rekognition’s Content Moderation API helps us achieve the accuracy and scale to manage a community of millions of gaming creators worldwide. Since implementing Amazon Rekognition, we've been able to reduce the amount of content manually reviewed by our operations team by 95%, while freeing up engineering resources to focus on our core business. We are looking forward to the latest Rekognition Content Moderation model update, which will improve accuracy and add new classes for moderation.”
- Zehong, Senior Architect at Mobisocial