Posted On: Oct 12, 2020

Amazon Rekognition content moderation is a deep learning-based service that can detect inappropriate, unwanted, or offensive images and videos, making it easier to find and remove such content at scale. Amazon Rekognition provides a detailed taxonomy of moderation categories such as 'Explicit Nudity', 'Suggestive', 'Violence', and 'Visually Disturbing'. Starting today, customers can detect six new categories - 'Drugs', 'Tobacco', 'Alcohol', 'Gambling', 'Rude Gestures', and 'Hate Symbols'. In addition, customers get improved detection rates for already supported categories. Using Amazon Rekognition moderation APIs, social media, broadcast media, advertising, and e-commerce customers can create a better user experience, provide brand safety assurances to advertisers, or comply with local and global regulations.
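As a minimal sketch, an image in S3 can be checked with the DetectModerationLabels API via boto3; the bucket and object names below are placeholders, and `parse_moderation_labels` is an illustrative helper, not part of the SDK.

```python
def parse_moderation_labels(response):
    """Flatten a DetectModerationLabels response into
    (label name, top-level parent category, confidence) tuples."""
    return [
        (label["Name"], label.get("ParentName", ""), label["Confidence"])
        for label in response["ModerationLabels"]
    ]

def moderate_image(bucket, key, min_confidence=60.0):
    """Run image moderation on an S3 object (bucket/key are placeholders)."""
    import boto3  # imported here so the pure helper above has no AWS dependency
    client = boto3.client("rekognition")
    response = client.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return parse_moderation_labels(response)
```

Each detected label carries both a `Name` and a `ParentName`, so a single response can drive rules at either level of the taxonomy.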

Today, many companies employ teams of human moderators to review third-party or user-generated content, while others simply react to user complaints to take down offensive or inappropriate images, ads, or videos. However, human moderators alone cannot meet moderation needs at Internet scale. This can lead to a poor user experience, prohibitive costs to achieve the required coverage, or even a loss of brand reputation. By using Amazon Rekognition for image and video moderation, human moderators only have to review a much smaller set of content flagged by machine learning, typically 1-5% of the total volume. This allows them to focus on more valuable activities, reduce the burden of viewing disturbing content, and still achieve full coverage at a fraction of the cost of a completely manual process. Moreover, Amazon Rekognition provides a hierarchical set of top-level and second-level moderation categories. Using this taxonomy, customers can create varied business rules to handle different geographic and demographic requirements. For a full list of all supported categories, please see this page.
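One way the two-level taxonomy can feed per-market business rules is sketched below. The category names ('Tobacco', 'Smoking', 'Explicit Nudity', 'Hate Symbols') come from Rekognition's taxonomy, but the rule tables and helper are illustrative assumptions, not an AWS recommendation.

```python
# Illustrative per-market rule tables keyed on moderation categories.
RULES = {
    "IN": {"Tobacco": "add_advisory", "Explicit Nudity": "block"},
    "US": {"Explicit Nudity": "block", "Hate Symbols": "block"},
}

def action_for(label, market, default="allow"):
    """Resolve the action for one detected label in one market.
    A second-level label (e.g. 'Smoking') falls back to the rule for
    its top-level parent (e.g. 'Tobacco') unless named directly."""
    rules = RULES.get(market, {})
    return rules.get(label["Name"],
                     rules.get(label.get("ParentName", ""), default))
```

Because rules can target either the specific label or its parent category, the same detection output supports both broad policies and narrow exceptions.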

Let’s look at a few common use cases for Amazon Rekognition content moderation:  

Social media and photo sharing platforms handle very large volumes of user-generated photos and videos daily. To ensure that uploaded content does not violate community guidelines and societal standards, these customers can use Amazon Rekognition to flag and remove such content at scale even with small teams of human moderators. Detailed moderation labels also make it possible to create a more granular set of user filters. For example, a user might find images containing alcoholic beverages acceptable in a liquor ad, but may want to avoid images showing drug products or drug use under any circumstances.
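The alcohol-versus-drugs preference above can be expressed as a small filter over the detected labels. The label names used in the example ('Drinking', 'Drug Use') match Rekognition's second-level taxonomy, while the helper itself is a sketch, not an SDK function.

```python
def acceptable(labels, blocked_categories):
    """Return True unless some detected label falls in a category
    the user has blocked, matching either the label or its parent."""
    return not any(
        label.get("ParentName") in blocked_categories
        or label["Name"] in blocked_categories
        for label in labels
    )

# A user who tolerates alcohol imagery but never drug imagery
# would block only the top-level 'Drugs' category:
blocked = {"Drugs"}
```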

Similarly, broadcast and Video-On-Demand (VOD) media companies have to ensure that they comply with the regulations of the markets and geographies in which they operate. For example, content that shows smoking needs to carry an onscreen health advisory warning in countries like India. Further, brands and advertisers want to prevent unsuitable associations when placing their ads in a video; for example, a children's toy brand may not want its ad to appear next to content showing consumption of alcoholic beverages. Media companies can now use the comprehensive set of categories available in Amazon Rekognition to flag the portions of a movie or TV show that require further action from their editors or ad traffic teams. This saves valuable time, improves brand safety for advertisers, and helps prevent costly compliance fines from regulators.
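For video, the moderation API (StartContentModeration / GetContentModeration) returns labels with millisecond timestamps; a sketch like the following can group consecutive detections of one category into segments for an editor or ad-traffic team to review. The 2-second gap threshold is an assumption for illustration, not an API parameter.

```python
def review_segments(detections, category, max_gap_ms=2000):
    """Group timestamped detections of one moderation category into
    (start_ms, end_ms) segments. `detections` mirrors the shape of
    GetContentModeration's ModerationLabels list."""
    times = sorted(
        d["Timestamp"] for d in detections
        if d["ModerationLabel"].get("ParentName") == category
        or d["ModerationLabel"]["Name"] == category
    )
    segments = []
    for t in times:
        # Extend the current segment if this detection is close enough,
        # otherwise start a new one.
        if segments and t - segments[-1][1] <= max_gap_ms:
            segments[-1][1] = t
        else:
            segments.append([t, t])
    return [tuple(s) for s in segments]
```

An editor can then jump straight to each `(start_ms, end_ms)` window, for example to overlay a health advisory on a smoking scene.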

Lastly, e-commerce and online classified platforms that allow third-party or user product listings want to promptly detect and delist illegal, inappropriate, or controversial products such as items displaying hate symbols, adult products, or tobacco and drug products. Amazon Rekognition's new moderation categories help streamline this process significantly by flagging potentially problematic listings for further review or action.

New content moderation categories and accuracy improvements for both Amazon Rekognition Image and Video are now available in all AWS Regions supported by Amazon Rekognition. To get started, please read our blog for more details. To try moderation with your own content, you can use the Amazon Rekognition Console for images, or the Media Insights Engine for videos.