Posted On: Feb 1, 2024

Amazon Rekognition content moderation is a machine learning-based feature that can detect inappropriate, unwanted, and offensive content. Today, Amazon Rekognition launched an enhanced machine learning model for image content moderation. This update adds new labels, improves model accuracy, and introduces a new capability to identify animated and illustrated content.

Customers in various industries, such as social media, e-commerce, gaming, media, and advertising, use Amazon Rekognition content moderation to protect their brand reputation and foster safe user communities. The improved model adds 26 new moderation labels and expands the moderation label taxonomy from two tiers to three tiers of label categories. These new labels and the expanded taxonomy enable customers to detect fine-grained concepts in the content they want to moderate. Additionally, the updated model can identify two new content types, animated content and illustrated content, so customers can create granular rules for including or excluding those content types from their moderation workflows. With these updates, customers can moderate content in accordance with their content policies with higher accuracy.
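As a rough sketch of how an application might use these updates, the example below calls the DetectModerationLabels API with the AWS SDK for Python (boto3) and inspects the returned labels and content types. The bucket and image names are placeholders, and the TaxonomyLevel and ContentTypes response fields are assumed here to be how the updated model surfaces the three-tier taxonomy and the animated/illustrated classification; check the current API reference for the exact response shape.

```python
import boto3

# Placeholder bucket and object names for illustration only.
rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "example-image.jpg"}},
    MinConfidence=60,
)

print("Model version:", response.get("ModerationModelVersion"))

# Each moderation label carries a name, confidence, and parent category;
# with the updated model, a label may also report its taxonomy tier.
for label in response.get("ModerationLabels", []):
    print(
        label["Name"],
        f"confidence={label['Confidence']:.1f}",
        f"parent={label.get('ParentName') or 'top-level'}",
        f"tier={label.get('TaxonomyLevel', 'n/a')}",
    )

# The updated model can also flag animated or illustrated content, which an
# application could use to include or exclude such images from its workflow.
for content_type in response.get("ContentTypes", []):
    print("Content type:", content_type.get("Name"), content_type.get("Confidence"))
```

An application could, for example, route images flagged as animated or illustrated to a separate review queue, or skip certain label checks for them, depending on its content policy.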

These updates are now available in all AWS Regions where Amazon Rekognition content moderation is supported. To get started, visit the Amazon Rekognition console for image moderation. To learn more, read the Amazon Rekognition content moderation documentation.