Posted On: May 3, 2023

Amazon Rekognition content moderation is a deep learning-based feature that can detect inappropriate, unwanted, or offensive images and videos, making it easier to find and remove such content at scale. Starting today, Amazon Rekognition content moderation comes with an improved model for image and video moderation that significantly improves the detection of explicit, violent, and suggestive content. Customers can now detect explicit and violent content with higher accuracy to improve the end-user experience, protect their brand identity, and ensure that all content complies with their industry regulations and policies.
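For illustration, the sketch below shows one way to request moderation labels for an image with the AWS SDK for Python (Boto3) via the DetectModerationLabels API; the bucket name, object key, and confidence threshold are placeholder assumptions, not values from this announcement.

    import boto3

    # Create a Rekognition client (assumes AWS credentials and a default Region are configured)
    rekognition = boto3.client("rekognition")

    # Request moderation labels for an image stored in Amazon S3
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": "example-bucket", "Name": "example-image.jpg"}},
        MinConfidence=60,  # only return labels at or above this confidence (placeholder value)
    )

    # Print each detected label with its parent category and confidence score
    for label in response["ModerationLabels"]:
        print(label["Name"], label.get("ParentName", ""), label["Confidence"])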

This update is now available in all AWS Regions where Amazon Rekognition Content Moderation is supported. To try the new model, visit the Amazon Rekognition console for image and video moderation. To learn more, read the Amazon Rekognition Content Moderation documentation.