Posted On: Dec 21, 2022
Amazon Rekognition content moderation is a deep learning-based feature that can detect inappropriate, unwanted, or offensive images and videos, making it easier to find and remove such content at scale. Starting today, Amazon Rekognition content moderation comes with an improved model for image moderation that significantly reduces false positive rates for e-commerce, social media, and online community content, without reducing detection rates for truly unsafe content. Lower false positive rates mean faster approvals for user-uploaded content, leading to a better end-user experience. A lower false positive rate also means fewer flagged images to review, leading to a better experience for human moderators and greater cost savings.
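As a minimal sketch of this workflow, the snippet below calls the Rekognition `detect_moderation_labels` API via boto3 and keeps only labels above a review threshold; the bucket name, object key, and threshold values are illustrative assumptions, not part of the announcement.

```python
def flagged_labels(response, min_confidence=80.0):
    """Return (name, confidence) pairs for moderation labels at or
    above a confidence threshold, from a DetectModerationLabels response."""
    return [
        (label["Name"], label["Confidence"])
        for label in response.get("ModerationLabels", [])
        if label["Confidence"] >= min_confidence
    ]

if __name__ == "__main__":
    import boto3  # uses your default AWS credentials and region

    client = boto3.client("rekognition")
    # Hypothetical bucket and key for illustration only.
    response = client.detect_moderation_labels(
        Image={"S3Object": {"Bucket": "my-bucket", "Name": "upload.jpg"}},
        MinConfidence=50,
    )
    for name, confidence in flagged_labels(response):
        print(f"{name}: {confidence:.1f}%")
```

Images with no labels above the threshold can be auto-approved, while the rest are routed to human review, which is where the reduced false positive rate directly cuts reviewer workload.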
With the improved model, e-commerce and online marketplace customers such as 11STREET and DeNA can review product images with fewer false detections and, as a result, approve product listings faster. Similarly, social media customers and online communities such as MobiSocial, Dream11, and Coffee Meets Bagel can review images and videos taken through selfie cameras at close angles with higher accuracy.
This update is now available in all AWS Regions where Amazon Rekognition Content Moderation is supported. To try the new model, visit the Amazon Rekognition console for image moderation. To learn more, read the Amazon Rekognition Content Moderation documentation.