Posted On: Oct 13, 2023

Amazon Rekognition content moderation is a deep learning-based feature that can detect inappropriate, unwanted, or offensive images and videos, making it easier to find and remove such content at scale. Customers across industries, such as social media, gaming, and advertising, use Rekognition’s content moderation capabilities to protect their brand reputation and enable safe user communities. With Custom Moderation, customers can now enhance the accuracy of the moderation deep learning model on their business-specific data by training an adapter with as few as twenty annotated images in less than an hour.

Customers can train a custom adapter to reduce false positives, i.e., images that are appropriate for their business but are flagged by the model with a moderation label, or to reduce false negatives, i.e., images that are inappropriate for their business but do not get flagged with a moderation label. These custom adapters extend the capabilities of the deep learning moderation model, enabling it to detect images similar to those used for training with higher accuracy. Customers can provide the unique ID of the trained adapter to the existing DetectModerationLabels API operation to process images. With Amazon Rekognition Custom Moderation, customers can tailor the moderation deep learning model for improved performance on their specific moderation use case, without any ML expertise.
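As a minimal sketch of that workflow, the boto3 snippet below calls DetectModerationLabels and passes a trained adapter's ID via the ProjectVersion parameter. The bucket name, object key, and adapter ARN are placeholders to be replaced with your own values.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Placeholder ID of a trained Custom Moderation adapter; substitute the
# unique ID shown for your adapter after training completes.
ADAPTER_ID = (
    "arn:aws:rekognition:us-east-1:111122223333:"
    "project/my-moderation-adapter/version/1/1697200000000"
)

response = rekognition.detect_moderation_labels(
    # Placeholder S3 location of the image to moderate.
    Image={"S3Object": {"Bucket": "my-input-bucket", "Name": "images/sample.jpg"}},
    MinConfidence=50,
    # Supplying the adapter ID routes the request through the
    # adapter-enhanced moderation model instead of the base model alone.
    ProjectVersion=ADAPTER_ID,
)

# Print each detected moderation label with its confidence score.
for label in response["ModerationLabels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```

The same request without the ProjectVersion parameter uses the base moderation model, so existing integrations continue to work unchanged.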

This update is now available in all AWS Regions that support Amazon Rekognition Custom Labels. To try the new model, visit the Amazon Rekognition console for image moderation. To learn more, read the Amazon Rekognition Content Moderation documentation.