Amazon Rekognition now detects violence, weapons, and self-injury in images and videos; improves accuracy for nudity detection

Posted on: Aug 9, 2019

Amazon Rekognition is a deep learning-based image and video analysis service that can identify objects, people, text, and scenes, and can support content moderation by detecting unsafe content. Starting today, you can detect content related to 'Violence' and 'Visually Disturbing' themes, such as blood, wounds, weapons, self-injury, and corpses. In addition, Amazon Rekognition's ability to identify 'Explicit Nudity' and 'Suggestive' content has been improved, with a 68% lower false positive rate and a 36% lower false negative rate on average. Amazon Rekognition also now detects new categories of adult content, such as unsafe anime or illustrated content, adult toys, and sheer clothing.
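
As a minimal sketch of image moderation with these categories, you can call the DetectModerationLabels API through boto3 and inspect the returned labels. The bucket name, object key, and confidence threshold below are placeholders, not values from this announcement:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Hypothetical S3 location for the image to moderate.
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "my-media-bucket", "Name": "uploads/photo.jpg"}},
    MinConfidence=60,  # only return labels detected with at least 60% confidence
)

for label in response["ModerationLabels"]:
    # Each label has a name, a confidence score, and a ParentName that
    # points at its top-level category (empty for top-level labels).
    print(f"{label['ParentName'] or '(top-level)'} > {label['Name']}: "
          f"{label['Confidence']:.1f}%")
```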

By using Amazon Rekognition for image and video moderation, human moderators can review a much smaller set of content flagged by AI. This allows them to focus on more valuable activities and still achieve full moderation coverage at a fraction of their existing cost. Moreover, Amazon Rekognition provides a hierarchical set of top-level and second-level moderation categories that can be used to create business rules to handle different geographic and demographic requirements. For a full list of all supported unsafe categories and their hierarchy, please see this page.  
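
As one illustration of such business rules, you could map each top-level moderation category to an action per market and take the strictest action triggered by the detected labels. The policy table and the moderate helper below are hypothetical examples, not part of any AWS SDK:

```python
# Hypothetical per-market policies keyed by Rekognition's top-level
# moderation categories.
REGION_POLICY = {
    "strict": {"Explicit Nudity": "block", "Suggestive": "block",
               "Violence": "block", "Visually Disturbing": "review"},
    "lenient": {"Explicit Nudity": "block", "Suggestive": "allow",
                "Violence": "review", "Visually Disturbing": "review"},
}

SEVERITY = {"allow": 0, "review": 1, "block": 2}

def moderate(labels, market="strict"):
    """Return the strictest action triggered by detected labels.

    `labels` is the ModerationLabels list returned by
    detect_moderation_labels; second-level labels reference their
    top-level category via ParentName.
    """
    policy = REGION_POLICY[market]
    action = "allow"
    for label in labels:
        top_level = label["ParentName"] or label["Name"]
        candidate = policy.get(top_level, "review")  # unknown category -> human review
        if SEVERITY[candidate] > SEVERITY[action]:
            action = candidate
    return action
```

For example, moderate(response["ModerationLabels"], market="lenient") would allow 'Suggestive' content in a lenient market while still routing 'Violence' to human review.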

Updated image and video moderation is now available in all AWS Regions supported by Amazon Rekognition at no additional cost. To get started, you can try the feature with your own content using the Amazon Rekognition Console.