Amazon Comprehend Trust and Safety

Detect undesired content in text

Amazon Comprehend’s Trust and Safety features help organizations moderate text content. Amazon Comprehend toxicity detection API is an ML-powered capability that identifies toxic content by classifying user generated or machine generated text across seven categories including sexual harassment, hate speech, threat, abuse, profanity, insult, and graphic. Amazon Comprehend prompt safety classifier enables moderation of generative AI input prompts to prevent inappropriate use of generative AI applications. Lastly, Comprehend PII detect API can prevent PII data leak by redacting all personal information from generative AI output.

Benefits

Faster moderation

Quickly and accurately moderate large volume of text and keep your online platforms free from inappropriate content.

Customized to your moderation needs

Customize the moderation thresholds in API responses to suit your application needs .

Large Language Model (LLM) Trust and Safety

Deploy Comprehend APIs through Langchain to moderate input and output of LLMs .

Use cases

Detect toxicity across multiple categories

Amazon Comprehend toxicity detection classifies text content and provides a confidence score (0 to 1) for the following seven categories: sexual harassment, hate speech, violence/threat, abuse, profanity, insult, and graphic.

Moderate generative AI prompt

Prompt safety classifier provides a confidence score (0 to 1) for the input prompt to be safe or not.

Prevent PII data leaks

Comprehend PII detect can mask upto 22 universal PII entities like address, age, credit card number etc. and up to 14 country specific entities like US social security number, CA health number, Passport number etc.


How to get started

Learn more about pricing
Learn more about Amazon Comprehend pricing

Visit the pricing page.

Learn more 
Sign up for an AWS account
Connect with an expert

From development to enterprise-level programs, get the right support at the right time.

Sign in