Posted On: Apr 23, 2024

Today, we are announcing the general availability of Guardrails for Amazon Bedrock, which enables customers to implement safeguards across large language models (LLMs) based on their use cases and responsible AI policies. Customers can create multiple guardrails tailored to different use cases and apply them across multiple LLMs, providing a consistent user experience and standardizing safety controls across generative AI applications.

While many foundation models (FMs) have built-in protections to filter harmful content, customers want to further tailor interactions to safeguard their generative AI applications for a relevant and safe user experience. Guardrails provides a comprehensive set of safety and privacy controls for managing user interactions in generative AI applications. First, customers can define a set of denied topics that are undesirable within the context of their application. Second, they can configure thresholds to filter content across harmful categories such as hate, insults, sexual content, violence, misconduct (including criminal activity), and prompt attacks (jailbreaks and prompt injections). Third, customers can define a set of offensive and inappropriate words to be blocked in their application. Finally, customers can filter user inputs containing sensitive information (e.g., personally identifiable information) or redact confidential information in model responses based on their use case.
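These four control types map onto the CreateGuardrail API. The following boto3 sketch shows one way a guardrail could be configured; the guardrail name, topic definition, filter strengths, blocked words, and blocked messages are illustrative placeholders, not recommendations:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail combining the four control types described above.
response = bedrock.create_guardrail(
    name="support-app-guardrail",  # placeholder name
    description="Safeguards for a customer support assistant",
    # 1. Denied topics: block subject matter that is off-limits for the app.
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Investment advice",
                "definition": "Guidance on investing money in financial products.",
                "type": "DENY",
            }
        ]
    },
    # 2. Content filters: configurable thresholds per harmful category.
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "MISCONDUCT", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
            # Prompt-attack filtering applies to user input only.
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    # 3. Word filters: custom blocked words plus a managed profanity list.
    wordPolicyConfig={
        "wordsConfig": [{"text": "acme-internal-codename"}],  # placeholder word
        "managedWordListsConfig": [{"type": "PROFANITY"}],
    },
    # 4. Sensitive information filters: block or redact PII per entity type.
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
        ]
    },
    # Messages returned when a prompt or a model response is blocked.
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)

# The new guardrail starts at a working DRAFT version until published.
print(response["guardrailId"], response["version"])
```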

Guardrails supports English content and works with all LLMs on Amazon Bedrock, including fine-tuned models. Guardrails for Amazon Bedrock is available in the US East (N. Virginia) and US West (Oregon) AWS Regions.
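Once created, a guardrail is applied at inference time by passing its identifier and version to the runtime API. A minimal sketch using InvokeModel, assuming an Anthropic Claude 3 model and a placeholder guardrail ID:

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Standard Anthropic messages-format request body.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "How should I invest my savings?"}],
})

# The guardrail intercepts both the prompt and the model response.
response = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=body,
    guardrailIdentifier="abc123example",  # placeholder guardrail ID
    guardrailVersion="DRAFT",             # or a published numeric version
)

print(json.loads(response["body"].read()))
```

Because the guardrail is referenced by ID rather than baked into the request, the same configuration can be reused unchanged across different models and applications.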

To learn more about Guardrails for Amazon Bedrock, visit the feature page and read the news blog.