Guardrails for Amazon Bedrock

Implement safeguards customized to your application requirements and responsible AI policies

Build responsible AI applications with Guardrails for Amazon Bedrock

See demos on how to create and apply custom-tailored guardrails with foundation models (FMs) to implement responsible AI policies in your generative AI applications.

Bring a consistent level of AI safety across all your applications

Guardrails for Amazon Bedrock evaluates user inputs and FM responses against use-case-specific policies, providing an additional layer of safeguards regardless of the underlying FM. Guardrails can be applied across all large language models (LLMs) on Amazon Bedrock, including fine-tuned models. Customers can create multiple guardrails, each configured with a different combination of controls, and use them across different applications and use cases. Guardrails can also be integrated with Agents and Knowledge Bases for Amazon Bedrock to build generative AI applications aligned with your responsible AI policies.
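As a minimal sketch of applying a guardrail at inference time, the snippet below builds a request for the boto3 Converse API with a guardrailConfig attached. The model ID, guardrail ID, and version are placeholders; the actual client call requires AWS credentials and is shown only in comments.

```python
def build_converse_request(model_id, guardrail_id, guardrail_version, user_text):
    """Build a Converse API request that applies a guardrail to both
    the user input and the model response."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "guardrailConfig": {
            "guardrailIdentifier": guardrail_id,    # placeholder guardrail ID
            "guardrailVersion": guardrail_version,  # e.g. "1" or "DRAFT"
        },
    }

request = build_converse_request(
    "anthropic.claude-3-sonnet-20240229-v1:0",  # any Bedrock LLM, incl. fine-tuned
    "gr-example123",  # hypothetical guardrail identifier
    "DRAFT",
    "How should I invest my savings?",
)
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
```

Because the guardrail is referenced by identifier rather than baked into the prompt, the same guardrail can be reused unchanged across different models and applications.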


Block undesirable topics in your generative AI applications

Organizations recognize the need to manage interactions within generative AI applications for a relevant and safe user experience. They want to further customize interactions to remain on topics relevant to their business and align with company policies. Using a short natural language description, Guardrails for Amazon Bedrock allows you to define a set of topics to avoid within the context of your application. Guardrails detects and blocks user inputs and FM responses that fall into the restricted topics. For example, a banking assistant can be designed to avoid topics related to investment advice.
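The banking example above can be sketched as a denied-topic policy for the boto3 create_guardrail call. Field names follow the Bedrock API as documented; the guardrail name and blocked messages are placeholders, so verify against the current boto3 reference before use.

```python
# Denied-topic policy: a short natural-language definition of the topic
# to avoid, with the DENY type blocking matching inputs and responses.
topic_policy_config = {
    "topicsConfig": [
        {
            "name": "Investment advice",
            "definition": (
                "Guidance or recommendations on managing money or assets, "
                "such as which stocks, bonds, or funds to buy."
            ),
            "examples": ["Which stocks should I buy this year?"],
            "type": "DENY",
        }
    ]
}
# import boto3
# bedrock = boto3.client("bedrock")
# bedrock.create_guardrail(
#     name="banking-assistant-guardrail",          # placeholder name
#     topicPolicyConfig=topic_policy_config,
#     blockedInputMessaging="Sorry, I can't discuss that topic.",
#     blockedOutputsMessaging="Sorry, I can't discuss that topic.",
# )
```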


Filter harmful content based on your responsible AI policies

Guardrails for Amazon Bedrock provides content filters with configurable thresholds to filter harmful content across hate, insults, sexual, violence, misconduct (including criminal activity), and prompt attacks (prompt injection and jailbreaks). Most FMs already provide built-in protections to prevent the generation of harmful responses. In addition to these protections, Guardrails lets you configure thresholds across the different categories to filter out harmful interactions. Increasing the strength of a filter makes the filtering more aggressive. Guardrails automatically evaluates both user queries and FM responses to detect and help prevent content that falls into restricted categories. For example, an ecommerce site can design its online assistant to avoid using inappropriate language, such as hate speech or insults.
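A minimal sketch of the per-category thresholds described above, shaped as a contentPolicyConfig for create_guardrail. Each category takes independent input and output strengths (NONE, LOW, MEDIUM, HIGH); category and field names follow the Bedrock API as I understand it, so confirm against the current documentation.

```python
FILTER_TYPES = ["HATE", "INSULTS", "SEXUAL", "VIOLENCE", "MISCONDUCT", "PROMPT_ATTACK"]

def build_content_policy(input_strength="HIGH", output_strength="HIGH"):
    """Build a content filter policy with one entry per category.
    Higher strengths filter more aggressively."""
    filters = []
    for filter_type in FILTER_TYPES:
        cfg = {
            "type": filter_type,
            "inputStrength": input_strength,
            "outputStrength": output_strength,
        }
        if filter_type == "PROMPT_ATTACK":
            # Prompt-attack detection applies to user input only,
            # so the output strength is set to NONE.
            cfg["outputStrength"] = "NONE"
        filters.append(cfg)
    return {"filtersConfig": filters}

content_policy_config = build_content_policy()
# Pass as contentPolicyConfig=content_policy_config to bedrock.create_guardrail(...)
```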


Redact sensitive information (PII) to protect privacy

Guardrails for Amazon Bedrock allows you to detect sensitive content such as personally identifiable information (PII) in user inputs and FM responses. You can select from a list of predefined PII types or define custom sensitive information types using regular expressions (regex). Depending on the use case, you can selectively reject inputs that contain sensitive information or redact it in FM responses. For example, you can redact users' personal information while generating summaries from customer and agent conversation transcripts in a call center.
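The call-center example above can be sketched as a sensitiveInformationPolicyConfig combining predefined PII entity types with a custom regex. ANONYMIZE redacts matches in responses, while BLOCK rejects the content outright; the entity-type names follow the Bedrock API, and the account-number pattern is a hypothetical internal format.

```python
sensitive_info_policy_config = {
    "piiEntitiesConfig": [
        # Redact names and emails in generated summaries
        {"type": "NAME", "action": "ANONYMIZE"},
        {"type": "EMAIL", "action": "ANONYMIZE"},
        # Reject any interaction containing a social security number
        {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
    ],
    "regexesConfig": [
        {
            "name": "AccountNumber",
            "description": "Hypothetical internal account-number format",
            "pattern": r"ACCT-\d{8}",
            "action": "ANONYMIZE",
        }
    ],
}
# Pass as sensitiveInformationPolicyConfig=... to bedrock.create_guardrail(...)
```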


Block inappropriate content with a custom word filter

Guardrails for Amazon Bedrock allows you to configure a set of custom words or phrases to detect and block in the interaction between your users and generative AI applications. You can also detect and block profanity as well as specific custom words, such as competitor names or other offensive terms.
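A minimal word-policy sketch for create_guardrail, pairing custom words with the managed profanity list. The competitor name and phrase are placeholders; field names follow the Bedrock API as documented.

```python
word_policy_config = {
    "wordsConfig": [
        {"text": "AcmeCompetitor"},          # placeholder competitor name
        {"text": "example offensive phrase"},  # placeholder custom phrase
    ],
    "managedWordListsConfig": [
        {"type": "PROFANITY"},  # built-in managed profanity list
    ],
}
# Pass as wordPolicyConfig=word_policy_config to bedrock.create_guardrail(...)
```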
