Amazon Bedrock Guardrails

Implement safeguards customized to your application requirements and responsible AI policies

Build configurable safeguards

Amazon Bedrock Guardrails helps you build and deploy generative AI applications safely and responsibly. With industry-leading safety protections that block up to 88% of harmful content and deliver auditable, mathematically verifiable explanations for validation decisions with 99% accuracy, Guardrails provides configurable safeguards to help detect and filter harmful text and image content, redact sensitive information, detect model hallucinations, and more. Guardrails works consistently across foundation models, whether you use models hosted in Amazon Bedrock or self-hosted models, including third-party models such as OpenAI and Google Gemini, giving you the same safety, privacy, and responsible AI controls across all your generative AI applications.

Key capabilities

Comprehensive safeguards for every generative AI application

Configurable safeguards

Amazon Bedrock Guardrails provides six safeguard policies that you can configure for your generative AI applications based on your use cases and responsible AI policies: content filters and word filters for content moderation, prompt attack detection, denied topics for topic classification, sensitive information filters for personally identifiable information (PII) redaction, and contextual grounding and Automated Reasoning checks for hallucination detection. With expanded capabilities for code-related use cases, these safeguards also help protect against harmful content within code elements, detect malicious code-injection attempts, and prevent PII exposure in code structures. You can apply any combination of these policies to help protect both user inputs and model responses.
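As a sketch of how these policies combine, the configuration below defines a guardrail using the Bedrock `CreateGuardrail` operation's field names. The guardrail name, denied topic, and messaging strings are hypothetical examples; actually creating the guardrail requires AWS credentials, so that call is shown commented out.

```python
# Hypothetical guardrail combining content filters (including prompt-attack
# detection), a denied topic, PII redaction, and contextual grounding checks.
guardrail_config = {
    "name": "support-bot-guardrail",  # placeholder name
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that response.",
    # Content filters, including prompt-attack detection on inputs
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    # Denied topics (topic classification)
    "topicPolicyConfig": {
        "topicsConfig": [
            {
                "name": "Investment advice",  # example topic
                "definition": "Recommendations about specific financial products.",
                "type": "DENY",
            }
        ]
    },
    # Sensitive information filters (PII redaction)
    "sensitiveInformationPolicyConfig": {
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
    # Hallucination detection via contextual grounding
    "contextualGroundingPolicyConfig": {
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
}

# With AWS credentials configured, the guardrail would be created like this:
# import boto3
# bedrock = boto3.client("bedrock")
# response = bedrock.create_guardrail(**guardrail_config)
# print(response["guardrailId"], response["version"])
```

Each policy block is optional, so a guardrail can be as narrow as a single PII filter or combine all six policy types.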

Deterministic explainability

Automated Reasoning checks in Amazon Bedrock Guardrails are the first generative AI safeguard to use formal logic to help prevent factual errors from hallucinations. Using sound mathematical techniques to verify, correct, and explain AI-generated information, Automated Reasoning checks systematically validate the correctness of model responses with up to 99% accuracy. You can build auditable generative AI applications with mathematical assurance that AI responses comply with established policies and domain knowledge, which is especially important in regulated industries.

Consistent level of safety

The ApplyGuardrail API lets you use the configurable safeguards offered by Amazon Bedrock Guardrails with any foundation model, whether hosted on Amazon Bedrock or self-hosted, including third-party models such as OpenAI and Google Gemini. You can also use Guardrails with an agent framework such as Strands Agents, including agents deployed using Amazon Bedrock AgentCore. With the ApplyGuardrail API, you can assess content against a preconfigured guardrail without invoking a foundation model, enabling standalone, real-time content moderation.
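A minimal sketch of standalone evaluation with the `ApplyGuardrail` operation: the helper builds the request shape the bedrock-runtime client expects, and the network call itself is commented out because it requires AWS credentials. The guardrail identifier and input text are placeholders.

```python
def build_apply_guardrail_request(guardrail_id, version, text, source="INPUT"):
    """Build a request for the ApplyGuardrail API (no model invocation).

    Field names follow the bedrock-runtime ApplyGuardrail operation;
    guardrail_id and version are placeholders here.
    """
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": version,
        "source": source,  # "INPUT" for user prompts, "OUTPUT" for model responses
        "content": [{"text": {"text": text}}],
    }

# Placeholder guardrail ID; a PII filter would redact the SSN below.
request = build_apply_guardrail_request("gr-1234example", "1", "My SSN is 123-45-6789")

# With AWS credentials configured:
# import boto3
# runtime = boto3.client("bedrock-runtime")
# response = runtime.apply_guardrail(**request)
# if response["action"] == "GUARDRAIL_INTERVENED":
#     print(response["outputs"][0]["text"])  # masked or blocked text
```

Because no foundation model is invoked, the same call works for moderating content from self-hosted or third-party models: pass the model's output with `source="OUTPUT"` before returning it to the user.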

Apply across AI workflows

Amazon Bedrock Guardrails integrates seamlessly across your AI application stack, from individual model inference to complex multi-step workflows. Apply guardrails to foundation model interactions, associate them with agents using frameworks like Strands Agents for conversational AI, integrate with knowledge bases for retrieval-augmented generation, and embed within flows for sophisticated multi-node processes. This provides extensive safety protection across many use cases, from simple chatbots to complex enterprise workflows. Companies like Chime Financial, KONE, Panorama, Strava, Remitly, and PwC trust Bedrock Guardrails for their AI applications.
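For the model-inference case, a guardrail can be attached directly to a Converse API call via its `guardrailConfig` field, so every request and response is screened inline. The sketch below uses placeholder guardrail and model identifiers, and the call is commented out since it needs AWS credentials and model access.

```python
# Sketch: attaching a preconfigured guardrail to a model invocation through
# the Converse API's guardrailConfig field. Identifiers are placeholders.
converse_request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    "messages": [
        {"role": "user", "content": [{"text": "Summarize my account activity."}]}
    ],
    "guardrailConfig": {
        "guardrailIdentifier": "gr-1234example",  # placeholder guardrail ID
        "guardrailVersion": "1",
        "trace": "enabled",  # include per-policy assessment details in the response
    },
}

# With AWS credentials configured:
# import boto3
# runtime = boto3.client("bedrock-runtime")
# response = runtime.converse(**converse_request)
# print(response["output"]["message"]["content"][0]["text"])
```

Setting `trace` to `enabled` returns the guardrail's per-policy assessments alongside the model output, which is useful for auditing why a given request or response was blocked or redacted.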

