Guardrails for Amazon Bedrock can now detect hallucinations & safeguard apps built with any FM

Posted on: Jul 10, 2024

Guardrails for Amazon Bedrock enables customers to implement safeguards based on their application requirements and responsible AI policies. Today, Guardrails adds contextual grounding checks and introduces a new ApplyGuardrail API to build trustworthy generative AI applications using any foundation model (FM).

Customers rely on the inherent capabilities of FMs to generate grounded (credible) responses based on their company's source data. However, FMs can conflate multiple pieces of information, producing incorrect or fabricated information and undermining the reliability of the application. With contextual grounding checks, Guardrails can now detect hallucinations in model responses for RAG (retrieval-augmented generation) and conversational applications. This safeguard detects and filters responses that are factually incorrect with respect to a reference source or irrelevant to the user's query. Customers can configure confidence thresholds to filter out responses with low grounding or relevance scores.
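As a rough sketch with the AWS SDK for Python (boto3), grounding and relevance thresholds can be set via a contextual grounding policy when creating a guardrail. The guardrail name, threshold values, and blocked messages below are illustrative placeholders, not recommended settings:

```python
import boto3

# Control-plane client for creating and managing guardrails.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Illustrative thresholds: responses scoring below the GROUNDING threshold
# (faithfulness to the reference source) or the RELEVANCE threshold
# (pertinence to the user's query) are filtered out.
response = bedrock.create_guardrail(
    name="rag-grounding-guardrail",  # hypothetical name
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
    blockedInputMessaging="Sorry, I can't answer that.",
    blockedOutputsMessaging="Sorry, I can't answer that reliably.",
)

print(response["guardrailId"], response["version"])
```

Raising a threshold filters more aggressively; lowering it lets through responses with weaker grounding or relevance, so the right values depend on the application's tolerance for hallucinations.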

In addition, to support safeguarding applications built with different FMs, Guardrails now offers an ApplyGuardrail API to evaluate user inputs and model responses for any custom or third-party FM, in addition to the FMs already supported in Amazon Bedrock. The ApplyGuardrail API enables centralized safety and governance across all your generative AI applications.
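A minimal boto3 sketch of calling ApplyGuardrail on a response produced outside Bedrock, here combined with the contextual grounding qualifiers for a RAG use case. The guardrail ID, version, and example texts are placeholders:

```python
import boto3

# Runtime client; ApplyGuardrail evaluates content independently of model
# invocation, so the response being checked can come from any FM.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.apply_guardrail(
    guardrailIdentifier="<your-guardrail-id>",  # placeholder
    guardrailVersion="DRAFT",
    source="OUTPUT",  # evaluating a model response; use "INPUT" for user prompts
    content=[
        # Reference source the response should be grounded in.
        {"text": {"text": "Our return window is 30 days from delivery.",
                  "qualifiers": ["grounding_source"]}},
        # The user's query, used for the relevance check.
        {"text": {"text": "How long do I have to return an item?",
                  "qualifiers": ["query"]}},
        # The candidate response (e.g., from a third-party FM) to evaluate.
        {"text": {"text": "You can return items within 90 days.",
                  "qualifiers": ["guard_content"]}},
    ],
)

if response["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail filtered the response; use the replacement text instead.
    print(response["outputs"][0]["text"])
```

Because the check runs as a standalone API call, the same guardrail configuration can govern inputs and outputs across applications regardless of which model produced them.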

Guardrails is the only offering from a major cloud provider that delivers safety, privacy, and truthfulness protections in a single solution. Contextual grounding checks and the ApplyGuardrail API are available in all AWS Regions where Guardrails for Amazon Bedrock is supported.

To learn more about Guardrails for Amazon Bedrock, visit the feature page and read the news blog.