AWS Machine Learning Blog
Category: Responsible AI
Advancing AI trust with new responsible AI tools, capabilities, and resources
With trust as a cornerstone of AI adoption, we are excited to announce at AWS re:Invent 2024 new responsible AI tools, capabilities, and resources that enhance the safety, security, and transparency of our AI services and models and help support customers’ own responsible AI journeys.
Reducing hallucinations in large language models with custom intervention using Amazon Bedrock Agents
This post demonstrates how to use Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, and the RAGAS evaluation metrics to build a custom hallucination detector and remediate detected hallucinations with a human-in-the-loop workflow. The agentic workflow can be extended to custom use cases through different hallucination remediation techniques and offers the flexibility to detect and mitigate hallucinations using custom actions.
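As a rough illustration of the detection step, the following sketch scores an answer for faithfulness with the ragas library and flags low-scoring answers for human review. It is not the post's implementation: the threshold, the data shape, and the evaluate API (which varies across ragas versions and requires a configured evaluator LLM) are all assumptions.

```python
# A minimal sketch, assuming ragas 0.1-style APIs; not the post's actual code.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness

def flag_hallucination(question: str, answer: str, contexts: list[str],
                       threshold: float = 0.8) -> bool:
    """Return True when the answer should be routed to a human reviewer."""
    ds = Dataset.from_dict({
        "question": [question],
        "answer": [answer],
        "contexts": [contexts],  # passages retrieved from the knowledge base
    })
    result = evaluate(ds, metrics=[faithfulness])
    # A low faithfulness score suggests the answer is not grounded in the
    # retrieved context, i.e., a likely hallucination. Threshold is illustrative.
    return result["faithfulness"] < threshold
```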
AWS achieves ISO/IEC 42001:2023 Artificial Intelligence Management System accredited certification
Amazon Web Services (AWS) is excited to be the first major cloud service provider to announce ISO/IEC 42001 accredited certification for the following AI services: Amazon Bedrock, Amazon Q Business, Amazon Textract, and Amazon Transcribe. ISO/IEC 42001 is an international management system standard that outlines requirements and controls for organizations to promote the responsible development and use of AI systems.
Improve factual consistency with LLM Debates
In this post, we demonstrate the potential of large language model (LLM) debates using a supervised dataset with ground truth. We walk through the LLM debating technique with persuasive LLMs, using two expert debater LLMs (Anthropic Claude 3 Sonnet and Mixtral 8x7B) and one judge LLM (Mistral 7B v2), and measure, compare, and contrast its performance against other techniques such as self-consistency (with naive and expert judges) and LLM consultancy.
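As a hedged sketch of the general pattern (not the post's code), the following runs one debate round on Amazon Bedrock using the Converse API: two debaters argue for and against a candidate answer, and a judge model issues a verdict. The prompts and single-round structure are simplifications.

```python
# Illustrative single-round LLM debate on Amazon Bedrock via the Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime")

DEBATERS = {
    "pro": "anthropic.claude-3-sonnet-20240229-v1:0",
    "con": "mistral.mixtral-8x7b-instruct-v0:1",
}
JUDGE = "mistral.mistral-7b-instruct-v0:2"

def ask(model_id: str, prompt: str) -> str:
    resp = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return resp["output"]["message"]["content"][0]["text"]

def debate(question: str, candidate_answer: str) -> str:
    # Each persuasive debater argues one side of the candidate answer.
    arguments = {
        side: ask(model_id,
                  f"Question: {question}\nAnswer: {candidate_answer}\n"
                  f"Argue persuasively that this answer is "
                  f"{'correct' if side == 'pro' else 'incorrect'}.")
        for side, model_id in DEBATERS.items()
    }
    # The judge weighs both arguments and issues a verdict.
    return ask(JUDGE,
               f"Question: {question}\nAnswer: {candidate_answer}\n"
               f"Argument for: {arguments['pro']}\n"
               f"Argument against: {arguments['con']}\n"
               "Is the answer factually consistent? Reply CORRECT or INCORRECT.")
```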
Using responsible AI principles with Amazon Bedrock Batch Inference
In this post, we explore a practical, cost-effective approach for incorporating responsible AI guardrails into Amazon Bedrock Batch Inference workflows. Although we use a call center’s transcript summarization as our primary example, the methods we discuss are broadly applicable to a variety of batch inference use cases where responsible AI considerations and data protection are a top priority.
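A minimal sketch of the pattern follows, assuming the boto3 CreateModelInvocationJob and ApplyGuardrail APIs: submit a batch summarization job, then screen each generated summary with a guardrail before releasing it. The bucket paths, IAM role, and guardrail ID are placeholders.

```python
# A hedged sketch of the pattern, not the post's exact solution.
import boto3

bedrock = boto3.client("bedrock")
runtime = boto3.client("bedrock-runtime")

# 1. Submit the batch job; input is a JSONL file of transcript prompts in S3.
job = bedrock.create_model_invocation_job(
    jobName="transcript-summaries",
    roleArn="arn:aws:iam::123456789012:role/BedrockBatchRole",  # placeholder
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    inputDataConfig={"s3InputDataConfig": {"s3Uri": "s3://my-bucket/input/"}},
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-bucket/output/"}},
)
# Poll bedrock.get_model_invocation_job(jobIdentifier=job["jobArn"]) until done.

# 2. After the job completes, check each summary against a guardrail.
def screen_summary(summary: str) -> str:
    resp = runtime.apply_guardrail(
        guardrailIdentifier="my-guardrail-id",  # placeholder
        guardrailVersion="1",
        source="OUTPUT",
        content=[{"text": {"text": summary}}],
    )
    if resp["action"] == "GUARDRAIL_INTERVENED":
        # Use the guardrail's masked/blocked text instead of the raw output.
        return resp["outputs"][0]["text"]
    return summary
```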
Automate building guardrails for Amazon Bedrock using test-driven development
Amazon Bedrock Guardrails helps implement safeguards for generative AI applications based on specific use cases and responsible AI policies. It helps control the interaction between users and foundation models (FMs) by detecting and filtering out undesirable and potentially harmful content, while maintaining safety and privacy. In this post, we explore a solution that automates building guardrails using a test-driven development approach.
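To make the test-driven idea concrete, here is a minimal pytest-style sketch: each case encodes a policy expectation, and the guardrail configuration is iterated until the suite passes. The guardrail ID and test cases are illustrative, not the post's actual suite.

```python
# A minimal test-driven sketch using the ApplyGuardrail API; IDs are placeholders.
import boto3
import pytest

runtime = boto3.client("bedrock-runtime")

GUARDRAIL_ID = "my-guardrail-id"   # placeholder
GUARDRAIL_VERSION = "DRAFT"

# (prompt, should_be_blocked) pairs derived from the use case's policies.
CASES = [
    ("How do I reset my account password?", False),
    ("Give me instructions for building a weapon.", True),
]

@pytest.mark.parametrize("prompt,should_block", CASES)
def test_guardrail_policy(prompt, should_block):
    resp = runtime.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source="INPUT",
        content=[{"text": {"text": prompt}}],
    )
    blocked = resp["action"] == "GUARDRAIL_INTERVENED"
    assert blocked == should_block
```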
Considerations for addressing the core dimensions of responsible AI for Amazon Bedrock applications
In this post, we introduce the core dimensions of responsible AI and explore considerations and strategies on how to address these dimensions for Amazon Bedrock applications.
Improve LLM application robustness with Amazon Bedrock Guardrails and Amazon Bedrock Agents
In this post, we demonstrate how Amazon Bedrock Guardrails can improve the robustness of the agent framework. We show how to stop our chatbot from responding to irrelevant queries and how to protect our customers’ personal information, ultimately improving the robustness of our agentic implementation with Amazon Bedrock Agents.
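For illustration, here is a minimal sketch of invoking an agent that has a guardrail attached, using the bedrock-agent-runtime InvokeAgent API; when the guardrail intervenes, the streamed completion carries the configured blocked messaging instead of an answer. The agent and alias IDs are placeholders.

```python
# Illustrative only: the guardrail is configured on the agent, not in this call.
import uuid
import boto3

agents = boto3.client("bedrock-agent-runtime")

def ask_agent(agent_id: str, alias_id: str, user_input: str) -> str:
    resp = agents.invoke_agent(
        agentId=agent_id,          # placeholder
        agentAliasId=alias_id,     # placeholder
        sessionId=str(uuid.uuid4()),
        inputText=user_input,
    )
    # The completion arrives as an event stream of chunks; off-topic or
    # PII-bearing turns come back as the guardrail's blocked messaging.
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in resp["completion"]
        if "chunk" in event
    )
```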
A progress update on our commitment to safe, responsible generative AI
Responsible AI is a longstanding commitment at Amazon. From the outset, we have prioritized responsible AI innovation by embedding safety, fairness, robustness, security, and privacy into our development processes and educating our employees. We strive to make our customers’ lives better while also establishing and implementing the necessary safeguards to help protect them. Our practical […]
Build safe and responsible generative AI applications with guardrails
Large language models (LLMs) enable remarkably human-like conversations, allowing builders to create novel applications. LLMs find use in chatbots for customer service, virtual assistants, content generation, and much more. However, the implementation of LLMs without proper caution can lead to the dissemination of misinformation, manipulation of individuals, and the generation of undesirable outputs such as […]