Tools and resources to build AI responsibly

Supporting the end-to-end AI lifecycle

Services and tools

Foundation model (FM) evaluations

Model Evaluation on Amazon Bedrock helps customers evaluate, compare, and select the best FMs for their specific use case based on custom metrics, such as accuracy and safety. You can also use model evaluation in Amazon SageMaker Clarify.
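A custom metric such as accuracy can be as simple as scoring each candidate model's outputs against reference answers. The sketch below (plain Python, not the Bedrock evaluation API; the model outputs are hypothetical) shows the idea behind comparing FMs with an exact-match accuracy metric:

```python
# Illustrative sketch, not the Bedrock API: score two hypothetical
# candidate FMs on a tiny Q&A set with an exact-match accuracy metric.

def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference answer."""
    matches = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return matches / len(references)

references = ["paris", "4", "blue"]
model_a = ["Paris", "4", "green"]  # hypothetical outputs from candidate A
model_b = ["Paris", "4", "blue"]   # hypothetical outputs from candidate B

scores = {
    "model_a": exact_match_accuracy(model_a, references),  # 2/3
    "model_b": exact_match_accuracy(model_b, references),  # 3/3
}
best = max(scores, key=scores.get)  # "model_b"
```

A real evaluation job would run many prompts and report several metrics (accuracy, robustness, toxicity) side by side; the selection step at the end is the same comparison this sketch makes.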

Implementing safeguards in generative AI

Guardrails in Amazon Bedrock helps you implement safeguards tailored to your generative AI applications and aligned with your responsible AI policies. Guardrails lets you specify topics to be avoided and then automatically detects and prevents queries and responses that fall into restricted categories.
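The core behavior of a denied-topic safeguard can be pictured as a filter that runs on both queries and responses. The sketch below is a minimal, hypothetical illustration of that pattern (a keyword filter in plain Python), not the Guardrails API:

```python
# Illustrative sketch, not the Guardrails API: a minimal denied-topic
# filter that flags prompts or responses matching restricted phrases.

DENIED_TOPICS = {
    # hypothetical restricted category and trigger phrases
    "investment-advice": ["stock tip", "which stock", "guaranteed return"],
}

def check_text(text):
    """Return the denied topic the text falls under, or None if allowed."""
    lowered = text.lower()
    for topic, phrases in DENIED_TOPICS.items():
        if any(phrase in lowered for phrase in phrases):
            return topic
    return None

blocked = check_text("Can you give me a stock tip?")     # flagged
allowed = check_text("Summarize this document for me.")  # passes
```

In a production guardrail the same check runs twice, once on the incoming query and once on the model's response, so restricted content is caught in either direction.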

Detecting bias

Biases are imbalances in data or disparities in the performance of a model across different groups. Amazon SageMaker Clarify helps you mitigate bias by detecting potential bias during data preparation, after model training, and in your deployed model by examining specific attributes.
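One pre-training bias measure of the kind Clarify reports is the difference in positive-label proportions between two groups. The sketch below computes it in plain Python on hypothetical labels, as an illustration of the concept rather than the Clarify API:

```python
# Illustrative computation, not the Clarify API: the difference in
# positive-label proportions between two groups, a simple pre-training
# bias measure. Labels are hypothetical (1 = favorable outcome).

def positive_rate(labels):
    """Share of examples with the favorable label."""
    return sum(labels) / len(labels)

group_a = [1, 1, 1, 0]  # hypothetical labels for one group
group_b = [1, 0, 0, 0]  # hypothetical labels for another group

# 0.75 - 0.25 = 0.5: group_a receives the favorable label
# 50 percentage points more often, a large imbalance.
dpl = positive_rate(group_a) - positive_rate(group_b)
```

A value near zero suggests balanced labels across the groups; a large gap like this one is the signal you would want to investigate during data preparation.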

Explaining model predictions

Understanding a model’s behavior is important to develop more accurate models and make better decisions. Amazon SageMaker Clarify provides greater visibility into model behavior, so you can provide transparency to stakeholders, inform humans making decisions, and track whether a model is performing as intended.
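For a linear model, per-feature attributions have a closed form that conveys the intuition behind the feature-attribution values explainability tools report: each feature contributes its coefficient times its deviation from a baseline. The sketch below uses hypothetical coefficients and values and is not the Clarify API:

```python
# Illustrative sketch, not the Clarify API: per-feature attribution for
# a linear model, the intuition behind attribution-based explanations.
# All names and numbers are hypothetical.

coefficients = {"income": 0.5, "age": -0.2}
baseline = {"income": 2.0, "age": 40.0}   # e.g. an average customer
instance = {"income": 4.0, "age": 30.0}   # the prediction to explain

# For a linear model, each feature's contribution to the difference
# from the baseline prediction is coefficient * (value - baseline).
attributions = {
    f: coefficients[f] * (instance[f] - baseline[f]) for f in coefficients
}
# income: 0.5 * (4.0 - 2.0) = 1.0
# age:   -0.2 * (30.0 - 40.0) = 2.0
```

Reading the result, both features pushed this prediction above the baseline, which is exactly the kind of statement you can pass on to stakeholders or human decision-makers.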

Monitoring and human review

Monitoring is important to maintain high-quality ML models and ensure accurate predictions. Amazon SageMaker Model Monitor automatically detects and alerts you to inaccurate predictions from deployed models. And with Amazon Augmented AI, you can implement human review of ML predictions when human oversight is needed.
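The monitoring loop can be pictured as a recurring quality check: score a recent window of predictions against ground truth and raise an alert when quality falls below a threshold, routing those cases to human reviewers. The sketch below illustrates that pattern in plain Python with hypothetical data; it is not the Model Monitor or Augmented AI API:

```python
# Illustrative sketch, not the Model Monitor API: alert when accuracy
# over a recent window of predictions drops below a threshold, and
# route those cases to human review. Data is hypothetical.

def window_accuracy(predictions, actuals):
    """Accuracy of predictions against ground-truth labels."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(actuals)

THRESHOLD = 0.8
recent_preds   = [1, 0, 1, 1, 0]
recent_actuals = [1, 1, 1, 0, 0]  # hypothetical labels from human review

acc = window_accuracy(recent_preds, recent_actuals)  # 3/5 = 0.6
needs_human_review = acc < THRESHOLD  # True: escalate this window
```

In practice the window would be a scheduled batch of production traffic, and the escalation step is where a human-review workflow takes over.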

Improving governance

ML Governance from Amazon SageMaker provides purpose-built tools that give you tighter control over and visibility into your ML models. You can capture and share model information and stay informed on model behavior, such as bias, all in one place.
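The "one place" for model information is essentially a structured record that travels with the model. The sketch below shows that idea as a plain dict with a simple approval step; the field names and values are hypothetical, and this is not the SageMaker Model Cards API:

```python
# Illustrative sketch, not the SageMaker API: the kind of information a
# governance record centralizes, with a simple approval-status update.
# All names and values are hypothetical.

model_card = {
    "model_name": "churn-classifier-v2",
    "intended_use": "Rank accounts by churn risk for retention outreach",
    "training_data": "2023 customer activity snapshot",
    "bias_metrics": {"difference_in_positive_rates": 0.05},
    "approval_status": "PendingReview",
}

def approve(card):
    """Return a copy with an approval decision recorded, so every
    stakeholder sees the same status without mutating the original."""
    updated = dict(card)
    updated["approval_status"] = "Approved"
    return updated

approved = approve(model_card)
```

Keeping the record immutable-by-copy, as here, makes it easy to audit who changed what; a managed service adds versioning and access control on top of the same structure.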

Building generative AI responsibly

AWS builds foundation models (FMs) with responsible AI in mind at every stage of its development process, from design and development through deployment and operations. For example, Amazon CodeWhisperer includes built-in security scanning, and Amazon Titan FMs are built to detect and remove harmful content.

AWS AI Service Cards

AI Service Cards provide transparency into our AWS AI services, documenting in a single place each service's intended use cases, responsible AI design choices, best practices, and performance characteristics.

Responsible use of Machine Learning guide

The Responsible Use of Machine Learning guide provides considerations and recommendations for responsibly developing and using ML systems across three major phases of their lifecycles: (1) design and development; (2) deployment; and (3) ongoing use.

Read the guide
