
A framework to mitigate bias and improve outcomes in the new age of AI

Artificial intelligence (AI) and machine learning (ML) technologies are enabling our customers to transform the solutions they deliver in nearly every industry, including the public sector. AI and ML empower healthcare and life science organizations to make sense of their data. These technologies enable government agencies to improve constituent experiences and deliver cost-effective services. In education, ML is transforming teaching, learning, and research. Excitement in this space is growing because of these advancements, though some organizations still face challenges to wider adoption, including concerns about trust and transparency.

In this post, learn about a high-level framework that can help you address these challenges. This framework includes methods to mitigate bias, provide transparency, and ultimately improve wider adoption of these technologies.

What is bias in AI and ML and how is it introduced?

First, let’s clarify what bias is and how it can be introduced into a model. Bias is a situation in which results are skewed in favor of or against an outcome for a particular class. Models are built to predict the probability of an outcome based on patterns found in historical data. Algorithms build these models by analyzing the values, or features, in historical data; the patterns learned from these features are then used to predict outcomes on new, unseen data. For example, agencies can use ML to help detect fraud by training models on historical data to predict how likely a given financial transaction is to be fraudulent.

Bias can occur when the model makes predictions based on patterns in incoming features that aren’t correctly represented. These can be features such as gender, age, location, or any other feature that might be sensitive in nature. This can happen in both balanced and imbalanced datasets with respect to the variable being predicted. Overall, the quality of these models depends on the quality and quantity of the underlying data.
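
To make this concrete, the following is a minimal sketch of how you might surface this kind of skew: it trains a simple classifier on historical transactions and compares how often each group is flagged as fraudulent. The file name and the is_fraud and age_group columns are hypothetical, and the gap shown (a demographic parity difference) is just one of several possible bias metrics.

```python
# Minimal sketch: train a fraud classifier, then check whether the rate of
# "fraudulent" predictions is skewed across a sensitive feature.
# File name and column names ("is_fraud", "age_group") are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("transactions.csv")               # hypothetical historical data
sensitive = df["age_group"]                        # sensitive feature to audit
y = df["is_fraud"]                                 # target: 1 = fraudulent
X = pd.get_dummies(df.drop(columns=["is_fraud"]))  # one-hot encode categoricals

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, stratify=y, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
preds = pd.Series(model.predict(X_te), index=X_te.index)

# Rate at which each group is flagged as fraudulent; a large gap between
# groups can indicate bias with respect to the sensitive feature.
flag_rates = preds.groupby(s_te).mean()
print(flag_rates)
print("Max gap between groups:", flag_rates.max() - flag_rates.min())
```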

Bias can also be introduced through the model lifecycle from training to ML operations in production. Responsible AI and ML includes building systems to detect bias in datasets and models, providing insights into model predictions, and monitoring and reviewing model predictions through automation and human oversight.

Framework to mitigate bias and improve trust and transparency with AI and ML

So how can you build a mechanism to detect and mitigate bias in ML models? Let’s dive in.

1. Focus on data quality and integrity

Selecting the right features is integral to producing models that minimize bias. Data scientists and business analysts should pay attention to the data and ML lifecycle tasks, including data processing, feature engineering, and model management with continuous feedback and learning mechanisms. You can develop mechanisms to evaluate whether the training data appropriately represents real-world use cases, which can lead to collecting additional data to address underrepresented features. In the fraud use case, the data that represents fraud activity should span the incoming features, such as location, income, and age. If not, the model may learn an undesirable pattern, apply it to new, unseen data, and introduce bias.
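
As an illustration, here is a minimal sketch of such a representation check, comparing the distribution of one input feature in the training data against a reference sample of real-world traffic. The file names and the location column are hypothetical.

```python
# Minimal sketch: compare a feature's distribution in the training data with a
# reference sample of real-world data to spot underrepresented values.
# File names and the "location" column are hypothetical.
import pandas as pd

train = pd.read_csv("training_data.csv")
reference = pd.read_csv("real_world_sample.csv")

train_dist = train["location"].value_counts(normalize=True)
ref_dist = reference["location"].value_counts(normalize=True)

comparison = pd.DataFrame({"train": train_dist, "reference": ref_dist}).fillna(0)
comparison["gap"] = comparison["reference"] - comparison["train"]

# Large positive gaps flag values that are underrepresented in training,
# pointing to where additional data collection may be needed.
print(comparison.sort_values("gap", ascending=False).head(10))
```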

Models trained on imbalanced or skewed target features may also introduce bias toward the over-represented target value. The historical data that you use to build the model should have enough representation of each kind of transaction. In the fraud detection use case, the data is imbalanced because there are many more legitimate records than fraudulent ones. You can address data imbalance with various techniques, including undersampling and oversampling, as outlined in the blog post, “Balance your data for machine learning with Amazon SageMaker Data Wrangler.”
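
As a sketch of what those techniques look like in code, the following uses the open-source imbalanced-learn library to oversample the minority (fraudulent) class or, alternatively, undersample the majority (legitimate) class. The file and column names are hypothetical; SageMaker Data Wrangler offers similar balancing operations without code.

```python
# Minimal sketch: rebalance an imbalanced fraud dataset with imbalanced-learn.
# File and column names are hypothetical.
import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

df = pd.read_csv("transactions.csv")
X, y = df.drop(columns=["is_fraud"]), df["is_fraud"]
print(y.value_counts())  # typically far more legitimate (0) than fraudulent (1)

# Oversampling duplicates minority-class records until the classes balance.
X_over, y_over = RandomOverSampler(random_state=42).fit_resample(X, y)

# Undersampling drops majority-class records instead.
X_under, y_under = RandomUnderSampler(random_state=42).fit_resample(X, y)

print(y_over.value_counts())
print(y_under.value_counts())
```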

AWS offers several capabilities for mitigating bias in your training data. You can use Amazon SageMaker Data Wrangler for feature engineering and to visualize distributions so you can quickly see imbalance in the data. AWS Glue Data Quality automatically computes statistics, recommends quality rules, monitors data quality, and alerts you when it detects that quality has deteriorated. Amazon SageMaker Feature Store allows you to store and discover features that have been vetted.

In AI and ML operations, bias can be introduced into a model through model drift, which refers to the degradation of a model’s predictions due to changes in real-world environments. Proactive data assessment, monitoring, and auditing can continuously track model performance and the quality of recommendations. You can use Amazon SageMaker Model Monitor to continuously monitor the quality of ML models in production. Model Monitor automatically detects inaccurate predictions from models deployed in production and alerts you. It also supports process automation to continuously retrain models on the latest data to avoid model drift.
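
For illustration, here is a minimal sketch of setting up a data quality monitoring schedule with the SageMaker Python SDK. The IAM role, S3 paths, schedule name, and endpoint name are hypothetical placeholders.

```python
# Minimal sketch: baseline training data and monitor a live endpoint for drift
# with SageMaker Model Monitor. Role, S3 paths, and names are hypothetical.
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Baseline the training data; Model Monitor suggests statistics and constraints.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train/train.csv",  # hypothetical path
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/baseline",
)

# Compare live endpoint traffic against the baseline on an hourly schedule;
# violations (for example, feature drift) surface in the generated reports.
monitor.create_monitoring_schedule(
    monitor_schedule_name="fraud-data-quality",          # hypothetical name
    endpoint_input="fraud-detection-endpoint",           # hypothetical endpoint
    output_s3_uri="s3://my-bucket/monitoring-reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```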

2. Use the power of humans and machines

AI and ML systems can help accelerate, and where appropriate automate, decision-making processes. In cases where human review is required, combining human judgment with machine predictions enhances the strengths of each. AI systems use data to make inferences and may assign a confidence score to each prediction, typically a value expressing how likely a specific result is. Organizations can integrate a “human in the loop” into AI systems to direct predictions for further review as appropriate. Even for high-confidence predictions, it may be important to sample outcomes for human review to support fairness and improve end-user confidence in AI systems. You can implement human review of ML predictions, including review and verification, with Amazon Augmented AI (Amazon A2I).
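
The following is a minimal sketch of this routing logic using the Amazon A2I runtime API: low-confidence predictions, plus a small random sample of high-confidence ones, are sent to a human review workflow. The flow definition ARN, confidence threshold, and sample rate are hypothetical.

```python
# Minimal sketch: send low-confidence predictions (and a small audit sample of
# high-confidence ones) to human reviewers with Amazon A2I.
# The flow definition ARN, threshold, and sample rate are hypothetical.
import json
import random
import uuid

import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

FLOW_DEFINITION_ARN = (
    "arn:aws:sagemaker:us-east-1:123456789012:flow-definition/fraud-review"
)
CONFIDENCE_THRESHOLD = 0.80  # below this, always route to a human
AUDIT_SAMPLE_RATE = 0.01     # also audit 1% of high-confidence predictions

def handle_prediction(transaction: dict, is_fraud: bool, confidence: float) -> None:
    """Accept confident predictions; route the rest to a human review workflow."""
    if confidence >= CONFIDENCE_THRESHOLD and random.random() >= AUDIT_SAMPLE_RATE:
        return  # proceed with the automated decision
    a2i.start_human_loop(
        HumanLoopName=f"fraud-review-{uuid.uuid4()}",
        FlowDefinitionArn=FLOW_DEFINITION_ARN,
        HumanLoopInput={
            "InputContent": json.dumps(
                {"transaction": transaction, "prediction": is_fraud, "confidence": confidence}
            )
        },
    )
```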

3. Improve trust in AI systems with transparency and explainability

It’s important that constituents and the public can trust the AI systems used by their governments and trust that the outcomes provided by these systems are fair and equitable. The AI Risk Management Framework (RMF) released by the National Institute of Standards and Technology (NIST) provides guidance on promoting trustworthy, responsible development and use of AI systems.

Trust in AI systems can be improved by making their predictions and use more transparent through model explainability, the ability to explain and interpret ML model outcomes. This helps stakeholders understand why and how certain decisions are made, which is especially important when AI is used for consequential decisions in contexts such as hiring or loan approvals. Transparency refers to communicating information about an AI system so that stakeholders can make informed choices about their use of the system, and it helps build trust.

AWS offers capabilities to increase transparency and explainability. Amazon SageMaker Clarify provides greater visibility into data and models so that you can identify and limit bias, increase transparency, and help explain predictions. To further improve transparency, AWS recently launched AWS AI Service Cards, which explain the common use cases for which a service is intended, how ML is used by the service, and key considerations in the responsible design and use of the service. AI Service Cards are available for three AWS AI services: Amazon Rekognition, Amazon Textract, and Amazon Transcribe.
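
As a sketch, here is how you might run a pre-training bias report and a SHAP-based explainability analysis with the SageMaker Python SDK’s Clarify support. The IAM role, S3 paths, column names, facet values, baseline record, and model name are all hypothetical.

```python
# Minimal sketch: generate a bias report and SHAP explanations with SageMaker
# Clarify. Role, S3 paths, columns, facets, and model name are hypothetical.
from sagemaker import Session, clarify

processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=Session(),
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train/train.csv",  # hypothetical path
    s3_output_path="s3://my-bucket/clarify-output",       # hypothetical path
    label="is_fraud",
    dataset_type="text/csv",
)

# Bias report: is the "fraudulent" label skewed for a sensitive facet (age)?
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # positive outcome: flagged as fraud
    facet_name="age",
    facet_values_or_threshold=[40],  # hypothetical facet split
)
processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)

# Explainability: SHAP attributions show how features drive each prediction.
model_config = clarify.ModelConfig(
    model_name="fraud-detection-model",  # hypothetical deployed model
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)
shap_config = clarify.SHAPConfig(
    baseline=[[35, 50000, 1200.0]],  # hypothetical baseline record
    num_samples=100,
    agg_method="mean_abs",
)
processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```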

Trust in AI systems can also be improved by providing notices. For example, you can choose to notify end users that they’re interacting with a chatbot rather than a live human.

4. Develop AI operational excellence

It’s key for organizations to develop AI operational excellence strategies, which can include monitoring, auditing, logging, and reporting of AI systems to continually mitigate bias and improve trust and transparency. AI operational excellence can also include developing metrics and a test plan to measure system performance against production uses and ongoing tests against datasets that represent production data.
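
For example, a recurring evaluation job might score the deployed model against a held-out dataset that mirrors production traffic and report standard metrics, so regressions surface early. In this minimal sketch, the file name, column names, and decision threshold are hypothetical.

```python
# Minimal sketch: score the model against a held-out test set that mirrors
# production data and report standard metrics. File and column names and the
# 0.5 decision threshold are hypothetical.
import pandas as pd
from sklearn.metrics import classification_report, roc_auc_score

test = pd.read_csv("production_representative_test.csv")
y_true = test["is_fraud"]
y_score = test["model_score"]          # scores logged from the deployed model
y_pred = (y_score >= 0.5).astype(int)  # hypothetical decision threshold

print(classification_report(y_true, y_pred, target_names=["legitimate", "fraudulent"]))
print("AUC:", roc_auc_score(y_true, y_score))
```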

AWS offers several tools that can help with this, including the Machine Learning Lens for the AWS Well-Architected Framework, which provides guidance for developing operational excellence. Additionally, AWS provides Amazon SageMaker for MLOps to incorporate operational best practices.

Summary

AI and ML have had a positive impact on the public sector and many other industry verticals, including healthcare and education. However, opportunities remain to refine these technologies, such as improving confidence among consumers, earning public trust that AI will be used responsibly, mitigating potential bias, and improving transparency. This is especially true as generative AI continues to grow and evolve. At AWS, we are committed to building generative AI in a responsible way, taking an iterative approach across the AI lifecycle. You can learn more about the emerging challenges and risks of generative AI, and the steps being taken to mitigate and find new solutions in the blog post, “Responsible AI in the generative era.”

AWS can support you across the AI and ML lifecycle so you can efficiently and strategically serve your constituents and communities. To learn more, read the Responsible use of ML guide and find more in the Responsible use of AI and ML and Innovate with Machine Learning hubs.

Srinath Godavarthi

Srinath Godavarthi is a principal solutions architect at Amazon Web Services (AWS), based in the Washington, D.C. area. In that role, he helps public sector customers achieve their mission objectives with well-architected solutions on AWS. Prior to AWS, he worked with global systems integrators for over 20 years serving the Food and Drug Administration, the Department of Veterans Affairs, and the Centers for Medicare and Medicaid Services. He focuses on innovative healthcare solutions using artificial intelligence (AI) and machine learning (ML).

Ben Snively

Ben Snively is a senior principal solutions architect in data sciences at Amazon Web Services (AWS), where he specializes in building systems and solutions leveraging big data, analytics, machine learning, and deep learning. Ben has over 20 years of experience in the analytics and machine learning space and helps bridge the gap between technology and business initiatives. Ben holds both a Master of Science in Computer Science from Georgia Institute of Technology and a Master of Science in Computer Engineering from University of Central Florida.