Posted On: Dec 8, 2020
Today we are introducing Amazon SageMaker Clarify to help machine learning developers achieve greater visibility into their training data and models so they can identify and limit bias and explain predictions.
Biases are imbalances in the accuracy of predictions across different groups, such as age or income bracket. Biases can result from the data or the algorithm used to train your model. For instance, if an ML model is trained primarily on data from middle-aged individuals, it may be less accurate when making predictions involving younger and older people. Machine learning offers an opportunity to address bias by detecting and measuring it in your data and models. You can also look at the importance of model inputs to explain why models make the predictions they do.
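As an illustration of what measuring bias in data can look like, the sketch below computes one simple pre-training metric: the difference in positive label proportions between two groups. The column names, the age threshold, and the toy data are hypothetical; SageMaker Clarify computes this kind of metric, and many others, for you.

```python
# Hypothetical illustration of one pre-training bias metric:
# the difference in positive label proportions between two groups.
# Column names ("age", "loan_approved") and the age-40 split are made up.
import pandas as pd

df = pd.DataFrame({
    "age": [25, 52, 38, 61, 29, 47, 33, 58],
    "loan_approved": [0, 1, 0, 1, 0, 1, 0, 1],
})

facet = df["age"] >= 40                               # split the data on the sensitive attribute
p_facet = df.loc[facet, "loan_approved"].mean()       # positive label rate for age >= 40
p_other = df.loc[~facet, "loan_approved"].mean()      # positive label rate for age < 40

dpl = p_facet - p_other
print(f"Difference in positive label proportions: {dpl:.2f}")
```

A value near zero suggests the two groups receive positive labels at similar rates in the training data; a large absolute value flags an imbalance worth investigating before training.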
Amazon SageMaker Clarify detects potential bias during data preparation, after training, and in your deployed model by examining attributes you specify. For instance, you can check for bias related to age in your initial dataset or in your trained model and receive a detailed report that quantifies different types of possible bias. SageMaker Clarify also provides feature importance graphs that help you explain model predictions, and it produces reports that can support internal presentations or help you identify and correct issues with your model.
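In practice these checks are run as SageMaker processing jobs through the SageMaker Python SDK's sagemaker.clarify module. The sketch below shows a pre-training bias analysis on a CSV dataset; the S3 paths, IAM role, column names, and facet threshold are placeholders, and exact argument names may vary across SDK versions.

```python
# Hypothetical sketch of a pre-training bias check with SageMaker Clarify.
# The S3 paths, IAM role, and column names below are placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.c5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train/train.csv",  # placeholder path
    s3_output_path="s3://my-bucket/clarify-output/",      # placeholder path
    label="loan_approved",
    headers=["age", "income", "loan_approved"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],   # which label value counts as the positive outcome
    facet_name="age",                # attribute to check for bias
    facet_values_or_threshold=[40],  # split the facet at age 40
)

# Launches a processing job and writes a bias report to the output path.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)
```

The same processor exposes analogous entry points for post-training bias checks and for explainability reports based on feature importance, so the data-preparation, post-training, and deployed-model checks described above follow the same configuration pattern.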
Amazon SageMaker Clarify is available at no additional cost in all AWS Regions where Amazon SageMaker is offered. Visit the SageMaker Clarify product page or documentation to learn more.