Posted On: Feb 8, 2022
Amazon SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while allowing you to maintain full control and visibility. Starting today, SageMaker Autopilot provides new metrics and reports that give you better visibility into model performance for classification problems. You can use these metrics to gather more insights about the best model in the Model leaderboard.
The new metrics and reports include a confusion matrix, the area under the receiver operating characteristic curve (AUC-ROC), and the area under the precision-recall curve (AUC-PR). The confusion matrix helps you visualize model performance with respect to each class/label and understand false positives and false negatives; AUC-ROC represents the trade-off between the true positive and false positive rates; and AUC-PR represents the trade-off between precision and recall. These new metrics are available in a new “Performance” tab under “Model Details” for the best model candidate and can be downloaded as a PDF report. As before, additional scalar metrics such as F1, F1 macro, AUC, MSE, and Accuracy remain available for all model candidates in the leaderboard.
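For intuition about what these three report metrics measure, here is a minimal sketch computing them with scikit-learn on a toy binary classification problem. This is an illustration of the metrics themselves, not of Autopilot's internals; the labels and predicted probabilities are made up for the example.

```python
# Illustrative only: the three report metrics computed with scikit-learn
# on hypothetical labels and predicted positive-class probabilities.
from sklearn.metrics import (
    average_precision_score,
    confusion_matrix,
    roc_auc_score,
)

y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.3]
# Threshold the probabilities at 0.5 to get hard class predictions.
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

# Confusion matrix: rows are true classes, columns are predicted classes,
# so off-diagonal cells count false positives and false negatives.
cm = confusion_matrix(y_true, y_pred)

# AUC-ROC: summarizes the trade-off between the true positive rate
# and the false positive rate across all thresholds.
auc_roc = roc_auc_score(y_true, y_prob)

# AUC-PR (average precision): summarizes the trade-off between
# precision and recall across all thresholds.
auc_pr = average_precision_score(y_true, y_prob)

print(cm.tolist())           # → [[4, 0], [1, 3]]
print(auc_roc, auc_pr)       # → 0.9375 0.95
```

Unlike scalar metrics such as accuracy, AUC-ROC and AUC-PR are threshold-independent, which is why they are useful for assessing a candidate model before choosing an operating point.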
Starting today, these new model reports and insights for the best candidate are available in all AWS Regions where SageMaker Autopilot is offered. To learn more, see Autopilot Model Reports. To get started with SageMaker Autopilot, see the product page or access SageMaker Autopilot within SageMaker Studio.