Posted On: Sep 21, 2021

Amazon SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while allowing you to maintain full control and visibility. Starting today, SageMaker Autopilot generates additional metrics, along with the objective metric, for all model candidates. For binary classification problems, Autopilot now generates the F1 score (the harmonic mean of precision and recall), accuracy, and AUC (area under the curve) for all model candidates. For multi-class classification, Autopilot now generates both the F1 macro score and accuracy for all model candidates. As previously supported, you can select any of these metrics as the objective metric to be optimized by your Autopilot experiment. By viewing the additional metrics alongside the objective metric, you can quickly assess and compare multiple candidates to build the model that best meets your needs.
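To make the binary-classification metrics concrete, here is a minimal, plain-Python sketch (not AWS code) of how accuracy and the F1 score, the harmonic mean of precision and recall, are computed from predicted labels:

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy and F1 for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"Accuracy": accuracy, "F1": f1}

# Toy example: 4 of 6 predictions correct, precision = recall = 2/3.
print(binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))
```

Autopilot computes these (plus AUC) for every candidate automatically; this sketch is only to illustrate what each number means.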

The additional metrics are now generated in all AWS Regions where SageMaker Autopilot is currently supported. For the complete list of metrics and the default objective metric for each problem type, please review the documentation. To get started with SageMaker Autopilot, see the Getting Started guide or access Autopilot within SageMaker Studio.
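As a sketch of how you might retrieve and compare the per-candidate metrics programmatically, the following uses the boto3 SageMaker API's `list_candidates_for_auto_ml_job` call, whose response includes per-candidate metrics under `CandidateProperties`. The job name is hypothetical, and the exact response fields should be checked against the current API reference:

```python
def candidate_metrics(candidate):
    """Flatten a candidate's reported metrics into a {name: value} dict."""
    metrics = candidate.get("CandidateProperties", {}).get("CandidateMetrics", [])
    return {m["MetricName"]: m["Value"] for m in metrics}


def best_by(candidates, metric_name):
    """Pick the candidate with the highest value for the given metric."""
    return max(
        candidates,
        key=lambda c: candidate_metrics(c).get(metric_name, float("-inf")),
    )


if __name__ == "__main__":
    import boto3

    sm = boto3.client("sagemaker")
    # "my-autopilot-experiment" is a placeholder for a completed Autopilot job.
    resp = sm.list_candidates_for_auto_ml_job(AutoMLJobName="my-autopilot-experiment")
    best = best_by(resp["Candidates"], "F1")
    print(best["CandidateName"], candidate_metrics(best))
```

Because every candidate now carries the full metric set, you can rank candidates by a metric other than the one the experiment optimized, without re-running the job.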