Posted On: Jun 6, 2023

Amazon SageMaker Automatic Model Tuning is now able to automatically choose hyperparameter ranges, search strategy, maximum runtime of a tuning job, early stopping type for training jobs, number of times to retry a training job, and model convergence flag to stop a tuning job, based on the objective metric you provide. This minimizes the time required for you to kickstart your tuning process and increases the chances of finding more accurate models with a lower budget.

Choosing the right hyperparameters requires experience with machine learning techniques, and the choice can drastically affect your model's performance. Even with hyperparameter tuning, you still need to specify multiple tuning configurations, such as hyperparameter ranges, the search strategy, and the number of training jobs to launch. Getting these settings right is intricate and typically requires multiple experiments, which may incur additional training costs.

Starting today, Amazon SageMaker Automatic Model Tuning provides autotune, a new configuration that eliminates the need to specify settings such as hyperparameter ranges, the tuning strategy, or the number of training jobs, which were previously required as part of the job definition. This accelerates your experimentation process and reduces the resources wasted on evaluating suboptimal tuning configurations. You can also review and override any settings chosen automatically by autotune. The autotune option is available in the CreateHyperParameterTuningJob API and in the HyperparameterTuner class of the SageMaker Python SDK.
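As a minimal sketch of what this looks like at the API level, the helper below builds a CreateHyperParameterTuningJob request payload with autotune enabled via the top-level `Autotune` field; with autotune on, the usual `Strategy`, `ResourceLimits`, and `ParameterRanges` entries can be omitted from the tuning-job config. The job name, container image, IAM role ARN, S3 path, and objective metric name here are illustrative placeholders, not values from the announcement.

```python
def build_autotune_request(job_name, image_uri, role_arn, s3_output):
    """Sketch a CreateHyperParameterTuningJob payload with autotune enabled.

    With "Autotune": {"Mode": "Enabled"}, SageMaker chooses hyperparameter
    ranges, the search strategy, resource limits, early stopping, retries,
    and the convergence flag based on the objective metric you provide.
    """
    return {
        "HyperParameterTuningJobName": job_name,
        # Autotune stands in for the usual Strategy/ResourceLimits/ParameterRanges.
        "Autotune": {"Mode": "Enabled"},
        "HyperParameterTuningJobConfig": {
            "HyperParameterTuningJobObjective": {
                "Type": "Minimize",
                "MetricName": "validation:loss",  # placeholder metric
            },
        },
        "TrainingJobDefinition": {
            "AlgorithmSpecification": {
                "TrainingImage": image_uri,
                "TrainingInputMode": "File",
            },
            "RoleArn": role_arn,
            "OutputDataConfig": {"S3OutputPath": s3_output},
        },
    }

# The payload would then be submitted with boto3, e.g.:
#   boto3.client("sagemaker").create_hyper_parameter_tuning_job(**request)
```

In the SageMaker Python SDK, the equivalent is passing the autotune flag when constructing a `HyperparameterTuner`, so no `hyperparameter_ranges` argument is needed.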

The new functionality is now available for SageMaker Automatic Model Tuning in all commercial AWS Regions. To learn more, please visit the technical documentation, the API reference guide, the blog post, or the SageMaker Automatic Model Tuning web page.