Requires only a few lines of code to implement
Saves significant compute time and cost
Supports popular frameworks and bring-your-own algorithms
Amazon SageMaker Automatic Model Tuning (also known as hyperparameter tuning or hyperparameter optimization) finds the best version of your machine learning (ML) model by running multiple training jobs on your dataset using your specified algorithm and hyperparameter ranges. It then chooses the hyperparameter values that result in the best performing model, as determined by your chosen metric. You specify an ML model to tune, your objective metric, and the hyperparameters to search, and SageMaker Automatic Model Tuning finds a better version of the model in the most cost-effective way.
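A minimal sketch of this workflow with the SageMaker Python SDK is shown below, using the built-in XGBoost algorithm; the IAM role, S3 paths, metric name, ranges, and version strings are illustrative placeholders rather than recommendations.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Built-in XGBoost algorithm image for the session's region (version is illustrative)
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",  # placeholder bucket
    hyperparameters={"objective": "binary:logistic", "eval_metric": "auc", "num_round": 200},
    sagemaker_session=session,
)

# The hyperparameters to search and their ranges
hyperparameter_ranges = {
    "eta": ContinuousParameter(0.01, 0.3),
    "max_depth": IntegerParameter(3, 10),
    "min_child_weight": ContinuousParameter(1, 10),
}

# The tuner runs multiple training jobs and keeps the hyperparameter values
# that optimize the chosen objective metric
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    objective_type="Maximize",
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=20,
    max_parallel_jobs=4,
)

tuner.fit({
    "train": TrainingInput("s3://my-bucket/train/", content_type="text/csv"),
    "validation": TrainingInput("s3://my-bucket/validation/", content_type="text/csv"),
})
```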
How it works
SageMaker Automatic Model Tuning works out of the box with default configuration settings to simplify implementation by removing the need to provision hardware, install the right software, and download the training data. You save time and money by letting SageMaker select the right compute infrastructure and by using the built-in experiment management tools to manage hyperparameter tuning training jobs.
SageMaker Automatic Model Tuning can scale to run multiple tuning jobs in parallel, use
distributed clusters of compute instances, and support large volumes of data. It includes a failure-resistant workflow with built-in retry mechanisms for robustness.
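Continuing the sketch above, these scaling and resiliency knobs are ordinary constructor arguments; the instance counts, job limits, and retry setting below are illustrative assumptions, not recommendations.

```python
# Scaling and resiliency knobs (values are illustrative):
#  - instance_count distributes each training job across a cluster of instances
#  - max_jobs / max_parallel_jobs control the total number of training jobs
#    and how many of them run concurrently
#  - max_retry_attempts asks SageMaker to retry training jobs that fail with
#    internal errors, supporting the failure-resistant workflow
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=4,                  # distributed cluster per training job
    instance_type="ml.m5.4xlarge",
    max_retry_attempts=3,              # built-in retries for failed jobs
    output_path="s3://my-bucket/output",
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=100,                      # total training jobs in the tuning job
    max_parallel_jobs=10,              # training jobs run concurrently
)
```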
SageMaker Automatic Model Tuning can save compute time and cost with early stopping. This feature uses the
information from the previously evaluated configurations to predict whether a specific candidate is promising and, if it is not, stops the evaluation. Moreover, using SageMaker warm start, you can accelerate the hyperparameter tuning
process and reduce the cost for tuning models. You can start a new hyperparameter tuning job based on selected parent jobs so that training jobs conducted in those parent jobs can be reused as prior knowledge.
SageMaker’s warm start can help you run your tuning jobs iteratively and improve model accuracy
by seeding your tuning job with the hyperparameter evaluations from previous tuning tasks.
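Both features are exposed as tuner settings in the SageMaker Python SDK. The sketch below continues the earlier example; the parent tuning job name is a placeholder.

```python
from sagemaker.tuner import HyperparameterTuner, WarmStartConfig, WarmStartTypes

# Warm start: seed the new tuning job with the evaluations from earlier
# ("parent") tuning jobs so their results are reused as prior knowledge
warm_start_config = WarmStartConfig(
    warm_start_type=WarmStartTypes.IDENTICAL_DATA_AND_ALGORITHM,
    parents={"previous-tuning-job-name"},  # placeholder parent job name
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=20,
    max_parallel_jobs=4,
    early_stopping_type="Auto",  # stop training jobs unlikely to improve the best result
    warm_start_config=warm_start_config,
)
```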
State-of-the-art search algorithms
SageMaker Automatic Model Tuning offers an intelligent version of hyperparameter tuning based on Bayesian search theory and designed to find the best model in the shortest time. It starts with a random search but then learns how the model behaves with respect to hyperparameter values. In subsequent steps, SageMaker Automatic Model Tuning uses this knowledge to choose hyperparameter values that are likely to improve the objective metric. When choosing the hyperparameters for the next training job, it considers everything it has learned about the problem so far and exploits the best-known results.
SageMaker Automatic Model Tuning also supports Hyperband, a search strategy that can find the optimal set of hyperparameters up to 3x faster than Bayesian search for large-scale models such as deep neural networks that address computer vision problems. Hyperband is a multi-fidelity tuning strategy that uses both intermediate and final results of training jobs to dynamically reallocate resources to promising hyperparameter configurations and automatically stops underperforming training jobs.
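In the SageMaker Python SDK, the search strategy is selected with a single argument on the tuner. The sketch below reuses the estimator and ranges from the earlier example; the job counts are illustrative.

```python
from sagemaker.tuner import HyperparameterTuner

# Strategy selection: "Bayesian" is the default; "Hyperband" uses intermediate
# training results to re-allocate budget and stop underperforming jobs early
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges=hyperparameter_ranges,
    strategy="Hyperband",  # other options include "Bayesian", "Random", and "Grid"
    max_jobs=50,
    max_parallel_jobs=10,
)
```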
Support for ML frameworks and models
SageMaker Automatic Model Tuning works seamlessly with the SageMaker built-in algorithms,
including tree-based models such as XGBoost, neural network–based forecasting models such as DeepAR, and scikit-learn models, as well as with bring-your-own algorithms and deep learning neural network models.
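As a sketch of the bring-your-own-script case, a framework estimator (here PyTorch, with a hypothetical train.py) can be passed to the same tuner; the framework version, instance type, hyperparameter names, and log-metric regex are assumptions for illustration.

```python
from sagemaker.pytorch import PyTorch
from sagemaker.tuner import (
    CategoricalParameter,
    ContinuousParameter,
    HyperparameterTuner,
)

# A framework estimator running your own training script; the regex in
# metric_definitions tells the tuner how to read the objective metric
# from the script's training logs
estimator = PyTorch(
    entry_point="train.py",        # hypothetical training script
    role=role,
    framework_version="2.1",
    py_version="py310",
    instance_count=1,
    instance_type="ml.g5.xlarge",
)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="val_accuracy",
    metric_definitions=[{"Name": "val_accuracy",
                         "Regex": "val_accuracy: ([0-9\\.]+)"}],
    hyperparameter_ranges={
        "lr": ContinuousParameter(1e-5, 1e-2),
        "optimizer": CategoricalParameter(["adam", "sgd"]),
    },
    max_jobs=20,
    max_parallel_jobs=4,
)
```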
Built-in integration with ML workflows
To save further development time and effort, you can add a model tuning step in your SageMaker Pipelines workflow that will automatically invoke a hyperparameter tuning job as part of the model building workflow, without requiring custom integration code.
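A sketch of that integration with the SageMaker Python SDK, reusing the tuner defined earlier; the pipeline name, step name, and S3 paths are placeholders.

```python
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TuningStep

# The tuning step runs the hyperparameter tuning job as one stage of the pipeline
step_tune = TuningStep(
    name="TuneModel",
    tuner=tuner,  # the tuner configured earlier
    inputs={
        "train": TrainingInput("s3://my-bucket/train/", content_type="text/csv"),
        "validation": TrainingInput("s3://my-bucket/validation/", content_type="text/csv"),
    },
)

pipeline = Pipeline(name="model-build-pipeline", steps=[step_tune])
# pipeline.upsert(role_arn=role)
# pipeline.start()
```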
Built-in integration with SageMaker JumpStart and SageMaker Autopilot
SageMaker Automatic Model Tuning is integrated into SageMaker JumpStart, providing one-click fine-tuning and deployment of a wide variety of pretrained models across ML tasks, algorithms, and solutions for common business problems. It is also integrated into SageMaker Autopilot to find the best version of a model using hyperparameter optimization (HPO) mode. HPO mode selects the algorithms that are most relevant to your dataset and the best ranges of hyperparameters to tune your models. It then runs up to 100 trials to find the optimal hyperparameter settings within those ranges.
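For illustration, HPO mode can be requested when creating an Autopilot job through the low-level API; the sketch below uses boto3 with placeholder job name, S3 paths, target column, and role ARN, and assumes the Mode field of AutoMLJobConfig is set to HYPERPARAMETER_TUNING.

```python
import boto3

sm = boto3.client("sagemaker")

# Autopilot job created in HPO mode; job name, S3 paths, target column,
# and role ARN are placeholders
sm.create_auto_ml_job(
    AutoMLJobName="autopilot-hpo-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix",
                                        "S3Uri": "s3://my-bucket/autopilot/train/"}},
        "TargetAttributeName": "label",
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/autopilot/output/"},
    AutoMLJobConfig={"Mode": "HYPERPARAMETER_TUNING"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)
```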