AWS Machine Learning Blog
Tag: Amazon SageMaker Automatic Model Tuning
Reduce inference time for BERT models using neural architecture search and SageMaker Automated Model Tuning
In this post, we demonstrate how to use neural architecture search (NAS) based structural pruning to compress a fine-tuned BERT model, improving model performance and reducing inference time. Pre-trained language models (PLMs) are undergoing rapid commercial and enterprise adoption in the areas of productivity tools, customer service, search and recommendations, business process automation, and […]
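The post itself searches over BERT sub-networks with NAS; as a rough, self-contained stand-in, the sketch below uses the Hugging Face transformers prune_heads API to remove attention heads (chosen arbitrarily here, not by a search) and shows how structural pruning affects forward-pass latency.

```python
# A minimal illustration of structural pruning on a BERT model with the
# Hugging Face transformers API. The post describes NAS-based search over
# sub-networks; here we simply remove a fixed set of attention heads to show
# how structural pruning shrinks the network. Head indices are arbitrary.
import time
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("Structural pruning reduces inference time.", return_tensors="pt")

def latency(m, n=20):
    # Average forward-pass latency over n runs (single example, no gradients).
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(n):
            m(**inputs)
    return (time.perf_counter() - start) / n

before = latency(model)

# Remove 4 of the 12 attention heads in every encoder layer.
model.prune_heads({layer: [0, 1, 2, 3] for layer in range(model.config.num_hidden_layers)})

after = latency(model)
print(f"latency before: {before*1000:.1f} ms, after pruning: {after*1000:.1f} ms")
```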
Tune ML models for additional objectives like fairness with SageMaker Automatic Model Tuning
Model tuning is the experimental process of finding the parameters and configurations for a machine learning (ML) model that yield the best possible outcome on a validation dataset. Single-objective optimization against a performance metric is the most common approach for tuning ML models. However, in addition to predictive performance, there may […]
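Because Automatic Model Tuning optimizes a single metric, one common pattern is to fold an additional objective such as fairness into the score the training script emits and point the tuner's metric_definitions regex at it. The sketch below illustrates this under assumed names and an assumed weighting scheme; it is not necessarily the exact approach taken in the post.

```python
# A minimal sketch of combining accuracy and a fairness measure into one
# tuning objective. ALPHA and the metric label are assumptions for
# illustration only.
import numpy as np

ALPHA = 0.5  # assumed trade-off weight between accuracy and fairness

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = np.mean(y_pred[sensitive == 0])
    rate_b = np.mean(y_pred[sensitive == 1])
    return abs(rate_a - rate_b)

def combined_objective(y_true, y_pred, sensitive):
    accuracy = np.mean(y_true == y_pred)
    dpd = demographic_parity_difference(y_pred, sensitive)
    return accuracy - ALPHA * dpd  # higher is better: accurate and fair

# Inside the training script, print the score so Automatic Model Tuning can
# scrape it with a metric_definitions regex such as
#   {"Name": "combined_objective", "Regex": "combined_objective=([0-9\\.]+)"}
y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1])
sensitive = np.array([0, 0, 0, 1, 1, 1])
print(f"combined_objective={combined_objective(y_true, y_pred, sensitive):.4f}")
```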
Amazon SageMaker Automatic Model Tuning now supports three new completion criteria for hyperparameter optimization
Amazon SageMaker has announced support for three new completion criteria for Amazon SageMaker automatic model tuning, giving you an additional set of levers to control when a tuning job stops while searching for the best hyperparameter configuration for your model. In this post, we discuss these new completion criteria, when to use them, and […]
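For orientation, here is a sketch of where such completion criteria sit in a CreateHyperParameterTuningJob request. The field names follow the SageMaker API; the values are arbitrary examples rather than recommendations, and the same settings are also exposed through the SageMaker Python SDK's HyperparameterTuner.

```python
# A sketch of a HyperParameterTuningJobConfig that uses completion criteria
# to bound a tuning job. Values are illustrative only.
tuning_job_config = {
    "Strategy": "Bayesian",
    "HyperParameterTuningJobObjective": {
        "Type": "Maximize",
        "MetricName": "validation:accuracy",
    },
    "ResourceLimits": {
        "MaxNumberOfTrainingJobs": 50,
        "MaxParallelTrainingJobs": 5,
        # Stop the tuning job after a wall-clock budget.
        "MaxRuntimeInSeconds": 3600,
    },
    "TuningJobCompletionCriteria": {
        # Stop if the best objective has not improved for N training jobs.
        "BestObjectiveNotImproving": {"MaxNumberOfTrainingJobsNotImproving": 10},
        # Let SageMaker stop automatically once the search has converged.
        "ConvergenceDetected": {"CompleteOnConvergence": "Enabled"},
    },
    "ParameterRanges": {
        "ContinuousParameterRanges": [
            {"Name": "learning_rate", "MinValue": "1e-5", "MaxValue": "1e-2"}
        ]
    },
}

# This dict would be passed as HyperParameterTuningJobConfig to
# boto3.client("sagemaker").create_hyper_parameter_tuning_job(...), together
# with a tuning job name and a training job definition.
print(tuning_job_config)
```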