Posted On: Apr 4, 2019
Amazon SageMaker, a fully managed service to build, train, and deploy machine learning models, now supports random search as a tuning strategy and multiple hyperparameter scaling options when using Automatic Model Tuning.
Using random search with Automatic Model Tuning allows customers to achieve results faster: hyperparameter combinations are selected at random from the search space, so all tuning jobs can run concurrently, rather than sequentially as with the default Bayesian optimization strategy. While both strategies can produce highly accurate models, random search may not reach the same level of accuracy as the default. Customers should therefore choose random search when speed is more important than obtaining the highest possible accuracy.
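As a minimal sketch, the SageMaker Python SDK exposes the tuning strategy through the HyperparameterTuner class. The example below assumes an already-configured estimator (here called xgb_estimator), an objective metric named "validation:auc", and example S3 paths; only the strategy argument changes to enable random search.

```python
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

# Assumes `xgb_estimator` is an existing sagemaker.estimator.Estimator and that
# the training job emits an objective metric named "validation:auc".
tuner = HyperparameterTuner(
    estimator=xgb_estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.5),
        "max_depth": IntegerParameter(3, 10),
    },
    strategy="Random",     # random search instead of the default "Bayesian"
    max_jobs=20,           # total hyperparameter combinations to evaluate
    max_parallel_jobs=20,  # with random search, all jobs can run concurrently
)

# Example S3 locations; replace with your own training and validation data.
tuner.fit({
    "train": "s3://my-bucket/train",
    "validation": "s3://my-bucket/validation",
})
```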
Amazon SageMaker has also introduced log scaling and reverse log scaling as hyperparameter scaling options for Automatic Model Tuning. By default, SageMaker assumes hyperparameter values are uniformly distributed and uses linear scaling to select values from a search range. However, this is not always the most effective approach; a learning rate, for example, typically spans multiple orders of magnitude and is not uniformly distributed. Customers can either let SageMaker automatically determine the scaling method or select it manually for each hyperparameter to be tuned.
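In the SageMaker Python SDK, the scaling method is set per parameter range via the scaling_type argument. The sketch below uses an illustrative learning rate and dropout range; the parameter names and bounds are assumptions for the example.

```python
from sagemaker.tuner import ContinuousParameter

# Learning rate values typically span several orders of magnitude, so log
# scaling explores the range 0.0001-0.1 more evenly than linear scaling would.
learning_rate_range = ContinuousParameter(0.0001, 0.1, scaling_type="Logarithmic")

# "Auto" (the default) lets SageMaker determine the scaling method; "Linear"
# and "ReverseLogarithmic" are the other supported options.
dropout_range = ContinuousParameter(0.0, 0.99, scaling_type="Auto")
```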
Random search and automatic scaling of hyperparameters in Automatic Model Tuning are available today in all AWS regions where Amazon SageMaker is available. For more information, please visit the related blog post.