AWS Machine Learning Blog

PyTorch 1.0 preview now available in Amazon SageMaker and the AWS Deep Learning AMIs

Amazon SageMaker and the AWS Deep Learning AMIs (DLAMI) now provide an easy way to evaluate the PyTorch 1.0 preview release. PyTorch 1.0 adds seamless research-to-production capabilities while retaining the ease of use that has helped PyTorch gain popularity so rapidly. The AWS Deep Learning AMIs come pre-built with the PyTorch 1.0 preview, Anaconda, and common Python packages, along with CUDA and MKL libraries to take advantage of accelerated compute instances. Amazon SageMaker is an end-to-end platform for quickly and easily building, training, tuning, and deploying machine learning (ML) models at any scale. Amazon SageMaker now provides pre-configured environments with the PyTorch 1.0 preview that let customers use all SageMaker capabilities, including automatic model tuning, with PyTorch 1.0.
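After launching a DLAMI instance, you can confirm the preinstalled build from Python. A minimal sketch, assuming the DLAMI's PyTorch Conda environment (the environment name varies by AMI release):

```python
# Activate the DLAMI's PyTorch Conda environment first,
# e.g. `source activate pytorch_p36` (name may differ by AMI release).
import torch

# A 1.0 preview build reports a 1.0.0 development version string.
print("PyTorch version:", torch.__version__)

# On accelerated (GPU) instance types, CUDA should be available.
print("CUDA available:", torch.cuda.is_available())

# Sanity check: run a small tensor op on the GPU if present, else the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(torch.randn(2, 3, device=device).sum().item())
```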

PyTorch is an open source deep learning framework that is well suited for research and experimentation. However, one of the biggest challenges for developers has been taking models created with PyTorch and running them in large-scale production environments. While PyTorch has rapidly gained popularity among developers for its ease of use, imperative style, simple API, and flexibility, moving from model exploration to production has required additional, often iterative work, such as freezing graphs, that can be time-consuming. PyTorch 1.0 brings seamless research-to-production capabilities to the framework.
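The centerpiece of this transition in PyTorch 1.0 is the JIT compiler. As a minimal sketch (the model and input shapes here are illustrative), an eager-mode model can be traced into a serializable module without a manual graph-freezing step:

```python
import torch
import torch.nn as nn

# An ordinary eager-mode model, written and debugged imperatively.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = SmallNet()
model.eval()

# torch.jit.trace records the operations executed on an example input
# and produces a ScriptModule; no separate graph-freezing step is needed.
example = torch.rand(1, 10)
traced = torch.jit.trace(model, example)

# The saved artifact can be loaded for production inference
# (for example from C++ via libtorch) without the Python source.
traced.save("small_net.pt")
```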

With the pre-configured PyTorch environment in Amazon SageMaker, developers and data scientists can point a single API call at their training script to train locally or to submit a distributed training job. With a second API call, they can deploy PyTorch-trained models to a managed, highly available online endpoint that scales automatically with demand. Developers can evaluate the PyTorch 1.0 preview against PyTorch 0.4 by changing a single version parameter, just as easily as they can switch between TensorFlow, PyTorch, MXNet, and Chainer.

PyTorch developers can also take advantage of Amazon SageMaker features such as automatic model tuning, Amazon CloudWatch integration, Amazon VPC support, and more. This allows developers to connect securely to their data, prepare it, run local training with PyTorch in their notebook, scale out to the full dataset with distributed training in the cloud, use automatic model tuning to optimize the model, and validate the results. Once validated, the model can be deployed to a managed, highly available online endpoint in Amazon SageMaker, with automatic scaling, integrated monitoring and logging, and high throughput.
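Here is a minimal sketch of that two-call workflow using the SageMaker Python SDK; the script name, S3 path, instance types, and the preview framework_version string are assumptions to adapt to your own account:

```python
import sagemaker
from sagemaker.pytorch import PyTorch

# Assumes this runs on a SageMaker notebook instance with an attached role.
role = sagemaker.get_execution_role()

# One API call trains the script; changing framework_version (e.g. '0.4.0'
# vs. the 1.0 preview) is the single-parameter version switch.
estimator = PyTorch(
    entry_point="train.py",          # your training script (hypothetical name)
    role=role,
    framework_version="1.0.0.dev",   # assumed preview version string
    train_instance_count=2,          # >1 submits a distributed training job
    train_instance_type="ml.p3.2xlarge",
)
estimator.fit({"training": "s3://your-bucket/your-dataset"})  # hypothetical S3 path

# A second API call deploys the trained model to a managed,
# highly available endpoint that can scale with demand.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.xlarge")
```

Automatic model tuning follows the same pattern: wrap the estimator in the SDK's HyperparameterTuner with hyperparameter ranges and an objective metric, then call its fit() method.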

Next steps:

  • Learn more about the vision and the road forward for PyTorch 1.0.
  • Attend the PyTorch sessions at AWS re:Invent.
  • Check out the getting-started blog for PyTorch on Amazon SageMaker here.
  • Go to pytorch.org to learn more about the project, explore the tutorials, and see what's new.
  • Spin up an instance with PyTorch 1.0 using the DLAMI here.

Kumar Venkateswar is a Product Manager in the AWS ML Platforms team, which includes Amazon SageMaker, Amazon Machine Learning, and the AWS Deep Learning AMIs. When not working, Kumar plays the violin and Magic: The Gathering.