AWS Deep Learning Containers with Amazon Elastic Inference for PyTorch 1.3.1


Release Date: August 28, 2020
Created On: August 31, 2020
Last Updated: August 31, 2020


The AWS Deep Learning Containers for Elastic Inference are updated today for PyTorch 1.3.1. You can launch the new version of these Deep Learning Containers on Amazon SageMaker, self-managed Kubernetes on Amazon EC2, and Amazon Elastic Container Service (Amazon ECS). For a complete list of packages and versions supported by these Deep Learning Containers, see the release notes below.

The AWS Deep Learning Containers with Amazon Elastic Inference (EI) with PyTorch allow you to run inference calls on PyTorch 1.3.1 on Elastic Inference Accelerators. Amazon EI allows you to attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances and Amazon ECS tasks to reduce the cost of running deep learning inference by up to 75%. These Docker images have been tested with Amazon SageMaker, EC2, and ECS. All software components in these images are scanned for security vulnerabilities and updated or patched in accordance with AWS Security best practices.
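For reference, running an inference call against an attached accelerator in these containers uses the torch-eia build's extension of torch.jit.optimized_execution, which accepts an extra target-device argument that exists only inside the container. A minimal sketch, assuming a TorchScript model saved as model.pt (a placeholder path) and an image-sized input:

```python
import torch

# Load a TorchScript model on CPU; the Elastic Inference accelerator is
# engaged at execution time, not load time ('model.pt' is a placeholder).
model = torch.jit.load('model.pt', map_location=torch.device('cpu'))
example = torch.zeros(1, 3, 224, 224)

# In the torch-eia build, torch.jit.optimized_execution takes an extra
# dict that routes execution to the first attached EI accelerator.
# This call signature is specific to the container's torch-eia package.
with torch.jit.optimized_execution(True, {'target_device': 'eia:0'}):
    with torch.no_grad():
        output = model(example)
```

This sketch only runs inside the Elastic Inference container; stock PyTorch 1.3.1 does not accept the second argument to optimized_execution.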

More details can be found on AWS Marketplace, and a list of available containers can be found in our documentation. Get started quickly with the AWS Deep Learning Containers using the getting-started guides and the beginner-to-advanced tutorials in our developer guide. You can also subscribe to our discussion forum for launch announcements and to post questions.
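As a sketch of pulling and running one of these images locally: the registry account below (763104351884) is the usual Deep Learning Containers ECR account, but the repository name and tag are assumptions and should be verified against the list of available containers in the documentation referenced above.

```shell
# Authenticate to the Deep Learning Containers ECR registry
# (region, repository name, and tag below are assumptions).
$(aws ecr get-login --no-include-email --region us-east-1 --registry-ids 763104351884)

# Pull the PyTorch 1.3.1 Elastic Inference image and start it, exposing
# the multi-model-server inference port.
docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference-eia:1.3.1-cpu-py3
docker run -it -p 8080:8080 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference-eia:1.3.1-cpu-py3
```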

Release Notes

Security Advisory

Highlights of the Release

  1. Updated multi-model-server (renamed from mxnet-model-server)
  2. Upgraded ECL to v1.7.0
  3. Upgraded sagemaker-inference to 1.5.2 and sagemaker-pytorch-inference to 1.5.1.post1
  4. Installed Emacs.
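The sagemaker-pytorch-inference package upgraded above serves models through the SageMaker inference toolkit's documented handler hooks (model_fn, input_fn, predict_fn, output_fn). A minimal sketch of that interface, using a stand-in callable instead of a real torch.jit model so the example stays self-contained:

```python
import json

def model_fn(model_dir):
    # The real container would load a TorchScript model from model_dir;
    # a stand-in callable keeps this sketch self-contained.
    return lambda batch: [sum(row) for row in batch]

def input_fn(request_body, content_type="application/json"):
    # Deserialize the request payload into model input.
    if content_type == "application/json":
        return json.loads(request_body)
    raise ValueError("Unsupported content type: " + content_type)

def predict_fn(data, model):
    # Run the deserialized input through the loaded model.
    return model(data)

def output_fn(prediction, accept="application/json"):
    # Serialize the prediction for the response.
    return json.dumps(prediction)
```

In the container these hooks are discovered by the toolkit and invoked in order for each request; overriding any of them in your inference script customizes that stage.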

Pre-packaged deep learning framework components

  • PyTorch: PyTorch is a deep learning framework that operates on dynamic computation graphs. It enables users to imperatively specify deep neural networks using idiomatic Python code.
    • Branch/tag used: v1.3.1
    • Justification: Stable and well tested

Bill of Materials: List of all software components in the Containers

  • PyTorch EIA Inference Container
    • sagemaker-inference==1.5.2
    • sagemaker-pytorch-inference==1.5.1.post1
    • torch==1.3.1
    • torch-eia==1.3.1
    • torchvision==0.4.2+cpu
    • PyYAML==5.3.1
    • Pillow==7.2.0
    • pandas==0.25.0
    • multi-model-server==1.1.2
    • numpy==1.19.1
    • awscli==1.18.125
    • ECL v1.7.0
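If you need to reproduce the Python environment outside the container, the pip-installable components above can be captured in a requirements file; torch-eia and ECL ship only inside the image and are omitted here.

```
# requirements.txt pinning the pip-installable packages from the bill of
# materials above (torch-eia and ECL are installed separately in the image).
sagemaker-inference==1.5.2
sagemaker-pytorch-inference==1.5.1.post1
torch==1.3.1
torchvision==0.4.2+cpu
PyYAML==5.3.1
Pillow==7.2.0
pandas==0.25.0
multi-model-server==1.1.2
numpy==1.19.1
awscli==1.18.125
```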

Python Support

Python 3.6 is supported in the PyTorch Elastic Inference containers.

CPU Instance Type Support

The containers support CPU instance types.

AWS Regions support

Available in the following regions:

Region                   Code
US East (Ohio)           us-east-2
US East (N. Virginia)    us-east-1
US West (Oregon)         us-west-2
Asia Pacific (Seoul)     ap-northeast-2
Asia Pacific (Tokyo)     ap-northeast-1
EU (Ireland)             eu-west-1

Build and Test

  • Built on: c5.18xlarge
  • Tested on: c5.xlarge, g4dn.xlarge, m5.large, t2.2xlarge, c5.18xlarge
  • Tested on: eia1 and eia2 accelerator types
  • Tested on EC2, ECS AMI (Amazon Linux AMI 2.0.20190614), and Amazon SageMaker.

Known Issues

  • No known issues.