AWS Deep Learning AMIs Developer Features

Overview

AWS Deep Learning AMIs (DLAMI) provides tools to accelerate deep learning in the cloud. The AMIs are preconfigured with popular frameworks, including TensorFlow, PyTorch, and Apache MXNet.

Supported Deep Learning AMIs

Frameworks:
PyTorch
TensorFlow

Operating systems:
Ubuntu Linux
Amazon Linux 2

Instances:
NVIDIA GPUs
AWS Trainium
AWS Inferentia

Platforms:
Amazon EC2
Amazon ECS
Amazon EKS
AWS Graviton

For details on the supported AMIs, review the release notes. For containerized AI/ML workloads, see AWS Deep Learning Containers.

Accelerate your model training

To expedite your development and model training, DLAMI includes the latest NVIDIA GPU acceleration through preconfigured NVIDIA drivers and the CUDA and cuDNN libraries, as well as the Intel Math Kernel Library (MKL), popular Python packages, and the Anaconda platform.

GPU instances - NVIDIA

P3 instances provide up to 14 times better performance than previous-generation Amazon EC2 GPU compute instances. With up to 8 NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance.
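The aggregate figures above follow from per-GPU throughput scaled across the 8 GPUs. A quick sanity check of the arithmetic, assuming the per-GPU peak numbers from NVIDIA's published Tesla V100 spec sheet:

```python
# Per-GPU NVIDIA Tesla V100 peak throughput in TFLOPS (assumed from
# NVIDIA's published specifications), scaled to the 8 GPUs in the
# largest P3 instance size.
V100_TFLOPS = {
    "mixed precision (Tensor Cores)": 125.0,
    "single precision (FP32)": 15.7,
    "double precision (FP64)": 7.8,
}
NUM_GPUS = 8

for precision, per_gpu in V100_TFLOPS.items():
    total = per_gpu * NUM_GPUS
    print(f"{precision}: {total:.1f} TFLOPS aggregate")

# 8 x 125.0 = 1000 TFLOPS = one petaflop of mixed-precision performance;
# 8 x 15.7 ~= 125 TFLOPS single precision; 8 x 7.8 ~= 62 TFLOPS double.
```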

Powerful compute - Intel

C5 instances are powered by 3.0 GHz Intel Xeon Scalable processors and allow a single core to run at up to 3.5 GHz with Intel Turbo Boost Technology. C5 instances offer a higher memory-to-vCPU ratio, deliver a 25% improvement in price/performance compared to C4 instances, and are ideal for demanding inference applications.
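As a worked example of what a 25% price/performance improvement means in practice (the throughput and job-size numbers below are hypothetical, chosen only to make the arithmetic concrete):

```python
# Illustrative arithmetic only: a 25% price/performance gain means each
# dollar buys 1.25x the work, so the same job costs 1/1.25 = 0.8x as much.
c4_work_per_dollar = 100.0                       # hypothetical work units per $
c5_work_per_dollar = c4_work_per_dollar * 1.25   # 25% better price/performance

job_work_units = 1000.0                          # hypothetical job size

c4_cost = job_work_units / c4_work_per_dollar    # $10.00
c5_cost = job_work_units / c5_work_per_dollar    # $8.00
print(f"C4 cost: ${c4_cost:.2f}, C5 cost: ${c5_cost:.2f}")
# The same job on C5 costs 20% less in this illustration.
```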

Python packages

DLAMI comes with Jupyter Notebook preinstalled, loaded with Python 2.7 and Python 3.5 kernels and popular Python packages, including the AWS SDK for Python (Boto3).
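Because the AWS SDK for Python ships preinstalled, a notebook cell can call AWS services directly. A minimal sketch, assuming credentials come from the instance's IAM role; the helper function and the S3 bucket-listing task are illustrative, not part of DLAMI itself:

```python
# Minimal sketch: calling AWS from a DLAMI notebook with the preinstalled
# AWS SDK for Python (boto3). The helper name is illustrative; credentials
# are assumed to come from the EC2 instance's IAM role.
def list_bucket_names(s3_client=None):
    """Return the names of all S3 buckets visible to the caller."""
    if s3_client is None:
        import boto3  # preinstalled on DLAMI
        s3_client = boto3.client("s3")
    response = s3_client.list_buckets()
    return [bucket["Name"] for bucket in response.get("Buckets", [])]
```

Passing `s3_client` explicitly also makes the helper easy to exercise with a stub client in tests, without touching real AWS resources.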

Anaconda platform

To simplify package management and deployment, DLAMI installs the Anaconda2 and Anaconda3 data science platforms for large-scale data processing, predictive analytics, and scientific computing.