The AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras to train sophisticated, custom AI models, experiment with new algorithms, or learn new skills and techniques.
Whether you need Amazon EC2 GPU or CPU instances, there is no additional charge for the Deep Learning AMIs – you only pay for the AWS resources needed to store and run your applications.
Choosing an AWS Deep Learning AMI
Even for experienced machine learning practitioners, getting started with deep learning can be time consuming and cumbersome. We offer three types of AMIs to support the varied needs of developers. To help you choose and get started, also visit the AMI selection guide and other deep learning resources.
For developers who want pre-installed pip packages of deep learning frameworks in separate virtual environments, the Deep Learning Conda-based AMI is available in Ubuntu, Amazon Linux, and Windows 2016 versions.
For developers who want a clean slate to set up private deep learning engine repositories or custom builds of deep learning engines, the Deep Learning Base AMI is available in Ubuntu and Amazon Linux versions.
AMI with Source Code
For developers who want pre-installed deep learning frameworks and their source code in a shared Python environment, this Deep Learning AMI is available in CUDA 9 versions for P3 instances and CUDA 8 versions for P2 instances, on both Ubuntu and Amazon Linux.
Get started with this step-by-step guide.
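Once you have picked an AMI type, you can locate and launch it from the AWS CLI. The sketch below is illustrative only; the name filter, AMI ID, instance type, and key pair name are placeholders to replace with values from the AMI selection guide.

```shell
# Illustrative name filter for a Deep Learning AMI on Ubuntu; adjust it
# to match the AMI type you chose (Conda-based, Base, or with Source Code).
name_filter='Deep Learning AMI (Ubuntu)*'

# Find the most recently published matching AMI in the current region.
aws ec2 describe-images \
    --owners amazon \
    --filters "Name=name,Values=${name_filter}" \
    --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
    --output text

# Launch a GPU instance from the AMI ID returned above
# (the AMI ID and key pair name below are placeholders).
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type p3.2xlarge \
    --key-name my-key-pair
```

The `--query` expression sorts the matching images by creation date so the last element is the newest build; you can also browse the same AMIs in the EC2 console.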
Support for deep learning frameworks
The AWS Deep Learning AMIs support all the popular deep learning frameworks, allowing you to define models and then train them at scale. Built for Amazon Linux and Ubuntu, the AMIs come pre-configured with Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras, enabling you to quickly deploy and run any of these frameworks at scale.
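On the Conda-based AMI, each framework lives in its own Conda environment, so switching frameworks is a matter of activating the right environment. A minimal sketch, assuming environment names follow the AMI's framework-plus-Python-version convention (confirm the exact names on your instance with `conda env list`):

```shell
# Show the pre-built environments, one per framework and Python version.
conda env list

# Activate a framework environment; the name below assumes the
# <framework>_p<python version> convention, e.g. TensorFlow on Python 3.6.
env_name='tensorflow_p36'
source activate "$env_name"

# Quick smoke test that the framework imports from this environment.
python -c 'import tensorflow as tf; print(tf.__version__)'

# Deactivate before switching to another framework, e.g. MXNet.
source deactivate
source activate mxnet_p36
```

Because the environments are isolated, each framework keeps its own pinned dependencies, so experimenting with one framework does not disturb another.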
Accelerate your model training
To expedite your development and model training, the AWS Deep Learning AMIs include the latest NVIDIA GPU acceleration through pre-configured CUDA and cuDNN drivers, along with the Intel Math Kernel Library (MKL), popular Python packages, and the Anaconda Platform.
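Once an instance is running, the bundled NVIDIA tooling can confirm which driver and CUDA release the AMI shipped with. A quick check, run on the instance itself (so it is not reproducible off-instance):

```shell
# Report the NVIDIA driver version and the GPUs visible to the instance.
nvidia-smi

# Report the CUDA toolkit release (CUDA 9 on the P3 AMIs, CUDA 8 on P2).
nvcc --version
```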
The AWS Deep Learning AMIs also run on Amazon EC2 Intel-based C5 instances designed for inference, and come integrated with the Intel Math Kernel Library (MKL) to accelerate math processing and neural network routines.
The AMIs come with Jupyter notebooks pre-installed, loaded with Python 2.7 and Python 3.5 kernels, along with popular Python packages, including the AWS SDK for Python.
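A common way to reach the pre-installed notebooks is an SSH tunnel from your workstation; the sketch below is one such setup, where the key file, user name, and hostname are placeholders for your own instance details.

```shell
# On your workstation: forward local port 8888 to the instance
# (key path, user, and hostname below are placeholders).
ssh -i my-key.pem -L 8888:localhost:8888 ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com

# On the instance: start the notebook server without opening a browser,
# then browse to http://localhost:8888 from your workstation.
jupyter notebook --no-browser --port 8888
```

Tunneling keeps the notebook port off the public internet; the default user is `ubuntu` on Ubuntu AMIs and `ec2-user` on Amazon Linux AMIs.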
To simplify package management and deployment, the AWS Deep Learning AMIs install the Anaconda2 and Anaconda3 Data Science Platforms for large-scale data processing, predictive analytics, and scientific computing.
Amazon SageMaker for machine learning
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes the barriers that typically slow down developers who want to use machine learning.