The AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 instances pre-installed with popular deep learning frameworks such as Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras to train sophisticated, custom AI models, experiment with new algorithms, or learn new skills and techniques.
Whether you need Amazon EC2 GPU or CPU instances, there is no additional charge for the Deep Learning AMIs – you only pay for the AWS resources needed to store and run your applications.
85% of TensorFlow projects in the cloud run on AWS.
Choosing an AWS Deep Learning AMI
Even for experienced machine learning practitioners, getting started with deep learning can be time consuming and cumbersome. The AMIs we offer support a range of developer needs. To help you get started, visit the AMI selection guide and our other deep learning resources.
Support for deep learning frameworks
The AWS Deep Learning AMIs support all the popular deep learning frameworks, allowing you to define models and then train them at scale. Built for Amazon Linux and Ubuntu, the AMIs come pre-configured with Apache MXNet and Gluon, TensorFlow, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, PyTorch, Chainer, and Keras, enabling you to quickly deploy and run any of these frameworks at scale.
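As a quick sanity check after launching an instance, you can verify which frameworks are importable from the Python environment you are using. This is a minimal sketch, not AMI-specific tooling; the module names below are the common import names for the listed frameworks and may differ by AMI version:

```python
# Sketch: report which deep learning frameworks the current Python
# environment can import. Uses only the standard library.
import importlib.util

# Common top-level module names for the frameworks listed above
# (assumed names; e.g. Microsoft Cognitive Toolkit imports as "cntk").
FRAMEWORKS = ["mxnet", "tensorflow", "cntk", "caffe", "caffe2",
              "theano", "torch", "chainer", "keras"]

def available_frameworks(names):
    """Map each module name to whether it can be imported here."""
    return {name: importlib.util.find_spec(name) is not None
            for name in names}

print(available_frameworks(FRAMEWORKS))
```

Running this inside each of the AMI's Python environments shows at a glance which frameworks that environment provides.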
Accelerate your model training
To expedite your development and model training, the AWS Deep Learning AMIs include the latest NVIDIA GPU acceleration through pre-configured NVIDIA drivers, the CUDA toolkit, and the cuDNN library, as well as the Intel Math Kernel Library (MKL), and come with popular Python packages and the Anaconda platform installed.
P3 instances provide up to 14 times better performance than previous-generation Amazon EC2 GPU compute instances. With up to 8 NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance.
C5 instances are powered by 3.0 GHz Intel Xeon Scalable processors and allow a single core to run at up to 3.5 GHz using Intel Turbo Boost Technology. C5 instances offer a higher memory-to-vCPU ratio, deliver a 25% improvement in price/performance over C4 instances, and are ideal for demanding inference applications.
The AWS Deep Learning AMIs come with Jupyter notebooks pre-installed, offering both Python 2.7 and Python 3.5 kernels, along with popular Python packages, including the AWS SDK for Python.
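From a notebook on the AMI, the AWS SDK for Python (Boto3) lets you interact with other AWS services, for example to stage training data in Amazon S3. The sketch below is illustrative only: the bucket name, key layout, and helper names are assumptions, not part of the AMI, and the upload call requires AWS credentials:

```python
# Sketch: build a deterministic S3 object key for a dataset file and,
# optionally, upload it with the AWS SDK for Python (boto3).
import io

def dataset_key(prefix, name, version):
    """Return a key like 'experiments/mnist/v3/mnist.csv' (layout is a
    hypothetical convention chosen for this example)."""
    return "{0}/{1}/v{2}/{1}.csv".format(prefix, name, version)

def upload_dataset(bucket, key, payload):
    """Upload raw bytes to S3. Imported lazily so dataset_key remains
    usable without boto3 or AWS credentials."""
    import boto3
    s3 = boto3.client("s3")
    s3.upload_fileobj(io.BytesIO(payload), bucket, key)

key = dataset_key("experiments", "mnist", 3)
# upload_dataset("my-example-bucket", key, b"...")  # placeholder bucket
```

The key-building helper is pure Python, so you can test your data layout locally before pointing the upload at a real bucket.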
To simplify package management and deployment, the AWS Deep Learning AMIs come with the Anaconda2 and Anaconda3 data science platforms installed, supporting large-scale data processing, predictive analytics, and scientific computing.
Amazon SageMaker for machine learning
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes the barriers that typically slow down developers who want to use machine learning.