The AWS Deep Learning AMIs contain pre-configured deep learning engines built for Amazon Linux and Ubuntu, available on the AWS Marketplace, enabling you to quickly deploy and run any of the major deep learning frameworks at any scale. The Deep Learning AMIs contain all of the pre-built packages, libraries, and frameworks you need to start building AI systems using deep learning. The Deep Learning AMIs install dependencies, track library versions, and validate code compatibility for you.

The AMIs are provided and supported by Amazon Web Services, for use on Amazon EC2. There is no additional charge for the Deep Learning AMIs – you only pay for the AWS resources needed to store and run your applications.



The Deep Learning AMIs install key libraries that are prebuilt and preconfigured with all dependencies included.

Python

The Deep Learning AMIs install Jupyter notebooks with Python 2.7 and Python 3.4 kernels, the AWS CLI (awscli), matplotlib, scikit-image, cpplint, pylint, the Python Data Analysis Library (pandas), graphviz, the AWS SDK for Python (boto and boto3), and the bokeh and seaborn Python packages, as well as the Anaconda2 and Anaconda3 Data Science platforms.
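As a quick sanity check that these packages work together out of the box, the minimal sketch below exercises boto3 and pandas from a notebook or script on the AMI; it assumes the instance has IAM credentials that allow listing your S3 buckets.

  # Minimal sketch: exercise a few of the preinstalled packages (boto3 and pandas).
  # Assumes the instance role or configured credentials allow s3:ListAllMyBuckets.
  import boto3
  import pandas as pd

  s3 = boto3.client("s3")
  buckets = s3.list_buckets().get("Buckets", [])
  df = pd.DataFrame(buckets)   # columns: Name, CreationDate
  print(df.sort_values("CreationDate").head() if not df.empty else "No buckets found")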
 

Intel Math Kernel Library (MKL)

The Intel Math Kernel Library (MKL) is included for Apache MXNet only.

NVIDIA

NVIDIA CUDA and the NVIDIA CUDA Deep Neural Network library (cuDNN) are included, with support for all of the frameworks.
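To confirm that the frameworks can see the CUDA stack, a minimal sketch like the following allocates an array on the first GPU with Apache MXNet; it assumes a GPU-backed instance type such as a P2 instance.

  # Minimal sketch: verify that a framework can reach the GPU via CUDA.
  # Assumes a GPU-backed instance (e.g., p2.xlarge); on a CPU-only instance this raises an error.
  import mxnet as mx

  gpu = mx.gpu(0)                        # first CUDA device
  a = mx.nd.ones((1024, 1024), ctx=gpu)  # allocate directly on the GPU
  b = mx.nd.dot(a, a)                    # matrix multiply executes on the GPU through CUDA
  print(b.context, b[0, 0].asscalar())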

 


The Deep Learning AMIs include popular deep learning frameworks: Apache MXNet, Caffe, Caffe2, TensorFlow, Theano, CNTK, Torch, and Keras. Tutorials are included for each framework, each with a single-command bash script that runs MNIST training out of the box to demonstrate proper installation, configuration, and model accuracy. The scripts are found in the following directories:

Ubuntu Linux: /home/ubuntu/src/bin

Amazon Linux: /home/ec2-user/src/bin

Apache MXNet

Apache MXNet is a flexible, efficient, portable, and scalable open source library for deep learning. It supports both declarative and imperative programming models across a wide variety of programming languages, making it powerful yet simple to use for coding deep learning applications. MXNet is efficient, inherently supporting automatic parallel scheduling of the portions of source code that can be parallelized across a distributed environment. MXNet is also portable, using memory optimizations that allow it to run on everything from mobile phones to full servers.
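The minimal sketch below illustrates the two programming models side by side, assuming the MXNet Python package preinstalled on the AMI: the imperative NDArray API executes operations immediately, while the declarative Symbol API builds a graph that is bound and run afterwards.

  # Minimal sketch of MXNet's two programming models (imperative NDArray vs. declarative Symbol).
  import mxnet as mx

  # Imperative style: operations execute as they are written.
  x = mx.nd.array([[1.0, 2.0], [3.0, 4.0]])
  y = x * 2 + 1
  print(y.asnumpy())

  # Declarative style: build a symbolic graph first, then bind and execute it.
  a = mx.sym.Variable("a")
  b = a * 2 + 1
  executor = b.bind(ctx=mx.cpu(), args={"a": x})
  print(executor.forward()[0].asnumpy())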

TensorFlow

TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API.
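A minimal sketch of that graph model is shown below, using TensorFlow's graph-and-session Python API; the constants are arbitrary example values.

  # Minimal sketch of a TensorFlow data flow graph: nodes are operations, edges carry tensors.
  import tensorflow as tf

  a = tf.constant([[1.0, 2.0]])    # 1x2 tensor
  b = tf.constant([[3.0], [4.0]])  # 2x1 tensor
  c = tf.matmul(a, b)              # matmul node; nothing runs until a session executes it

  with tf.Session() as sess:       # the session places and runs the graph on CPU or GPU
      print(sess.run(c))           # [[11.]]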

Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors.
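Caffe models are typically defined in prototxt files and driven from Python through the pycaffe bindings; in the minimal sketch below, deploy.prototxt and weights.caffemodel are hypothetical placeholders for your own model definition and trained weights.

  # Minimal sketch of running inference with pycaffe.
  # deploy.prototxt and weights.caffemodel are placeholders for your own model files.
  import numpy as np
  import caffe

  caffe.set_mode_gpu()  # or caffe.set_mode_cpu() on a CPU-only instance
  net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)

  # Feed a batch shaped to the network's input blob and run a forward pass.
  input_name = net.inputs[0]
  net.blobs[input_name].data[...] = np.random.rand(*net.blobs[input_name].data.shape)
  output = net.forward()
  print({name: blob.shape for name, blob in output.items()})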

 

Caffe2

Caffe2 is a new lightweight, modular, and scalable deep learning framework. It aims to provide an easy and straightforward way for you to experiment with deep learning and leverage community contributions of new models and algorithms. You can bring your creations to scale using the power of GPUs in the cloud or bring them to the masses on mobile with Caffe2's cross-platform libraries.
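A minimal sketch of Caffe2's Python API is shown below, assuming the caffe2 Python package installed on the AMI; it feeds a blob into the workspace, runs a single ReLU operator, and fetches the result.

  # Minimal sketch of Caffe2's workspace and net APIs.
  import numpy as np
  from caffe2.python import workspace, model_helper

  workspace.FeedBlob("x", np.random.rand(4, 8).astype(np.float32))
  m = model_helper.ModelHelper(name="tiny_net")
  m.net.Relu("x", "y")                    # single ReLU operator: reads blob "x", writes blob "y"
  workspace.RunNetOnce(m.param_init_net)  # no parameters here, but included for completeness
  workspace.RunNetOnce(m.net)
  print(workspace.FetchBlob("y").shape)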

Keras

Keras is a high-level neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. 
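A minimal sketch of a Keras model is shown below; the layer sizes are arbitrary example values, and the same code runs unchanged on either backend.

  # Minimal sketch of a small Keras classifier that runs on the TensorFlow or Theano backend.
  from keras.models import Sequential
  from keras.layers import Dense

  model = Sequential()
  model.add(Dense(64, activation="relu", input_dim=784))  # hidden layer over flattened 28x28 inputs
  model.add(Dense(10, activation="softmax"))              # 10-class output (e.g., MNIST digits)
  model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
  model.summary()
  # model.fit(x_train, y_train, batch_size=32)            # train once you have data loaded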

Theano

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. 
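The minimal sketch below shows that define, compile, and evaluate workflow; the row-wise softmax expression is just an arbitrary example.

  # Minimal sketch of defining, compiling, and evaluating a symbolic expression with Theano.
  import numpy as np
  import theano
  import theano.tensor as T

  x = T.dmatrix("x")                                  # symbolic 2-D array of float64
  y = T.exp(x) / T.exp(x).sum(axis=1, keepdims=True)  # a row-wise softmax, written symbolically
  softmax = theano.function([x], y)                   # compile the expression into an optimized callable

  print(softmax(np.array([[1.0, 2.0, 3.0]])))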

Torch

Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.