AWS Machine Learning Blog

AWS Deep Learning AMIs now with optimized Chainer 4 and CNTK 2.5.1 to accelerate deep learning on Amazon EC2 instances

The AWS Deep Learning AMIs for Ubuntu and Amazon Linux now come with Chainer 4 and Microsoft Cognitive Toolkit (CNTK) 2.5.1, both configured with optimizations for high-performance execution on Amazon EC2 instances. The AMIs are also available in five additional AWS Regions, expanding coverage to 16 AWS Regions.

Accelerate deep learning with Chainer 4

The AMIs come with Chainer 4 configured with Intel’s Deep Learning Extension Package (iDeep), which accelerates deep learning operations such as convolution and rectified linear unit (ReLU) routines on the Intel architecture powering Amazon EC2 compute-optimized C instances.
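For reference, the ReLU routine that iDeep accelerates simply clamps negative values to zero. A minimal NumPy illustration of the operation itself (not the iDeep implementation):

```python
import numpy as np

# relu(x) = max(x, 0): negative entries are clamped to zero,
# positive entries pass through unchanged.
x = np.array([[-1.0, 2.0],
              [0.5, -3.0]], dtype=np.float32)
y = np.maximum(x, 0)
print(y)
```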

For example, developers can write code such as the following, which automatically uses the optimized iDeep routine on CPU-only EC2 instances.

Step 1: Activate the Chainer virtual environment

For Python 2

source activate chainer_p27

For Python 3

source activate chainer_p36

Step 2: Execute code that uses the ReLU routine

import chainer as ch
import numpy as np
x = np.ones((3, 3), dtype='f')
y = ch.functions.relu(x)

The last line automatically uses the optimized ReLU routine.

Step 3: Verify that the iDeep-optimized routine is used

print(type(y.data))

This prints <class 'ideep4py.mdarray'> instead of <class 'numpy.ndarray'>.

The AMIs also come with Chainer 4 fully configured with CuPy, NVIDIA CUDA 9, and cuDNN 7 to take advantage of mixed-precision training on the NVIDIA Volta V100 GPUs powering Amazon EC2 P3 instances. Chainer 4 provides improved support for the Tensor Cores in Volta GPUs, which accelerate low-precision computations.
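Mixed-precision training stores weights and activations in FP16 while accumulating matrix products in FP32, which is the numeric pattern Tensor Cores implement in hardware. A minimal NumPy sketch of that pattern (illustrative only, not the Chainer or CuPy API):

```python
import numpy as np

# Mixed precision: FP16 storage halves memory traffic...
a = np.ones((4, 8), dtype=np.float16)
b = np.ones((8, 4), dtype=np.float16)

# ...while accumulating the product in FP32 preserves accuracy,
# the same combination Tensor Cores perform in hardware.
c = a.astype(np.float32) @ b.astype(np.float32)
print(c.dtype)  # float32
```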

Accelerate deep learning with Microsoft Cognitive Toolkit 2.5.1

The AMIs now deploy the CNTK 2.5.1 CPU-only build, configured with the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN), to optimize neural network routines on Amazon EC2 compute-optimized instances. The AMIs also deploy the CNTK 2.5.1 GPU build with NVIDIA CUDA 9 and cuDNN 7 support for accelerated training on Amazon EC2 P3 instances.

Seamless deployment of optimized deep learning frameworks

The Deep Learning AMIs automatically deploy the high-performance build of each deep learning framework, optimized for the EC2 instance of your choice, when you activate the framework’s virtual environment for the first time. For example, use the following commands to activate the CNTK virtual environment:

For Python 2

source activate cntk_p27

For Python 3

source activate cntk_p36

This is similar to how the AMIs deploy the optimized build of TensorFlow for each Amazon EC2 instance family.

When you use SSH to connect to your EC2 instance running the Deep Learning AMI, the welcome screen shows the full list of commands for activating the virtual environment of any framework of your choice.

Adding five AWS Regions

The Deep Learning AMIs are now available in five additional AWS Regions: US West (N. California), South America (São Paulo), Canada (Central), EU (London), and EU (Paris). The AMIs are now available in 16 AWS Regions globally.

Getting started with the Deep Learning AMIs

It’s fast and simple to get started with the AWS Deep Learning AMIs. Our latest AMIs are now available on the AWS Marketplace. You can also subscribe to our discussion forum to get new launch announcements and post your questions.


About the Author

Sumit Thakur is a Senior Product Manager for AWS Deep Learning. He works on products that make it easy for customers to get started with deep learning in the cloud, with a specific focus on making it easy to use engines on the Deep Learning AMIs. In his spare time, he likes connecting with nature and watching sci-fi TV series.