Posted On: Apr 26, 2018
The AWS Deep Learning AMIs now include advanced optimizations for Chainer 4 and Microsoft Cognitive Toolkit (CNTK) 2.5.1, tailored to deliver higher-performance training across Amazon EC2 instance types.
For GPU-based training, the AMIs come with Chainer 4 fully configured with CuPy, NVIDIA CUDA 9, and cuDNN 7 to take advantage of mixed-precision training on the NVIDIA Volta V100 GPUs that power Amazon EC2 P3 instances. Chainer 4 also improves support for the Tensor Cores in Volta GPUs used for low-precision computations. The AMIs also deploy the CNTK 2.5.1 GPU build with NVIDIA CUDA 9 and cuDNN 7 support for accelerated training on Amazon EC2 P3 instances.
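As an illustration, here is a minimal sketch (not part of the announcement; the shapes and values are made up) of a mixed-precision operation with the Chainer 4 GPU build: a float16 convolution runs through cuDNN, which can dispatch to the V100 Tensor Cores when Chainer 4's use_cudnn_tensor_core option is enabled.

    # Sketch: float16 convolution on a P3 instance with the Chainer 4 GPU build.
    import numpy as np
    import chainer
    import chainer.functions as F
    from chainer.backends import cuda

    # float16 activations and weights on the GPU (shapes are illustrative).
    x = cuda.to_gpu(np.random.rand(32, 64, 56, 56).astype(np.float16))
    W = cuda.to_gpu((np.random.randn(128, 64, 3, 3) * 0.01).astype(np.float16))

    # With cuDNN enabled, float16 convolutions can use the Volta Tensor Cores.
    with chainer.using_config('use_cudnn', 'auto'), \
         chainer.using_config('use_cudnn_tensor_core', 'auto'):
        y = F.convolution_2d(x, W, pad=1)

    print(y.shape, y.dtype)  # (32, 128, 56, 56) float16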
For CPU-based training, the AMIs come with Chainer 4 configured with Intel’s Deep Learning Extension Package (iDeep), which accelerates deep learning operations such as convolution on the Intel architecture powering Amazon EC2 compute-optimized C5 and C4 instances. The AMIs now also deploy the CNTK 2.5.1 CPU-only build fully configured with the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) to optimize neural network routines.
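Similarly, a minimal sketch (again not part of the announcement) of the iDeep path on a CPU instance: Chainer 4 exposes it through the use_ideep configuration option and the to_intel64() link method.

    # Sketch: running a convolution through iDeep / MKL-DNN on a C5 or C4 instance.
    import numpy as np
    import chainer
    import chainer.links as L

    model = L.Convolution2D(3, 16, ksize=3, pad=1)
    model.to_intel64()  # convert parameters to iDeep's optimized memory layout

    x = np.random.rand(8, 3, 32, 32).astype(np.float32)
    with chainer.using_config('use_ideep', 'auto'):
        y = model(x)    # the convolution is executed by iDeep / MKL-DNN
    print(y.shape)      # (8, 16, 32, 32)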
The Deep Learning AMIs automatically deploy these higher-performance framework builds, optimized for the EC2 instance of your choice, when you activate the framework’s virtual environment for the first time. This is similar to the way the AMIs deploy an optimized build of TensorFlow for each Amazon EC2 instance family.
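Once the Chainer environment is active, a short Python check like the one below (a sketch, not an official AWS script; the printed labels are illustrative) shows which accelerated backend the AMI deployed for your instance type.

    # Sketch: report which accelerated Chainer backend is present in this
    # environment (GPU/CuPy with cuDNN on P3 instances, iDeep/MKL-DNN on C5/C4).
    import chainer
    from chainer.backends import cuda, intel64

    print('Chainer', chainer.__version__)
    if cuda.available:
        print('GPU build: CuPy with cuDNN' if cuda.cudnn_enabled
              else 'GPU build: CuPy')
    elif intel64.is_ideep_available():
        print('CPU build accelerated with iDeep / MKL-DNN')
    else:
        print('Plain CPU build')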
Get started with the AWS Deep Learning AMIs using the developer guide. You can also subscribe to our discussion forum to get launch announcements and to post your questions.