Posted On: Apr 10, 2018
The AWS Deep Learning AMIs for Ubuntu and Amazon Linux now come with advanced optimizations for TensorFlow 1.7 tailored to deliver higher-performance training across Amazon EC2 C5 and P3 instances.
For CPU-based training scenarios, the AMIs now include TensorFlow 1.7 built with Intel's Advanced Vector Extensions (AVX), SSE, and FMA instruction sets to accelerate vector and floating-point computations. The AMIs are also fully configured with Intel MKL-DNN to accelerate the math routines used in neural network training on the Intel Xeon Platinum processors powering Amazon EC2 C5 instances. Using our optimized build on a c5.18xlarge instance, training a ResNet-50 benchmark with a synthetic ImageNet dataset was 9.8x faster than training with the stock TensorFlow 1.7 binaries.
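To get the most out of the MKL-DNN build, you typically tune the Intel OpenMP and TensorFlow threading settings for your instance size. The sketch below shows the usual knobs; the specific values are assumptions sized for the 36 physical cores of a c5.18xlarge, not AWS-published defaults, so adjust them for your own model.

```python
import os
import tensorflow as tf

# Intel OpenMP settings commonly used with MKL-DNN builds of TensorFlow.
# The values below are illustrative assumptions; tune them for your workload.
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["OMP_NUM_THREADS"] = "36"  # roughly one thread per physical core on c5.18xlarge

# TensorFlow 1.x session-level threading configuration.
config = tf.ConfigProto(
    intra_op_parallelism_threads=36,  # parallelism within a single op
    inter_op_parallelism_threads=2,   # number of ops scheduled in parallel
)

with tf.Session(config=config) as sess:
    # Build and run your training graph here.
    pass
```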
In addition, to improve performance for GPU-based training scenarios, the AMIs include an optimized build of TensorFlow 1.7 fully configured with NVIDIA CUDA 9 and cuDNN 7 to take advantage of mixed-precision training on the Volta V100 GPUs powering Amazon EC2 P3 instances.
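TensorFlow 1.7 does not automate mixed precision, so the common manual pattern is sketched below: store variables in float32, compute in float16, and scale the loss so small gradients survive the reduced precision. The toy model, learning rate, and static loss scale of 128 are assumptions for illustration, not the configuration AWS benchmarked.

```python
import tensorflow as tf

def float32_variable_getter(getter, name, shape=None, dtype=None, trainable=True, **kwargs):
    """Store trainable variables in float32 and cast them to float16 for compute."""
    storage_dtype = tf.float32 if trainable else dtype
    var = getter(name, shape, dtype=storage_dtype, trainable=trainable, **kwargs)
    if trainable and dtype == tf.float16:
        var = tf.cast(var, tf.float16)
    return var

loss_scale = 128.0  # static loss scale; an assumed value, tune per model

images = tf.placeholder(tf.float16, [None, 224, 224, 3])
labels = tf.placeholder(tf.int64, [None])

with tf.variable_scope("model", custom_getter=float32_variable_getter):
    # Toy classifier standing in for a real network such as ResNet-50.
    logits = tf.layers.dense(tf.layers.flatten(images), 1000)

# Compute the loss in float32 for numerical stability.
loss = tf.losses.sparse_softmax_cross_entropy(labels, tf.cast(logits, tf.float32))

optimizer = tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9)
# Scale the loss before backprop, then unscale the gradients before the update.
grads_and_vars = optimizer.compute_gradients(loss * loss_scale)
grads_and_vars = [(g / loss_scale, v) for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(grads_and_vars)
```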
Finally, this release includes TensorBoard 1.7 to help you visualize and debug model training, and TensorFlow Serving 1.6 to quickly prototype an inference endpoint for your trained models. The AMIs also include Microsoft Cognitive Toolkit 2.5 with performance improvements and bug fixes.
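As a starting point for TensorFlow Serving, the sketch below exports a trivial graph as a SavedModel in the versioned directory layout the server expects. The export path, tensor names, and toy model are placeholders; substitute your own trained graph.

```python
import tensorflow as tf

# Placeholder export path; TensorFlow Serving expects a numeric version subdirectory.
export_dir = "/tmp/resnet_export/1"

# Toy graph standing in for your trained model.
images = tf.placeholder(tf.float32, [None, 224, 224, 3], name="images")
logits = tf.layers.dense(tf.layers.flatten(images), 1000, name="logits")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={"images": images}, outputs={"logits": logits})
    builder.add_meta_graph_and_variables(
        sess,
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature,
        })
    builder.save()
```

You can then point the tensorflow_model_server binary included with the AMI at the export, for example with --model_name=resnet and --model_base_path=/tmp/resnet_export, and query the endpoint over gRPC.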
Get started with the AWS Deep Learning AMIs using the developer guide. You can also subscribe to our discussion forum to get launch announcements and post your questions.