Posted On: Nov 1, 2017
Apache MXNet version 0.12 is now available with two important new features: support for NVIDIA Volta GPUs and support for sparse tensors.
Support for NVIDIA Volta GPU Architecture
The MXNet v0.12 release adds support for NVIDIA Volta V100 GPUs, enabling customers to train convolutional neural networks up to 3.5 times faster than on Pascal GPUs. The Volta architecture introduces Tensor Cores, which enable mixed-precision training. With Tensor Core mixed precision, users can achieve optimal training performance without sacrificing accuracy by using FP16 for most layers of a network and higher-precision data types only where necessary. You can take advantage of Volta Tensor Cores and enable FP16 training in MXNet by passing a simple command.
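The mixed-precision recipe described above can be sketched in a few lines. This is an illustrative NumPy example of the general idea (not the MXNet API): master weights stay in FP32, the bulk of the math runs in FP16, and updates are accumulated back into the FP32 copy. All array names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Master copy of the weights kept in FP32 for numerical safety.
weights_fp32 = rng.standard_normal((256, 256)).astype(np.float32)
inputs_fp16 = rng.standard_normal((32, 256)).astype(np.float16)

# Forward pass in FP16: half the memory footprint of FP32 per element.
activations = inputs_fp16 @ weights_fp32.astype(np.float16)
assert activations.dtype == np.float16
assert activations.itemsize == 2  # 2 bytes per FP16 element vs. 4 for FP32

# Updates are applied to the FP32 master copy so that small gradient
# values are not lost to FP16's limited precision.
grad_fp16 = np.ones_like(weights_fp32, dtype=np.float16)
weights_fp32 -= 0.01 * grad_fp16.astype(np.float32)
```

On Volta, the FP16 matrix multiplications in a pattern like this are the operations Tensor Cores accelerate.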
Recently we announced a new set of AWS Deep Learning AMIs, which come pre-installed with various deep learning frameworks, including MXNet v0.12 optimized for the NVIDIA Volta V100 GPUs in the Amazon EC2 P3 instance family. You can start with just one click from the AWS Marketplace or follow this step-by-step guide to get started with your first notebook.
Sparse Tensor Support
MXNet v0.12 adds support for sparse tensors, which store and process tensors with mostly zero-valued entries in a storage- and compute-efficient manner, allowing developers to perform sparse matrix operations and train deep learning models faster. This release supports two major sparse data formats: Compressed Sparse Row (CSR) and Row Sparse (RSP). The CSR format is optimized to represent matrices with a large number of columns, where each row has only a few non-zero elements. The RSP format is optimized to represent matrices with a huge number of rows, where most row slices are entirely zero. This release enables sparse support on the CPU for the most commonly used operators, such as matrix dot product and element-wise operators. Sparse support for more operators will be added in future releases.
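To make the two formats concrete, here is a small illustrative sketch in plain NumPy (not the MXNet sparse API): CSR stores per-row non-zeros with column indices and row pointers, while RSP stores only the non-zero row slices plus their row indices. The matrix and variable names are made up for the example.

```python
import numpy as np

dense = np.array([
    [0, 0, 3, 0],
    [0, 0, 0, 0],
    [7, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=np.float32)

# --- CSR: non-zero values + their column indices + row pointers ---
data, indices, indptr = [], [], [0]
for row in dense:
    nz = np.nonzero(row)[0]
    data.extend(row[nz])
    indices.extend(nz)
    indptr.append(len(data))
# Row i's non-zeros live in data[indptr[i]:indptr[i+1]].

# --- RSP: only the rows that contain any non-zero, plus row indices ---
row_idx = np.nonzero(dense.any(axis=1))[0]  # indices of non-zero rows
rsp_data = dense[row_idx]                   # the stored row slices

# Both formats round-trip back to the original dense matrix.
csr_back = np.zeros_like(dense)
for i in range(dense.shape[0]):
    for k in range(indptr[i], indptr[i + 1]):
        csr_back[i, indices[k]] = data[k]

rsp_back = np.zeros_like(dense)
rsp_back[row_idx] = rsp_data

assert np.array_equal(csr_back, dense)
assert np.array_equal(rsp_back, dense)
```

Note the trade-off the release description implies: CSR pays per-element index overhead, so it wins when individual rows are sparse; RSP stores whole rows densely, so it wins when most rows are entirely zero.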
Follow these tutorials to learn how to use the new sparse operators in MXNet.