AWS Machine Learning Blog

Apache MXNet Release Candidate Introduces Support for Apple’s Core ML and Keras v1.2

Apache MXNet is an effort undergoing incubation at the Apache Software Foundation (ASF). Last week, the MXNet community introduced a release candidate for MXNet v0.11.0, its first as an incubating project, and the community is now voting on whether to accept this candidate as a release. It includes the following major feature enhancements:

  • A Core ML model converter that allows you to train deep learning models with MXNet and then deploy them easily to Apple devices
  • Support for Keras v1.2 that enables you to use the Keras interface with MXNet as the runtime backend when building deep learning models

The v0.11.0 release candidate also includes additional feature updates, performance enhancements, and fixes as outlined in the release notes.

Run MXNet models on Apple devices using Core ML (developer preview)

This release includes a tool that you can use to convert MXNet deep learning models to Apple’s Core ML format. Core ML is a framework that application developers can use to deploy machine learning models on Apple devices with a minimal memory footprint and low power consumption. It works with the Swift programming language and is available in the Xcode integrated development environment (IDE), allowing developers to interact with machine learning models as they would with any other Swift class.

With this conversion tool, you now have a fast pipeline for your deep-learning-enabled applications: move from scalable, efficient distributed model training in the cloud using MXNet to fast runtime inference on Apple devices. This developer preview of the Core ML model converter includes support for computer vision models. For more details about the converter, see the incubator-mxnet GitHub repo, and the sample invocation below.
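
As an illustration, the converter ships as a command-line script in the tools/coreml directory of the repo. The following sketch converts a pretrained SqueezeNet image classifier into a Core ML model file; the script name, flags, and file names here follow the converter’s documentation for this preview and should be treated as illustrative:

# Convert a pretrained MXNet model (symbol and params files saved with the
# prefix 'squeezenet_v1.1') into a Core ML .mlmodel file
python mxnet_coreml_converter.py --model-prefix='squeezenet_v1.1' \
    --epoch=0 \
    --input-shape='{"data":"3,227,227"}' \
    --mode=classifier \
    --class-labels classLabels.txt \
    --output-file='squeezenetv11.mlmodel'

The resulting .mlmodel file can then be added to an Xcode project and used like any other Core ML model.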

Multi-GPU performance for Keras v1.2

This release also adds support for Keras v1.2, a popular high-level Python library for developing deep learning models. Keras provides an easy-to-use interface with high-level building blocks for modeling neural networks. Developers can configure Keras to use other frameworks, such as TensorFlow, Theano, and now MXNet, as the runtime backend that performs the underlying computations and model training.
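
For reference, Keras selects its backend through the ~/.keras/keras.json configuration file (or the KERAS_BACKEND environment variable). A minimal configuration pointing Keras v1.2 at MXNet might look like the following; the image_dim_ordering value is an assumption based on MXNet’s channels-first convention:

{
    "backend": "mxnet",
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32"
}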

With MXNet as a backend for Keras, developers can achieve high-performance scaling across multiple GPUs. Previously, training Keras models at scale on more than one GPU was inefficient; Keras users can now get near-linear scaling when training across multiple GPUs. The following code snippet shows how to set the number of GPUs in Keras when using MXNet as the backend:

# Prepare the list of GPUs to be used in training
NUM_GPU = 16  # or the number of GPUs available on your machine
gpu_list = ['gpu(%d)' % i for i in range(NUM_GPU)]

# Compile the model, setting the context to the list of GPUs to be used in training
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'],
              context=gpu_list)
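
To see the context argument in a complete, if minimal, program, here is an end-to-end sketch. The toy data, model, and optimizer are invented for illustration, and it assumes the Keras v1.2 fork with MXNet support is installed and configured as the backend:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Toy data: 1,000 samples, 100 features, 10 one-hot encoded classes
x_train = np.random.random((1000, 100))
y_train = np.eye(10)[np.random.randint(10, size=1000)]

# A small two-layer classifier
model = Sequential([
    Dense(64, activation='relu', input_dim=100),
    Dense(10, activation='softmax'),
])

opt = SGD(lr=0.01)

# Keras-MXNet accepts a list of MXNet device strings as the compile-time context
gpu_list = ['gpu(%d)' % i for i in range(4)]
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'],
              context=gpu_list)

# Keras v1.x uses nb_epoch rather than the later epochs argument
model.fit(x_train, y_train, nb_epoch=5, batch_size=64)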

Now, it is possible to take advantage of the Keras interface and also achieve high performance across multiple GPUs. NVIDIA has conducted extensive research on the performance benchmarks of Keras with MXNet as the backend. You can also learn more about using MXNet as the backend for Keras by visiting this GitHub repo.

Accessing the release candidate

You can access the release candidate by building MXNet from source or by installing it with pip, using the following command:

pip install mxnet==0.11.0.rc2
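
After installation, a quick way to confirm the version you are running (this should report 0.11.0 for the release candidate):

python -c "import mxnet; print(mxnet.__version__)"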

If you have questions or suggestions, please comment below.

Apache MXNet is an effort undergoing incubation at the Apache Software Foundation (ASF). For more information, visit http://incubator.apache.org/projects/mxnet.html.