Posted On: Dec 4, 2017
MXNet is now easier to use. The new model serving capability for MXNet packages, runs, and serves deep learning models in seconds with just a few lines of code, making them accessible over the internet via an API endpoint and therefore easy to integrate into applications. Learn more about the model server and view the source code, reference examples, and tutorials.
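As a rough sketch of the workflow, the model server installs from pip and serves a packaged model over HTTP. The model archive URL, model name, port, and endpoint path below are illustrative assumptions, not taken from this post; check the model server's documentation for the exact invocation.

```shell
# Install the model server for MXNet (package name assumed).
pip install mxnet-model-server

# Start serving a model; --models takes name=location pairs.
# The squeezenet archive URL here is a placeholder example.
mxnet-model-server --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model

# Query the resulting API endpoint with an image
# (endpoint path and port are assumptions).
curl -X POST http://127.0.0.1:8080/squeezenet/predict -F "data=@kitten.jpg"
```

The key point of the announcement is the last step: once the server is up, any application that can issue an HTTP request can consume the model's predictions.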
The 1.0 release includes an advanced indexing capability that lets users perform matrix operations in a more intuitive manner. This release also includes cutting-edge features such as gradient compression, which enables developers to train models up to five times faster by reducing the communication bandwidth between compute nodes, without loss in convergence rate or accuracy. There's also a new tool for converting neural network code written with the Caffe framework to MXNet code, making it easier for developers to take advantage of MXNet's scalability and performance.
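To give a feel for what "more intuitive matrix operations" means, the advanced indexing added in MXNet 1.0 follows NumPy-style semantics. The sketch below uses NumPy itself to illustrate those semantics; the assumption (worth verifying against the MXNet 1.0 release notes) is that the same integer-array indexing and slice assignment now work on MXNet NDArrays.

```python
import numpy as np

# A 3x3 matrix: [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
x = np.arange(9).reshape(3, 3)

# Integer-array indexing: pick out rows 0 and 2 in one expression.
rows = x[[0, 2]]

# Element-wise fancy indexing: pairs of (row, col) select the diagonal.
diag = x[[0, 1, 2], [0, 1, 2]]

# Slice assignment: zero out the lower-right 2x2 sub-matrix in place.
x[1:3, 1:3] = 0
```

Before this release, operations like these typically required explicit gather/slice operator calls rather than plain bracket syntax.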
Getting started with MXNet is simple. To learn more about the new Gluon interface for MXNet deep learning, you can reference this comprehensive set of tutorials, which covers everything from an introduction to deep learning to how to implement cutting-edge neural network models.