AWS Machine Learning Blog
Tag: Apache MXNet
The importance of hyperparameter tuning for scaling deep learning training to multiple GPUs
Parallel processing with multiple GPUs is an important step in scaling training of deep models. In each training iteration, a small subset of the dataset, called a mini-batch, is typically processed. When a single GPU is available, processing of the mini-batch in each training iteration is handled by that GPU. When training with multiple GPUs, […]
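For illustration, here is a minimal Gluon sketch (not taken from the post) of splitting a mini-batch across multiple GPUs; the network, the two-device context list, and the learning-rate scaling heuristic are assumptions made for the example.

```python
# Illustrative sketch: data-parallel training across multiple GPUs in Gluon.
import mxnet as mx
from mxnet import gluon, autograd

ctx = [mx.gpu(0), mx.gpu(1)]  # assumed: two GPUs are available
net = gluon.model_zoo.vision.resnet18_v1(classes=10)
net.initialize(mx.init.Xavier(), ctx=ctx)
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
# A common heuristic (an assumption here): scale the learning rate with the number of GPUs,
# because the effective batch size grows with the number of devices.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1 * len(ctx)})

def train_step(data, label):
    # Split the mini-batch so each GPU processes its own slice in parallel.
    data_parts = gluon.utils.split_and_load(data, ctx)
    label_parts = gluon.utils.split_and_load(label, ctx)
    with autograd.record():
        losses = [loss_fn(net(x), y) for x, y in zip(data_parts, label_parts)]
    for l in losses:
        l.backward()
    trainer.step(batch_size=data.shape[0])
```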
Apache MXNet (incubating) adds support for Keras 2
The Keras-MXNet deep learning backend is available now, thanks to contributors to the Keras and Apache MXNet (incubating) open source projects. Keras is a high-level neural network API written in Python. It’s popular for its fast and easy prototyping of CNNs and RNNs. Keras developers can now use the high-performance MXNet deep learning engine for […]
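As a hedged illustration of what this enables, the sketch below assumes the keras-mxnet package is installed and selects the MXNet backend through the KERAS_BACKEND environment variable (the backend can also be set in ~/.keras/keras.json); the small model is just an example.

```python
# Select the MXNet backend before importing Keras.
import os
os.environ['KERAS_BACKEND'] = 'mxnet'

from keras.models import Sequential
from keras.layers import Dense

# A toy model; with the MXNet backend, compile/fit/predict run on the MXNet engine.
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(100,)))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
```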
Apache MXNet Model Server adds optimized container images for Model Serving at scale
Today AWS released Apache MXNet Model Server (MMS) v0.3, which streamlines the deployment of model serving for production use cases. The release includes pre-built container images that are optimized for deep learning workloads on GPU and CPU. This enables engineers to set up a scalable serving infrastructure. To learn more about Apache MXNet Model Server […]
Model Server for Apache MXNet introduces ONNX support and Amazon CloudWatch integration
Today AWS released version 0.2 of Model Server for Apache MXNet (MMS), an open-source library that packages and serves deep learning models for making predictions at scale. Now you can serve models in Open Neural Network Exchange (ONNX) format and publish operational metrics directly to Amazon CloudWatch, where you can create dashboards and alarms. What […]
Speeding up Apache MXNet using the NNPACK library
Apache MXNet is an open source library developers can use to build, train, and re-use deep learning networks. In this blog post, I’ll show you how to speed up inference by using the NNPACK library. When GPU inference is not available, adding NNPACK to Apache MXNet might be a simple option to extract more performance […]
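As a rough illustration of how such a CPU speedup might be measured, the sketch below times inference with the MXNet Module API; the checkpoint name and input shape are assumptions, and this is not the post's actual benchmark.

```python
# Time CPU inference, e.g. to compare an MXNet build with and without NNPACK.
import time
import mxnet as mx
import numpy as np

# Assumption: a pre-trained model saved as 'model-symbol.json' / 'model-0000.params'.
sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=True)

batch = mx.io.DataBatch([mx.nd.array(np.random.rand(1, 3, 224, 224))])
start = time.time()
for _ in range(100):
    mod.forward(batch)
    mod.get_outputs()[0].wait_to_read()
print('average latency: %.2f ms' % ((time.time() - start) * 10))
```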
Updated AWS Deep Learning AMIs: New Versions of TensorFlow, Apache MXNet, Keras, and PyTorch
We’re excited to update the AWS Deep Learning AMIs with significantly faster training on NVIDIA Tesla V100 “Volta” GPUs across many frameworks, including TensorFlow, PyTorch, Keras, and the latest Apache MXNet 1.0 release. There are two main flavors of the AMIs available today. The Conda-based AWS Deep Learning AMI packages the latest point releases of […]
Introducing Model Server for Apache MXNet
Earlier this week, AWS announced the availability of Model Server for Apache MXNet, an open source component built on top of Apache MXNet for serving deep learning models. Apache MXNet is a fast and scalable training and inference framework with an easy-to-use, concise API for machine learning. With Model Server for Apache MXNet, engineers are […]
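As a client-side sketch of what serving looks like once a model server is running locally, the snippet below posts an image for prediction; the host, port, endpoint path, and model name are illustrative assumptions rather than the exact MMS API, so consult the MMS documentation for the real endpoints.

```python
# Illustrative client call against a locally running model server (assumed endpoint).
import requests

with open('kitten.jpg', 'rb') as f:
    resp = requests.post('http://127.0.0.1:8080/squeezenet/predict',  # assumed URL
                         files={'data': f})
print(resp.json())
```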
AWS Contributes to Milestone 1.0 Release of Apache MXNet Including the Addition of a New Model Serving Capability
Today AWS announced contributions to the milestone 1.0 release of the Apache MXNet deep learning engine and the introduction of a new model serving capability for MXNet. These new capabilities (1) simplify training and deploying deep learning models, (2) enable implementation of cutting-edge performance enhancements, and (3) provide easy interoperability between deep learning frameworks. In […]
Distributed Inference Using Apache MXNet and Apache Spark on Amazon EMR
In this blog post we demonstrate how to run distributed offline inference on large datasets using Apache MXNet (incubating) and Apache Spark on Amazon EMR. We explain how offline inference is useful, why it is challenging, and how you can leverage MXNet and Spark on Amazon EMR to overcome these challenges. Distributed inference on large […]
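As an illustrative sketch of the pattern (not the post's code), the PySpark snippet below loads an MXNet model once per partition on each executor and runs offline inference over that partition; the checkpoint name, input shape, and input RDD are assumptions.

```python
# Distributed offline inference: one model instance per partition on each executor.
import mxnet as mx
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mxnet-batch-inference").getOrCreate()

def predict_partition(rows):
    # Load the model inside the partition so it is instantiated on the executor.
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)  # assumed checkpoint
    mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
    mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
    mod.set_params(arg_params, aux_params, allow_missing=True)
    for row in rows:
        arr = mx.nd.array(np.asarray(row, dtype='float32').reshape(1, 3, 224, 224))
        mod.forward(mx.io.DataBatch([arr]))
        yield mod.get_outputs()[0].asnumpy().argmax()

# Assuming 'images' is an RDD of flattened image arrays:
# predictions = images.mapPartitions(predict_partition).collect()
```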
Run Deep Learning Frameworks with GPU Instance Types on Amazon EMR
Today, AWS is excited to announce support for Apache MXNet and new generation GPU instance types on Amazon EMR, which enables you to run distributed deep neural networks alongside your machine learning workflows and big data processing. Additionally, you can install and run custom deep learning libraries on your EMR clusters with GPU hardware. Through […]