AWS Neuron

SDK to optimize machine learning inference on AWS Inferentia chips

AWS Neuron is a software development kit (SDK) for running machine learning inference using AWS Inferentia chips. It consists of a compiler, a runtime, and profiling tools that enable developers to run high-performance, low-latency inference on AWS Inferentia-based Amazon EC2 Inf1 instances. The fastest and easiest way to get started with Inf1 instances is Amazon SageMaker, a fully managed service that enables data scientists and developers to build, train, and deploy machine learning models.

Developers who prefer to manage their own machine learning workflows will find AWS Neuron easy to integrate into their existing and future workflows, as it is natively integrated with popular frameworks including TensorFlow, PyTorch, and MXNet. Neuron comes pre-installed in AWS Deep Learning AMIs and can also be installed in a custom environment without a framework. Additionally, Neuron will soon be available pre-installed in AWS Deep Learning Containers.
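As a minimal sketch of the framework integration, the example below compiles a PyTorch model for Inferentia and saves it for loading on an Inf1 instance. It assumes the torch-neuron package and torchvision are installed; the model choice and input shape are illustrative only.

    import torch
    import torch_neuron            # registers the torch.neuron namespace
    from torchvision import models

    # Load a trained model and switch it to inference mode
    model = models.resnet50(pretrained=True)
    model.eval()

    # Trace the model with the Neuron compiler using an example input tensor
    example = torch.zeros([1, 3, 224, 224], dtype=torch.float32)
    model_neuron = torch.neuron.trace(model, example_inputs=[example])

    # Save the compiled model; on an Inf1 instance it can be loaded with
    # torch.jit.load and invoked like a regular TorchScript module
    model_neuron.save("resnet50_neuron.pt")

The same pattern applies to the TensorFlow and MXNet integrations: the model is compiled ahead of time by the Neuron compiler and then served through the framework's normal inference path.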


How it works

Diagram: how AWS Neuron works with Amazon EC2 Inf1 instances

Getting Started

Tutorials, how-to guides, application notes, and documentation are available on GitHub.
For further assistance, a developer forum is available through the AWS console or at https://forums.aws.amazon.com/forum.jspa?forumID=355