Amazon Elastic Inference

Add GPU acceleration to any Amazon EC2 instance for faster inference at much lower cost (up to 75% savings)

Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep learning inference by up to 75%. Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with more frameworks coming soon.

In most deep learning applications, making predictions using a trained model (a process called inference) can drive as much as 90% of the compute costs of the application, due to two factors. First, standalone GPU instances are designed for model training and are typically oversized for inference. While training jobs batch process hundreds of data samples in parallel, most inference happens on a single input in real time and consumes only a small amount of GPU compute. Even at peak load, a GPU's compute capacity may not be fully utilized, which is wasteful and costly. Second, different models need different amounts of GPU, CPU, and memory resources. Selecting a GPU instance type that is big enough to satisfy the requirements of the most demanding resource often results in under-utilization of the other resources and high costs.

Amazon Elastic Inference solves these problems by allowing you to attach just the right amount of GPU-powered inference acceleration to any EC2 or SageMaker instance type with no code changes. With Amazon Elastic Inference, you can now choose the instance type that is best suited to the overall CPU and memory needs of your application, and then separately configure the amount of inference acceleration that you need to use resources efficiently and to reduce the cost of running inference.
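As an illustration (a minimal sketch, not taken from this page), the example below shows how an accelerator might be attached when deploying a TensorFlow model with the SageMaker Python SDK; the bucket path, framework version, and accelerator size are placeholder assumptions, and the model code itself is unchanged.

# Hedged sketch: attach an Elastic Inference accelerator to a SageMaker endpoint.
# The model artifact path, framework version, and accelerator size are placeholders.
import sagemaker
from sagemaker.tensorflow.serving import Model

role = sagemaker.get_execution_role()

model = Model(
    model_data="s3://my-bucket/model/model.tar.gz",  # placeholder model artifact
    role=role,
    framework_version="1.12",
)

# The CPU instance type is sized for the application's overall needs; the
# accelerator_type parameter adds GPU-powered inference acceleration separately.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    accelerator_type="ml.eia1.medium",
)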

Introducing Amazon Elastic Inference

Benefits

Reduce inference costs by up to 75%

Amazon Elastic Inference allows you to choose the instance type that is best suited to the overall compute and memory needs of your application. You can then separately specify the amount of inference acceleration that you need. This reduces inference costs by up to 75% because you no longer need to over-provision GPU compute for inference.

Get exactly what you need

Amazon Elastic Inference can provide as little as 1 TFLOPS (trillion floating point operations per second) of single-precision inference acceleration or as much as 32 TFLOPS of mixed-precision acceleration. This is a much more appropriate range of inference compute than the up to 1,000 TFLOPS provided by a standalone Amazon EC2 P3 instance. For example, a simple language processing model might require only 1 TFLOPS to run inference well, while a sophisticated computer vision model might need up to 32 TFLOPS.

Respond to changes in demand

You can easily scale the amount of inference acceleration up and down using Amazon EC2 Auto Scaling groups to meet the demands of your application without over-provisioning capacity. When EC2 Auto Scaling increases your EC2 instances to meet increasing demand, it also automatically scales up the attached accelerator for each instance. Similarly, when it reduces your EC2 instances as demand goes down, it also automatically scales down the attached accelerator for each instance. This helps you pay only for what you need when you need it.
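For example (a sketch with placeholder IDs, not from this page), declaring the accelerator in an EC2 launch template means every instance the Auto Scaling group launches gets its own accelerator, and the accelerator goes away when the instance is terminated.

# Hedged sketch: scale Elastic Inference accelerators together with EC2 instances.
# The AMI, subnet, instance type, and accelerator size are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template: a CPU instance type plus an attached accelerator.
ec2.create_launch_template(
    LaunchTemplateName="ei-inference-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",      # placeholder AMI
        "InstanceType": "c5.large",              # sized for CPU and memory needs
        "ElasticInferenceAccelerators": [{"Type": "eia1.medium"}],
    },
)

# Auto Scaling group that uses the template; each instance it launches
# gets its own accelerator, and scale-in removes them automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ei-inference-asg",
    LaunchTemplate={"LaunchTemplateName": "ei-inference-template", "Version": "1"},
    MinSize=1,
    MaxSize=4,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder subnet
)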

Support for Popular Frameworks

Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with additional frameworks coming soon.
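As a hedged illustration, the AWS-provided Apache MXNet build for Elastic Inference exposes an mx.eia() context; the sketch below assumes a ResNet checkpoint is available locally and routes a single-image forward pass to the attached accelerator.

# Hedged sketch: run MXNet inference on an attached accelerator.
# Assumes the EI-enabled MXNet build and a local "resnet-152" checkpoint.
import mxnet as mx

ctx = mx.eia()  # accelerator context provided by the EI-enabled MXNet build

sym, arg_params, aux_params = mx.model.load_checkpoint("resnet-152", 0)
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[("data", (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=True)

# Single-image inference on the accelerator.
data = mx.nd.random.uniform(shape=(1, 3, 224, 224))
mod.forward(mx.io.DataBatch([data]))
prob = mod.get_outputs()[0].asnumpy()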

Blog: Amazon Elastic Inference – GPU-Powered Inference Acceleration (Nov 28, 2018)
Check out the product features

Learn more about Amazon Elastic Inference features.

Learn more 
Sign up for a free account

Instantly get access to the AWS Free Tier. 

Sign up 
Start building in the console

Get started with Amazon Elastic Inference on Amazon SageMaker or Amazon EC2.

Sign in