Amazon Elastic Inference

Lower machine learning inference costs by up to 75%

Important Update

Thank you for your interest in Amazon Elastic Inference. Amazon Elastic Inference is no longer available to new customers. You can get better performance at a lower cost for your machine learning inference workloads by using other hardware acceleration options such as AWS Inferentia. If you are currently using Amazon Elastic Inference, please consider migrating your workloads to these alternatives. To learn more, visit the AWS Machine Learning Infrastructure page.

Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to Amazon EC2 instances, Amazon SageMaker instances, or Amazon ECS tasks, reducing the cost of running deep learning inference by up to 75%. Amazon Elastic Inference supports TensorFlow, Apache MXNet, PyTorch, and ONNX models.

Inference is the process of making predictions using a trained model. In deep learning applications, inference can account for up to 90% of total operational costs, for two reasons. First, standalone GPU instances are typically designed for model training, not for inference. While training jobs batch-process hundreds of data samples in parallel, inference jobs usually process a single input in real time and thus consume only a small amount of GPU compute, which makes standalone GPU inference cost-inefficient. Standalone CPU instances, on the other hand, are not specialized for matrix operations and are often too slow for deep learning inference. Second, different models have different CPU, GPU, and memory requirements; optimizing for one resource can lead to underutilization of the others and higher costs.

Amazon Elastic Inference solves these problems by allowing you to attach just the right amount of GPU-powered inference acceleration to any EC2 or SageMaker instance type or ECS task, with no code changes. With Amazon Elastic Inference, you can choose any CPU instance in AWS that is best suited to the overall compute and memory needs of your application, and then separately configure the right amount of GPU-powered inference acceleration, allowing you to efficiently utilize resources and reduce costs.
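As a rough sketch of what attaching acceleration at launch time can look like, the snippet below uses the AWS SDK for Python (boto3) and the ElasticInferenceAccelerators parameter of the EC2 RunInstances API. The AMI ID, subnet, and accelerator size are placeholder assumptions, and Elastic Inference additionally requires an AWS PrivateLink VPC endpoint for the service, which is not shown here.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')

# Launch a CPU instance sized for the application, with a GPU-powered
# Elastic Inference accelerator attached at launch time.
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',      # placeholder: a Deep Learning AMI
    InstanceType='c5.large',              # CPU instance for the app's needs
    MinCount=1,
    MaxCount=1,
    ElasticInferenceAccelerators=[
        {'Type': 'eia2.medium', 'Count': 1}   # smallest eia2 accelerator
    ],
    SubnetId='subnet-0123456789abcdef0',  # placeholder subnet
)
print(response['Instances'][0]['InstanceId'])
```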


Reduce inference costs by up to 75%

Amazon Elastic Inference allows you to choose the instance type that is best suited to the overall compute and memory needs of your application. You can then separately specify the amount of inference acceleration that you need. This reduces inference costs by up to 75% because you no longer need to over-provision GPU compute for inference.
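For example, the SageMaker Python SDK lets you pick the endpoint instance type and the accelerator size independently at deploy time. This is a minimal sketch; the model artifact path, IAM role, and framework version are placeholder assumptions.

```python
from sagemaker.tensorflow import TensorFlowModel

# Placeholders: swap in your own model artifact and execution role.
model = TensorFlowModel(
    model_data='s3://my-bucket/model.tar.gz',
    role='arn:aws:iam::123456789012:role/SageMakerRole',
    framework_version='1.15',
)

# instance_type covers the app's CPU/memory needs; accelerator_type
# adds GPU-powered inference acceleration separately.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.large',
    accelerator_type='ml.eia2.medium',
)
```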

Get exactly what you need

Amazon Elastic Inference can provide as little as 1 TFLOPS (one trillion floating point operations per second) of single-precision inference acceleration or as much as 32 TFLOPS of mixed-precision acceleration. This is a much more appropriate range of inference compute than the up to 1,000 TFLOPS provided by a standalone Amazon EC2 P3 instance. For example, a simple language processing model might require only 1 TFLOPS to run inference well, while a sophisticated computer vision model might need up to 32 TFLOPS.
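To see the accelerator sizes available in a Region, the Elastic Inference API exposes a DescribeAcceleratorTypes operation. A minimal sketch with boto3, assuming the service is available in the chosen Region and the response fields are as documented:

```python
import boto3

# The 'elastic-inference' client lists available accelerator sizes
# along with their throughput and memory characteristics.
ei = boto3.client('elastic-inference', region_name='us-west-2')

for acc in ei.describe_accelerator_types()['acceleratorTypes']:
    name = acc['acceleratorTypeName']          # e.g. 'eia2.medium'
    print(name, acc.get('throughputInfo'), acc.get('memoryInfo'))
```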

Respond to changes in demand

You can easily scale the amount of inference acceleration up and down using Amazon EC2 Auto Scaling groups to meet the demands of your application without over-provisioning capacity. When EC2 Auto Scaling adds EC2 instances to meet increasing demand, each new instance launches with its own attached accelerator; when it terminates instances as demand drops, their attached accelerators are released as well. This helps you pay only for what you need, when you need it. A sketch of this pattern follows below.
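A common way to set this up is to put the accelerator in an EC2 launch template, so that every instance the Auto Scaling group launches gets its own accelerator. A minimal sketch with boto3; the template name, group name, AMI, and subnet are placeholder assumptions, and networking prerequisites (such as the Elastic Inference VPC endpoint) are omitted.

```python
import boto3

ec2 = boto3.client('ec2')
autoscaling = boto3.client('autoscaling')

# A launch template that includes an Elastic Inference accelerator;
# acceleration then scales up and down with the fleet.
ec2.create_launch_template(
    LaunchTemplateName='inference-fleet',          # hypothetical name
    LaunchTemplateData={
        'ImageId': 'ami-0123456789abcdef0',        # placeholder AMI
        'InstanceType': 'c5.large',
        'ElasticInferenceAccelerators': [
            {'Type': 'eia2.medium', 'Count': 1}
        ],
    },
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='inference-fleet-asg',    # hypothetical name
    LaunchTemplate={'LaunchTemplateName': 'inference-fleet',
                    'Version': '$Latest'},
    MinSize=1,
    MaxSize=10,
    VPCZoneIdentifier='subnet-0123456789abcdef0',  # placeholder subnet
)
```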

Support for popular frameworks

Amazon Elastic Inference supports TensorFlow, Apache MXNet, PyTorch, and ONNX models.
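As an illustration of framework support, the AWS-provided Elastic Inference build of Apache MXNet exposes an mx.eia() context that routes inference to the attached accelerator. A minimal sketch, assuming an accelerator is already attached and a local checkpoint exists ('resnet-152' is a placeholder prefix):

```python
from collections import namedtuple
import mxnet as mx

# mx.eia() exists only in the AWS-provided Elastic Inference build of
# Apache MXNet; it sends inference work to the attached accelerator.
ctx = mx.eia()

# Load a pretrained checkpoint (placeholder prefix and epoch).
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)

mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=True)

# Run a single forward pass on dummy input.
Batch = namedtuple('Batch', ['data'])
mod.forward(Batch([mx.nd.ones((1, 3, 224, 224))]))
print(mod.get_outputs()[0].asnumpy().shape)
```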

Blog: Amazon Elastic Inference – GPU-Powered Inference Acceleration
Nov 28, 2018