Amazon EC2 Inf1 Instances

High-performance and low-cost machine learning inference

Why Amazon EC2 Inf1 Instances?

Businesses across a diverse set of industries are looking to artificial intelligence (AI)-powered transformation to drive business innovation and to improve customer experiences and processes. Machine learning (ML) models that power AI applications are becoming increasingly complex, driving up underlying compute infrastructure costs. Inference often accounts for up to 90% of the infrastructure spend for developing and running ML applications. Customers are looking for cost-effective infrastructure solutions for deploying their ML applications in production.

Amazon EC2 Inf1 instances deliver high-performance and low-cost ML inference. They deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable Amazon EC2 instances. Inf1 instances are built from the ground up to support ML inference applications. They feature up to 16 AWS Inferentia chips, high-performance ML inference chips designed and built by AWS. Additionally, Inf1 instances include 2nd Generation Intel Xeon Scalable processors and up to 100 Gbps networking to deliver high throughput inference.

Customers can use Inf1 instances to run large-scale ML inference applications such as search, recommendation engines, computer vision, speech recognition, natural language processing (NLP), personalization, and fraud detection.

Developers can deploy their ML models to Inf1 instances by using the AWS Neuron SDK, which is integrated with popular ML frameworks such as TensorFlow, PyTorch, and Apache MXNet. They can continue using the same ML workflows and seamlessly migrate applications onto Inf1 instances with minimal code changes and with no tie-in to vendor-specific solutions.
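
For illustration, here is a minimal sketch of compiling a PyTorch model for Inferentia with the torch-neuron package; the ResNet-50 model, input shape, and file name are placeholder choices, not a prescribed workflow.

    import torch
    import torch_neuron  # AWS Neuron plugin for PyTorch; registers torch.neuron
    from torchvision import models

    # Load a pretrained model (placeholder choice) and put it in inference mode.
    model = models.resnet50(pretrained=True)
    model.eval()

    # Compile for Inferentia by tracing with a representative example input.
    example = torch.zeros(1, 3, 224, 224)
    model_neuron = torch.neuron.trace(model, example_inputs=[example])

    # The compiled artifact is TorchScript; on an Inf1 instance it loads
    # with torch.jit.load and runs on the NeuronCores.
    model_neuron.save("resnet50_neuron.pt")

On an Inf1 instance, the saved model is loaded and invoked like any TorchScript model, which is why migration typically requires only the compilation step.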

Get started easily with Inf1 instances using Amazon SageMaker; AWS Deep Learning AMIs (DLAMIs), which come preconfigured with the Neuron SDK; or Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) for containerized ML applications.
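
As a sketch of the SageMaker path, assuming a Neuron-compiled model artifact already uploaded to Amazon S3 (the bucket path, IAM role, entry point script, and version strings below are illustrative placeholders), deploying onto Inferentia amounts to selecting an ml.inf1 instance type:

    from sagemaker.pytorch import PyTorchModel

    # Placeholder artifact, role, and handler script; replace with your own.
    model = PyTorchModel(
        model_data="s3://my-bucket/resnet50_neuron.tar.gz",   # hypothetical path
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # hypothetical role
        entry_point="inference.py",                           # your handler script
        framework_version="1.12",                             # illustrative version
        py_version="py38",
    )

    # Choosing an ml.inf1 instance type places the endpoint on Inferentia.
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.inf1.xlarge",
    )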


Benefits

Using Inf1, developers can significantly reduce the cost of their ML production deployments. The combination of low instance cost and high throughput of Inf1 instances delivers up to 70% lower cost per inference than comparable Amazon EC2 instances.

The Neuron SDK is integrated with common ML frameworks such as TensorFlow, PyTorch, and MXNet. Developers can continue using the same ML workflows and seamlessly migrate their applications onto Inf1 instances with minimal code changes. This gives them the freedom to use the ML framework of their choice, the compute platform that best meets their requirements, and the latest technologies without being tied to vendor-specific solutions.

Inf1 instances deliver up to 2.3x higher throughput than comparable Amazon EC2 instances. AWS Inferentia chips that power Inf1 instances are optimized for inference performance for small batch sizes, enabling real-time applications to maximize throughput and meet latency requirements.

AWS Inferentia chips are equipped with a large amount of on-chip memory, which enables caching of ML models directly on the chip. You can deploy your models using capabilities such as NeuronCore Pipeline, which eliminates the need to access external memory. With Inf1 instances, you can run real-time inference applications at near real-time latencies without impacting bandwidth.

Inf1 instances support many commonly used ML model architectures, such as SSD, VGG, and ResNeXt for image recognition and classification, as well as Transformer and BERT for NLP. Additionally, support for the Hugging Face model repository in Neuron lets customers compile and run inference using pretrained or fine-tuned models by changing just a single line of code. Multiple data types, including BF16 and FP16 with mixed precision, are also supported to cover a range of models and performance needs.
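
As a rough sketch of that Hugging Face workflow (the model name and sequence length are illustrative, and the torch-neuron package is assumed to be installed), the only Inferentia-specific line is the call to torch.neuron.trace in place of a standard TorchScript trace:

    import torch
    import torch_neuron  # assumed installed; registers torch.neuron
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Pull a pretrained model from the Hugging Face hub (illustrative choice).
    name = "bert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True)
    model.eval()

    # Build fixed-shape example inputs for compilation.
    encoding = tokenizer("Inf1 keeps inference costs low.", padding="max_length",
                         max_length=128, return_tensors="pt")
    example = (encoding["input_ids"], encoding["attention_mask"])

    # The single Inferentia-specific change: compile with Neuron.
    model_neuron = torch.neuron.trace(model, example_inputs=example)
    model_neuron.save("bert_neuron.pt")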

Features

AWS Inferentia is an ML chip purpose-built by AWS to deliver high-performance inference at low cost. Each AWS Inferentia chip has four first-generation NeuronCores and provides up to 128 tera operations per second (TOPS) of performance, with support for FP16, BF16, and INT8 data types. AWS Inferentia chips also feature a large amount of on-chip memory that can be used for caching large models, which is especially beneficial for models that require frequent memory access.

The AWS Neuron SDK consists of a compiler, a runtime driver, and profiling tools. It enables complex neural network models, created and trained in popular frameworks such as TensorFlow, PyTorch, and MXNet, to be executed on Inf1 instances. With NeuronCore Pipeline, you can split large models for execution across multiple Inferentia chips using a high-speed physical chip-to-chip interconnect, delivering higher inference throughput and lower inference costs.
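
Pipelining is requested at compile time. A minimal sketch, again assuming the torch-neuron toolchain; the core count of 16 below is an illustrative value matching a 4-chip Inf1 instance:

    import torch
    import torch_neuron
    from torchvision import models

    model = models.resnet50(pretrained=True).eval()
    example = torch.zeros(1, 3, 224, 224)

    # Ask the Neuron compiler to shard the model across 16 NeuronCores
    # (4 Inferentia chips) so that weights stay resident in on-chip memory.
    model_neuron = torch.neuron.trace(
        model,
        example_inputs=[example],
        compiler_args=["--neuroncore-pipeline-cores", "16"],
    )
    model_neuron.save("resnet50_pipeline_neuron.pt")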

Inf1 instances offer up to 100 Gbps of networking throughput for applications that require access to high-speed networking. Next-generation Elastic Network Adapter (ENA) and NVM Express (NVMe) technology provide Inf1 instances with high-throughput, low-latency interfaces for networking and Amazon Elastic Block Store (Amazon EBS).

The AWS Nitro System is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and high security while also reducing virtualization overhead.

Customer and Partner testimonials

Here are some examples of how customers and partners have achieved their business goals with Amazon EC2 Inf1 instances.

  • Snap Inc.

    We incorporate ML into many aspects of Snapchat, and exploring innovation in this field is a key priority. Once we heard about Inferentia, we started collaborating with AWS to adopt Inf1/Inferentia instances to help us with ML deployment, including around performance and cost. We started with our recommendation models and look forward to adopting more models with the Inf1 instances in the future.

    Nima Khajehnouri, VP Engineering at Snap Inc.
  • Sprinklr

    Sprinklr's AI-driven unified customer experience management (Unified-CXM) platform enables companies to gather and translate real-time customer feedback across multiple channels into actionable insights—resulting in proactive issue resolution, enhanced product development, improved content marketing, better customer service, and more. Using Amazon EC2 Inf1, we were able to significantly improve the performance of one of our NLP models and improve the performance of one of our computer vision models. We're looking forward to continuing to use Amazon EC2 Inf1 to better serve our global customers.

    Vasant Srinivasan, Senior Vice President of Product Engineering at Sprinklr
  • Finch Computing

    Our state-of-the-art NLP product, Finch for Text, offers users the ability to extract, disambiguate, and enrich multiple types of entities in huge volumes of text. Finch for Text requires significant computing resources to provide our clients with low-latency enrichments on global data feeds. We are now using AWS Inf1 instances in our PyTorch NLP, translation, and entity disambiguation models. We were able to reduce our inference costs by over 80% (over GPUs) with minimal optimizations while maintaining our inference speed and performance. This improvement allows our customers to enrich their French, Spanish, German, and Dutch language text in real time on streaming data feeds and at global scale—something that’s critical for our financial services, data aggregator, and public sector customers.

    Scott Lightner, Chief Technology Officer at Finch Computing
  • Dataminr

    We alert on many types of events all over the world in many languages, in different formats (images, video, audio, text, sensors, and combinations of all these types) from hundreds of thousands of sources. Optimizing for speed and cost given that scale is absolutely critical for our business. With AWS Inferentia, we have lowered model latency and achieved up to 9x better throughput per dollar. This has allowed us to increase model accuracy and grow our platform's capabilities by deploying more sophisticated DL models and processing 5x more data volume while keeping our costs under control.

    Alex Jaimes, Chief Scientist and Senior Vice President of AI at Dataminr
  • Autodesk

    Autodesk is advancing the cognitive technology of our AI-powered virtual assistant, Autodesk Virtual Agent (AVA), by using Inferentia. AVA answers over 100,000 customer questions per month by applying natural language understanding (NLU) and deep learning (DL) techniques to extract the context, intent, and meaning behind inquiries. Piloting Inferentia, we were able to obtain 4.9x higher throughput than G4dn for our NLU models, and we look forward to running more workloads on the Inferentia-based Inf1 instances.

    Binghui Ouyang, Sr. Data Scientist at Autodesk
  • Screening Eagle Technologies

    The use of ground-penetrating radar and detection of visual defects is typically the domain of expert surveyors. An AWS microservices-based architecture enables us to process videos captured by automated inspection vehicles and inspectors. By migrating our in-house–built models from traditional GPU-based instances to Inferentia, we were able to reduce costs by 50%. Moreover, we saw performance gains when comparing inference times against a G4dn GPU instance. Our team is looking forward to running more workloads on the Inferentia-based Inf1 instances.

    Jesús Hormigo, Chief of Cloud and AI Officer at Screening Eagle Technologies
  • NTT PC Communications

    NTT PC Communications, a network service and communication solution provider in Japan, is a telco leader in introducing new innovative products in the information and communication technology market.

    NTT PC developed AnyMotion, a motion analysis API platform service based on advanced posture estimation ML models. We deployed our AnyMotion platform on Amazon EC2 Inf1 instances using Amazon ECS for a fully managed container orchestration service. By deploying our AnyMotion containers on Amazon EC2 Inf1, we saw 4.5x higher throughput, 25% lower inference latency, and 90% lower cost compared to current-generation GPU-based EC2 instances. These superior results will help to improve the quality of the AnyMotion service at scale.

    Toshiki Yanagisawa, Software Engineer at NTT PC Communications Inc.
  • Anthem

    Anthem is one of the nation's leading health benefits companies, serving the healthcare needs of 40+ million members across dozens of states. 

    The market for digital health platforms is growing at a remarkable rate. Gathering intelligence on this market is a challenging task due to the vast amount of customer opinion data and its unstructured nature. Our application automates the generation of actionable insights from customer opinions via DL natural language models (Transformers). Our application is computationally intensive and needs to be deployed in a highly performant manner. We seamlessly deployed our DL inferencing workload onto Amazon EC2 Inf1 instances powered by the AWS Inferentia processor. The new Inf1 instances provide 2x higher throughput than GPU-based instances and allowed us to streamline our inference workloads.

    Numan Laanait and Miro Mihaylov, PhDs, Principal AI/Data Scientists at Anthem
  • Condé Nast

    Condé Nast's global portfolio encompasses over 20 leading media brands, including Wired, Vogue, and Vanity Fair. Within a few weeks, our team was able to integrate our recommendation engine with AWS Inferentia chips. This union enables multiple runtime optimizations for state-of-the-art natural language models on SageMaker's Inf1 instances. As a result, we observed a 72% reduction in cost compared to the previously deployed GPU instances.

    Paul Fryzel, Principal Engineer, AI Infrastructure at Condé Nast
  • Ciao Inc.

    Ciao is evolving conventional security cameras into high-performance analysis cameras equivalent in capability to the human eye. Our application advances disaster prevention by monitoring environmental conditions with cloud-based AI camera solutions and raising alerts before a situation becomes a disaster, so that people can react beforehand. Based on object detection, we can also provide insight by estimating the number of incoming guests from videos in brick-and-mortar stores, without on-site staff. Ciao Camera commercially adopted AWS Inferentia-based Inf1 instances with 40% better price performance than G4dn with YOLOv4. We look forward to bringing more of our services to Inf1, leveraging its significant cost efficiency.

    Shinji Matsumoto, Software Engineer at Ciao Inc.
  • The Asahi Shimbun Company

    The Asahi Shimbun is one of the most popular daily newspapers in Japan. Media Lab, established as one of our company's departments, has the mission of researching the latest technology, especially AI, and connecting cutting-edge technologies to new businesses. With the launch of AWS Inferentia-based Amazon EC2 Inf1 instances in Tokyo, we tested our PyTorch-based text summarization AI application on these instances. This application processes a large amount of text and generates headlines and summary sentences, trained on articles from the last 30 years. Using Inferentia, we lowered costs by an order of magnitude over CPU-based instances. This dramatic reduction in costs will enable us to deploy our most complex models at scale, which we previously believed was not economically feasible.

    Hideaki Tamori, PhD, Senior Administrator, Media Lab at The Asahi Shimbun Company
  • CS Disco

    CS Disco is reinventing legal technology as a leading provider of AI solutions for e-discovery, developed by lawyers for lawyers. Disco AI accelerates the thankless task of combing through terabytes of data, speeding up review times and improving review accuracy by leveraging complex NLP models, which are computationally expensive and cost-prohibitive. Disco has found that AWS Inferentia-based Inf1 instances reduce the cost of inference in Disco AI by at least 35% compared with today's GPU instances. Based on this positive experience with Inf1 instances, CS Disco will explore further opportunities for migration to Inferentia.

    Alan Lockett, Sr. Director of Research at CS Disco
  • Talroo

    At Talroo, we provide our customers with a data-driven platform that enables them to attract unique job candidates so they can make hires. We are constantly exploring new technologies to ensure we offer the best products and services to our customers. Using Inferentia, we extract insights from a corpus of text data to enhance our AI-powered search-and-match technology. Talroo leverages Amazon EC2 Inf1 instances to create high-throughput NLU models with SageMaker. Talroo’s initial testing shows that the Amazon EC2 Inf1 instances deliver 40% lower inference latency and 2x higher throughput compared to G4dn GPU-based instances. Based on these results, Talroo looks forward to using Amazon EC2 Inf1 instances as part of its AWS infrastructure.

    Janet Hu, Software Engineer at Talroo
  • Digital Media Professionals

    Digital Media Professionals (DMP) visualizes the future with its AI-based ZIA™ platform. DMP’s efficient computer vision classification technologies are used to build insight from large amounts of real-time image data for applications such as condition observation, crime prevention, and accident prevention. We found that our image segmentation models run four times faster on AWS Inferentia-based Inf1 instances than on GPU-based G4 instances. Thanks to this higher throughput and lower cost, Inferentia enables us to deploy our AI workloads, such as applications for car dashcams, at scale.

    Hiroyuki Umeda, Director & General Manager, Sales & Marketing Group at Digital Media Professionals
  • Hotpot.ai

    Hotpot.ai empowers non-designers to create attractive graphics and helps professional designers automate rote tasks. 

    Since ML is core to our strategy, we were excited to try AWS Inferentia-based Inf1 instances. We found the Inf1 instances easy to integrate into our research and development pipeline. Most importantly, we observed impressive performance gains compared to the G4dn GPU-based instances. With our first model, the Inf1 instances yielded about 45% higher throughput and decreased cost per inference by almost 50%. We intend to work closely with the AWS team to port other models and shift most of our ML inference infrastructure to AWS Inferentia.

    Clarence Hu, Founder at Hotpot.ai
  • SkyWatch

    SkyWatch processes hundreds of trillions of pixels of earth observation data, captured from space every day. Adopting the new AWS Inferentia-based Inf1 instances using Amazon SageMaker for real-time cloud detection and image quality scoring was quick and easy. It was all a matter of switching the instance type in our deployment configuration. By switching instance types to Inferentia-based Inf1, we improved performance by 40% and decreased overall costs by 23%. This is a big win. It has enabled us to lower our overall operational costs while continuing to deliver high-quality satellite imagery to our customers, with minimal engineering overhead. We are looking forward to transitioning all of our inference endpoints and batch ML processes to use Inf1 instances to further improve our data reliability and customer experience.

    Adler Santos, Engineering Manager at SkyWatch
  • Money Forward Inc.

    Money Forward Inc. serves businesses and individuals with an open and fair financial platform. As part of this platform, HiTTO Inc., a Money Forward group company, offers an AI chatbot service that uses tailored NLP models to address the diverse needs of their corporate customers.

    Migrating our AI chatbot service to Amazon EC2 Inf1 instances was straightforward. We completed the migration within two months and launched a large-scale service on the Inf1 instances using Amazon ECS. We were able to reduce our inference latency by 97% and our inference costs by over 50% (over comparable GPU-based instances) by serving multiple models per Inf1 instance. We look forward to running more workloads on the Inferentia-based Inf1 instances.

    Kento Adachi, Technical lead, CTO office at Money Forward Inc.
  • Amazon Advertising

    Amazon Advertising helps businesses of all sizes connect with customers at every stage of their shopping journey. Millions of ads, including text and images, are moderated, classified, and served for the optimal customer experience every single day.


    For our text ad processing, we deploy PyTorch based BERT models globally on AWS Inferentia based Inf1 instances. By moving to Inferentia from GPUs, we were able to lower our cost by 69% with comparable performance. Compiling and testing our models for AWS Inferentia took less than three weeks. Using Amazon SageMaker to deploy our models to Inf1 instances ensured our deployment was scalable and easy to manage. When I first analyzed the compiled models, the performance with AWS Inferentia was so impressive that I actually had to re-run the benchmarks to make sure they were correct! Going forward, we plan to migrate our image ad processing models to Inferentia. We have already benchmarked 30% lower latency and 71% cost savings over comparable GPU-based instances for these models.

    Yashal Kanungo, Applied Scientist at Amazon Advertising
  • Amazon Alexa

    Amazon Alexa’s AI- and ML-based intelligence, powered by AWS, is available on more than 100 million devices today—and our promise to customers is that Alexa is always becoming smarter, more conversational, more proactive, and even more delightful. Delivering on that promise requires continuous improvements in response times and ML infrastructure costs, which is why we are excited to use Amazon EC2 Inf1 to lower inference latency and cost per inference on Alexa text-to-speech. With Amazon EC2 Inf1, we’ll be able to make the service even better for the tens of millions of customers who use Alexa each month.

    Tom Taylor, Senior Vice President at Amazon Alexa
  • Amazon Prime Video

    Amazon Prime Video uses computer vision ML models to analyze the video quality of live events to ensure an optimal viewer experience for Prime Video members. We deployed our image classification ML models on EC2 Inf1 instances and were able to see a 4x improvement in performance and up to 40% savings in cost. We are now looking to leverage these cost savings to innovate and build advanced models that can detect more complex defects, such as synchronization gaps between audio and video files, to deliver an even better viewing experience for Prime Video members.

    Victor Antonino, Solutions Architect at Amazon Prime Video
  • Amazon Rekognition and Video

    Amazon Rekognition is a simple, easy-to-use image and video analysis service that helps customers identify objects, people, text, and activities. Amazon Rekognition needs high-performance DL infrastructure that can analyze billions of images and videos daily for our customers. With AWS Inferentia-based Inf1 instances, running Amazon Rekognition models such as object classification resulted in 8x lower latency and 2x the throughput compared to running these models on GPUs. Based on these results, we are moving Amazon Rekognition to Inf1, enabling our customers to get accurate results faster.

    Rajneesh Singh, Director, SW Engineering at Amazon Rekognition and Video

Product details

Amazon EC2 Inf1 instances are available in the US East (N. Virginia) and US West (Oregon) AWS Regions as On-Demand, Reserved, or Spot Instances.