Posted On: Aug 13, 2020
AWS has expanded the availability of Amazon EC2 Inf1 instances in US East (Ohio), Europe (Frankfurt, Ireland) and Asia Pacific (Sydney, Tokyo). Inf1 instances are powered by AWS Inferentia chips, which Amazon custom-designed to provide customers with the lowest cost-per-inference in the cloud and lower barriers for everyday developers to use machine learning at scale.
Inf1 instances deliver up to 30% higher throughput and up to 45% lower cost per inference than comparable GPU-based instances and are ideal for applications such as image recognition, natural language processing, personalization, and anomaly detection. Developers can deploy their machine learning models to Inf1 instances using the AWS Neuron SDK, which is integrated with popular machine learning frameworks such as TensorFlow, PyTorch, and MXNet. The SDK consists of a compiler, a runtime, and profiling tools that optimize inference performance on AWS Inferentia.
With these additional regional launches, Inf1 instances are now available in the US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, Ireland), and Asia Pacific (Sydney, Tokyo) AWS Regions. Inf1 instances are available in four sizes with up to 16 Inferentia chips providing up to 2,000 Tera Operations per Second (TOPS) throughput and up to 100 Gbps network bandwidth. They are purchasable On-Demand, as Reserved instances, as Spot instances, or as part of Savings Plans.
The easiest and quickest way to get started with Inf1 instances is via Amazon SageMaker, a fully managed service for building, training, and deploying machine learning models. Developers who prefer to manage their own machine learning application development platforms can get started either by launching Inf1 instances with AWS Deep Learning AMIs, which include the Neuron SDK, or by using Inf1 instances via Amazon Elastic Kubernetes Service (EKS) or Amazon Elastic Container Service (ECS) for containerized ML applications.
To learn more, visit the Amazon EC2 Inf1 instance page.