Amazon EC2 P3 Instances

Accelerate machine learning and high performance computing applications with powerful GPUs

Amazon EC2 P3 instances deliver high performance compute in the cloud with up to 8 NVIDIA® V100 Tensor Core GPUs and up to 100 Gbps of networking throughput for machine learning and high performance computing (HPC) applications. These instances deliver up to one petaflop of mixed-precision performance per instance to significantly accelerate machine learning and high performance computing applications. Amazon EC2 P3 instances have been proven to reduce machine learning training times from days to minutes, and to increase the number of simulations completed for high performance computing by 3-4x.

With up to 4x the network bandwidth of P3.16xlarge instances, Amazon EC2 P3dn.24xlarge instances are the latest addition to the P3 family, optimized for distributed machine learning and HPC applications. These instances provide up to 100 Gbps of networking throughput, 96 custom Intel® Xeon® Scalable (Skylake) vCPUs, 8 NVIDIA® V100 Tensor Core GPUs with 32 GB of memory each, and 1.8 TB of local NVMe-based SSD storage. P3dn.24xlarge instances also support Elastic Fabric Adapter (EFA), which accelerates distributed machine learning applications that use the NVIDIA Collective Communications Library (NCCL). EFA can scale to thousands of GPUs, significantly improving the throughput and scalability of deep learning training jobs, which leads to faster results.

See how Amazon EC2 P3 instances can help you with your machine learning training


Introducing Amazon EC2 P3dn.24xlarge - the most powerful P3 instance yet

Optimized for distributed machine learning training and high performance computing


Benefits

REDUCE MACHINE LEARNING TRAINING TIME FROM DAYS TO MINUTES

For data scientists, researchers, and developers who need to speed up ML applications, Amazon EC2 P3 instances are the fastest in the cloud for ML training. Amazon EC2 P3 instances feature up to eight latest-generation NVIDIA V100 Tensor Core GPUs and deliver up to one petaflop of mixed-precision performance to significantly accelerate ML workloads. Faster model training can enable data scientists and machine learning engineers to iterate faster, train more models, and increase accuracy.

THE INDUSTRY'S MOST COST-EFFECTIVE SOLUTION FOR ML TRAINING

One of the most powerful GPU instances in the cloud, combined with flexible pricing plans, results in an exceptionally cost-effective solution for machine learning training. As with Amazon EC2 instances in general, P3 instances are available as On-Demand Instances, Reserved Instances, or Spot Instances. Spot Instances take advantage of unused EC2 instance capacity and can lower your Amazon EC2 costs significantly, with discounts of up to 70% off On-Demand prices.

FLEXIBLE, POWERFUL HIGH PERFORMANCE COMPUTING

Unlike on-premises systems, running high performance computing on Amazon EC2 P3 instances offers virtually unlimited capacity to scale out your infrastructure, and the flexibility to change resources easily and as often as your workload demands. You can configure your resources to meet the demands of your application and launch an HPC cluster in minutes, paying for only what you use.

Start Building Immediately

Use pre-packaged Docker images to deploy deep learning environments in minutes. The images contain the required deep learning framework libraries (currently TensorFlow and Apache MXNet) and tools, and are fully tested. You can easily add your own libraries and tools on top of these images for a higher degree of control over monitoring, compliance, and data processing. In addition, Amazon EC2 P3 instances work seamlessly with Amazon SageMaker to provide a powerful and intuitive complete machine learning platform. Amazon SageMaker is a fully managed machine learning platform that enables you to quickly and easily build, train, and deploy machine learning models. Furthermore, Amazon EC2 P3 instances can be integrated with AWS Deep Learning Amazon Machine Images (AMIs) that come pre-installed with popular deep learning frameworks, making it faster and easier to get started with machine learning training and inference.
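
Launching a P3 instance from a script is a thin wrapper around the EC2 RunInstances API. The sketch below only builds the request parameters as a plain dict; the AMI ID and key-pair name are placeholders, and the actual boto3 call is left commented out, so nothing here is authoritative about your account setup:

```python
def p3_launch_request(ami_id: str, key_name: str) -> dict:
    """Build RunInstances parameters for a single p3.2xlarge running a
    Deep Learning AMI. Returned as a dict so it can be inspected before
    any API call is made."""
    return {
        "ImageId": ami_id,             # a Deep Learning AMI ID for your Region (placeholder here)
        "InstanceType": "p3.2xlarge",  # 1x V100 with 16 GB of GPU memory
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,           # an existing EC2 key pair (placeholder here)
    }

params = p3_launch_request("ami-0123456789abcdef0", "my-keypair")
# With credentials configured, the launch itself would be:
#   import boto3
#   boto3.client("ec2", region_name="us-east-1").run_instances(**params)
```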

Scalable Multi-Node Machine Learning Training

You can use multiple Amazon EC2 P3 instances with up to 100 Gbps of networking throughput to rapidly train machine learning models. Higher networking throughput enables developers to remove data transfer bottlenecks and efficiently scale out their model training jobs across multiple P3 instances. Customers have been able to train ResNet-50, a common image classification model, to industry-standard accuracy in just 18 minutes using 16 P3 instances. This level of performance was previously unattainable by the vast majority of ML customers, as it required a large CapEx investment to build out on-premises GPU clusters. With P3 instances and their availability via an On-Demand usage model, this level of performance is now accessible to all developers and machine learning engineers. In addition, P3dn.24xlarge instances support Elastic Fabric Adapter (EFA), which uses the NVIDIA Collective Communications Library (NCCL) to scale to thousands of GPUs.
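
As a rough sanity check of that figure, assuming near-linear scaling (an idealization, since real jobs lose some efficiency to inter-node communication), the same 18-minute, 16-instance ResNet-50 run would take roughly 4.8 hours on a single instance:

```python
# ResNet-50 trained to benchmark accuracy in 18 minutes on 16 P3
# instances (figure from the text above). Near-linear scaling is an
# assumed idealization for this back-of-envelope estimate.
minutes_on_16 = 18
instances = 16

single_instance_minutes = minutes_on_16 * instances  # 288 minutes
single_instance_hours = single_instance_minutes / 60
print(single_instance_hours)  # prints: 4.8
```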

SUPPORT FOR ALL MAJOR MACHINE LEARNING FRAMEWORKS

Amazon EC2 P3 instances support all major machine learning frameworks including TensorFlow, PyTorch, Apache MXNet, Caffe, Caffe2, Microsoft Cognitive Toolkit (CNTK), Chainer, Theano, Keras, Gluon, and Torch. You have the flexibility to choose the framework that works best for your application.

Customer Stories


Airbnb is using machine learning to optimize search recommendations and improve dynamic pricing guidance for hosts, both of which translate to increased booking conversions. With Amazon EC2 P3 instances, Airbnb can run training workloads faster, go through more iterations, build better machine learning models and reduce costs.


Salesforce is using machine learning to power Einstein Vision, enabling developers to harness the power of image recognition for use cases such as visual search, brand detection, and product identification. Amazon EC2 P3 instances enable developers to train deep learning models much faster so that they can achieve their machine learning goals quickly.


Western Digital uses HPC to run tens of thousands of simulations for materials sciences, heat flows, magnetics, and data transfer to improve disk drive and storage solution performance and quality. Based on early testing, P3 instances allow engineering teams to run simulations at least three times faster than previously deployed solutions.


Schrodinger uses high performance computing (HPC) to develop predictive models that extend the scale of discovery and optimization, giving their customers the ability to bring lifesaving drugs to market more quickly. Amazon EC2 P3 instances allow Schrodinger to perform four times as many simulations in a day as they could with P2 instances.

Amazon EC2 P3 Instances and Amazon SageMaker

The Fastest Way to Train and Run Machine Learning Models

Amazon SageMaker is a fully managed service for building, training, and deploying machine learning models. When used together with Amazon EC2 P3 instances, customers can easily scale to tens, hundreds, or thousands of GPUs to train a model quickly at any scale without worrying about setting up clusters and data pipelines. You can also easily access Amazon Virtual Private Cloud (Amazon VPC) resources for training and hosting workflows in Amazon SageMaker. With this feature, you can use Amazon Simple Storage Service (Amazon S3) buckets that are only accessible through your VPC to store training data, as well as to store and host the model artifacts derived from the training process. In addition to S3, models can access all other AWS resources contained within the VPC. Learn more.
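
In the low-level API, this VPC access is expressed as the VpcConfig block of a CreateTrainingJob request. A minimal sketch, with placeholder subnet and security-group IDs:

```python
def training_job_vpc_config(subnet_ids, security_group_ids) -> dict:
    """VpcConfig block for a SageMaker CreateTrainingJob request. With
    this set, the training containers reach S3 and other resources only
    through your VPC (e.g., via an S3 VPC endpoint)."""
    return {
        "Subnets": list(subnet_ids),
        "SecurityGroupIds": list(security_group_ids),
    }

cfg = training_job_vpc_config(["subnet-0abc"], ["sg-0def"])  # placeholder IDs
```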

Build

Amazon SageMaker makes it easy to build machine learning models and get them ready for training. It provides everything that you need to quickly connect to your training data, and to select and optimize the best algorithm and framework for your application. Amazon SageMaker includes hosted Jupyter notebooks that make it easy to explore and visualize your training data stored in Amazon S3. You can also use the notebook instance to write code to create model training jobs, deploy models to Amazon SageMaker hosting, and test or validate your models.

Train

You can begin training your model with a single click in the console or with an API call. Amazon SageMaker is pre-configured with the latest versions of TensorFlow and Apache MXNet, and with CUDA 9 library support for optimal performance with NVIDIA GPUs. In addition, hyperparameter optimization can automatically tune your model by intelligently adjusting different combinations of model parameters to quickly arrive at the most accurate predictions. For larger scale needs, you can scale to tens of instances to support faster model building.
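
Hyperparameter tuning is driven by the ParameterRanges block of a CreateHyperParameterTuningJob request. A minimal sketch; the parameter names learning_rate and batch_size are illustrative placeholders that must match what your training script actually reads, and note that the API takes numeric bounds as strings:

```python
def tuning_parameter_ranges() -> dict:
    """ParameterRanges block for SageMaker hyperparameter tuning.
    Names and bounds here are illustrative placeholders."""
    return {
        "ContinuousParameterRanges": [
            {"Name": "learning_rate", "MinValue": "0.0001", "MaxValue": "0.1"},
        ],
        "IntegerParameterRanges": [
            {"Name": "batch_size", "MinValue": "64", "MaxValue": "512"},
        ],
    }

ranges = tuning_parameter_ranges()
```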

Deploy

After training, you can deploy your model with one click onto auto-scaling Amazon EC2 instances across multiple Availability Zones. In production, Amazon SageMaker manages the compute infrastructure on your behalf to perform health checks, apply security patches, and conduct other routine maintenance, all with built-in Amazon CloudWatch monitoring and logging.

 

Amazon EC2 P3 Instances and AWS Deep Learning AMIs

Pre-configured development environments to quickly start building deep learning applications

As an alternative to Amazon SageMaker for developers with more customized requirements, the AWS Deep Learning AMIs provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. You can quickly launch Amazon EC2 P3 instances pre-installed with popular deep learning frameworks such as TensorFlow, PyTorch, Apache MXNet, Microsoft Cognitive Toolkit, Caffe, Caffe2, Theano, Torch, Chainer, Gluon, and Keras to train sophisticated, custom AI models, experiment with new algorithms, or learn new skills and techniques. Learn more

Amazon EC2 P3 Instances and High Performance Computing

Solve large computational problems and gain new insights using the power of HPC on AWS

Amazon EC2 P3 instances are an ideal platform to run engineering simulations, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other GPU compute workloads. High performance computing (HPC) allows scientists and engineers to solve these complex, compute-intensive problems. HPC applications often require high network performance, fast storage, large amounts of memory, high compute capabilities, or all of the above. AWS enables you to increase the speed of research and reduce time-to-results by running HPC in the cloud and scaling to larger numbers of parallel tasks than would be practical in most on-premises environments. For example, P3dn.24xlarge instances support Elastic Fabric Adapter (EFA), which enables HPC applications using the Message Passing Interface (MPI) to scale to thousands of GPUs. AWS helps to reduce costs by providing solutions optimized for specific applications, without the need for large capital investments. Learn more

Support for NVIDIA Quadro Virtual Workstation

NVIDIA Quadro Virtual Workstation AMIs deliver high graphics performance using powerful P3 instances with NVIDIA Volta V100 GPUs running in the AWS cloud. These AMIs have the latest NVIDIA GPU graphics software preinstalled along with the latest Quadro drivers and Quadro ISV certifications with support for up to four 4K desktop resolutions. P3 instances with NVIDIA V100 GPUs combined with Quadro vWS deliver a high performance workstation in the cloud with up to 32 GB of GPU memory, fast ray tracing, and AI-powered rendering.

The new AMIs are available on the AWS Marketplace with support for Windows Server 2016 and Windows Server 2019.

Amazon EC2 P3dn.24xlarge Instances

New Faster, More Powerful and Larger Instance Size Optimized for Distributed Machine Learning and High Performance Computing

Amazon EC2 P3dn.24xlarge instances are the fastest, most powerful, and largest P3 instance size available and provide up to 100 Gbps of networking throughput, 8 NVIDIA® V100 Tensor Core GPUs with 32 GB of memory each, 96 custom Intel® Xeon® Scalable (Skylake) vCPUs, and 1.8 TB of local NVMe-based SSD storage. The faster networking, new processors, doubling of GPU memory, and additional vCPUs enable developers to significantly lower the time to train their ML models or run more HPC simulations by scaling out their jobs across several instances (e.g., 16, 32, or 64 instances). Machine learning models require a large amount of data for training and, in addition to increasing the throughput of passing data between instances, the additional network throughput of P3dn.24xlarge instances can also be used to speed up access to large amounts of training data by connecting to Amazon S3 or shared file system solutions such as Amazon EFS.

REMOVE BOTTLENECKS AND REDUCE MACHINE LEARNING TRAINING TIME

With 100 Gbps of networking throughput, developers can efficiently use a large number of P3dn.24xlarge instances (e.g., 16, 32, or 64 instances) for distributed training and significantly lower the time to train their models. The 96 vCPUs of AWS-custom Intel Skylake processors with AVX-512 instructions, operating at 2.5 GHz, help optimize the pre-processing of data. In addition, P3dn.24xlarge instances use the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. P3dn.24xlarge instances also support Elastic Fabric Adapter (EFA), which enables ML applications using the NVIDIA Collective Communications Library (NCCL) to scale to thousands of GPUs.
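
In practice, pointing NCCL at EFA goes through the libfabric provider used by the aws-ofi-nccl plugin. The environment variables below are a sketch of a common setup; treat the exact variable names and values as assumptions to verify against the current EFA and NCCL documentation:

```python
import os

# Sketch of environment commonly set when NCCL runs over EFA via the
# aws-ofi-nccl (libfabric) plugin. Variable names are assumptions to
# verify against the current EFA documentation.
efa_env = {
    "FI_PROVIDER": "efa",  # select the EFA libfabric provider (assumed name)
    "NCCL_DEBUG": "INFO",  # have NCCL log which transport it actually selected
}
os.environ.update(efa_env)
```

Checking the NCCL_DEBUG output at job startup is the usual way to confirm that traffic is really going over EFA rather than falling back to TCP.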

Lower TCO by optimizing GPU utilization

Enhanced networking using the latest version of the Elastic Network Adapter, with up to 100 Gbps of aggregate network bandwidth, can be used not only to share data across several P3dn.24xlarge instances, but also for high-throughput data access via Amazon S3 or shared file system solutions such as Amazon EFS. High-throughput data access is crucial to optimize the utilization of GPUs and deliver maximum performance from the compute instances.
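
To see why the bandwidth matters for GPU utilization, a back-of-envelope estimate helps; the ~150 KB average image size below is an assumption for illustration only:

```python
network_gbps = 100                         # P3dn.24xlarge aggregate bandwidth
bytes_per_second = network_gbps * 1e9 / 8  # 12.5 GB/s of raw transfer capacity
image_size_bytes = 150 * 1e3               # assume ~150 KB per JPEG (illustrative)

images_per_second = bytes_per_second / image_size_bytes
print(round(images_per_second))  # prints: 83333
```

At tens of thousands of images per second of raw transfer capacity, the network is far less likely to be the bottleneck that starves eight V100s of input data.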

Support larger and more complex models

P3dn.24xlarge instances offer NVIDIA V100 Tensor Core GPUs with 32 GB of memory that deliver the flexibility to train more advanced and larger machine learning models, as well as to process larger batches of data, such as 4K images for image classification and object detection systems.

Amazon EC2 P3 Instance Product Details

Instance Size | GPUs (Tesla V100) | GPU Peer-to-Peer | GPU Memory (GB) | vCPUs | Memory (GB) | Network Bandwidth | EBS Bandwidth | On-Demand Price/hr* | 1-yr Reserved Instance Effective Hourly* | 3-yr Reserved Instance Effective Hourly*
p3.2xlarge | 1 | N/A | 16 | 8 | 61 | Up to 10 Gbps | 1.5 Gbps | $3.06 | $1.99 | $1.05
p3.8xlarge | 4 | NVLink | 64 | 32 | 244 | 10 Gbps | 7 Gbps | $12.24 | $7.96 | $4.19
p3.16xlarge | 8 | NVLink | 128 | 64 | 488 | 25 Gbps | 14 Gbps | $24.48 | $15.91 | $8.39
p3dn.24xlarge | 8 | NVLink | 256 | 96 | 768 | 100 Gbps | 14 Gbps | $31.218 | $18.30 | $9.64

* - Prices shown are for Linux/Unix in the US East (Northern Virginia) AWS Region and rounded to the nearest cent. For full pricing details, see the Amazon EC2 pricing page.

Customers can purchase P3 instances as On-Demand Instances, Reserved Instances, Spot Instances, and Dedicated Hosts.

BILLING BY THE SECOND

One of the many advantages of cloud computing is the elastic nature of provisioning or deprovisioning resources as you need them. By billing usage down to the second, we enable customers to increase their elasticity, save money, and optimize the allocation of resources toward achieving their machine learning goals.
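
Using the On-Demand price from the table above, the effect of per-second billing on a short job looks like this (the 400-second run is just an example duration):

```python
on_demand_hourly = 3.06  # p3.2xlarge On-Demand rate, from the pricing table
job_seconds = 400        # example: a short training run

cost_per_second_billing = on_demand_hourly / 3600 * job_seconds
cost_if_billed_hourly = on_demand_hourly  # if usage were rounded up to a full hour

print(round(cost_per_second_billing, 2))  # prints: 0.34
```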

RESERVED INSTANCE PRICING

Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand Instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.
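
The "up to 75%" figure is the EC2-wide headline; for the P3 prices in the table above, the effective 3-year Reserved Instance discounts work out to roughly 66-69%:

```python
def ri_discount(on_demand: float, ri_effective: float) -> float:
    """Fractional discount of a Reserved Instance effective hourly rate
    versus the On-Demand rate."""
    return 1 - ri_effective / on_demand

# On-Demand and 3-yr RI effective hourly rates from the product-details table
d_p3_2xl = ri_discount(3.06, 1.05)
d_p3dn_24xl = ri_discount(31.218, 9.64)

print(round(d_p3_2xl * 100), round(d_p3dn_24xl * 100))  # prints: 66 69
```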

SPOT PRICING

With Spot Instances, you pay the Spot price that's in effect for the time period that your instances are running. Spot Instance prices are set by Amazon EC2 and adjust gradually based on long-term trends in supply and demand for Spot Instance capacity. Spot Instances are available at a discount of up to 90% off compared to On-Demand pricing.
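
When launching via the API, Spot capacity is requested through the InstanceMarketOptions parameter of the EC2 RunInstances call. A minimal sketch that builds just that block; the max-price cap shown is the p3.2xlarge On-Demand rate from the table, and omitting it caps the bid at the On-Demand price by default:

```python
def spot_market_options(max_price=None) -> dict:
    """InstanceMarketOptions block for an EC2 RunInstances call that
    requests Spot capacity. If max_price is None, the cap defaults to
    the On-Demand price."""
    options = {"MarketType": "spot"}
    if max_price is not None:
        options["SpotOptions"] = {"MaxPrice": max_price}  # a string, per the API
    return options

opts = spot_market_options("3.06")  # cap at the p3.2xlarge On-Demand rate
# This dict would be passed as InstanceMarketOptions=opts to run_instances().
```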

The Broadest Global Availability


Amazon EC2 P3.2xlarge, P3.8xlarge and P3.16xlarge instances are available in 14 AWS Regions so that customers have the flexibility to train and deploy their machine learning models wherever their data is stored. Available regions for P3 are the US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Singapore), China (Beijing), China (Ningxia), and GovCloud (US-West) AWS Regions.

P3dn.24xlarge instances are available in the Asia Pacific (Tokyo), Europe (Ireland), US East (N. Virginia), US West (Oregon), GovCloud (US-West), and GovCloud (US-East) AWS Regions.

Get started with Amazon EC2 P3 instances for Machine Learning

To get started within minutes, learn more about Amazon SageMaker or use the AWS Deep Learning AMI, pre-installed with popular deep learning frameworks such as Caffe2 and MXNet. Alternatively, you can also use the NVIDIA AMI with GPU driver and CUDA toolkit pre-installed.

Blogs, Articles, and Webinars

Amr Ragab, Chetan Kapoor, Rahul Huilgol, Jarvis Lee, Tyler Mullenbach, and Yong Wu
July 20, 2018
Aaron Markham
December 17, 2018
 

Broadcast Date: December 19, 2018

Level: 200

Computer vision deals with how computers can be trained to gain a high-level understanding from digital images or videos. The history of computer vision dates back to the 1960s, but recent advancements in processing technology have enabled applications such as navigation of autonomous vehicles. This tech talk will review the different steps required to build, train, and deploy a machine learning model for computer vision. We will compare and contrast the training of computer vision models using different Amazon EC2 instances and highlight how significant time savings can be achieved by using Amazon EC2 P3 instances.


Broadcast Date: July 31, 2018

Level: 200

Organizations are tackling exponentially complex questions across advanced scientific, energy, high tech, and medical fields. Machine learning (ML) makes it possible to quickly explore the multitude of scenarios and generate the best answers, ranging from image, video, and speech recognition to autonomous vehicle systems and weather prediction. For data scientists, researchers, and developers who want to speed up development of their ML applications, Amazon EC2 P3 instances are the most powerful, cost effective and versatile GPU compute instances available in the cloud.

About Amazon SageMaker

Click here to learn more

About Deep Learning on AWS

Click here to learn more

About High Performance Computing (HPC)

Click here to learn more
Ready to get started?
Sign up
Have more questions?
Contact us