Amazon EC2 P3 Instances

Powerful, high-performance GPU compute instances

Amazon EC2 P3 instances are the next generation of Amazon EC2 GPU compute instances, delivering powerful and scalable GPU-based parallel compute capabilities.

P3 instances are ideal for computationally challenging applications, including machine learning, high-performance computing, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and development of autonomous vehicle systems.

Features

Powerful Performance

P3 instances allow you to build and deploy advanced applications with up to 14 times better performance than previous-generation Amazon EC2 GPU compute instances. With up to 8 NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance. P3 instances also feature up to 64 vCPUs based on custom Intel Xeon E5 (Broadwell) processors and 488 GB of DRAM.
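
As an illustration of the mixed-precision (FP16) arithmetic the V100's Tensor Cores accelerate, the sketch below runs a half-precision matrix multiply on a single GPU using MXNet; MXNet and the matrix sizes are assumptions made for illustration, not requirements of P3 instances.

```python
import mxnet as mx

# Minimal sketch: a half-precision (float16) matrix multiply on one V100 GPU.
# Assumes MXNet built with CUDA support, e.g. on a P3 instance.
ctx = mx.gpu(0)
a = mx.nd.random.uniform(shape=(4096, 4096), dtype='float16', ctx=ctx)
b = mx.nd.random.uniform(shape=(4096, 4096), dtype='float16', ctx=ctx)
c = mx.nd.dot(a, b)   # executed on the GPU in half precision
mx.nd.waitall()       # block until the asynchronous GPU work has finished
print(c.shape, c.dtype)
```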

Scalability

For P3 instances with multiple GPUs, a 300 GB/s next-generation NVIDIA NVLink interconnect enables high-speed, low-latency GPU-to-GPU communication. Combined with Amazon EC2 Enhanced Networking based on the Elastic Network Adapter (ENA), which supports up to 25 Gbps of network bandwidth, this lets applications use multiple GPUs to scale up and scale out as needed. P3 instances are well-suited for distributed deep learning frameworks, such as MXNet, that scale out with near-perfect efficiency.
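
As a rough sketch of the scale-up pattern this enables, the example below splits one training step across the GPUs of a single P3 instance using MXNet Gluon; the four-GPU count (as on p3.8xlarge), the toy model, and the synthetic batch are illustrative assumptions.

```python
import mxnet as mx
from mxnet import autograd, gluon

# Data-parallel sketch across the 4 GPUs of a p3.8xlarge (adjust the range for
# other sizes). The model and data below are toy placeholders for illustration.
ctxs = [mx.gpu(i) for i in range(4)]
net = gluon.nn.Dense(10)
net.initialize(ctx=ctxs)                              # replicate parameters on each GPU
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

data = mx.nd.random.uniform(shape=(256, 512))         # synthetic batch
label = mx.nd.array([i % 10 for i in range(256)])     # synthetic labels

# Split the batch across the GPUs; the trainer aggregates gradients from all of them.
data_parts = gluon.utils.split_and_load(data, ctxs)
label_parts = gluon.utils.split_and_load(label, ctxs)
with autograd.record():
    losses = [loss_fn(net(x), y) for x, y in zip(data_parts, label_parts)]
for l in losses:
    l.backward()
trainer.step(batch_size=256)
```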

Product Details

Instance Size | GPUs (Tesla V100) | GPU Peer-to-Peer | GPU Memory (GB) | vCPUs | Memory (GB) | Network Bandwidth | EBS Bandwidth
p3.2xlarge    | 1                 | N/A              | 16              | 8     | 61          | Up to 10 Gbps     | 1.5 Gbps
p3.8xlarge    | 4                 | NVLink           | 64              | 32    | 244         | 10 Gbps           | 7 Gbps
p3.16xlarge   | 8                 | NVLink           | 128             | 64    | 488         | 25 Gbps           | 14 Gbps
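
If you prefer to retrieve these specifications programmatically, a sketch along the following lines uses the EC2 DescribeInstanceTypes API via boto3; it assumes AWS credentials and a default region are already configured.

```python
import boto3

# Query published P3 specifications through the EC2 DescribeInstanceTypes API.
ec2 = boto3.client('ec2')
resp = ec2.describe_instance_types(
    InstanceTypes=['p3.2xlarge', 'p3.8xlarge', 'p3.16xlarge']
)
for it in resp['InstanceTypes']:
    gpu = it['GpuInfo']['Gpus'][0]
    print(it['InstanceType'],
          f"{gpu['Count']}x {gpu['Manufacturer']} {gpu['Name']}",
          f"{it['VCpuInfo']['DefaultVCpus']} vCPUs",
          f"{it['MemoryInfo']['SizeInMiB'] // 1024} GiB RAM")
```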

Benefits

Speed

Whether it’s machine learning (training or inference), HPC workloads, or any other floating-point-intensive workload, you will see substantial reductions in processing time and gains in throughput from the cutting-edge performance of NVIDIA Tesla V100 GPUs.

Agility

With P3 instances, you can take full advantage of hyper-scale cloud infrastructure to deploy GPU resources in a matter of minutes. Coupled with a pay-as-you-go usage model and AWS’s rapid pace of innovation, engineering teams can bring new products to market faster while optimizing their total operational costs.

Lower Cost

With the economics of AWS’s hyper-scale cloud computing applied to GPU-based instances, P3 instances are fundamentally disrupting how organizations typically consume computing hardware for artificial intelligence, machine learning, and high-performance computing services. You might find it more cost-effective to use EC2 pricing models such as On-Demand, Spot, or Reserved Instances than to build out on-premises GPU compute clusters.
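
For example, a quick comparison of recent Spot prices can be scripted with boto3 along these lines; the instance type and one-hour lookback window are illustrative choices, and the sketch assumes AWS credentials and a default region are configured.

```python
import boto3
from datetime import datetime, timedelta

# Look up recent Spot prices for p3.2xlarge Linux instances in the default region.
ec2 = boto3.client('ec2')
history = ec2.describe_spot_price_history(
    InstanceTypes=['p3.2xlarge'],
    ProductDescriptions=['Linux/UNIX'],
    StartTime=datetime.utcnow() - timedelta(hours=1),
)
for record in history['SpotPriceHistory']:
    print(record['AvailabilityZone'], record['SpotPrice'], record['Timestamp'])
```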

Get started with P3 instances

To get started within minutes, use the AWS Deep Learning AMI, which comes pre-installed with popular deep learning frameworks such as Caffe2 and MXNet. Alternatively, use the NVIDIA AMI, which has the GPU driver and CUDA toolkit pre-installed.
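
A launch can also be scripted with boto3, roughly as in the sketch below; the AMI ID and key pair name are placeholders you would replace with the Deep Learning AMI ID for your Region and your own key pair.

```python
import boto3

# Launch a single p3.2xlarge instance. AMI_ID and 'my-key' are placeholders;
# look up the Deep Learning AMI ID for your Region in the EC2 console or Marketplace.
AMI_ID = 'ami-xxxxxxxxxxxxxxxxx'

ec2 = boto3.client('ec2')
resp = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType='p3.2xlarge',
    KeyName='my-key',
    MinCount=1,
    MaxCount=1,
)
print('Launched', resp['Instances'][0]['InstanceId'])
```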

Get started with AWS

Sign up for an AWS account

Instantly get access to the AWS Free Tier.

Learn with 10-minute Tutorials

Explore and learn with simple tutorials.

Start building with AWS

Begin building with step-by-step guides to help you launch your AWS project.

Learn more about Deep Learning on AWS

Learn more about High Performance Computing (HPC)
