Posted On: Oct 25, 2017
We are excited to announce the availability of Amazon EC2 P3 instances, the next generation of EC2 compute-optimized GPU instances. P3 instances are powered by up to 8 of the latest-generation NVIDIA Tesla V100 GPUs and are ideal for computationally advanced workloads such as machine learning (ML), high performance computing (HPC), data compression, and cryptography. They are also ideal for specific industry applications such as scientific computing and simulations, financial analytics, and image and video processing.
P3 instances provide a powerful platform for ML and HPC, offering up to 64 vCPUs based on custom Intel Xeon E5 processors, 488 GB of RAM, and up to 25 Gbps of aggregate network bandwidth using Elastic Network Adapter technology.
Based on NVIDIA’s latest Volta architecture, each Tesla V100 GPU provides 125 TFLOPS of mixed-precision performance, 15.7 TFLOPS of single-precision (FP32) performance, and 7.8 TFLOPS of double-precision (FP64) performance. This is possible because each Tesla V100 GPU contains 5,120 CUDA Cores and 640 Tensor Cores. A 300 GB/s NVLink hyper-mesh interconnect enables high-speed, low-latency GPU-to-GPU communication.
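As a rough sanity check, these throughput figures follow directly from the core counts. The sketch below assumes the V100's published boost clock of about 1.53 GHz, which is not stated in this post:

```python
# Back-of-the-envelope check of the quoted per-GPU throughput figures.
# Assumes a ~1.53 GHz boost clock (published V100 spec, not stated in this post).
BOOST_CLOCK_GHZ = 1.53

cuda_cores = 5120
tensor_cores = 640

# FP32: each CUDA core retires one fused multiply-add (2 flops) per cycle.
fp32_tflops = cuda_cores * 2 * BOOST_CLOCK_GHZ / 1000  # ~15.7 TFLOPS

# FP64 runs at half the FP32 rate on Volta.
fp64_tflops = fp32_tflops / 2  # ~7.8 TFLOPS

# Mixed precision: each Tensor Core performs a 4x4x4 matrix FMA per cycle,
# i.e. 64 multiply-adds = 128 flops.
mixed_tflops = tensor_cores * 128 * BOOST_CLOCK_GHZ / 1000  # ~125 TFLOPS
```

The factor-of-8 gap between FP32 and mixed precision is exactly the Tensor Core contribution, which is why mixed-precision training sees the largest speedups on this hardware.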
For ML applications, P3 instances offer up to a 14x performance improvement over P2 instances, allowing developers to train their ML models in hours instead of days and bring their innovations to market faster.
P3 instances are available in three sizes: p3.2xlarge with 1 GPU, p3.8xlarge with 4 GPUs, and p3.16xlarge with 8 GPUs. They are available in the US East (N. Virginia), US West (Oregon), EU West (Ireland), and Asia Pacific (Tokyo) regions. Customers can purchase P3 instances as On-Demand Instances, Reserved Instances, Spot Instances, and on Dedicated Hosts.
To get started, use the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS SDKs. To learn more, visit the Amazon EC2 P3 instance page.
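As one way to get started programmatically, a P3 launch request can be built for the EC2 RunInstances API. This is a minimal sketch; the AMI ID and key pair name shown are placeholders, not real values:

```python
# Hypothetical sketch of requesting a single p3.2xlarge On-Demand instance.
# The AMI ID and key pair name below are placeholders; substitute your own.

def p3_run_request(ami_id, key_name, instance_type="p3.2xlarge"):
    """Build the parameter set for an EC2 RunInstances call."""
    return {
        "ImageId": ami_id,              # e.g. a Deep Learning AMI in your Region
        "InstanceType": instance_type,  # p3.2xlarge, p3.8xlarge, or p3.16xlarge
        "KeyName": key_name,
        "MinCount": 1,
        "MaxCount": 1,
    }

# With boto3 installed and credentials configured, the request would be sent as:
#   import boto3
#   boto3.client("ec2").run_instances(**p3_run_request("ami-0abc1234", "my-key"))
```

Keeping the parameters in a plain dictionary makes it easy to swap the instance size or reuse the same request for Spot Instances by adding the relevant market options.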