Posted On: Apr 13, 2017
We are excited to announce that Amazon EC2 P2 instances are now available in the Asia Pacific (Seoul) and US East (Ohio) Regions.
P2 instances are ideal for compute-intensive applications that require high-performance GPU coprocessors and massively parallel floating point performance, such as deep learning, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering workloads.
P2 instances offer up to 16 NVIDIA Tesla® K80 GPUs with 192 GB of total video memory and 40,000 parallel processing cores, delivering 70 teraflops of single-precision and over 23 teraflops of double-precision floating point performance. P2 instance GPUs are connected to the same PCI fabric, reducing the latency of GPU-to-GPU transfers by up to 70% and enabling GPUDirect™ (peer-to-peer GPU communication) among up to 16 GPUs in a virtualized environment for the first time. P2 instances feature 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon® E5-2686 v4 (Broadwell) processors, and enhanced networking with the Amazon EC2 Elastic Network Adapter, which provides up to 20 Gbps of aggregate network bandwidth within a Placement Group.
To get started, use the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, or third-party libraries. To learn more, visit the Amazon EC2 page.
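As a quick illustration, a P2 instance can be launched from the CLI with `aws ec2 run-instances`. This is a minimal sketch only: the AMI ID, key pair name, and security group ID below are placeholders you would replace with your own values, and the command assumes the AWS CLI is installed and configured with credentials for a Region where P2 instances are available (such as us-east-2 for US East (Ohio)).

```shell
# Launch a single p2.xlarge instance (1 NVIDIA K80 GPU) in US East (Ohio).
# ami-xxxxxxxx, my-key-pair, and sg-xxxxxxxx are placeholders — substitute
# your own Deep Learning (or other GPU-ready) AMI, key pair, and security group.
aws ec2 run-instances \
    --region us-east-2 \
    --image-id ami-xxxxxxxx \
    --instance-type p2.xlarge \
    --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx \
    --count 1
```

For multi-GPU workloads you would swap in `p2.8xlarge` or `p2.16xlarge`; placing multiple instances in the same Placement Group takes advantage of the enhanced networking described above.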
Visit the Amazon EC2 Pricing page for more details.