Amazon EC2 P2 Instances are powerful, scalable instances that provide GPU-based parallel compute capabilities. For customers with graphics requirements, see G2 instances for more information.

P2 instances, designed for general-purpose GPU compute applications using CUDA and OpenCL, are ideally suited for machine learning, high performance databases, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other server-side workloads requiring massive parallel floating point processing power.

Use the Amazon Linux AMI, pre-installed with popular deep learning frameworks such as Caffe and MXNet, to get started quickly. You can also use the NVIDIA AMI, with the GPU driver and CUDA toolkit pre-installed, for rapid onboarding.
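As an illustration only (not part of this page's guidance), the short sketch below uses the boto3 Python SDK to launch a p2.xlarge instance from such an AMI. The AMI ID, region, and key pair name are placeholders to replace with the AMI available in your region.

```python
# Minimal sketch, assuming boto3 is installed and AWS credentials are configured.
# The AMI ID and key pair name are placeholders -- substitute the Amazon Linux
# deep learning AMI or the NVIDIA AMI offered in your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",    # placeholder AMI ID
    InstanceType="p2.xlarge",  # single-GPU P2 instance
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",     # placeholder EC2 key pair
)
print(response["Instances"][0]["InstanceId"])
```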


Powerful Performance

P2 instances provide up to 16 NVIDIA K80 GPUs, 64 vCPUs and 732 GiB of host memory, with a combined 192 GB of GPU memory, 40,000 parallel processing cores, 70 teraflops of single precision floating point performance, and over 23 teraflops of double precision floating point performance. P2 instances also offer GPUDirect™ (peer-to-peer GPU communication) capabilities for up to 16 GPUs, so that multiple GPUs can work together within a single host.
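As a minimal sketch of multi-GPU use within one host (assuming a CUDA-enabled MXNet build on the instance; not taken from this page), the snippet below allocates an array on one GPU and copies it directly to another. Whether the device-to-device copy uses GPUDirect peer-to-peer depends on the driver and GPU topology.

```python
import mxnet as mx

# Allocate a matrix on GPU 0, then copy it device-to-device to GPU 1.
# On P2 instances this transfer can take advantage of peer-to-peer
# (GPUDirect) communication between GPUs in the same host.
a = mx.nd.ones((1024, 1024), ctx=mx.gpu(0))
b = a.copyto(mx.gpu(1))

print(a.context, b.context)  # gpu(0) gpu(1)
```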

Scalable

Cluster P2 instances in a scale-out fashion with Amazon EC2 ENA-based Enhanced Networking, so you can run a high-performance, low-latency compute grid. P2 instances are well suited for distributed deep learning frameworks, such as MXNet, that scale out with near-perfect efficiency.
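To make the scale-out pattern concrete, here is a rough data-parallel training sketch with MXNet; the toy network and synthetic data are assumptions for illustration, not AWS-provided code. On one host it aggregates gradients across local GPUs with the "device" kvstore; across multiple P2 instances you would launch with MXNet's distributed tooling and a "dist_sync" kvstore instead.

```python
# Hedged sketch of data-parallel training on a multi-GPU P2 instance
# (assumes a CUDA-enabled MXNet build; network and data are toy placeholders).
import mxnet as mx

# Toy network: one fully connected layer with a softmax output.
data = mx.sym.Variable("data")
fc = mx.sym.FullyConnected(data, num_hidden=10)
net = mx.sym.SoftmaxOutput(fc, name="softmax")

# One context per local GPU, e.g. the 8 GPUs of a p2.8xlarge.
ctx = [mx.gpu(i) for i in range(8)]
mod = mx.mod.Module(net, context=ctx)

# Synthetic data purely for illustration.
train_iter = mx.io.NDArrayIter(
    data=mx.nd.random.uniform(shape=(1000, 100)),
    label=mx.nd.zeros((1000,)),
    batch_size=64,
)

# kvstore="device" aggregates gradients across the GPUs in this host;
# switching to kvstore="dist_sync" (run under MXNet's distributed launcher)
# scales training out across multiple P2 instances over Enhanced Networking.
mod.fit(
    train_iter,
    num_epoch=1,
    kvstore="device",
    optimizer="sgd",
    optimizer_params={"learning_rate": 0.1},
)
```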

Name          GPUs   vCPUs   RAM (GiB)   Network Bandwidth   Price/Hour*   RI Price/Hour**
p2.xlarge     1      4       61          High                $0.900        $0.425
p2.8xlarge    8      32      488         10 Gbps             $7.200        $3.400
p2.16xlarge   16     64      732         20 Gbps             $14.400       $6.800

* Pricing for US East (N. Virginia) and US West (Oregon)
** 3-Year Partial Upfront Reserved Instance
