EC2 Instance Type Update – T2, R4, F1, Elastic GPUs, I3, C5
Earlier today, AWS CEO Andy Jassy announced the next wave of updates to the EC2 instance roadmap. We are making updates to our high I/O, compute-optimized, and memory-optimized instances, expanding the range of burstable instances, and expanding into new areas of hardware acceleration, including FPGA-based computing. This blog post summarizes today’s announcements and links to a couple of other posts that contain additional information.
As part of our planning process for these new instances, we have spent a lot of time talking to our customers in order to make sure that we understand the challenges they face and the workloads that they plan to run on EC2 going forward. While the responses were diverse, in-memory analytics, multimedia processing, machine learning (aided by the newest AVX-512 instructions), and large-scale, storage-intensive ERP (Enterprise Resource Planning) applications were frequently mentioned.
These instances are available today:
New F1 Instances – The F1 instances give you access to game-changing programmable hardware known as a Field-Programmable Gate Array, or FPGA. You can write code that runs on the FPGA and speeds up many types of workloads – genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms – by up to 30 times. Today we are launching a developer preview of the F1 instances and a Hardware Development Kit, and are also giving you the ability to build FPGA-powered applications and services and sell them in AWS Marketplace. To learn more, read Developer Preview – EC2 Instances (F1) with Programmable Hardware.
New R4 Instances – The R4 instances are designed for today’s memory-intensive Business Intelligence, in-memory caching, and database applications and offer up to 488 GiB of memory. The R4 instances improve on the popular R3 instances with a larger L3 cache and higher memory speeds. On the network side, the R4 instances support up to 20 Gbps of ENA-powered network bandwidth when used within a Placement Group, along with 12 Gbps of dedicated throughput to EBS. Instances are available in six sizes, with up to 64 vCPUs and 488 GiB of memory. To learn more, read New – Next Generation (R4) Memory-Optimized EC2 Instances.
Expanded T2 Instances – The T2 instances offer great performance for workloads that do not need to use the full CPU on a consistent basis. Our customers use them for general purpose workloads such as application servers, web servers, development environments, continuous integration servers, and small databases. We’ll be adding the t2.xlarge (16 GiB of memory) and the t2.2xlarge (32 GiB of memory). Like the existing T2 instances, the new sizes will offer a generous amount of baseline performance (up to 4x that of the existing instances), along with the ability to burst to an entire core when you need more compute power. To learn more, read New T2.Xlarge and T2.2Xlarge Instances.
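The burst behavior above follows the T2 CPU-credit model: one credit lets one vCPU run at 100% for one minute, and each size earns credits at a fixed hourly rate that determines its sustainable baseline. Here is a minimal sketch of that arithmetic; the per-size earn rates below are illustrative assumptions, not figures from this post, so check the EC2 documentation for the authoritative numbers:

```python
# Sketch of the T2 CPU-credit model.
# One CPU credit = one vCPU at 100% for one minute, so the baseline
# (in full-core equivalents) is credits earned per hour divided by 60.

def baseline_cores(credits_per_hour):
    """Full-core equivalents an instance can sustain indefinitely."""
    return credits_per_hour / 60.0

# Assumed earn rates for the new sizes (illustrative placeholders):
t2_sizes = {
    "t2.xlarge": 54,    # ~0.9 cores of sustained baseline
    "t2.2xlarge": 81,   # ~1.35 cores of sustained baseline
}

for size, rate in t2_sizes.items():
    print(f"{size}: {baseline_cores(rate):.2f} baseline cores")
```

Unused credits accumulate while the instance idles, which is what makes the burst-to-a-full-core behavior possible for spiky workloads.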
And these are in the works:
New Elastic GPUs – You will soon be able to add high performance graphics acceleration to existing EC2 instance types, with your choice of 1 GiB to 8 GiB of GPU memory and compute power to match. The Amazon-optimized OpenGL library will automatically detect and make use of Elastic GPUs. We are launching this new EC2 feature in preview form today, along with the AWS Graphics Certification Program. To learn more about both items, read In the Works – Amazon EC2 Elastic GPUs.
New I3 Instances – I3 instances will be equipped with fast, low-latency, Non-Volatile Memory Express (NVMe) based Solid State Drives. They’ll deliver up to 3.3 million random IOPS at a 4 KB block size and up to 16 GB/second of disk throughput. These instances are designed to meet the needs of the most demanding I/O-intensive relational & NoSQL databases, transactional, and analytics workloads. I3 instances will be available in six sizes, with up to 64 vCPUs, 488 GiB of memory, and 15.2 TB of storage (perfect for those ERP applications). The instances will support the new Elastic Network Adapter (ENA).
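To put those two I3 figures in perspective, the throughput implied by random I/O at a given block size is simply IOPS times the block size. A quick back-of-the-envelope check (assuming a 4 KiB block, i.e. 4096 bytes) shows the random-I/O figure sits plausibly below the quoted 16 GB/s sequential peak:

```python
# Throughput implied by the quoted I3 random-I/O figure:
# 3.3 million IOPS at a 4 KB block size.
iops = 3.3e6
block_bytes = 4 * 1024                    # 4 KiB per operation
gb_per_sec = iops * block_bytes / 1e9     # decimal gigabytes

print(f"{gb_per_sec:.1f} GB/s of random 4 KB I/O")  # ~13.5 GB/s
```

That ~13.5 GB/s of small-block random I/O is below the 16 GB/s sequential figure, as you would expect for NVMe storage.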
New C5 Instances – C5 instances will be based on Intel’s brand new Xeon “Skylake” processor, running faster than the processors in any other EC2 instance. As the successor to Broadwell, Skylake supports AVX-512 for machine learning, multimedia, scientific, and financial operations which require top-notch support for floating point calculations. Instances will be available in six sizes, with up to 72 vCPUs and 144 GiB of memory. On the network side, they’ll support ENA and will be EBS-optimized by default.
I’ll be sharing more information about each of these instances as soon as it becomes available, so stay tuned!
Update! We have a webinar coming up on January 19th, where you can learn more. Sign up for it here.