Posted On: Nov 1, 2023

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML. With EC2 Capacity Blocks, you can reserve GPU instances in an Amazon EC2 UltraCluster starting on a future date, for just the amount of time that you require to run your machine learning (ML) workloads.

EC2 Capacity Blocks provide assured, predictable access to GPU instances for your ML workloads, along with low-latency, high-throughput connectivity through colocation in Amazon EC2 UltraClusters for distributed training. You can reserve GPU capacity for one to 14 days, in cluster sizes of one to 64 instances (512 GPUs), and up to eight weeks in advance, giving you the flexibility to run a broad range of ML workloads, including training and fine-tuning ML models, rapid prototyping, and handling surges in future demand.

EC2 Capacity Blocks are now available to reserve Amazon EC2 P5 instances, powered by the latest NVIDIA H100 Tensor Core GPUs, in the US East (Ohio) AWS Region.

To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs. To learn more, see Amazon EC2 Capacity Blocks for ML and the documentation.
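From the AWS CLI, reserving a block is a two-step flow: find an available offering, then purchase it. The sketch below assumes the `describe-capacity-block-offerings` and `purchase-capacity-block` EC2 commands introduced with this launch; the dates, offering ID, and 24-hour duration are placeholder values for illustration.

```shell
# Find Capacity Block offerings for one p5.48xlarge instance for 24 hours,
# starting within the given date window (dates are placeholders).
aws ec2 describe-capacity-block-offerings \
    --instance-type p5.48xlarge \
    --instance-count 1 \
    --capacity-duration-hours 24 \
    --start-date-range 2023-11-14T00:00:00Z \
    --end-date-range 2023-11-21T00:00:00Z \
    --region us-east-2

# Purchase a specific offering by its ID (placeholder shown). The response
# includes a Capacity Reservation that you can target when launching instances.
aws ec2 purchase-capacity-block \
    --capacity-block-offering-id cbo-0123456789abcdef0 \
    --instance-platform Linux/UNIX \
    --region us-east-2
```

The first command returns a list of offerings with their start dates and upfront fees, so you can compare time windows before committing to a purchase.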