Amazon EC2 C7i and C7i-flex instances

Compute optimized instances powered by 4th Generation Intel Xeon Scalable processors

Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex and C7i instances are next-generation compute optimized instances powered by custom 4th Generation Intel Xeon Scalable processors (code named Sapphire Rapids) and feature a 2:1 ratio of memory to vCPU. EC2 instances powered by these custom processors, available only on AWS, offer the best performance among comparable Intel processors in the cloud – up to 15% better performance than Intel processors utilized by other cloud providers.

C7i-flex instances provide the easiest way for you to get price performance benefits for a majority of compute-intensive workloads. They deliver up to 19% better price performance compared to C6i instances. C7i-flex instances offer the most common sizes, from large to 8xlarge, with up to 32 vCPUs, 64 GiB memory, and are a great first choice for applications that don't fully utilize all compute resources. C7i-flex instances are designed to seamlessly run the most common compute-intensive workloads, including web and application servers, databases, caches, Apache Kafka, and Elasticsearch.

C7i instances offer price performance benefits for workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage. C7i instances are ideal for workloads including batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. C7i instances deliver up to 15% better price performance compared to C6i instances.

Cost optimize with new Amazon EC2 Flex instances

Many customers do not fully utilize all the compute resources of an EC2 instance. Those customers are therefore paying for performance that they don’t need. Amazon EC2 C7i-flex instances offer the easiest way to achieve improved price performance for a majority of compute-intensive workloads. Amazon EC2 Flex instances efficiently use compute resources with the ability to scale up to full compute performance a majority of the time. Flex instances are purpose-built to optimize cost and performance.

Benefits

Optimized costs

C7i-flex instances offer the easiest way to optimize costs for a majority of compute-intensive workloads. They deliver up to 19% better price performance compared to C6i instances. C7i instances offer 15% better price performance compared to C6i instances. C7i provides additional larger instance sizes that enable consolidation and the ability to run more demanding and larger-sized workloads.

Flexibility and choice

C7i-flex and C7i instances add to the broadest and deepest selection of EC2 instances on AWS. C7i-flex offers the five most common sizes, from large to 8xlarge. C7i provides 11 sizes (including two bare-metal sizes: c7i.metal-24xl and c7i.metal-48xl) with varying amounts of vCPU, memory, networking, and storage.

Maximized resource efficiency

C7i-flex and C7i instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor. Nitro delivers practically all the compute and memory resources of the host hardware to your instances, for better overall performance and security. EC2 instances built on the Nitro System can deliver over 15% higher throughput on workloads than comparable instances from other cloud providers running the same CPU.

Features

Powered by 4th Generation Intel Xeon Scalable processors

C7i-flex and C7i instances are powered by custom 4th Generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz (max core turbo frequency of 3.8 GHz). These custom processors are only available on AWS and offer the best performance among comparable Intel processors. Both instances include support for always-on memory encryption using Intel Total Memory Encryption (TME).

High-performance interfaces

C7i-flex and C7i instances use DDR5 memory, which provides higher bandwidth than the DDR4 memory used in C6i instances. C7i-flex instances support up to 10 Gbps bandwidth to Amazon Elastic Block Store (Amazon EBS) and up to 12.5 Gbps of networking bandwidth. C7i instances support up to 40 Gbps bandwidth to Amazon EBS and up to 50 Gbps of networking bandwidth. Additionally, with C7i you can attach up to 128 EBS volumes to an instance (compared to C6i, which allowed up to 28 EBS volume attachments per instance). C7i instances also support Elastic Fabric Adapter (EFA) in the 48xlarge and metal-48xl sizes.
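As a quick sanity check on what those figures mean in practice, the sketch below converts the advertised bandwidths from gigabits per second to gigabytes per second (a plain 8-bits-per-byte conversion; the helper name is ours for illustration, not an AWS API):

```python
def gbps_to_gbytes_per_s(gbps: float) -> float:
    """Convert a bandwidth figure from gigabits/s to gigabytes/s."""
    return gbps / 8  # 8 bits per byte

# Top-end C7i figures quoted above:
# 50 Gbps of network bandwidth and 40 Gbps to EBS
network_gbs = gbps_to_gbytes_per_s(50)  # 6.25 GB/s
ebs_gbs = gbps_to_gbytes_per_s(40)      # 5.0 GB/s
print(f"network: {network_gbs} GB/s, EBS: {ebs_gbs} GB/s")
```

So the largest C7i sizes can move roughly 5 GB of EBS data per second, which is the ceiling to keep in mind when spreading I/O across many attached volumes.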

New accelerators

4th Generation Intel Xeon Scalable processors offer four new built-in accelerators. Advanced Matrix Extensions (AMX)—available on both C7i-flex and C7i instances—accelerate matrix multiplication operations for applications such as CPU-based ML. Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT)—available only on C7i bare-metal sizes—enable efficient offload and acceleration of data operations that help optimize performance for databases, encryption and compression, and queue management workloads.
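To check from a running Linux instance whether AMX is actually exposed to your guest, you can look for the AMX feature flags the kernel reports in /proc/cpuinfo. This is a minimal sketch: the flag names amx_tile, amx_bf16, and amx_int8 are the ones Linux typically reports on Sapphire Rapids, and the helper is illustrative, not an AWS or Intel tool:

```python
import platform
from pathlib import Path

# AMX feature flags as typically exposed by the Linux kernel
AMX_FLAGS = {"amx_tile", "amx_bf16", "amx_int8"}

def amx_support() -> dict:
    """Return which AMX CPU flags the host kernel reports (Linux only)."""
    if platform.system() != "Linux":
        return {flag: False for flag in AMX_FLAGS}
    flags: set[str] = set()
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break  # flags are identical across logical CPUs
    return {flag: flag in flags for flag in AMX_FLAGS}

print(amx_support())
```

On a C7i or C7i-flex instance all three flags should report True; on older generations (or inside containers with masked CPU features) they will not.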

Built on the Nitro System

The AWS Nitro System can be assembled in many different ways, allowing AWS to flexibly design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. Nitro Cards offload and accelerate I/O functions, increasing overall system performance.

Product details

  • C7i-flex
  • Amazon EC2 C7i-flex instances, powered by 4th Generation Intel Xeon Scalable processors, deliver up to 19% better price performance compared to C6i instances. C7i-flex instances are a great first choice for applications that don't fully utilize all compute resources. C7i-flex instances efficiently use compute resources to deliver a baseline level of performance with the ability to scale up to the full compute performance a majority of the time.

    | Instance size    | vCPU | Memory (GiB) | Baseline performance/vCPU | Instance storage (GB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps) |
    |------------------|------|--------------|---------------------------|-----------------------|--------------------------|----------------------|
    | c7i-flex.large   | 2    | 4            | 40%                       | EBS-Only              | Up to 12.5               | Up to 10             |
    | c7i-flex.xlarge  | 4    | 8            | 40%                       | EBS-Only              | Up to 12.5               | Up to 10             |
    | c7i-flex.2xlarge | 8    | 16           | 40%                       | EBS-Only              | Up to 12.5               | Up to 10             |
    | c7i-flex.4xlarge | 16   | 32           | 40%                       | EBS-Only              | Up to 12.5               | Up to 10             |
    | c7i-flex.8xlarge | 32   | 64           | 40%                       | EBS-Only              | Up to 12.5               | Up to 10             |

  • C7i
  • Amazon EC2 C7i instances, powered by 4th Generation Intel Xeon Scalable processors, deliver up to 15% better price performance compared to C6i instances.

    | Instance size   | vCPU | Memory (GiB) | Instance storage (GB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps) |
    |-----------------|------|--------------|-----------------------|--------------------------|----------------------|
    | c7i.large       | 2    | 4            | EBS-Only              | Up to 12.5               | Up to 10             |
    | c7i.xlarge      | 4    | 8            | EBS-Only              | Up to 12.5               | Up to 10             |
    | c7i.2xlarge     | 8    | 16           | EBS-Only              | Up to 12.5               | Up to 10             |
    | c7i.4xlarge     | 16   | 32           | EBS-Only              | Up to 12.5               | Up to 10             |
    | c7i.8xlarge     | 32   | 64           | EBS-Only              | 12.5                     | 10                   |
    | c7i.12xlarge    | 48   | 96           | EBS-Only              | 18.75                    | 15                   |
    | c7i.16xlarge    | 64   | 128          | EBS-Only              | 25                       | 20                   |
    | c7i.24xlarge    | 96   | 192          | EBS-Only              | 37.5                     | 30                   |
    | c7i.48xlarge    | 192  | 384          | EBS-Only              | 50                       | 40                   |
    | c7i.metal-24xl  | 96   | 192          | EBS-Only              | 37.5                     | 30                   |
    | c7i.metal-48xl  | 192  | 384          | EBS-Only              | 50                       | 40                   |
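The split between the two families suggests a simple rule of thumb for choosing between them. The sketch below encodes it as an illustrative heuristic of ours, not an official AWS sizing recommendation; the thresholds come from the 32-vCPU C7i-flex ceiling and the 40% per-vCPU baseline listed above:

```python
def recommend_family(avg_cpu_util: float, vcpus_needed: int) -> str:
    """Rough heuristic for choosing between C7i-flex and C7i.

    avg_cpu_util: sustained average CPU utilization as a fraction (0.0-1.0).
    vcpus_needed: number of vCPUs the workload requires.
    """
    if vcpus_needed > 32:
        return "c7i"          # flex sizes stop at 8xlarge (32 vCPUs)
    if avg_cpu_util >= 0.40:  # sustained load above the 40% flex baseline
        return "c7i"
    return "c7i-flex"         # bursty workload that fits a flex size

# A web tier averaging 20% CPU on 8 vCPUs fits flex; a batch job
# pinning 64 vCPUs needs the larger C7i sizes.
print(recommend_family(0.20, 8))   # c7i-flex
print(recommend_family(0.60, 8))   # c7i
print(recommend_family(0.20, 64))  # c7i
```

In practice you would base the utilization figure on CloudWatch CPU metrics over a representative period rather than a single snapshot.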

Customer testimonials

The Hugging Face Hub works as a central place where anyone can share, explore, discover, and experiment with open-source ML.

"At Hugging Face, we are proud of our work with Intel to accelerate the latest models on the latest generation of hardware, from Intel Xeon CPUs to Habana Gaudi AI accelerators, for the millions of AI builders using Hugging Face.

The new acceleration capabilities of Intel Xeon 4th Gen, readily available on Amazon EC2, introduce bfloat16 and INT8 support for transformers training and inference, thanks to Advanced Matrix Extensions (AMX).

By integrating Intel Extension for PyTorch (IPEX) into our Optimum-Intel library, we make it super easy for Hugging Face users to get the acceleration benefits with minimal code changes. Using the custom EC2 Gen 7 instances (such as Amazon EC2 R7iz and other instances), we reached an 8x speedup fine-tuning DistilBERT and were able to run inference 3x faster on the same transformers model. Likewise, we achieved a 6.5x speedup when generating images with a Stable Diffusion model."

Ella Charlaix, ML Engineer, Hugging Face