Amazon EC2 Instance Types
Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
General Purpose
General purpose instances provide a balance of compute, memory, and networking resources, and can be used for a wide variety of workloads. These instances are ideal for applications that use these resources in equal proportions, such as web servers and code repositories.
- M8g
- M7g
- M7i
- M7i-flex
- M7a
- Mac
- M6g
- M6i
- M6in
- M6a
- M5
- M5n
- M5zn
- M5a
- M4
- T4g
- T3
- T3a
- T2
M8g
Amazon EC2 M8g instances are powered by AWS Graviton4 processors. They deliver the best price performance in Amazon EC2 for general purpose workloads.
Features:
- Powered by custom-built AWS Graviton4 processors
- Larger instance sizes with up to 3x more vCPUs and memory than M7g instances
- Features the latest DDR5-5600 memory
- Optimized for Amazon EBS by default
- Supports Elastic Fabric Adapter (EFA) on m8g.24xlarge, m8g.48xlarge, m8g.metal-24xl, and m8g.metal-48xl
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance size | vCPU | Memory (GiB) | Instance storage (GB) | Network bandwidth (Gbps) | Amazon EBS bandwidth (Gbps) |
|---|---|---|---|---|---|
| m8g.medium | 1 | 4 | EBS-only | Up to 12.5 | Up to 10 |
| m8g.large | 2 | 8 | EBS-only | Up to 12.5 | Up to 10 |
| m8g.xlarge | 4 | 16 | EBS-only | Up to 12.5 | Up to 10 |
| m8g.2xlarge | 8 | 32 | EBS-only | Up to 15 | Up to 10 |
| m8g.4xlarge | 16 | 64 | EBS-only | Up to 15 | Up to 10 |
| m8g.8xlarge | 32 | 128 | EBS-only | 15 | 10 |
| m8g.12xlarge | 48 | 192 | EBS-only | 22.5 | 15 |
| m8g.16xlarge | 64 | 256 | EBS-only | 30 | 20 |
| m8g.24xlarge | 96 | 384 | EBS-only | 40 | 30 |
| m8g.48xlarge | 192 | 768 | EBS-only | 50 | 40 |
| m8g.metal-24xl | 96 | 384 | EBS-only | 40 | 30 |
| m8g.metal-48xl | 192 | 768 | EBS-only | 50 | 40 |
Use cases
Applications built on open source software such as application servers, microservices, gaming servers, midsize data stores, and caching fleets.
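Size selection within a family is a mechanical fit check against the vCPU and memory columns. A minimal, hypothetical sketch (the helper is illustrative, not an AWS API; the numbers come from the M8g size table above):

```python
# Pick the smallest M8g size that satisfies a workload's vCPU and memory
# requirements. (name, vCPU, memory GiB) triples mirror the table above;
# sizes are listed smallest-first, so the first match is the smallest fit.
M8G_SIZES = [
    ("m8g.medium", 1, 4),
    ("m8g.large", 2, 8),
    ("m8g.xlarge", 4, 16),
    ("m8g.2xlarge", 8, 32),
    ("m8g.4xlarge", 16, 64),
    ("m8g.8xlarge", 32, 128),
    ("m8g.12xlarge", 48, 192),
    ("m8g.16xlarge", 64, 256),
    ("m8g.24xlarge", 96, 384),
    ("m8g.48xlarge", 192, 768),
]

def smallest_fit(vcpus_needed, mem_gib_needed):
    """Return the first (smallest) size meeting both requirements, else None."""
    for name, vcpus, mem in M8G_SIZES:
        if vcpus >= vcpus_needed and mem >= mem_gib_needed:
            return name
    return None  # workload exceeds the largest size in the family

print(smallest_fit(12, 40))  # -> m8g.4xlarge (first size with >=12 vCPUs and >=40 GiB)
```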
M7g
Amazon EC2 M7g instances are powered by Arm-based AWS Graviton3 processors. They are ideal for general purpose applications.
Features:
- Powered by custom-built AWS Graviton3 processors
- Features the latest DDR5 memory that offers 50% more bandwidth compared to DDR4
- 20% higher enhanced networking bandwidth compared to M6g instances
- EBS-optimized by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M7gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter (EFA) on m7g.16xlarge, m7g.metal, m7gd.16xlarge, and m7gd.metal
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| m7g.medium | 1 | 4 | EBS-Only | Up to 12.5 | Up to 10 |
| m7g.large | 2 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| m7g.xlarge | 4 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| m7g.2xlarge | 8 | 32 | EBS-Only | Up to 15 | Up to 10 |
| m7g.4xlarge | 16 | 64 | EBS-Only | Up to 15 | Up to 10 |
| m7g.8xlarge | 32 | 128 | EBS-Only | 15 | 10 |
| m7g.12xlarge | 48 | 192 | EBS-Only | 22.5 | 15 |
| m7g.16xlarge | 64 | 256 | EBS-Only | 30 | 20 |
| m7g.metal | 64 | 256 | EBS-Only | 30 | 20 |
| m7gd.medium | 1 | 4 | 1 x 59 NVMe SSD | Up to 12.5 | Up to 10 |
| m7gd.large | 2 | 8 | 1 x 118 NVMe SSD | Up to 12.5 | Up to 10 |
| m7gd.xlarge | 4 | 16 | 1 x 237 NVMe SSD | Up to 12.5 | Up to 10 |
| m7gd.2xlarge | 8 | 32 | 1 x 474 NVMe SSD | Up to 15 | Up to 10 |
| m7gd.4xlarge | 16 | 64 | 1 x 950 NVMe SSD | Up to 15 | Up to 10 |
| m7gd.8xlarge | 32 | 128 | 1 x 1900 NVMe SSD | 15 | 10 |
| m7gd.12xlarge | 48 | 192 | 2 x 1425 NVMe SSD | 22.5 | 15 |
| m7gd.16xlarge | 64 | 256 | 2 x 1900 NVMe SSD | 30 | 20 |
| m7gd.metal | 64 | 256 | 2 x 1900 NVMe SSD | 30 | 20 |
All instances have the following specs:
- Custom-built AWS Graviton3 processor with 64-bit Arm cores
- EBS-optimized
- Enhanced networking†
Use cases
Applications built on open-source software such as application servers, microservices, gaming servers, midsize data stores, and caching fleets.
M7i
Amazon EC2 M7i instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 15% better price performance than M6i instances.
Features:
- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- New Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- 2 metal sizes: m7i.metal-24xl and m7i.metal-48xl
- Discrete built-in accelerators (available on M7i bare metal sizes only)—Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT)—enable efficient offload and acceleration of data operations that help optimize performance for databases, encryption and compression, and queue management workloads
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for up to 128 EBS volume attachments per instance
- Up to 192 vCPUs and 768 GiB memory
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| m7i.large | 2 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| m7i.xlarge | 4 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| m7i.2xlarge | 8 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| m7i.4xlarge | 16 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| m7i.8xlarge | 32 | 128 | EBS-Only | 12.5 | 10 |
| m7i.12xlarge | 48 | 192 | EBS-Only | 18.75 | 15 |
| m7i.16xlarge | 64 | 256 | EBS-Only | 25 | 20 |
| m7i.24xlarge | 96 | 384 | EBS-Only | 37.5 | 30 |
| m7i.48xlarge | 192 | 768 | EBS-Only | 50 | 40 |
| m7i.metal-24xl | 96 | 384 | EBS-Only | 37.5 | 30 |
| m7i.metal-48xl | 192 | 768 | EBS-Only | 50 | 40 |
Use cases
M7i instances are ideal for general-purpose workloads, especially those that need larger sizes or high continuous CPU usage, including large application servers, large databases, gaming servers, CPU-based machine learning, and video streaming.
M7i-flex
Amazon EC2 M7i-flex instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 19% better price performance than M6i instances.
Features:
- Easiest way for you to achieve price performance and cost benefits in the cloud for a majority of your general-purpose workloads
- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- New Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| m7i-flex.large | 2 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| m7i-flex.xlarge | 4 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| m7i-flex.2xlarge | 8 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| m7i-flex.4xlarge | 16 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| m7i-flex.8xlarge | 32 | 128 | EBS-Only | Up to 12.5 | Up to 10 |
Use cases
M7i-flex instances are a great first choice to seamlessly run a majority of general-purpose workloads, including web and application servers, virtual desktops, batch processing, microservices, databases, and enterprise applications.
M7a
Amazon EC2 M7a instances, powered by 4th Generation AMD EPYC processors, deliver up to 50% higher performance compared to M6a instances.
Features:
- Up to 3.7 GHz 4th generation AMD EPYC processors (AMD EPYC 9R14)
- Up to 50 Gbps of networking bandwidth
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS)
- Instance sizes with up to 192 vCPUs and 768 GiB of memory
- SAP-certified instances
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD secure memory encryption (SME)
- Support for new processor capabilities such as AVX-512, VNNI, and bfloat16
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| m7a.medium | 1 | 4 | EBS-Only | Up to 12.5 | Up to 10 |
| m7a.large | 2 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| m7a.xlarge | 4 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| m7a.2xlarge | 8 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| m7a.4xlarge | 16 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| m7a.8xlarge | 32 | 128 | EBS-Only | 12.5 | 10 |
| m7a.12xlarge | 48 | 192 | EBS-Only | 18.75 | 15 |
| m7a.16xlarge | 64 | 256 | EBS-Only | 25 | 20 |
| m7a.24xlarge | 96 | 384 | EBS-Only | 37.5 | 30 |
| m7a.32xlarge | 128 | 512 | EBS-Only | 50 | 40 |
| m7a.48xlarge | 192 | 768 | EBS-Only | 50 | 40 |
| m7a.metal-48xl | 192 | 768 | EBS-Only | 50 | 40 |
Use cases
Applications that benefit from high performance and high throughput such as financial applications, application servers, simulation modeling, gaming, mid-size data stores, application development environments, and caching fleets.
Mac
Amazon EC2 Mac instances allow you to run on-demand macOS workloads in the cloud, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. By using EC2 Mac instances, you can create apps for the iPhone, iPad, Mac, Vision Pro, Apple Watch, Apple TV, and Safari. These instances give developers access to macOS so they can develop, build, test, and sign applications that require the Xcode IDE. EC2 Mac instances are dedicated, bare-metal instances which are accessible in the EC2 console and via the AWS Command Line Interface as Dedicated Hosts.
x86-based EC2 Mac instances are built on Mac mini computers, featuring:
- Intel’s 8th generation 3.2 GHz (4.6 GHz turbo) Core i7 processors
- 6 physical and 12 logical cores
- 32 GiB of memory
- Instance storage is available through Amazon Elastic Block Store (EBS)
| Instance Size | vCPU | Memory (GiB) | Instance Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| mac1.metal | 12 | 32 | EBS-Only | 10 | 8 |

EC2 M1 Mac instances are built on Apple silicon Mac mini computers, featuring:
- Apple M1 chip with 8 CPU cores
- 8 GPU cores
- 16 GiB of memory
- 16-core Apple Neural Engine
- Instance storage is available through Amazon Elastic Block Store (EBS)
| Instance Size | vCPU | Memory (GiB) | Instance Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| mac2.metal | 8 | 16 | EBS-Only | 10 | 8 |

EC2 M1 Ultra Mac instances are built on Apple silicon Mac Studio computers, featuring:
- Apple M1 Ultra chip with 20 CPU cores
- 64 GPU cores
- 128 GiB of memory
- 32-core Apple Neural Engine
- Instance storage is available through Amazon Elastic Block Store (EBS)
| Instance Size | vCPU | Memory (GiB) | Instance Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| mac2-m1ultra.metal | 20 | 128 | EBS-Only | 10 | 8 |

EC2 M2 Mac instances are built on Apple silicon Mac mini computers, featuring:
- Apple M2 chip with 8 CPU cores
- 10 GPU cores
- 24 GiB of memory
- 16-core Apple Neural Engine
- Instance storage is available through Amazon Elastic Block Store (EBS)
| Instance Size | vCPU | Memory (GiB) | Instance Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| mac2-m2.metal | 8 | 24 | EBS-Only | 10 | 8 |

EC2 M2 Pro Mac instances are built on Apple silicon Mac mini computers, featuring:
- Apple M2 Pro chip with 12 CPU cores
- 19 GPU cores
- 32 GiB of memory
- 16-core Apple Neural Engine
- Instance storage is available through Amazon Elastic Block Store (EBS)
| Instance Size | vCPU | Memory (GiB) | Instance Storage | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| mac2-m2pro.metal | 12 | 32 | EBS-Only | 10 | 8 |

Use Cases
Developing, building, testing, and signing iOS, iPadOS, macOS, visionOS, watchOS, and tvOS applications in the Xcode IDE
M6g
Amazon EC2 M6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price/performance over current generation M5 instances and offer a balance of compute, memory, and networking resources for a broad set of workloads.
Features:
- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Support for Enhanced Networking with Up to 25 Gbps of Network bandwidth
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| m6g.medium | 1 | 4 | EBS-Only | Up to 10 | Up to 4,750 |
| m6g.large | 2 | 8 | EBS-Only | Up to 10 | Up to 4,750 |
| m6g.xlarge | 4 | 16 | EBS-Only | Up to 10 | Up to 4,750 |
| m6g.2xlarge | 8 | 32 | EBS-Only | Up to 10 | Up to 4,750 |
| m6g.4xlarge | 16 | 64 | EBS-Only | Up to 10 | 4,750 |
| m6g.8xlarge | 32 | 128 | EBS-Only | 12 | 9,000 |
| m6g.12xlarge | 48 | 192 | EBS-Only | 20 | 13,500 |
| m6g.16xlarge | 64 | 256 | EBS-Only | 25 | 19,000 |
| m6g.metal | 64 | 256 | EBS-Only | 25 | 19,000 |
| m6gd.medium | 1 | 4 | 1 x 59 NVMe SSD | Up to 10 | Up to 4,750 |
| m6gd.large | 2 | 8 | 1 x 118 NVMe SSD | Up to 10 | Up to 4,750 |
| m6gd.xlarge | 4 | 16 | 1 x 237 NVMe SSD | Up to 10 | Up to 4,750 |
| m6gd.2xlarge | 8 | 32 | 1 x 474 NVMe SSD | Up to 10 | Up to 4,750 |
| m6gd.4xlarge | 16 | 64 | 1 x 950 NVMe SSD | Up to 10 | 4,750 |
| m6gd.8xlarge | 32 | 128 | 1 x 1900 NVMe SSD | 12 | 9,000 |
| m6gd.12xlarge | 48 | 192 | 2 x 1425 NVMe SSD | 20 | 13,500 |
| m6gd.16xlarge | 64 | 256 | 2 x 1900 NVMe SSD | 25 | 19,000 |
| m6gd.metal | 64 | 256 | 2 x 1900 NVMe SSD | 25 | 19,000 |

All instances have the following specs:
- Custom built AWS Graviton2 Processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking
Use Cases
Applications built on open-source software such as application servers, microservices, gaming servers, mid-size data stores, and caching fleets.
M6i
Amazon EC2 M6i instances are powered by 3rd Generation Intel Xeon Scalable processors (Ice Lake). This family provides a balance of compute, memory, and network resources, and is a good choice for many applications.
Features:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 15% better compute price performance over M5 instances
- Up to 20% higher memory bandwidth per vCPU compared to M5 instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (EBS)
- A new instance size (32xlarge) with 128 vCPUs and 512 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge and metal sizes
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster processing of cryptographic algorithms
- With M6id instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M6i instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| m6i.large | 2 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| m6i.xlarge | 4 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| m6i.2xlarge | 8 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| m6i.4xlarge | 16 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| m6i.8xlarge | 32 | 128 | EBS-Only | 12.5 | 10 |
| m6i.12xlarge | 48 | 192 | EBS-Only | 18.75 | 15 |
| m6i.16xlarge | 64 | 256 | EBS-Only | 25 | 20 |
| m6i.24xlarge | 96 | 384 | EBS-Only | 37.5 | 30 |
| m6i.32xlarge | 128 | 512 | EBS-Only | 50 | 40 |
| m6i.metal | 128 | 512 | EBS-Only | 50 | 40 |
| m6id.large | 2 | 8 | 1x118 NVMe SSD | Up to 12.5 | Up to 10 |
| m6id.xlarge | 4 | 16 | 1x237 NVMe SSD | Up to 12.5 | Up to 10 |
| m6id.2xlarge | 8 | 32 | 1x474 NVMe SSD | Up to 12.5 | Up to 10 |
| m6id.4xlarge | 16 | 64 | 1x950 NVMe SSD | Up to 12.5 | Up to 10 |
| m6id.8xlarge | 32 | 128 | 1x1900 NVMe SSD | 12.5 | 10 |
| m6id.12xlarge | 48 | 192 | 2x1425 NVMe SSD | 18.75 | 15 |
| m6id.16xlarge | 64 | 256 | 2x1900 NVMe SSD | 25 | 20 |
| m6id.24xlarge | 96 | 384 | 4x1425 NVMe SSD | 37.5 | 30 |
| m6id.32xlarge | 128 | 512 | 4x1900 NVMe SSD | 50 | 40 |
| m6id.metal | 128 | 512 | 4x1900 NVMe SSD | 50 | 40 |

Use Cases
These instances are SAP-Certified and are ideal for workloads such as backend servers supporting enterprise applications (for example Microsoft Exchange and SharePoint, SAP Business Suite, MySQL, Microsoft SQL Server, and PostgreSQL databases), gaming servers, caching fleets, and application development environments.
M6in
Amazon EC2 M6in and M6idn instances are ideal for network-intensive workloads such as backend servers, enterprise applications, gaming servers, and caching fleets. Powered by 3rd Generation Intel Xeon Scalable processors (Ice Lake) with an all-core turbo frequency of 3.5 GHz, they offer up to 200 Gbps of network bandwidth and up to 100 Gbps of Amazon EBS bandwidth.
Features:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 20% higher memory bandwidth per vCPU compared to M5n and M5dn instances
- Up to 200 Gbps of networking speed, up to 2x that of M5n and M5dn instances
- Up to 100 Gbps of EBS bandwidth, up to 5.2x that of M5n and M5dn instances
- EFA support on the 32xlarge and metal sizes
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX-512) instructions for faster processing of cryptographic algorithms
- With M6idn instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M6idn instance
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| m6in.large | 2 | 8 | EBS-Only | Up to 25 | Up to 25 |
| m6in.xlarge | 4 | 16 | EBS-Only | Up to 30 | Up to 25 |
| m6in.2xlarge | 8 | 32 | EBS-Only | Up to 40 | Up to 25 |
| m6in.4xlarge | 16 | 64 | EBS-Only | Up to 50 | Up to 25 |
| m6in.8xlarge | 32 | 128 | EBS-Only | 50 | 25 |
| m6in.12xlarge | 48 | 192 | EBS-Only | 75 | 37.5 |
| m6in.16xlarge | 64 | 256 | EBS-Only | 100 | 50 |
| m6in.24xlarge | 96 | 384 | EBS-Only | 150 | 75 |
| m6in.32xlarge | 128 | 512 | EBS-Only | 200**** | 100 |
| m6in.metal | 128 | 512 | EBS-Only | 200**** | 100 |
| m6idn.large | 2 | 8 | 1x118 NVMe SSD | Up to 25 | Up to 25 |
| m6idn.xlarge | 4 | 16 | 1x237 NVMe SSD | Up to 30 | Up to 25 |
| m6idn.2xlarge | 8 | 32 | 1x474 NVMe SSD | Up to 40 | Up to 25 |
| m6idn.4xlarge | 16 | 64 | 1x950 NVMe SSD | Up to 50 | Up to 25 |
| m6idn.8xlarge | 32 | 128 | 1x1900 NVMe SSD | 50 | 25 |
| m6idn.12xlarge | 48 | 192 | 2x1425 NVMe SSD | 75 | 37.5 |
| m6idn.16xlarge | 64 | 256 | 2x1900 NVMe SSD | 100 | 50 |
| m6idn.24xlarge | 96 | 384 | 4x1425 NVMe SSD | 150 | 75 |
| m6idn.32xlarge | 128 | 512 | 4x1900 NVMe SSD | 200**** | 100 |
| m6idn.metal | 128 | 512 | 4x1900 NVMe SSD | 200**** | 100 |

**** For 32xlarge and metal sizes, at least two elastic network interfaces, each attached to a different network card, are required on the instance to achieve 200 Gbps throughput. Each network interface attached to a network card can achieve a maximum of 170 Gbps. For more information, see Network cards.
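The footnote's arithmetic is worth spelling out: with each interface capped at 170 Gbps, a single ENI cannot reach the instance-level 200 Gbps, so traffic must be spread across at least two ENIs on different network cards. A small illustrative sketch (the helper is hypothetical; the numbers come from the footnote above):

```python
import math

# Each ENI attached to a network card tops out at 170 Gbps; the instance-level
# limit on m6in/m6idn 32xlarge and metal sizes is 200 Gbps.
PER_ENI_MAX_GBPS = 170
INSTANCE_MAX_GBPS = 200

def min_enis(target_gbps, per_eni_gbps=PER_ENI_MAX_GBPS):
    """Minimum number of ENIs needed to aggregate up to target_gbps."""
    return math.ceil(target_gbps / per_eni_gbps)

print(min_enis(INSTANCE_MAX_GBPS))  # -> 2: one ENI (170 Gbps) is not enough
```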
All instances have the following specs:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors
- EBS-optimized
- Enhanced Networking†
Use Cases:
These instances are SAP-Certified and ideal for workloads that can take advantage of high networking throughput, including high-performance file systems, distributed web-scale in-memory caches, caching fleets, real-time big data analytics, Telco applications such as 5G User Plane Function (UPF), and application development environments.
M6a
Amazon EC2 M6a instances are powered by 3rd generation AMD EPYC processors and are an ideal fit for general purpose workloads.
Features:
- Up to 3.6 GHz 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 35% better compute price performance over M5a instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- Instance size with up to 192 vCPUs and 768 GiB of memory
- SAP-Certified instances
- Supports Elastic Fabric Adapter on the 48xlarge size
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD Transparent Single Key Memory Encryption (TSME)
- Support for new AMD Advanced Vector Extensions (AVX2) instructions for faster execution of cryptographic algorithms
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| m6a.large | 2 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| m6a.xlarge | 4 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| m6a.2xlarge | 8 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| m6a.4xlarge | 16 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| m6a.8xlarge | 32 | 128 | EBS-Only | 12.5 | 10 |
| m6a.12xlarge | 48 | 192 | EBS-Only | 18.75 | 15 |
| m6a.16xlarge | 64 | 256 | EBS-Only | 25 | 20 |
| m6a.24xlarge | 96 | 384 | EBS-Only | 37.5 | 30 |
| m6a.32xlarge | 128 | 512 | EBS-Only | 50 | 40 |
| m6a.48xlarge | 192 | 768 | EBS-Only | 50 | 40 |
| m6a.metal | 192 | 768 | EBS-Only | 50 | 40 |

All instances have the following specs:
- Up to 3.6 GHz 3rd generation AMD EPYC processors
- EBS Optimized
- Enhanced Networking†
Use Cases
These instances are SAP-Certified and are ideal for workloads such as backend servers supporting enterprise applications (for example, Microsoft Exchange and SharePoint, SAP Business Suite, MySQL, Microsoft SQL Server, and PostgreSQL databases), multiplayer gaming servers, caching fleets, and application development environments.
M5
Amazon EC2 M5 instances are the latest generation of General Purpose Instances powered by Intel Xeon Platinum 8175M or 8259CL processors. These instances provide a balance of compute, memory, and network resources and are a good choice for many applications.
Features:
- Up to 3.1 GHz Intel Xeon Scalable processor (Skylake 8175M or Cascade Lake 8259CL) with new Intel Advanced Vector Extension (AVX-512) instruction set
- New larger instance size, m5.24xlarge, offering 96 vCPUs and 384 GiB of memory
- Up to 25 Gbps network bandwidth using Enhanced Networking
- Requires HVM AMIs that include drivers for ENA and NVMe
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5 instance
- New 8xlarge and 16xlarge sizes now available.
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| m5.large | 2 | 8 | EBS-Only | Up to 10 | Up to 4,750 |
| m5.xlarge | 4 | 16 | EBS-Only | Up to 10 | Up to 4,750 |
| m5.2xlarge | 8 | 32 | EBS-Only | Up to 10 | Up to 4,750 |
| m5.4xlarge | 16 | 64 | EBS-Only | Up to 10 | 4,750 |
| m5.8xlarge | 32 | 128 | EBS-Only | 10 | 6,800 |
| m5.12xlarge | 48 | 192 | EBS-Only | 12 | 9,500 |
| m5.16xlarge | 64 | 256 | EBS-Only | 20 | 13,600 |
| m5.24xlarge | 96 | 384 | EBS-Only | 25 | 19,000 |
| m5.metal | 96* | 384 | EBS-Only | 25 | 19,000 |
| m5d.large | 2 | 8 | 1 x 75 NVMe SSD | Up to 10 | Up to 4,750 |
| m5d.xlarge | 4 | 16 | 1 x 150 NVMe SSD | Up to 10 | Up to 4,750 |
| m5d.2xlarge | 8 | 32 | 1 x 300 NVMe SSD | Up to 10 | Up to 4,750 |
| m5d.4xlarge | 16 | 64 | 2 x 300 NVMe SSD | Up to 10 | 4,750 |
| m5d.8xlarge | 32 | 128 | 2 x 600 NVMe SSD | 10 | 6,800 |
| m5d.12xlarge | 48 | 192 | 2 x 900 NVMe SSD | 12 | 9,500 |
| m5d.16xlarge | 64 | 256 | 4 x 600 NVMe SSD | 20 | 13,600 |
| m5d.24xlarge | 96 | 384 | 4 x 900 NVMe SSD | 25 | 19,000 |
| m5d.metal | 96* | 384 | 4 x 900 NVMe SSD | 25 | 19,000 |

* m5.metal and m5d.metal provide 96 logical processors on 48 physical cores; they run on single servers with two physical Intel sockets.
All instances have the following specs:
- Up to 3.1 GHz Intel Xeon Platinum Processor
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications
M5n
Amazon EC2 M5 instances are ideal for workloads that require a balance of compute, memory, and networking resources, including web and application servers, small and mid-sized databases, cluster computing, gaming servers, and caching fleets. The higher-bandwidth M5n and M5dn instance variants are ideal for applications that can take advantage of improved network throughput and packet-rate performance.
Features:
- 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8259CL) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single-core turbo frequency of 3.5 GHz
- Support for the new Intel Vector Neural Network Instructions (AVX-512 VNNI) which will help speed up typical machine learning operations like convolution, and automatically improve inference performance over a wide range of deep learning workloads
- 25 Gbps of peak bandwidth on smaller instance sizes
- 100 Gbps of network bandwidth on the largest instance size
- Requires HVM AMIs that include drivers for ENA and NVMe
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M5dn instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5 instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| m5n.large | 2 | 8 | EBS-Only | Up to 25 | Up to 4,750 |
| m5n.xlarge | 4 | 16 | EBS-Only | Up to 25 | Up to 4,750 |
| m5n.2xlarge | 8 | 32 | EBS-Only | Up to 25 | Up to 4,750 |
| m5n.4xlarge | 16 | 64 | EBS-Only | Up to 25 | 4,750 |
| m5n.8xlarge | 32 | 128 | EBS-Only | 25 | 6,800 |
| m5n.12xlarge | 48 | 192 | EBS-Only | 50 | 9,500 |
| m5n.16xlarge | 64 | 256 | EBS-Only | 75 | 13,600 |
| m5n.24xlarge | 96 | 384 | EBS-Only | 100 | 19,000 |
| m5n.metal | 96* | 384 | EBS-Only | 100 | 19,000 |
| m5dn.large | 2 | 8 | 1 x 75 NVMe SSD | Up to 25 | Up to 4,750 |
| m5dn.xlarge | 4 | 16 | 1 x 150 NVMe SSD | Up to 25 | Up to 4,750 |
| m5dn.2xlarge | 8 | 32 | 1 x 300 NVMe SSD | Up to 25 | Up to 4,750 |
| m5dn.4xlarge | 16 | 64 | 2 x 300 NVMe SSD | Up to 25 | 4,750 |
| m5dn.8xlarge | 32 | 128 | 2 x 600 NVMe SSD | 25 | 6,800 |
| m5dn.12xlarge | 48 | 192 | 2 x 900 NVMe SSD | 50 | 9,500 |
| m5dn.16xlarge | 64 | 256 | 4 x 600 NVMe SSD | 75 | 13,600 |
| m5dn.24xlarge | 96 | 384 | 4 x 900 NVMe SSD | 100 | 19,000 |
| m5dn.metal | 96* | 384 | 4 x 900 NVMe SSD | 100 | 19,000 |

* m5n.metal and m5dn.metal provide 96 logical processors on 48 physical cores.
All instances have the following specs:
- Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo, Intel DL Boost
- EBS Optimized
- Enhanced Networking†
Use Cases
Web and application servers, small and mid-sized databases, cluster computing, gaming servers, caching fleets, and other enterprise applications
M5zn
Amazon EC2 M5zn instances deliver the fastest Intel Xeon Scalable processors in the cloud, with an all-core turbo frequency up to 4.5 GHz.
Features:
- 2nd Generation Intel Xeon Scalable Processors (Cascade Lake 8252C) with an all-core turbo frequency up to 4.5 GHz
- Up to 100 Gbps of network bandwidth on the largest instance size and bare metal variant
- Up to 19 Gbps to the Amazon Elastic Block Store
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- 12x and metal sizes of M5zn instances leverage the latest generation of the Elastic Network Adapter and enable consistent low latency with Elastic Fabric Adapter
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| m5zn.large | 2 | 8 | EBS-Only | Up to 25 | Up to 3,170 |
| m5zn.xlarge | 4 | 16 | EBS-Only | Up to 25 | Up to 3,170 |
| m5zn.2xlarge | 8 | 32 | EBS-Only | Up to 25 | 3,170 |
| m5zn.3xlarge | 12 | 48 | EBS-Only | Up to 25 | 4,750 |
| m5zn.6xlarge | 24 | 96 | EBS-Only | 50 | 9,500 |
| m5zn.12xlarge | 48 | 192 | EBS-Only | 100 | 19,000 |
| m5zn.metal | 48 | 192 | EBS-Only | 100 | 19,000 |

Use Cases
M5zn instances are an ideal fit for applications that benefit from extremely high single-thread performance and high throughput, low latency networking, such as gaming, High Performance Computing, and simulation modeling for the automotive, aerospace, energy, and telecommunication industries.
M5a
Amazon EC2 M5a instances are the latest generation of General Purpose Instances powered by AMD EPYC 7000 series processors. M5a instances deliver up to 10% cost savings over comparable instance types. With M5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance.
Features:
- AMD EPYC 7000 series processors (AMD EPYC 7571) with an all core turbo clock speed of 2.5 GHz
- Up to 20 Gbps network bandwidth using Enhanced Networking
- Requires HVM AMIs that include drivers for ENA and NVMe
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With M5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the M5a instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| m5a.large | 2 | 8 | EBS-Only | Up to 10 | Up to 2,880 |
| m5a.xlarge | 4 | 16 | EBS-Only | Up to 10 | Up to 2,880 |
| m5a.2xlarge | 8 | 32 | EBS-Only | Up to 10 | Up to 2,880 |
| m5a.4xlarge | 16 | 64 | EBS-Only | Up to 10 | 2,880 |
| m5a.8xlarge | 32 | 128 | EBS-Only | Up to 10 | 4,750 |
| m5a.12xlarge | 48 | 192 | EBS-Only | 10 | 6,780 |
| m5a.16xlarge | 64 | 256 | EBS-Only | 12 | 9,500 |
| m5a.24xlarge | 96 | 384 | EBS-Only | 20 | 13,570 |
| m5ad.large | 2 | 8 | 1 x 75 NVMe SSD | Up to 10 | Up to 2,880 |
| m5ad.xlarge | 4 | 16 | 1 x 150 NVMe SSD | Up to 10 | Up to 2,880 |
| m5ad.2xlarge | 8 | 32 | 1 x 300 NVMe SSD | Up to 10 | Up to 2,880 |
| m5ad.4xlarge | 16 | 64 | 2 x 300 NVMe SSD | Up to 10 | 2,880 |
| m5ad.8xlarge | 32 | 128 | 2 x 600 NVMe SSD | Up to 10 | 4,750 |
| m5ad.12xlarge | 48 | 192 | 2 x 900 NVMe SSD | 10 | 6,870 |
| m5ad.16xlarge | 64 | 256 | 4 x 600 NVMe SSD | 12 | 9,500 |
| m5ad.24xlarge | 96 | 384 | 4 x 900 NVMe SSD | 20 | 13,570 |

All instances have the following specs:
- 2.5 GHz AMD EPYC 7000 series processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications
M4
Amazon EC2 M4 instances provide a balance of compute, memory, and network resources, and are a good choice for many applications.
Features:
- Up to 2.4 GHz Intel Xeon Scalable Processor (Broadwell E5-2686 v4 or Haswell E5-2676 v3)
- EBS-optimized by default at no additional cost
- Support for Enhanced Networking
- Balance of compute, memory, and network resources
| Instance | vCPU* | Mem (GiB) | Storage | Dedicated EBS Bandwidth (Mbps) | Network Performance*** |
|---|---|---|---|---|---|
| m4.large | 2 | 8 | EBS-only | 450 | Moderate |
| m4.xlarge | 4 | 16 | EBS-only | 750 | High |
| m4.2xlarge | 8 | 32 | EBS-only | 1,000 | High |
| m4.4xlarge | 16 | 64 | EBS-only | 2,000 | High |
| m4.10xlarge | 40 | 160 | EBS-only | 4,000 | 10 Gigabit |
| m4.16xlarge | 64 | 256 | EBS-only | 10,000 | 25 Gigabit |

All instances have the following specs:
- 2.4 GHz Intel Xeon E5-2676 v3** Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Small and mid-size databases, data processing tasks that require additional memory, caching fleets, and for running backend servers for SAP, Microsoft SharePoint, cluster computing, and other enterprise applications.
T4g
Amazon EC2 T4g instances are powered by Arm-based custom built AWS Graviton2 processors and deliver up to 40% better price performance over T3 instances for a broad set of burstable general purpose workloads.
T4g instances accumulate CPU credits when a workload operates below the baseline threshold. Each earned CPU credit lets the T4g instance burst with the performance of a full CPU core for one minute when needed. In Unlimited mode, T4g instances can burst at any time for as long as required.
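The credit mechanics lend themselves to back-of-envelope math. A simplified sketch for a t4g.small, using its earn rate from the size table below (24 credits/hour) and the definition above (one credit funds one vCPU at 100% for one minute); the helper is illustrative and ignores both baseline consumption while idle and credits earned during the burst itself:

```python
# t4g.small earns 24 CPU credits per hour (from the size table).
EARN_RATE_PER_HR = 24

def burst_minutes(idle_hours, busy_vcpus=1):
    """Minutes of full-utilization burst funded by credits accrued while
    (approximately) idle. Each vCPU at 100% burns one credit per minute."""
    credits = idle_hours * EARN_RATE_PER_HR
    return credits / busy_vcpus

# After sitting roughly idle overnight (8 h):
print(burst_minutes(8))     # -> 192.0 minutes with one vCPU at 100%
print(burst_minutes(8, 2))  # -> 96.0 minutes with both vCPUs at 100%
```

This is why the baseline (20% per vCPU on t4g.small) balances the earn rate: 2 vCPUs x 20% x 60 minutes = 24 credits consumed per hour, exactly what the instance earns.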
Features:
- Free trial for t4g.small instances for up to 750 hours per month until December 31st, 2024. Refer to the FAQ for details.
- Burstable CPU, governed by CPU Credits, and consistent baseline performance
- Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Baseline Performance / vCPU | CPU Credits Earned / Hr | Network Burst Bandwidth (Gbps)*** | EBS Burst Bandwidth (Mbps) |
|---|---|---|---|---|---|---|
| t4g.nano | 2 | 0.5 | 5% | 6 | Up to 5 | Up to 2,085 |
| t4g.micro | 2 | 1 | 10% | 12 | Up to 5 | Up to 2,085 |
| t4g.small | 2 | 2 | 20% | 24 | Up to 5 | Up to 2,085 |
| t4g.medium | 2 | 4 | 20% | 24 | Up to 5 | Up to 2,085 |
| t4g.large | 2 | 8 | 30% | 36 | Up to 5 | Up to 2,780 |
| t4g.xlarge | 4 | 16 | 40% | 96 | Up to 5 | Up to 2,780 |
| t4g.2xlarge | 8 | 32 | 40% | 192 | Up to 5 | Up to 2,780 |

All instances have the following specs:
- Custom built AWS Graviton2 Processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking
Use Cases:
Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications.
-
T3
-
Amazon EC2 T3 instances are the next generation burstable general-purpose instance type that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances offer a balance of compute, memory, and network resources and are designed for applications with moderate CPU usage that experience temporary spikes in use.
T3 instances accumulate CPU credits when a workload is operating below the baseline threshold. Each earned CPU credit provides the T3 instance the opportunity to burst with the performance of a full CPU core for one minute when needed. T3 instances can burst at any time for as long as required in Unlimited mode.
Features:
- Up to 3.1 GHz Intel Xeon Scalable processor (Skylake 8175M or Cascade Lake 8259CL)
- Burstable CPU, governed by CPU Credits, and consistent baseline performance
- Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- AWS Nitro System and high frequency Intel Xeon Scalable processors result in up to a 30% price performance improvement over T2 instances
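Switching a burstable instance between Unlimited and Standard modes is done through the EC2 ModifyInstanceCreditSpecification API. A minimal boto3 sketch (the instance ID is a placeholder, and the call itself is commented out since it requires AWS credentials):

```python
# Sketch: toggling Unlimited vs. Standard credit mode on a T-family instance.
def credit_spec(instance_id: str, unlimited: bool) -> dict:
    """Build the request body for modify_instance_credit_specification."""
    return {
        "InstanceCreditSpecifications": [
            {"InstanceId": instance_id,
             "CpuCredits": "unlimited" if unlimited else "standard"}
        ]
    }

params = credit_spec("i-0123456789abcdef0", unlimited=False)  # placeholder ID

# Requires AWS credentials; uncomment to apply:
# import boto3
# boto3.client("ec2").modify_instance_credit_specification(**params)
print(params["InstanceCreditSpecifications"][0]["CpuCredits"])  # standard
```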
| Instance | vCPU* | CPU Credits/hour | Mem (GiB) | Storage | Network Performance (Gbps)*** |
|---|---|---|---|---|---|
| t3.nano | 2 | 6 | 0.5 | EBS-Only | Up to 5 |
| t3.micro | 2 | 12 | 1 | EBS-Only | Up to 5 |
| t3.small | 2 | 24 | 2 | EBS-Only | Up to 5 |
| t3.medium | 2 | 24 | 4 | EBS-Only | Up to 5 |
| t3.large | 2 | 36 | 8 | EBS-Only | Up to 5 |
| t3.xlarge | 4 | 96 | 16 | EBS-Only | Up to 5 |
| t3.2xlarge | 8 | 192 | 32 | EBS-Only | Up to 5 |

All instances have the following specs:
- Up to 3.1 GHz Intel Xeon Scalable processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases:
Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications
-
T3a
-
Amazon EC2 T3a instances are the next generation burstable general-purpose instance type that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3a instances offer a balance of compute, memory, and network resources and are designed for applications with moderate CPU usage that experience temporary spikes in use. T3a instances deliver up to 10% cost savings over comparable instance types.
T3a instances accumulate CPU credits when a workload is operating below the baseline threshold. Each earned CPU credit provides the T3a instance the opportunity to burst with the performance of a full CPU core for one minute when needed. T3a instances can burst at any time for as long as required in Unlimited mode.
Features:
- AMD EPYC 7000 series processors (AMD EPYC 7571) with an all core turbo clock speed of 2.5 GHz
- Burstable CPU, governed by CPU Credits, and consistent baseline performance
- Unlimited mode by default to ensure performance during peak periods and Standard mode option for a predictable monthly cost
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance | vCPU* | CPU Credits/hour | Mem (GiB) | Storage | Network Performance (Gbps)*** |
|---|---|---|---|---|---|
| t3a.nano | 2 | 6 | 0.5 | EBS-Only | Up to 5 |
| t3a.micro | 2 | 12 | 1 | EBS-Only | Up to 5 |
| t3a.small | 2 | 24 | 2 | EBS-Only | Up to 5 |
| t3a.medium | 2 | 24 | 4 | EBS-Only | Up to 5 |
| t3a.large | 2 | 36 | 8 | EBS-Only | Up to 5 |
| t3a.xlarge | 4 | 96 | 16 | EBS-Only | Up to 5 |
| t3a.2xlarge | 8 | 192 | 32 | EBS-Only | Up to 5 |

All instances have the following specs:
- 2.5 GHz AMD EPYC 7000 series processors
- EBS Optimized
- Enhanced Networking†
Use Cases:
Micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications
-
T2
-
Amazon EC2 T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline.
T2 Unlimited instances can sustain high CPU performance for as long as a workload needs it. For most general-purpose workloads, T2 Unlimited instances will provide ample performance without any additional charges. If the instance needs to run at higher CPU utilization for a prolonged period, it can also do so at a flat additional charge of 5 cents per vCPU-hour.
The baseline performance and ability to burst are governed by CPU Credits. T2 instances receive CPU Credits continuously at a set rate depending on the instance size, accumulating CPU Credits when they are idle, and consuming CPU credits when they are active. T2 instances are a good choice for a variety of general-purpose workloads including micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development, build and stage environments, code repositories, and product prototypes. For more information see Burstable Performance Instances.
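The flat 5-cents-per-vCPU-hour surplus charge above can be estimated ahead of time. A small sketch; the workload figures below are hypothetical, and the baseline is the approximate 20%-per-vCPU figure for a t2.medium:

```python
# Rough cost of sustained above-baseline CPU on a T2 Unlimited instance.
# The surplus rate is $0.05 per vCPU-hour (per the text above); the
# workload figures are hypothetical.
SURPLUS_RATE = 0.05   # USD per vCPU-hour not covered by earned credits

def surplus_cost(vcpus: int, avg_util: float, baseline: float, hours: float) -> float:
    """Charge applies only to average utilization above the instance baseline."""
    excess = max(avg_util - baseline, 0.0)
    return vcpus * excess * hours * SURPLUS_RATE

# t2.medium (2 vCPUs, ~20% baseline per vCPU) averaging 60% CPU for 720 hours:
print(round(surplus_cost(2, 0.60, 0.20, 720), 2))  # 28.8
```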
Features:
- Up to 3.3 GHz Intel Xeon processors (Haswell E5-2676 v3 or Broadwell E5-2686 v4)
- High frequency Intel Xeon processors
- Burstable CPU, governed by CPU Credits, and consistent baseline performance
- Low-cost general purpose instance type, and Free Tier eligible*
- Balance of compute, memory, and network resources
* t2.micro only. If configured as T2 Unlimited, charges may apply if average CPU utilization exceeds the baseline of the instance. See documentation for more details.
| Instance | vCPU* | CPU Credits/hour | Mem (GiB) | Storage | Network Performance |
|---|---|---|---|---|---|
| t2.nano | 1 | 3 | 0.5 | EBS-Only | Low |
| t2.micro | 1 | 6 | 1 | EBS-Only | Low to Moderate |
| t2.small | 1 | 12 | 2 | EBS-Only | Low to Moderate |
| t2.medium | 2 | 24 | 4 | EBS-Only | Low to Moderate |
| t2.large | 2 | 36 | 8 | EBS-Only | Low to Moderate |
| t2.xlarge | 4 | 54 | 16 | EBS-Only | Moderate |
| t2.2xlarge | 8 | 81 | 32 | EBS-Only | Moderate |

All instances have the following specs:
- Intel AVX†, Intel Turbo†
- t2.nano, t2.micro, t2.small, and t2.medium have up to 3.3 GHz Intel Xeon processors
- t2.large, t2.xlarge, and t2.2xlarge have up to 3.0 GHz Intel Xeon processors
Use Cases
Websites and web applications, development environments, build servers, code repositories, micro services, test and staging environments, and line of business applications.
Each vCPU on Graviton-based Amazon EC2 instances is a core of an AWS Graviton processor.
Each vCPU on non-Graviton-based Amazon EC2 instances is a thread of an x86-based processor, except for M7a instances, T2 instances, and m3.medium.
† AVX, AVX2, and Enhanced Networking are only available on instances launched with HVM AMIs.
* This is the default and maximum number of vCPUs available for this instance type. You can specify a custom number of vCPUs when launching this instance type. For more details on valid vCPU counts and how to start using this feature, visit the Optimize CPUs documentation page here.
** These M4 instances may launch on an Intel Xeon E5-2686 v4 (Broadwell) processor.
*** Instances marked with "Up to" Network Bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.
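The Optimize CPUs feature referenced in the footnotes above is exposed through the CpuOptions parameter of the EC2 RunInstances API. A minimal boto3-style sketch (the AMI ID is a placeholder, and the launch call is commented out since it requires AWS credentials):

```python
# Sketch: requesting a custom vCPU count at launch via CpuOptions (Optimize CPUs).
def run_instances_kwargs(instance_type: str, cores: int, threads_per_core: int) -> dict:
    """Build keyword arguments for ec2_client.run_instances()."""
    return {
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI ID
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "CpuOptions": {"CoreCount": cores, "ThreadsPerCore": threads_per_core},
    }

# e.g. an m5.2xlarge (8 vCPUs by default) launched with 4 cores and SMT disabled:
kwargs = run_instances_kwargs("m5.2xlarge", cores=4, threads_per_core=1)
# import boto3; boto3.client("ec2").run_instances(**kwargs)  # needs credentials
print(kwargs["CpuOptions"])  # {'CoreCount': 4, 'ThreadsPerCore': 1}
```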
Compute Optimized
Compute Optimized instances are ideal for compute bound applications that benefit from high performance processors. Instances belonging to this category are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications.
-
C8g
-
C7g
-
C7gn
-
C7i
-
C7i-flex
-
C7a
-
C6g
-
C6gn
-
C6i
-
C6in
-
C6a
-
C5
-
C5n
-
C5a
-
C4
-
C8g
-
Amazon EC2 C8g instances are powered by AWS Graviton4 processors. They deliver the best price performance in Amazon EC2 for compute-intensive workloads.
Features:
- Powered by custom-built AWS Graviton4 processors
- Larger instance sizes with up to 3x more vCPUs and memory than C7g instances
- Features the latest DDR5-5600 memory
- Optimized for Amazon EBS by default
- Supports Elastic Fabric Adapter (EFA) on c8g.24xlarge, c8g.48xlarge, c8g.metal-24xl, and c8g.metal-48xl
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance size | vCPU | Memory (GiB) | Instance storage (GB) | Network bandwidth (Gbps) | Amazon EBS bandwidth (Gbps) |
|---|---|---|---|---|---|
| c8g.medium | 1 | 2 | EBS-only | Up to 12.5 | Up to 10 |
| c8g.large | 2 | 4 | EBS-only | Up to 12.5 | Up to 10 |
| c8g.xlarge | 4 | 8 | EBS-only | Up to 12.5 | Up to 10 |
| c8g.2xlarge | 8 | 16 | EBS-only | Up to 15 | Up to 10 |
| c8g.4xlarge | 16 | 32 | EBS-only | Up to 15 | Up to 10 |
| c8g.8xlarge | 32 | 64 | EBS-only | 15 | 10 |
| c8g.12xlarge | 48 | 96 | EBS-only | 22.5 | 15 |
| c8g.16xlarge | 64 | 128 | EBS-only | 30 | 20 |
| c8g.24xlarge | 96 | 192 | EBS-only | 40 | 30 |
| c8g.48xlarge | 192 | 384 | EBS-only | 50 | 40 |
| c8g.metal-24xl | 96 | 192 | EBS-only | 40 | 30 |
| c8g.metal-48xl | 192 | 384 | EBS-only | 50 | 40 |
Use cases
High performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modeling, distributed analytics, and CPU-based ML inference.
-
C7g
-
Amazon EC2 C7g instances are powered by Arm-based AWS Graviton3 processors. They are ideal for compute-intensive workloads.
Features:
- Powered by custom-built AWS Graviton3 processors
- Featuring the latest DDR5 memory that offers 50% more bandwidth compared to DDR4
- 20% higher enhanced networking bandwidth compared to C6g instances
- EBS-optimized by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With C7gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter on c7g.16xlarge, c7g.metal, c7gd.16xlarge, and c7gd.metal instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c7g.medium | 1 | 2 | EBS-Only | Up to 12.5 | Up to 10 |
| c7g.large | 2 | 4 | EBS-Only | Up to 12.5 | Up to 10 |
| c7g.xlarge | 4 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| c7g.2xlarge | 8 | 16 | EBS-Only | Up to 15 | Up to 10 |
| c7g.4xlarge | 16 | 32 | EBS-Only | Up to 15 | Up to 10 |
| c7g.8xlarge | 32 | 64 | EBS-Only | 15 | 10 |
| c7g.12xlarge | 48 | 96 | EBS-Only | 22.5 | 15 |
| c7g.16xlarge | 64 | 128 | EBS-Only | 30 | 20 |
| c7g.metal | 64 | 128 | EBS-Only | 30 | 20 |
| c7gd.medium | 1 | 2 | 1 x 59 NVMe SSD | Up to 12.5 | Up to 10 |
| c7gd.large | 2 | 4 | 1 x 118 NVMe SSD | Up to 12.5 | Up to 10 |
| c7gd.xlarge | 4 | 8 | 1 x 237 NVMe SSD | Up to 12.5 | Up to 10 |
| c7gd.2xlarge | 8 | 16 | 1 x 474 NVMe SSD | Up to 15 | Up to 10 |
| c7gd.4xlarge | 16 | 32 | 1 x 950 NVMe SSD | Up to 15 | Up to 10 |
| c7gd.8xlarge | 32 | 64 | 1 x 1900 NVMe SSD | 15 | 10 |
| c7gd.12xlarge | 48 | 96 | 2 x 1425 NVMe SSD | 22.5 | 15 |
| c7gd.16xlarge | 64 | 128 | 2 x 1900 NVMe SSD | 30 | 20 |
| c7gd.metal | 64 | 128 | 2 x 1900 NVMe SSD | 30 | 20 |
All instances have the following specs:
- Custom-built AWS Graviton3 Processor with 64-bit Arm cores
- EBS optimized
- Enhanced networking
Use Cases
High performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference.
-
C7gn
-
Amazon EC2 C7gn instances are powered by Arm-based AWS Graviton3E processors. They offer up to 200 Gbps of network bandwidth and up to 3x higher packet-processing performance per vCPU compared with comparable current generation x86-based network optimized instances.
Features:
- Powered by custom-built AWS Graviton3E processors
- Featuring the latest Double Data Rate 5 (DDR5) memory that offers 50% more bandwidth compared to DDR4
- Up to 200 Gbps of networking bandwidth
- Up to 40 Gbps of bandwidth to Amazon Elastic Block Store (EBS)
- 2x higher enhanced network bandwidth compared to C6gn instances
- EBS-optimized, by default
- Supports Elastic Fabric Adapter (EFA) on c7gn.16xlarge and c7gn.metal instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c7gn.medium | 1 | 2 | EBS-Only | Up to 25 | Up to 10 |
| c7gn.large | 2 | 4 | EBS-Only | Up to 30 | Up to 10 |
| c7gn.xlarge | 4 | 8 | EBS-Only | Up to 40 | Up to 10 |
| c7gn.2xlarge | 8 | 16 | EBS-Only | Up to 50 | Up to 10 |
| c7gn.4xlarge | 16 | 32 | EBS-Only | 50 | Up to 10 |
| c7gn.8xlarge | 32 | 64 | EBS-Only | 100 | Up to 20 |
| c7gn.12xlarge | 48 | 96 | EBS-Only | 150 | Up to 30 |
| c7gn.16xlarge | 64 | 128 | EBS-Only | 200 | Up to 40 |
| c7gn.metal | 64 | 128 | EBS-Only | 200 | Up to 40 |
All instances have the following specs:
- Custom-built AWS Graviton3E Processor with 64-bit Arm cores
- EBS optimized
- Enhanced networking
Use Cases
Network-intensive workloads, such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference.
-
C7i
-
Amazon EC2 C7i instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 15% better price performance than C6i instances.
Features:
- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- New Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- 2 metal sizes: c7i.metal-24xl and c7i.metal-48xl
- Discrete built-in accelerators (available on C7i bare metal sizes only)—Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT)—enable efficient offload and acceleration of data operations that help optimize performance for databases, encryption and compression, and queue management workloads
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for up to 128 EBS volume attachments per instance
- Up to 192 vCPUs and 384 GiB memory
- Supports Elastic Fabric Adapter on the 48xlarge size and metal-48xl size
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c7i.large | 2 | 4 | EBS-Only | Up to 12.5 | Up to 10 |
| c7i.xlarge | 4 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| c7i.2xlarge | 8 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| c7i.4xlarge | 16 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| c7i.8xlarge | 32 | 64 | EBS-Only | 12.5 | 10 |
| c7i.12xlarge | 48 | 96 | EBS-Only | 18.75 | 15 |
| c7i.16xlarge | 64 | 128 | EBS-Only | 25 | 20 |
| c7i.24xlarge | 96 | 192 | EBS-Only | 37.5 | 30 |
| c7i.48xlarge | 192 | 384 | EBS-Only | 50 | 40 |
| c7i.metal-24xl | 96 | 192 | EBS-Only | 37.5 | 30 |
| c7i.metal-48xl | 192 | 384 | EBS-Only | 50 | 40 |
All instances have the following specs:
- Up to 3.2 GHz 4th generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking†
Use Cases
C7i instances are ideal for compute-intensive workloads such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
-
C7i-flex
-
Amazon EC2 C7i-flex instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 19% better price performance than C6i instances.
Features:
- Easiest way for you to achieve price performance and cost benefits in the cloud for a majority of your compute-intensive workloads
- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- New Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c7i-flex.large | 2 | 4 | EBS-Only | Up to 12.5 | Up to 10 |
| c7i-flex.xlarge | 4 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| c7i-flex.2xlarge | 8 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| c7i-flex.4xlarge | 16 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| c7i-flex.8xlarge | 32 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
Use Cases
C7i-flex instances are a great first choice to seamlessly run a majority of compute-intensive workloads, including web and application servers, databases, caches, Apache Kafka, and Elasticsearch.
-
C7a
-
Amazon EC2 C7a instances, powered by 4th generation AMD EPYC processors, deliver up to 50% higher performance compared to C6a instances.
Features:
- Up to 3.7 GHz 4th generation AMD EPYC processors (AMD EPYC 9R14)
- Up to 50 Gbps of networking bandwidth
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS)
- Instance sizes with up to 192 vCPUs and 384 GiB of memory
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD secure memory encryption (SME)
- Support for new processor capabilities such as AVX-512, VNNI, and bfloat16
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c7a.medium | 1 | 2 | EBS-Only | Up to 12.5 | Up to 10 |
| c7a.large | 2 | 4 | EBS-Only | Up to 12.5 | Up to 10 |
| c7a.xlarge | 4 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| c7a.2xlarge | 8 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| c7a.4xlarge | 16 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| c7a.8xlarge | 32 | 64 | EBS-Only | 12.5 | 10 |
| c7a.12xlarge | 48 | 96 | EBS-Only | 18.75 | 15 |
| c7a.16xlarge | 64 | 128 | EBS-Only | 25 | 20 |
| c7a.24xlarge | 96 | 192 | EBS-Only | 37.5 | 30 |
| c7a.32xlarge | 128 | 256 | EBS-Only | 50 | 40 |
| c7a.48xlarge | 192 | 384 | EBS-Only | 50 | 40 |
| c7a.metal-48xl | 192 | 384 | EBS-Only | 50 | 40 |
Use cases
Compute-intensive workloads such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
-
C6g
-
Amazon EC2 C6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation C5 instances for compute-intensive applications.
Features:
- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Support for Enhanced Networking with Up to 25 Gbps of Network bandwidth
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With C6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| c6g.medium | 1 | 2 | EBS-Only | Up to 10 | Up to 4,750 |
| c6g.large | 2 | 4 | EBS-Only | Up to 10 | Up to 4,750 |
| c6g.xlarge | 4 | 8 | EBS-Only | Up to 10 | Up to 4,750 |
| c6g.2xlarge | 8 | 16 | EBS-Only | Up to 10 | Up to 4,750 |
| c6g.4xlarge | 16 | 32 | EBS-Only | Up to 10 | 4,750 |
| c6g.8xlarge | 32 | 64 | EBS-Only | 12 | 9,000 |
| c6g.12xlarge | 48 | 96 | EBS-Only | 20 | 13,500 |
| c6g.16xlarge | 64 | 128 | EBS-Only | 25 | 19,000 |
| c6g.metal | 64 | 128 | EBS-Only | 25 | 19,000 |
| c6gd.medium | 1 | 2 | 1 x 59 NVMe SSD | Up to 10 | Up to 4,750 |
| c6gd.large | 2 | 4 | 1 x 118 NVMe SSD | Up to 10 | Up to 4,750 |
| c6gd.xlarge | 4 | 8 | 1 x 237 NVMe SSD | Up to 10 | Up to 4,750 |
| c6gd.2xlarge | 8 | 16 | 1 x 474 NVMe SSD | Up to 10 | Up to 4,750 |
| c6gd.4xlarge | 16 | 32 | 1 x 950 NVMe SSD | Up to 10 | 4,750 |
| c6gd.8xlarge | 32 | 64 | 1 x 1900 NVMe SSD | 12 | 9,000 |
| c6gd.12xlarge | 48 | 96 | 2 x 1425 NVMe SSD | 20 | 13,500 |
| c6gd.16xlarge | 64 | 128 | 2 x 1900 NVMe SSD | 25 | 19,000 |
| c6gd.metal | 64 | 128 | 2 x 1900 NVMe SSD | 25 | 19,000 |

All instances have the following specs:
- Custom built AWS Graviton2 Processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking
Use Cases
High performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference.
-
C6gn
-
Amazon EC2 C6gn instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation C5n instances and provide up to 100 Gbps networking and support for Elastic Fabric Adapter (EFA) for applications that need higher networking throughput, such as high performance computing (HPC), network appliance, real-time video communication, and data analytics.
Features:
- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Support for Enhanced Networking with Up to 100 Gbps of Network bandwidth
- EFA support on c6gn.16xlarge instances
- EBS-optimized by default, 2x EBS bandwidth compared to C5n instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c6gn.medium | 1 | 2 | EBS-Only | Up to 16 | Up to 9.5 |
| c6gn.large | 2 | 4 | EBS-Only | Up to 25 | Up to 9.5 |
| c6gn.xlarge | 4 | 8 | EBS-Only | Up to 25 | Up to 9.5 |
| c6gn.2xlarge | 8 | 16 | EBS-Only | Up to 25 | Up to 9.5 |
| c6gn.4xlarge | 16 | 32 | EBS-Only | Up to 25 | 9.5 |
| c6gn.8xlarge | 32 | 64 | EBS-Only | 50 | 19 |
| c6gn.12xlarge | 48 | 96 | EBS-Only | 75 | 28.5 |
| c6gn.16xlarge | 64 | 128 | EBS-Only | 100 | 38 |
All instances have the following specs:
- Custom built AWS Graviton2 Processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking
Use Cases
High performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), network appliance, machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
-
C6i
-
Amazon EC2 C6i instances are powered by 3rd generation Intel Xeon Scalable processors and are an ideal fit for compute-intensive workloads.
Features:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 15% better compute price performance over C5 instances
- Up to 9% higher memory bandwidth per vCPU compared to C5 instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- A new instance size (32xlarge) with 128 vCPUs and 256 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge and metal sizes
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster execution of cryptographic algorithms
- With C6id instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C6i instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c6i.large | 2 | 4 | EBS-Only | Up to 12.5 | Up to 10 |
| c6i.xlarge | 4 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| c6i.2xlarge | 8 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| c6i.4xlarge | 16 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| c6i.8xlarge | 32 | 64 | EBS-Only | 12.5 | 10 |
| c6i.12xlarge | 48 | 96 | EBS-Only | 18.75 | 15 |
| c6i.16xlarge | 64 | 128 | EBS-Only | 25 | 20 |
| c6i.24xlarge | 96 | 192 | EBS-Only | 37.5 | 30 |
| c6i.32xlarge | 128 | 256 | EBS-Only | 50 | 40 |
| c6i.metal | 128 | 256 | EBS-Only | 50 | 40 |
| c6id.large | 2 | 4 | 1x118 NVMe SSD | Up to 12.5 | Up to 10 |
| c6id.xlarge | 4 | 8 | 1x237 NVMe SSD | Up to 12.5 | Up to 10 |
| c6id.2xlarge | 8 | 16 | 1x474 NVMe SSD | Up to 12.5 | Up to 10 |
| c6id.4xlarge | 16 | 32 | 1x950 NVMe SSD | Up to 12.5 | Up to 10 |
| c6id.8xlarge | 32 | 64 | 1x1900 NVMe SSD | 12.5 | 10 |
| c6id.12xlarge | 48 | 96 | 2x1425 NVMe SSD | 18.75 | 15 |
| c6id.16xlarge | 64 | 128 | 2x1900 NVMe SSD | 25 | 20 |
| c6id.24xlarge | 96 | 192 | 4x1425 NVMe SSD | 37.5 | 30 |
| c6id.32xlarge | 128 | 256 | 4x1900 NVMe SSD | 50 | 40 |
| c6id.metal | 128 | 256 | 4x1900 NVMe SSD | 50 | 40 |

All instances have the following specs:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Compute-intensive workloads such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
-
C6in
-
Amazon EC2 C6in instances are ideal for network-intensive workloads such as network virtual appliances, data analytics, high performance computing (HPC), and CPU-based AI/ML. They are powered by 3rd Generation Intel Xeon Scalable processors (Ice Lake) with an all-core turbo frequency of 3.5 GHz. C6in instances offer up to 200 Gbps of network bandwidth and up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth. The c6in.32xlarge and c6in.metal instances support Elastic Fabric Adapter (EFA). EFA is a network interface for Amazon EC2 instances that you can use to run applications that require high levels of internode communications, such as HPC applications using Message Passing Interface (MPI) libraries, at scale on AWS.
Features:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Support for Enhanced Networking with up to 200 Gbps of network bandwidth, up to 2x compared to C5n instances
- Up to 100 Gbps of EBS bandwidth, up to 5.2x compared to C5n instances
- EFA support on the 32xlarge and metal sizes
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX-512) instructions for faster processing of cryptographic algorithms
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
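Which instance types support EFA can be checked programmatically with the EC2 DescribeInstanceTypes API and its network-info.efa-supported filter. A minimal boto3-style sketch (the API call itself is commented out since it requires AWS credentials):

```python
# Sketch: building a DescribeInstanceTypes request that filters for
# EFA-supported instance types.
def efa_filter_params(instance_types=None):
    """Request parameters for ec2_client.describe_instance_types()."""
    params = {"Filters": [{"Name": "network-info.efa-supported",
                           "Values": ["true"]}]}
    if instance_types:
        params["InstanceTypes"] = instance_types
    return params

params = efa_filter_params(["c6in.32xlarge", "c6in.large"])
# import boto3; boto3.client("ec2").describe_instance_types(**params)
print(params["Filters"][0]["Name"])  # network-info.efa-supported
```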
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c6in.large | 2 | 4 | EBS-Only | Up to 25 | Up to 25 |
| c6in.xlarge | 4 | 8 | EBS-Only | Up to 30 | Up to 25 |
| c6in.2xlarge | 8 | 16 | EBS-Only | Up to 40 | Up to 25 |
| c6in.4xlarge | 16 | 32 | EBS-Only | Up to 50 | Up to 25 |
| c6in.8xlarge | 32 | 64 | EBS-Only | 50 | 25 |
| c6in.12xlarge | 48 | 96 | EBS-Only | 75 | 37.5 |
| c6in.16xlarge | 64 | 128 | EBS-Only | 100 | 50 |
| c6in.24xlarge | 96 | 192 | EBS-Only | 150 | 75 |
| c6in.32xlarge | 128 | 256 | EBS-Only | 200**** | 100 |
| c6in.metal | 128 | 256 | EBS-Only | 200**** | 100 |
****For the 32xlarge and metal sizes, at least 2 elastic network interfaces, each attached to a different network card, are required on the instance to achieve 200 Gbps throughput. Each network interface attached to a network card can achieve a maximum of 170 Gbps. For more information, see Network cards.
All instances have the following specs:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Compute-intensive workloads that require high network bandwidth or high packet-processing performance such as distributed computing applications, network virtual appliances, data analytics, high performance computing (HPC), and CPU-based AI/ML.
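The 170 Gbps per-interface ceiling noted in the footnote above implies a minimum number of network interfaces for a given aggregate bandwidth target. A small illustrative sketch:

```python
# Minimum elastic network interfaces needed to reach a target aggregate
# bandwidth, given the 170 Gbps per-interface maximum from the footnote above.
import math

PER_INTERFACE_GBPS = 170

def interfaces_needed(target_gbps: float) -> int:
    return math.ceil(target_gbps / PER_INTERFACE_GBPS)

print(interfaces_needed(200))  # 2 (matches the c6in.32xlarge/metal requirement)
print(interfaces_needed(150))  # 1
```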
-
C6a
-
Amazon C6a instances are powered by 3rd generation AMD EPYC processors and are designed for compute-intensive workloads.
Features:
- Up to 3.6 GHz 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 15% better compute price performance over C5a instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- Up to 192 vCPUs and 384 GiB of memory in the largest size
- SAP-Certified instances
- Supports Elastic Fabric Adapter on the 48xlarge size
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD Transparent Single Key Memory Encryption (TSME)
- Support for new AMD Advanced Vector Extensions (AVX-2) instructions for faster execution of cryptographic algorithms
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| c6a.large | 2 | 4 | EBS-Only | Up to 12.5 | Up to 10 |
| c6a.xlarge | 4 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| c6a.2xlarge | 8 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| c6a.4xlarge | 16 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| c6a.8xlarge | 32 | 64 | EBS-Only | 12.5 | 10 |
| c6a.12xlarge | 48 | 96 | EBS-Only | 18.75 | 15 |
| c6a.16xlarge | 64 | 128 | EBS-Only | 25 | 20 |
| c6a.24xlarge | 96 | 192 | EBS-Only | 37.5 | 30 |
| c6a.32xlarge | 128 | 256 | EBS-Only | 50 | 40 |
| c6a.48xlarge | 192 | 384 | EBS-Only | 50 | 40 |
| c6a.metal | 192 | 384 | EBS-Only | 50 | 40 |

All instances have the following specs:
- Up to 3.6 GHz 3rd generation AMD EPYC processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Compute-intensive workloads such as batch processing, distributed analytics, high-performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
-
C5
-
Amazon EC2 C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price per compute ratio.
Features:
- C5 instances offer a choice of processors based on the size of the instance.
- C5 and C5d 12xlarge, 24xlarge, and metal instance sizes feature custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8275CL) with a sustained all core Turbo frequency of 3.6GHz and single core turbo frequency of up to 3.9GHz.
- Other C5 instance sizes will launch on the 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8223CL) or 1st generation Intel Xeon Platinum 8000 series (Skylake 8124M) processor with a sustained all core Turbo frequency of up to 3.4GHz, and single core turbo frequency of up to 3.5 GHz.
- New larger 24xlarge instance size offering 96 vCPUs, 192 GiB of memory, and optional 3.6TB local NVMe-based SSDs
- Requires HVM AMIs that include drivers for ENA and NVMe
- With C5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C5 instance
- Elastic Network Adapter (ENA) provides C5 instances with up to 25 Gbps of network bandwidth and up to 19 Gbps of dedicated bandwidth to Amazon EBS.
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Model | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| c5.large | 2 | 4 | EBS-Only | Up to 10 | Up to 4,750 |
| c5.xlarge | 4 | 8 | EBS-Only | Up to 10 | Up to 4,750 |
| c5.2xlarge | 8 | 16 | EBS-Only | Up to 10 | Up to 4,750 |
| c5.4xlarge | 16 | 32 | EBS-Only | Up to 10 | 4,750 |
| c5.9xlarge | 36 | 72 | EBS-Only | 12 | 9,500 |
| c5.12xlarge | 48 | 96 | EBS-Only | 12 | 9,500 |
| c5.18xlarge | 72 | 144 | EBS-Only | 25 | 19,000 |
| c5.24xlarge | 96 | 192 | EBS-Only | 25 | 19,000 |
| c5.metal | 96 | 192 | EBS-Only | 25 | 19,000 |
| c5d.large | 2 | 4 | 1 x 50 NVMe SSD | Up to 10 | Up to 4,750 |
| c5d.xlarge | 4 | 8 | 1 x 100 NVMe SSD | Up to 10 | Up to 4,750 |
| c5d.2xlarge | 8 | 16 | 1 x 200 NVMe SSD | Up to 10 | Up to 4,750 |
| c5d.4xlarge | 16 | 32 | 1 x 400 NVMe SSD | Up to 10 | 4,750 |
| c5d.9xlarge | 36 | 72 | 1 x 900 NVMe SSD | 12 | 9,500 |
| c5d.12xlarge | 48 | 96 | 2 x 900 NVMe SSD | 12 | 9,500 |
| c5d.18xlarge | 72 | 144 | 2 x 900 NVMe SSD | 25 | 19,000 |
| c5d.24xlarge | 96 | 192 | 4 x 900 NVMe SSD | 25 | 19,000 |
| c5d.metal | 96 | 192 | 4 x 900 NVMe SSD | 25 | 19,000 |

C5 and C5d 12xlarge, 24xlarge, and metal instances have the following specs:
- Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all core Turbo frequency of 3.6GHz and single core turbo frequency of up to 3.9GHz.
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo, Intel DL Boost
- EBS Optimized
- Enhanced Networking†
All other C5 and C5d instances have the following specs:
- Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all core Turbo frequency of 3.6GHz and single core turbo frequency of up to 3.9GHz or 1st generation Intel Xeon Platinum 8000 series (Skylake-SP) processor with a sustained all core Turbo frequency of up to 3.4GHz, and single core turbo frequency of up to 3.5 GHz.
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
High performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
-
C5n
-
Amazon EC2 C5n instances are ideal for high compute applications (including High Performance Computing (HPC) workloads, data lakes, and network appliances such as firewalls and routers) that can take advantage of improved network throughput and packet rate performance. C5n instances offer up to 100 Gbps of network bandwidth and increased memory over comparable C5 instances. C5n.18xlarge instances support Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communication, such as High Performance Computing (HPC) applications using the Message Passing Interface (MPI), at scale on AWS.
Features:
- 3.0 GHz Intel Xeon Platinum processors (Skylake 8124) with Intel Advanced Vector Extension 512 (AVX-512) instruction set
- Sustained all core Turbo frequency of up to 3.4GHz, and single core turbo frequency of up to 3.5 GHz
- Larger instance size, c5n.18xlarge, offering 72 vCPUs and 192 GiB of memory
- Requires HVM AMIs that include drivers for ENA and NVMe
- Network bandwidth increases to up to 100 Gbps, delivering increased performance for network intensive applications.
- EFA support on c5n.18xlarge instances
- 33% higher memory footprint compared to C5 instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Model | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| c5n.large | 2 | 5.25 | EBS-Only | Up to 25 | Up to 4,750 |
| c5n.xlarge | 4 | 10.5 | EBS-Only | Up to 25 | Up to 4,750 |
| c5n.2xlarge | 8 | 21 | EBS-Only | Up to 25 | Up to 4,750 |
| c5n.4xlarge | 16 | 42 | EBS-Only | Up to 25 | 4,750 |
| c5n.9xlarge | 36 | 96 | EBS-Only | 50 | 9,500 |
| c5n.18xlarge | 72 | 192 | EBS-Only | 100 | 19,000 |
| c5n.metal | 72 | 192 | EBS-Only | 100 | 19,000 |

All instances have the following specs:
- 3.0 GHz Intel Xeon Platinum Processor
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
High performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
-
C5a
-
Amazon EC2 C5a instances offer leading x86 price-performance for a broad set of compute-intensive workloads.
Features:
- 2nd generation AMD EPYC 7002 series processors (AMD EPYC 7R32) running at frequencies up to 3.3 GHz
- Elastic Network Adapter (ENA) provides C5a instances with up to 20 Gbps of network bandwidth and up to 9.5 Gbps of dedicated bandwidth to Amazon EBS
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With C5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the C5a instance
| Model | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| c5a.large | 2 | 4 | EBS-Only | Up to 10 | Up to 3,170 |
| c5a.xlarge | 4 | 8 | EBS-Only | Up to 10 | Up to 3,170 |
| c5a.2xlarge | 8 | 16 | EBS-Only | Up to 10 | Up to 3,170 |
| c5a.4xlarge | 16 | 32 | EBS-Only | Up to 10 | Up to 3,170 |
| c5a.8xlarge | 32 | 64 | EBS-Only | 10 | 3,170 |
| c5a.12xlarge | 48 | 96 | EBS-Only | 12 | 4,750 |
| c5a.16xlarge | 64 | 128 | EBS-Only | 20 | 6,300 |
| c5a.24xlarge | 96 | 192 | EBS-Only | 20 | 9,500 |
| c5ad.large | 2 | 4 | 1 x 75 NVMe SSD | Up to 10 | Up to 3,170 |
| c5ad.xlarge | 4 | 8 | 1 x 150 NVMe SSD | Up to 10 | Up to 3,170 |
| c5ad.2xlarge | 8 | 16 | 1 x 300 NVMe SSD | Up to 10 | Up to 3,170 |
| c5ad.4xlarge | 16 | 32 | 2 x 300 NVMe SSD | Up to 10 | Up to 3,170 |
| c5ad.8xlarge | 32 | 64 | 2 x 600 NVMe SSD | 10 | 3,170 |
| c5ad.12xlarge | 48 | 96 | 2 x 900 NVMe SSD | 12 | 4,750 |
| c5ad.16xlarge | 64 | 128 | 2 x 1200 NVMe SSD | 20 | 6,300 |
| c5ad.24xlarge | 96 | 192 | 2 x 1900 NVMe SSD | 20 | 9,500 |

All instances have the following specs:
- Up to 3.3 GHz 2nd generation AMD EPYC Processor
- EBS Optimized
- Enhanced Networking†
Use Cases
C5a instances are ideal for workloads requiring high vCPU and memory bandwidth such as batch processing, distributed analytics, data transformations, gaming, log analysis, web applications, and other compute-intensive workloads.
-
C4
-
C4 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price per compute ratio.
Features:
- Up to 2.9 GHz Intel Xeon Scalable Processor (Haswell E5-2666 v3)
- High frequency Intel Xeon E5-2666 v3 (Haswell) processors optimized specifically for EC2
- Default EBS-optimized for increased storage performance at no additional cost
- Higher networking performance with Enhanced Networking supporting Intel 82599 VF
- Requires Amazon VPC, Amazon EBS and 64-bit HVM AMIs
| Instance | vCPU* | Mem (GiB) | Storage | Dedicated EBS Bandwidth (Mbps) | Network Performance |
|---|---|---|---|---|---|
| c4.large | 2 | 3.75 | EBS-Only | 500 | Moderate |
| c4.xlarge | 4 | 7.5 | EBS-Only | 750 | High |
| c4.2xlarge | 8 | 15 | EBS-Only | 1,000 | High |
| c4.4xlarge | 16 | 30 | EBS-Only | 2,000 | High |
| c4.8xlarge | 36 | 60 | EBS-Only | 4,000 | 10 Gigabit |

All instances have the following specs:
- Up to 2.9 GHz Intel Xeon Scalable Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
High performance front-end fleets, web-servers, batch processing, distributed analytics, high performance science and engineering applications, ad serving, MMO gaming, and video-encoding.
Each vCPU on Graviton-based Amazon EC2 instances is a core of an AWS Graviton processor.
Each vCPU on non-Graviton-based Amazon EC2 instances is a thread of an x86-based processor, except for C7a instances.
† AVX, AVX2, and Enhanced Networking are only available on instances launched with HVM AMIs.
* This is the default and maximum number of vCPUs available for this instance type. You can specify a custom number of vCPUs when launching this instance type. For more details on valid vCPU counts and how to start using this feature, visit the Optimize CPUs documentation page.
*** Instances marked with "Up to" Network Bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.
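The instance names used throughout these tables follow a consistent convention: a family letter, a generation number, optional capability suffixes (g for Graviton, d for local NVMe storage, n for network optimization, a for AMD), then a size after the dot. As an illustrative sketch only (the `parse_instance_type` helper below is hypothetical, not part of any AWS SDK), the convention can be decoded in a few lines of Python:

```python
import re

# Suffix meanings follow the convention visible in this page's tables:
#   g = AWS Graviton (Arm), d = local NVMe SSD storage,
#   n = network/EBS optimized, a = AMD processor.
SUFFIXES = {"g": "Graviton", "d": "local NVMe storage",
            "n": "network optimized", "a": "AMD"}

def parse_instance_type(name):
    """Hypothetical helper: split an EC2 instance type name into its parts.

    Simplified sketch; does not handle variant families such as
    m7i-flex or mac.
    """
    family, size = name.split(".")
    letter, generation, attrs = re.match(r"([a-z]+)(\d+)([a-z]*)$", family).groups()
    return {
        "family": letter,
        "generation": int(generation),
        "attributes": [SUFFIXES[c] for c in attrs],
        "size": size,
    }

print(parse_instance_type("r6gd.2xlarge"))
```

For example, `c5n.18xlarge` parses to family `c`, generation 5, attribute "network optimized", size `18xlarge`, matching the C5n section above.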
Memory Optimized
Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
-
R8g
-
R7g
-
R7i
-
R7iz
-
R7a
-
R6g
-
R6i
-
R6in
-
R6a
-
R5
-
R5n
-
R5b
-
R5a
-
R4
-
U7i
-
High Memory (U-1)
-
X8g
-
X2gd
-
X2idn
-
X2iedn
-
X2iezn
-
X1
-
X1e
-
z1d
-
R8g
-
Amazon EC2 R8g instances are powered by AWS Graviton4 processors. They deliver the best price performance in Amazon EC2 for memory-intensive workloads.
Features:
- Powered by custom-built AWS Graviton4 processors
- Larger instance sizes with up to 3x more vCPUs and memory than R7g instances
- Features the latest DDR5-5600 memory
- Optimized for Amazon EBS by default
- Supports Elastic Fabric Adapter (EFA) on r8g.24xlarge, r8g.48xlarge, r8g.metal-24xl, and r8g.metal-48xl
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance size | vCPU | Memory (GiB) | Instance storage (GB) | Network bandwidth (Gbps) | EBS bandwidth (Gbps) |
|---|---|---|---|---|---|
| r8g.medium | 1 | 8 | EBS-only | Up to 12.5 | Up to 10 |
| r8g.large | 2 | 16 | EBS-only | Up to 12.5 | Up to 10 |
| r8g.xlarge | 4 | 32 | EBS-only | Up to 12.5 | Up to 10 |
| r8g.2xlarge | 8 | 64 | EBS-only | Up to 15 | Up to 10 |
| r8g.4xlarge | 16 | 128 | EBS-only | Up to 15 | Up to 10 |
| r8g.8xlarge | 32 | 256 | EBS-only | 15 | 10 |
| r8g.12xlarge | 48 | 384 | EBS-only | 22.5 | 15 |
| r8g.16xlarge | 64 | 512 | EBS-only | 30 | 20 |
| r8g.24xlarge | 96 | 768 | EBS-only | 40 | 30 |
| r8g.48xlarge | 192 | 1,536 | EBS-only | 50 | 40 |
| r8g.metal-24xl | 96 | 768 | EBS-only | 40 | 30 |
| r8g.metal-48xl | 192 | 1,536 | EBS-only | 50 | 40 |
All instances have the following specs:
- Custom-built AWS Graviton4 processor with 64-bit Arm cores
- EBS-optimized
- Enhanced Networking
Use cases
Memory-intensive workloads such as open source databases, in-memory caches, and real-time big data analytics.
-
R7g
-
Amazon EC2 R7g instances are powered by AWS Graviton3 processors. They are ideal for memory-intensive workloads.
Features:
- Powered by custom-built AWS Graviton3 processors
- Features DDR5 memory that offers 50% more bandwidth compared to DDR4
- 20% higher enhanced networking bandwidth compared to R6g instances
- EBS-optimized by default
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With R7gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
- Supports Elastic Fabric Adapter (EFA) on r7g.16xlarge, r7g.metal, r7gd.16xlarge, and r7gd.metal instances
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| r7g.medium | 1 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| r7g.large | 2 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| r7g.xlarge | 4 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| r7g.2xlarge | 8 | 64 | EBS-Only | Up to 15 | Up to 10 |
| r7g.4xlarge | 16 | 128 | EBS-Only | Up to 15 | Up to 10 |
| r7g.8xlarge | 32 | 256 | EBS-Only | 15 | 10 |
| r7g.12xlarge | 48 | 384 | EBS-Only | 22.5 | 15 |
| r7g.16xlarge | 64 | 512 | EBS-Only | 30 | 20 |
| r7g.metal | 64 | 512 | EBS-Only | 30 | 20 |
| r7gd.medium | 1 | 8 | 1 x 59 NVMe SSD | Up to 12.5 | Up to 10 |
| r7gd.large | 2 | 16 | 1 x 118 NVMe SSD | Up to 12.5 | Up to 10 |
| r7gd.xlarge | 4 | 32 | 1 x 237 NVMe SSD | Up to 12.5 | Up to 10 |
| r7gd.2xlarge | 8 | 64 | 1 x 474 NVMe SSD | Up to 15 | Up to 10 |
| r7gd.4xlarge | 16 | 128 | 1 x 950 NVMe SSD | Up to 15 | Up to 10 |
| r7gd.8xlarge | 32 | 256 | 1 x 1900 NVMe SSD | 15 | 10 |
| r7gd.12xlarge | 48 | 384 | 2 x 1425 NVMe SSD | 22.5 | 15 |
| r7gd.16xlarge | 64 | 512 | 2 x 1900 NVMe SSD | 30 | 20 |
| r7gd.metal | 64 | 512 | 2 x 1900 NVMe SSD | 30 | 20 |
All instances have the following specs:
- Custom-built AWS Graviton3 processor with 64-bit Arm cores
- EBS-optimized
- Enhanced Networking†
Use cases
Memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics.
-
R7i
-
Amazon EC2 R7i instances are powered by 4th Generation Intel Xeon Scalable processors and deliver 15% better price performance than R6i instances.
Features:
- Up to 3.2 GHz 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- New Intel Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- 2 metal sizes: r7i.metal-24xl and r7i.metal-48xl
- Discrete built-in accelerators (available on R7i bare metal sizes only)—Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT)—enable efficient offload and acceleration of data operations that help optimize performance for databases, encryption and compression, and queue management workloads
- Latest DDR5 memory, which offers more bandwidth compared to DDR4
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for up to 128 EBS volume attachments per instance
- Up to 192 vCPUs and 1,536 GiB of memory
- Supports Elastic Fabric Adapter on the 48xlarge size and metal-48xl size
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| r7i.large | 2 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| r7i.xlarge | 4 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| r7i.2xlarge | 8 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| r7i.4xlarge | 16 | 128 | EBS-Only | Up to 12.5 | Up to 10 |
| r7i.8xlarge | 32 | 256 | EBS-Only | 12.5 | 10 |
| r7i.12xlarge | 48 | 384 | EBS-Only | 18.75 | 15 |
| r7i.16xlarge | 64 | 512 | EBS-Only | 25 | 20 |
| r7i.24xlarge | 96 | 768 | EBS-Only | 37.5 | 30 |
| r7i.48xlarge | 192 | 1,536 | EBS-Only | 50 | 40 |
| r7i.metal-24xl | 96 | 768 | EBS-Only | 37.5 | 30 |
| r7i.metal-48xl | 192 | 1,536 | EBS-Only | 50 | 40 |
All instances have the following specs:
- Up to 3.2 GHz 4th generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking†
Use Cases
R7i instances are SAP-certified and ideal for all memory-intensive workloads (SQL and NoSQL databases), distributed web scale in-memory caches (Memcached and Redis), in-memory databases (SAP HANA), and real-time big data analytics (Apache Hadoop and Apache Spark clusters).
-
R7iz
-
Amazon EC2 R7iz instances are powered by 4th Generation Intel Xeon Scalable processors and are an ideal fit for high CPU and memory-intensive workloads.
Features:
- 4th Generation Intel Xeon Scalable Processors (Sapphire Rapids 6455B) with an all-core turbo frequency up to 3.9 GHz
- Up to 20% higher compute performance than z1d instances
- New Intel Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations – available in all sizes
- Discrete built-in accelerators (available on R7iz bare metal sizes only) - Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT) - enable efficient offload and acceleration of data operations that help optimize performance for databases, encryption and compression, and queue management workloads
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (EBS)
- Instance size with up to 128 vCPUs and 1,024 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge size and the metal-32xl size
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| r7iz.large | 2 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| r7iz.xlarge | 4 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| r7iz.2xlarge | 8 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| r7iz.4xlarge | 16 | 128 | EBS-Only | Up to 12.5 | Up to 10 |
| r7iz.8xlarge | 32 | 256 | EBS-Only | 12.5 | 10 |
| r7iz.12xlarge | 48 | 384 | EBS-Only | 25 | 19 |
| r7iz.16xlarge | 64 | 512 | EBS-Only | 25 | 20 |
| r7iz.32xlarge | 128 | 1,024 | EBS-Only | 50 | 40 |
| r7iz.metal-16xl | 64 | 512 | EBS-Only | 25 | 20 |
| r7iz.metal-32xl | 128 | 1,024 | EBS-Only | 50 | 40 |
Use Cases
High-compute and memory-intensive workloads such as frontend Electronic Design Automation (EDA), relational database workloads with high per-core licensing fees, and financial, actuarial, and data analytics simulation workloads.
-
R7a
-
Amazon EC2 R7a instances, powered by 4th generation AMD EPYC processors, deliver up to 50% higher performance compared to R6a instances.
Features:
- Up to 3.7 GHz 4th generation AMD EPYC processors (AMD EPYC 9R14)
- Up to 50 Gbps of networking bandwidth
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- Instance sizes with up to 192 vCPUs and 1,536 GiB of memory
- SAP-certified instances
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD secure memory encryption (SME)
- Support for new processor capabilities such as AVX3-512, VNNI, and bfloat16.
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| r7a.medium | 1 | 8 | EBS-Only | Up to 12.5 | Up to 10 |
| r7a.large | 2 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| r7a.xlarge | 4 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| r7a.2xlarge | 8 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| r7a.4xlarge | 16 | 128 | EBS-Only | Up to 12.5 | Up to 10 |
| r7a.8xlarge | 32 | 256 | EBS-Only | 12.5 | 10 |
| r7a.12xlarge | 48 | 384 | EBS-Only | 18.75 | 15 |
| r7a.16xlarge | 64 | 512 | EBS-Only | 25 | 20 |
| r7a.24xlarge | 96 | 768 | EBS-Only | 37.5 | 30 |
| r7a.32xlarge | 128 | 1,024 | EBS-Only | 50 | 40 |
| r7a.48xlarge | 192 | 1,536 | EBS-Only | 50 | 40 |
| r7a.metal-48xl | 192 | 1,536 | EBS-Only | 50 | 40 |
Use cases
Memory-intensive workloads, such as SQL and NoSQL databases, distributed web scale in-memory caches, in-memory databases, real-time big data analytics, and Electronic Design Automation (EDA)
-
R6g
-
Amazon EC2 R6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price performance over current generation R5 instances for memory-intensive applications.
Features:
- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Support for Enhanced Networking with Up to 25 Gbps of Network bandwidth
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With R6gd instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| r6g.medium | 1 | 8 | EBS-Only | Up to 10 | Up to 4,750 |
| r6g.large | 2 | 16 | EBS-Only | Up to 10 | Up to 4,750 |
| r6g.xlarge | 4 | 32 | EBS-Only | Up to 10 | Up to 4,750 |
| r6g.2xlarge | 8 | 64 | EBS-Only | Up to 10 | Up to 4,750 |
| r6g.4xlarge | 16 | 128 | EBS-Only | Up to 10 | 4,750 |
| r6g.8xlarge | 32 | 256 | EBS-Only | 12 | 9,000 |
| r6g.12xlarge | 48 | 384 | EBS-Only | 20 | 13,500 |
| r6g.16xlarge | 64 | 512 | EBS-Only | 25 | 19,000 |
| r6g.metal | 64 | 512 | EBS-Only | 25 | 19,000 |
| r6gd.medium | 1 | 8 | 1 x 59 NVMe SSD | Up to 10 | Up to 4,750 |
| r6gd.large | 2 | 16 | 1 x 118 NVMe SSD | Up to 10 | Up to 4,750 |
| r6gd.xlarge | 4 | 32 | 1 x 237 NVMe SSD | Up to 10 | Up to 4,750 |
| r6gd.2xlarge | 8 | 64 | 1 x 474 NVMe SSD | Up to 10 | Up to 4,750 |
| r6gd.4xlarge | 16 | 128 | 1 x 950 NVMe SSD | Up to 10 | 4,750 |
| r6gd.8xlarge | 32 | 256 | 1 x 1900 NVMe SSD | 12 | 9,000 |
| r6gd.12xlarge | 48 | 384 | 2 x 1425 NVMe SSD | 20 | 13,500 |
| r6gd.16xlarge | 64 | 512 | 2 x 1900 NVMe SSD | 25 | 19,000 |
| r6gd.metal | 64 | 512 | 2 x 1900 NVMe SSD | 25 | 19,000 |

All instances have the following specs:
- Custom built AWS Graviton2 Processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking
Use Cases
Memory-intensive applications such as open-source databases, in-memory caches, and real time big data analytics
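A quick way to see what "memory optimized" means concretely: every R6g size in the table above carries a fixed 8 GiB of memory per vCPU, double the 4 GiB per vCPU of the general purpose M family shown at the top of this page. A small sanity check over the table data (a sketch using figures copied from the R6g table, not an AWS API):

```python
# (vCPU, memory GiB) pairs taken from the R6g table above.
r6g_sizes = {
    "r6g.medium": (1, 8),
    "r6g.large": (2, 16),
    "r6g.xlarge": (4, 32),
    "r6g.2xlarge": (8, 64),
    "r6g.4xlarge": (16, 128),
    "r6g.8xlarge": (32, 256),
    "r6g.12xlarge": (48, 384),
    "r6g.16xlarge": (64, 512),
}

# Every size keeps the same 8 GiB-per-vCPU ratio.
ratios = {name: mem / vcpu for name, (vcpu, mem) in r6g_sizes.items()}
assert all(r == 8.0 for r in ratios.values())
print(ratios["r6g.16xlarge"])  # → 8.0
```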
-
R6i
-
Amazon EC2 R6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) and are an ideal fit for memory-intensive workloads.
Features:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 15% better compute price performance over R5 instances
- Up to 20% higher memory bandwidth per vCPU compared to R5 instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- A new instance size (32xlarge) with 128 vCPUs and 1,024 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge and metal sizes
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX-512) instructions for faster execution of cryptographic algorithms
- With R6id instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R6i instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| r6i.large | 2 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| r6i.xlarge | 4 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| r6i.2xlarge | 8 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| r6i.4xlarge | 16 | 128 | EBS-Only | Up to 12.5 | Up to 10 |
| r6i.8xlarge | 32 | 256 | EBS-Only | 12.5 | 10 |
| r6i.12xlarge | 48 | 384 | EBS-Only | 18.75 | 15 |
| r6i.16xlarge | 64 | 512 | EBS-Only | 25 | 20 |
| r6i.24xlarge | 96 | 768 | EBS-Only | 37.5 | 30 |
| r6i.32xlarge | 128 | 1,024 | EBS-Only | 50 | 40 |
| r6i.metal | 128 | 1,024 | EBS-Only | 50 | 40 |
| r6id.large | 2 | 16 | 1 x 118 NVMe SSD | Up to 12.5 | Up to 10 |
| r6id.xlarge | 4 | 32 | 1 x 237 NVMe SSD | Up to 12.5 | Up to 10 |
| r6id.2xlarge | 8 | 64 | 1 x 474 NVMe SSD | Up to 12.5 | Up to 10 |
| r6id.4xlarge | 16 | 128 | 1 x 950 NVMe SSD | Up to 12.5 | Up to 10 |
| r6id.8xlarge | 32 | 256 | 1 x 1900 NVMe SSD | 12.5 | 10 |
| r6id.12xlarge | 48 | 384 | 2 x 1425 NVMe SSD | 18.75 | 15 |
| r6id.16xlarge | 64 | 512 | 2 x 1900 NVMe SSD | 25 | 20 |
| r6id.24xlarge | 96 | 768 | 4 x 1425 NVMe SSD | 37.5 | 30 |
| r6id.32xlarge | 128 | 1,024 | 4 x 1900 NVMe SSD | 50 | 40 |
| r6id.metal | 128 | 1,024 | 4 x 1900 NVMe SSD | 50 | 40 |

All instances have the following specs:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking
Use Cases
Memory-intensive workloads such as SAP, SQL and NoSQL databases, distributed web scale in-memory caches like Memcached and Redis, in-memory databases like SAP HANA, and real time big data analytics like Hadoop and Spark clusters.
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
-
R6in
-
Amazon EC2 R6in and R6idn instances are ideal for memory-intensive workloads that can take advantage of high networking bandwidth, such as SAP, SQL and NoSQL databases, and in-memory databases, such as SAP HANA. R6in and R6idn instances offer up to 200 Gbps of network bandwidth and up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth.
Features:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors (Ice Lake 8375C)
- Up to 20% higher memory bandwidth per vCPU compared to R5n and R5dn instances
- Up to 200 Gbps of networking speed, up to 2x the bandwidth of R5n and R5dn instances
- Up to 100 Gbps of EBS bandwidth, which is up to 1.6x more than R5b instances
- Supports Elastic Fabric Adapter (EFA) on the 32xlarge and metal sizes
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX-512) instructions for faster processing of cryptographic algorithms
- With R6idn instances, up to 7.6 TB of local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the R6idn instance lifetime
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| r6in.large | 2 | 16 | EBS-Only | Up to 25 | Up to 25 |
| r6in.xlarge | 4 | 32 | EBS-Only | Up to 30 | Up to 25 |
| r6in.2xlarge | 8 | 64 | EBS-Only | Up to 40 | Up to 25 |
| r6in.4xlarge | 16 | 128 | EBS-Only | Up to 50 | Up to 25 |
| r6in.8xlarge | 32 | 256 | EBS-Only | 50 | 25 |
| r6in.12xlarge | 48 | 384 | EBS-Only | 75 | 37.5 |
| r6in.16xlarge | 64 | 512 | EBS-Only | 100 | 50 |
| r6in.24xlarge | 96 | 768 | EBS-Only | 150 | 75 |
| r6in.32xlarge | 128 | 1,024 | EBS-Only | 200**** | 100 |
| r6in.metal | 128 | 1,024 | EBS-Only | 200**** | 100 |
| r6idn.large | 2 | 16 | 1 x 118 NVMe SSD | Up to 25 | Up to 25 |
| r6idn.xlarge | 4 | 32 | 1 x 237 NVMe SSD | Up to 30 | Up to 25 |
| r6idn.2xlarge | 8 | 64 | 1 x 474 NVMe SSD | Up to 40 | Up to 25 |
| r6idn.4xlarge | 16 | 128 | 1 x 950 NVMe SSD | Up to 50 | Up to 25 |
| r6idn.8xlarge | 32 | 256 | 1 x 1900 NVMe SSD | 50 | 25 |
| r6idn.12xlarge | 48 | 384 | 2 x 1425 NVMe SSD | 75 | 37.5 |
| r6idn.16xlarge | 64 | 512 | 2 x 1900 NVMe SSD | 100 | 50 |
| r6idn.24xlarge | 96 | 768 | 4 x 1425 NVMe SSD | 150 | 75 |
| r6idn.32xlarge | 128 | 1,024 | 4 x 1900 NVMe SSD | 200**** | 100 |
| r6idn.metal | 128 | 1,024 | 4 x 1900 NVMe SSD | 200**** | 100 |

**** For 32xlarge and metal sizes, at least two elastic network interfaces, with each attached to a different network card, are required on the instance to achieve 200 Gbps throughput. Each network interface attached to a network card can achieve a maximum of 170 Gbps. For more information, see Network cards.

All instances have the following specs:
- Up to 3.5 GHz 3rd Generation Intel Xeon Scalable processors
- EBS-optimized
- Enhanced Networking†
Use Cases
Memory-intensive workloads that can take advantage of high networking throughput, such as SAP, SQL and NoSQL databases, and in-memory databases, such as SAP HANA.
-
R6a
-
Amazon EC2 R6a instances are powered by 3rd generation AMD EPYC processors and are an ideal fit for memory intensive workloads.
Features:
- Up to 3.6 GHz 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 35% better compute price performance over R5a instances
- Up to 50 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- Instance size with up to 192 vCPUs and 1,536 GiB of memory
- SAP-Certified instances
- Supports Elastic Fabric Adapter on the 48xlarge and metal sizes
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using AMD Transparent Single Key Memory Encryption (TSME)
- Support for new AMD Advanced Vector Extensions (AVX2) instructions for faster execution of cryptographic algorithms
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps) |
|---|---|---|---|---|---|
| r6a.large | 2 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| r6a.xlarge | 4 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| r6a.2xlarge | 8 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| r6a.4xlarge | 16 | 128 | EBS-Only | Up to 12.5 | Up to 10 |
| r6a.8xlarge | 32 | 256 | EBS-Only | 12.5 | 10 |
| r6a.12xlarge | 48 | 384 | EBS-Only | 18.75 | 15 |
| r6a.16xlarge | 64 | 512 | EBS-Only | 25 | 20 |
| r6a.24xlarge | 96 | 768 | EBS-Only | 37.5 | 30 |
| r6a.32xlarge | 128 | 1,024 | EBS-Only | 50 | 40 |
| r6a.48xlarge | 192 | 1,536 | EBS-Only | 50 | 40 |
| r6a.metal | 192 | 1,536 | EBS-Only | 50 | 40 |

All instances have the following specs:
- Up to 3.6 GHz 3rd generation AMD EPYC processors
- EBS-optimized
- Enhanced Networking
Use Cases
Memory-intensive workloads, such as SAP, SQL, and NoSQL databases; distributed web scale in-memory caches, such as Memcached and Redis; in-memory databases and real-time big data analytics, such as Hadoop and Spark clusters; and other enterprise applications
- Up to 3.6 GHz 3rd generation AMD EPYC processors (AMD EPYC 7R13)
-
R5
-
Amazon EC2 R5 instances deliver 5% more memory per vCPU than R4 instances, and the largest size provides 768 GiB of memory. In addition, R5 instances deliver a 10% price per GiB improvement and a ~20% increase in CPU performance over R4.
Features:
- Up to 3.1 GHz Intel Xeon® Platinum 8000 series processors (Skylake 8175M or Cascade Lake 8259CL) with new Intel Advanced Vector Extension (AVX-512) instruction set
- Up to 768 GiB of memory per instance
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With R5d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5 instance
- New 8xlarge and 16xlarge sizes now available.
| Instance | vCPU | Memory (GiB) | Instance Storage (GB) | Networking Performance (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| r5.large | 2 | 16 | EBS-Only | Up to 10 | Up to 4,750 |
| r5.xlarge | 4 | 32 | EBS-Only | Up to 10 | Up to 4,750 |
| r5.2xlarge | 8 | 64 | EBS-Only | Up to 10 | Up to 4,750 |
| r5.4xlarge | 16 | 128 | EBS-Only | Up to 10 | 4,750 |
| r5.8xlarge | 32 | 256 | EBS-Only | 10 | 6,800 |
| r5.12xlarge | 48 | 384 | EBS-Only | 10 | 9,500 |
| r5.16xlarge | 64 | 512 | EBS-Only | 20 | 13,600 |
| r5.24xlarge | 96 | 768 | EBS-Only | 25 | 19,000 |
| r5.metal | 96* | 768 | EBS-Only | 25 | 19,000 |
| r5d.large | 2 | 16 | 1 x 75 NVMe SSD | Up to 10 | Up to 4,750 |
| r5d.xlarge | 4 | 32 | 1 x 150 NVMe SSD | Up to 10 | Up to 4,750 |
| r5d.2xlarge | 8 | 64 | 1 x 300 NVMe SSD | Up to 10 | Up to 4,750 |
| r5d.4xlarge | 16 | 128 | 2 x 300 NVMe SSD | Up to 10 | 4,750 |
| r5d.8xlarge | 32 | 256 | 2 x 600 NVMe SSD | 10 | 6,800 |
| r5d.12xlarge | 48 | 384 | 2 x 900 NVMe SSD | 10 | 9,500 |
| r5d.16xlarge | 64 | 512 | 4 x 600 NVMe SSD | 20 | 13,600 |
| r5d.24xlarge | 96 | 768 | 4 x 900 NVMe SSD | 25 | 19,000 |
| r5d.metal | 96* | 768 | 4 x 900 NVMe SSD | 25 | 19,000 |

*r5.metal and r5d.metal provide 96 logical processors on 48 physical cores; they run on single servers with two physical Intel sockets
All instances have the following specs:
- Up to 3.1 GHz Intel Xeon Platinum Processor
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
R5 instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications.
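The "5% more memory per vCPU" figure can be checked directly against the R4 table further down this page: r4.large offers 15.25 GiB on 2 vCPUs, while r5.large offers 16 GiB on 2 vCPUs. A quick arithmetic sketch (figures copied from the tables on this page):

```python
# Per-size figures from the R5 and R4 tables on this page.
r4_large = {"vcpu": 2, "mem_gib": 15.25}
r5_large = {"vcpu": 2, "mem_gib": 16}

r4_per_vcpu = r4_large["mem_gib"] / r4_large["vcpu"]  # 7.625 GiB per vCPU
r5_per_vcpu = r5_large["mem_gib"] / r5_large["vcpu"]  # 8.0 GiB per vCPU

increase_pct = (r5_per_vcpu / r4_per_vcpu - 1) * 100
print(f"{increase_pct:.1f}%")  # → 4.9%
```

The exact ratio works out to roughly 4.9%, which AWS rounds to the 5% quoted above.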
-
R5n
-
Amazon EC2 R5 instances are ideal for memory-bound workloads including high performance databases, distributed web scale in-memory caches, mid-sized in-memory databases, real time big data analytics, and other enterprise applications. The higher-bandwidth R5n and R5dn instance variants are ideal for applications that can take advantage of improved network throughput and packet rate performance.
Features:
- 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8259CL) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz
- Support for the new Intel Vector Neural Network Instructions (AVX-512 VNNI) which will help speed up typical machine learning operations like convolution, and automatically improve inference performance over a wide range of deep learning workloads
- 25 Gbps of peak bandwidth on smaller instance sizes
- 100 Gbps of network bandwidth on the largest instance size
- Requires HVM AMIs that include drivers for ENA and NVMe
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With R5dn instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5 instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Networking Performance (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| r5n.large | 2 | 16 | EBS-Only | Up to 25 | Up to 4,750 |
| r5n.xlarge | 4 | 32 | EBS-Only | Up to 25 | Up to 4,750 |
| r5n.2xlarge | 8 | 64 | EBS-Only | Up to 25 | Up to 4,750 |
| r5n.4xlarge | 16 | 128 | EBS-Only | Up to 25 | 4,750 |
| r5n.8xlarge | 32 | 256 | EBS-Only | 25 | 6,800 |
| r5n.12xlarge | 48 | 384 | EBS-Only | 50 | 9,500 |
| r5n.16xlarge | 64 | 512 | EBS-Only | 75 | 13,600 |
| r5n.24xlarge | 96 | 768 | EBS-Only | 100 | 19,000 |
| r5n.metal | 96* | 768 | EBS-Only | 100 | 19,000 |
| r5dn.large | 2 | 16 | 1 x 75 NVMe SSD | Up to 25 | Up to 4,750 |
| r5dn.xlarge | 4 | 32 | 1 x 150 NVMe SSD | Up to 25 | Up to 4,750 |
| r5dn.2xlarge | 8 | 64 | 1 x 300 NVMe SSD | Up to 25 | Up to 4,750 |
| r5dn.4xlarge | 16 | 128 | 2 x 300 NVMe SSD | Up to 25 | 4,750 |
| r5dn.8xlarge | 32 | 256 | 2 x 600 NVMe SSD | 25 | 6,800 |
| r5dn.12xlarge | 48 | 384 | 2 x 900 NVMe SSD | 50 | 9,500 |
| r5dn.16xlarge | 64 | 512 | 4 x 600 NVMe SSD | 75 | 13,600 |
| r5dn.24xlarge | 96 | 768 | 4 x 900 NVMe SSD | 100 | 19,000 |
| r5dn.metal | 96* | 768 | 4 x 900 NVMe SSD | 100 | 19,000 |

*r5n.metal and r5dn.metal provide 96 logical processors on 48 physical cores.
All instances have the following specs:
- Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo, Intel DL Boost
- EBS Optimized
- Enhanced Networking†
Use Cases
High performance databases, distributed web scale in-memory caches, mid-sized in-memory databases, real time big data analytics, and other enterprise applications
-
R5b
-
Amazon EC2 R5b instances are EBS-optimized variants of memory-optimized R5 instances. R5b instances increase EBS performance by 3x compared to same-sized R5 instances. R5b instances deliver up to 60 Gbps bandwidth and 260K IOPS of EBS performance, the fastest block storage performance on EC2.
Features:
- Custom 2nd generation Intel Xeon Scalable Processors (Cascade Lake 8259CL) with a sustained all-core Turbo CPU frequency of 3.1 GHz and maximum single core turbo frequency of 3.5 GHz
- Up to 96 vCPUs, Up to 768 GiB of Memory
- Up to 25 Gbps network bandwidth
- Up to 60 Gbps of EBS bandwidth
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Networking Performance (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| r5b.large | 2 | 16 | EBS-Only | Up to 10 | Up to 10,000 |
| r5b.xlarge | 4 | 32 | EBS-Only | Up to 10 | Up to 10,000 |
| r5b.2xlarge | 8 | 64 | EBS-Only | Up to 10 | Up to 10,000 |
| r5b.4xlarge | 16 | 128 | EBS-Only | Up to 10 | 10,000 |
| r5b.8xlarge | 32 | 256 | EBS-Only | 10 | 20,000 |
| r5b.12xlarge | 48 | 384 | EBS-Only | 10 | 30,000 |
| r5b.16xlarge | 64 | 512 | EBS-Only | 20 | 40,000 |
| r5b.24xlarge | 96 | 768 | EBS-Only | 25 | 60,000 |
| r5b.metal | 96* | 768 | EBS-Only | 25 | 60,000 |
Use Cases
High performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics.
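The "3x" EBS performance claim above follows directly from the tables: at the largest size, R5b delivers 60,000 Mbps of EBS bandwidth versus 19,000 Mbps for r5.24xlarge. A quick check (figures copied from the R5 and R5b tables on this page):

```python
# EBS bandwidth (Mbps) for the largest sizes, from the R5 and R5b tables.
r5_24xlarge_ebs = 19_000
r5b_24xlarge_ebs = 60_000

speedup = r5b_24xlarge_ebs / r5_24xlarge_ebs
print(f"{speedup:.2f}x")  # → 3.16x
```

That is slightly more than 3x, consistent with the rounded marketing figure.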
-
R5a
-
Amazon EC2 R5a instances are Memory Optimized instances ideal for memory-bound workloads and are powered by AMD EPYC 7000 series processors. R5a instances deliver up to 10% lower cost per GiB of memory than comparable instances.
Features:
- AMD EPYC 7000 series processors (AMD EPYC 7571) with an all core turbo clock speed of 2.5 GHz
- Up to 20 Gbps network bandwidth using Enhanced Networking
- Up to 768 GiB of memory per instance
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Instance storage offered via EBS or NVMe SSDs that are physically attached to the host server
- With R5ad instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the R5a instance
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Networking Performance (Gbps)*** | EBS Bandwidth (Mbps) |
|---|---|---|---|---|---|
| r5a.large | 2 | 16 | EBS-Only | Up to 10 | Up to 2,880 |
| r5a.xlarge | 4 | 32 | EBS-Only | Up to 10 | Up to 2,880 |
| r5a.2xlarge | 8 | 64 | EBS-Only | Up to 10 | Up to 2,880 |
| r5a.4xlarge | 16 | 128 | EBS-Only | Up to 10 | 2,880 |
| r5a.8xlarge | 32 | 256 | EBS-Only | Up to 10 | 4,750 |
| r5a.12xlarge | 48 | 384 | EBS-Only | 10 | 6,780 |
| r5a.16xlarge | 64 | 512 | EBS-Only | 12 | 9,500 |
| r5a.24xlarge | 96 | 768 | EBS-Only | 20 | 13,570 |
| r5ad.large | 2 | 16 | 1 x 75 NVMe SSD | Up to 10 | Up to 2,880 |
| r5ad.xlarge | 4 | 32 | 1 x 150 NVMe SSD | Up to 10 | Up to 2,880 |
| r5ad.2xlarge | 8 | 64 | 1 x 300 NVMe SSD | Up to 10 | Up to 2,880 |
| r5ad.4xlarge | 16 | 128 | 2 x 300 NVMe SSD | Up to 10 | 2,880 |
| r5ad.8xlarge | 32 | 256 | 2 x 600 NVMe SSD | Up to 10 | 4,750 |
| r5ad.12xlarge | 48 | 384 | 2 x 900 NVMe SSD | 10 | 6,780 |
| r5ad.16xlarge | 64 | 512 | 4 x 600 NVMe SSD | 12 | 9,500 |
| r5ad.24xlarge | 96 | 768 | 4 x 900 NVMe SSD | 20 | 13,570 |

All instances have the following specs:
- 2.5 GHz AMD EPYC 7000 series processors
- EBS Optimized
- Enhanced Networking†
Use Cases
R5a instances are well suited for memory intensive applications such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications.
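Every R5a size in the table above provisions 8 GiB of memory per vCPU, doubling both at each step. As a quick sketch of size selection against those figures (the helper name and hard-coded table are illustrative only, not an AWS API):

```python
# R5a sizes hard-coded from the table above: (name, vCPUs, memory in GiB).
R5A_SIZES = [
    ("r5a.large", 2, 16),
    ("r5a.xlarge", 4, 32),
    ("r5a.2xlarge", 8, 64),
    ("r5a.4xlarge", 16, 128),
    ("r5a.8xlarge", 32, 256),
    ("r5a.12xlarge", 48, 384),
    ("r5a.16xlarge", 64, 512),
    ("r5a.24xlarge", 96, 768),
]

def smallest_fit(mem_gib):
    """Return the smallest R5a size offering at least mem_gib of memory."""
    for name, vcpus, mem in R5A_SIZES:
        if mem >= mem_gib:
            return name
    raise ValueError(f"no R5a size offers {mem_gib} GiB")

print(smallest_fit(100))  # r5a.4xlarge (128 GiB is the smallest size >= 100 GiB)
```

The same pattern applies to any fixed-ratio family; only the hard-coded table changes.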
-
R4
-
Amazon EC2 R4 instances are optimized for memory-intensive applications and offer better price per GiB of RAM than R3.
Features:
- High Frequency Intel Xeon scalable (Broadwell E5-2686 v4) processors
- DDR4 Memory
- Support for Enhanced Networking
Instance vCPU Mem (GiB) Storage Networking Performance (Gbps)***
r4.large 2 15.25 EBS-Only Up to 10
r4.xlarge 4 30.5 EBS-Only Up to 10
r4.2xlarge 8 61 EBS-Only Up to 10
r4.4xlarge 16 122 EBS-Only Up to 10
r4.8xlarge 32 244 EBS-Only 10
r4.16xlarge 64 488 EBS-Only 25
All instances have the following specs:
- Up to 2.3 GHz Intel Xeon Scalable Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
High performance databases, data mining & analysis, in-memory databases, distributed web scale in-memory caches, applications performing real-time processing of unstructured big data, Hadoop/Spark clusters, and other enterprise applications.
-
U7i
-
Amazon EC2 High Memory U7i instances are purpose built to run large in-memory databases such as SAP HANA and Oracle.
Features:
- Offer up to 1920 vCPUs
- Featuring DDR5 memory
- Up to 32 TiB of instance memory
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Virtualized instances are available with On-Demand and with 1-year and 3-year Savings Plan purchase options*
Instance Size vCPU Memory (GiB) Instance Storage (GB) Network Bandwidth (Gbps) EBS Bandwidth (Gbps)
u7i-6tb.112xlarge
448
6,144
EBS-Only
100
60
u7i-8tb.112xlarge
448
8,192
EBS-Only
100
60
u7i-12tb.224xlarge
896
12,288
EBS-Only
100
60
u7in-16tb.224xlarge
896
16,384
EBS-Only
200
100
u7in-24tb.224xlarge
896
24,576
EBS-Only
200
100
u7in-32tb.224xlarge
896
32,768
EBS-Only
200
100
u7inh-32tb.480xlarge
1,920
32,768
EBS-Only
200
160
* U7inh instances are available as a 3-year Instance Savings Plan Purchase.
All instances have the following specs:
- Fourth generation Intel Xeon Scalable processors (Sapphire Rapids)
- Up to 32 TiB of the latest DDR5 memory and up to 1,920 vCPUs
Use Cases
Ideal for running large enterprise databases, including SAP HANA in-memory database in the cloud. Certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see SAP HANA Hardware Directory.
-
High Memory (U-1)
-
Amazon EC2 High Memory (U-1) instances are purpose built to run large in-memory databases, including production deployments of SAP HANA in the cloud.
Features:
- Now available in both bare metal and virtualized deployments
- From 3 to 24 TiB of instance memory
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Virtualized instances are available with On-Demand and with 1-year and 3-year Savings Plan purchase options
Name Logical Processors* Memory (GiB) Instance Storage (GB) Network Bandwidth (Gbps) EBS Bandwidth (Gbps)
u-3tb1.56xlarge 224 3,072 EBS-Only 50 19
u-6tb1.56xlarge 224 6,144 EBS-Only 100 38
u-6tb1.112xlarge 448 6,144 EBS-Only 100 38
u-6tb1.metal** 448 6,144 EBS-Only 100 38
u-9tb1.112xlarge 448 9,216 EBS-Only 100 38
u-9tb1.metal** 448 9,216 EBS-Only 100 38
u-12tb1.112xlarge 448 12,288 EBS-Only 100 38
u-12tb1.metal** 448 12,288 EBS-Only 100 38
u-18tb1.112xlarge 448 18,432 EBS-Only 100 38
u-18tb1.metal 448 18,432 EBS-Only 100 38
u-24tb1.112xlarge 448 24,576 EBS-Only 100 38
u-24tb1.metal 448 24,576 EBS-Only 100 38
** Some instances launched before March 12, 2020 might offer lower performance; please reach out to your account team to upgrade your instance (at no additional cost) for higher performance.
* Each logical processor is a hyperthread on 224 cores
All instances have the following specs:
- 6 TB, 9 TB, and 12 TB instances are powered by 2.1 GHz (with Turbo Boost to 3.80 GHz) Intel Xeon Scalable processors (Skylake 8176M) or 2nd Generation 2.7 GHz (with Turbo Boost to 4.0 GHz) Intel Xeon Scalable processors (Cascade Lake 8280L)
- 18 TB and 24 TB instances are powered by 2nd Generation 2.7 GHz (with Turbo Boost to 4.0 GHz) Intel Xeon Scalable processors (Cascade Lake 8280L)
Use Cases
Ideal for running large enterprise databases, including production installations of SAP HANA in-memory database in the cloud. Certified by SAP for running Business Suite on HANA, the next-generation Business Suite S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments.
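The U-1 instance names encode memory in TiB while the table lists GiB; the two agree because 1 TiB = 1,024 GiB. A one-line check of that arithmetic (the helper is illustrative only):

```python
# U-1 names encode memory in TiB; the table above lists GiB (1 TiB = 1,024 GiB).
def tib_to_gib(tib):
    return tib * 1024

# e.g. u-24tb1.112xlarge: 24 TiB -> 24,576 GiB, matching the table row.
sizes_tib = [3, 6, 9, 12, 18, 24]
print([tib_to_gib(t) for t in sizes_tib])  # [3072, 6144, 9216, 12288, 18432, 24576]
```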
-
X8g
-
Amazon EC2 X8g instances are powered by AWS Graviton4 processors. They deliver the best price performance among Amazon EC2 X-series instances.
Features:
- Powered by custom-built AWS Graviton4 processors
- Larger instance sizes with up to 3x more vCPUs and memory than X2gd instances
- Features the latest DDR5-5600 memory
- Optimized for Amazon EBS by default
- Supports Elastic Fabric Adapter (EFA) on x8g.24xlarge, x8g.48xlarge, x8g.metal-24xl, and x8g.metal-48xl
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Instance size vCPU Memory (GiB) Instance storage (GB) Network bandwidth (Gbps) EBS bandwidth (Gbps)
x8g.medium
1
16
EBS-only
Up to 12.5
Up to 10
x8g.large
2
32
EBS-only
Up to 12.5
Up to 10
x8g.xlarge
4
64
EBS-only
Up to 12.5
Up to 10
x8g.2xlarge
8
128
EBS-only
Up to 15
Up to 10
x8g.4xlarge
16
256
EBS-only
Up to 15
Up to 10
x8g.8xlarge
32
512
EBS-only
15
10
x8g.12xlarge
48
768
EBS-only
22.5
15
x8g.16xlarge
64
1,024
EBS-only
30
20
x8g.24xlarge
96
1,536
EBS-only
40
30
x8g.48xlarge
192
3,072
EBS-only
50
40
x8g.metal-24xl
96
1,536
EBS-only
40
30
x8g.metal-48xl
192
3,072
EBS-only
50
40
Use cases
Memory-intensive workloads such as in-memory databases (Redis, Memcached), relational databases (MySQL, PostgreSQL), electronic design automation (EDA) workloads, real-time big data analytics, real-time caching servers, and memory-intensive containerized applications.
-
X2gd
-
Amazon EC2 X2gd instances are powered by Arm-based AWS Graviton2 processors and provide the lowest cost per GiB of memory in Amazon EC2. They deliver up to 55% better price performance compared to current generation X1 instances.
Features:
- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for Enhanced Networking with up to 25 Gbps of network bandwidth
- Local NVMe-based SSD storage provides high speed, low latency access to in-memory data
- EBS-optimized by default
Instance Size
vCPU
Memory (GiB)
Instance Storage (GB)
Network Bandwidth (Gbps)***
EBS Bandwidth (Gbps)
x2gd.medium
1
16
1x59 NVMe SSD
Up to 10
Up to 4.75
x2gd.large
2
32
1x118 NVMe SSD
Up to 10
Up to 4.75
x2gd.xlarge
4
64
1x237 NVMe SSD
Up to 10
Up to 4.75
x2gd.2xlarge
8
128
1x475 NVMe SSD
Up to 10
Up to 4.75
x2gd.4xlarge
16
256
1x950 NVMe SSD
Up to 10
4.75
x2gd.8xlarge
32
512
1x1900 NVMe SSD
12
9.5
x2gd.12xlarge
48
768
2x1425 NVMe SSD
20
14.25
x2gd.16xlarge
64
1024
2x1900 NVMe SSD
25
19
x2gd.metal
64
1024
2x1900 NVMe SSD
25
19
All instances have the following specs:
- Custom built AWS Graviton2 Processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking
Use Cases
Memory-intensive workloads such as open-source databases (MySQL, MariaDB, and PostgreSQL), in-memory caches (Redis, KeyDB, Memcached), electronic design automation (EDA) workloads, real-time analytics, and real-time caching servers.
-
X2idn
-
Amazon EC2 X2idn instances are powered by 3rd generation Intel Xeon Scalable processors with an all core turbo frequency up to 3.5 GHz and are a good choice for a wide range of memory-intensive applications.
Features:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
- 16:1 ratio of memory to vCPU on all sizes
- Up to 50% better price performance than X1 instances
- Up to 100 Gbps of networking speed
- Up to 80 Gbps of bandwidth to the Amazon Elastic Block Store
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster execution of cryptographic algorithms
Instance Size vCPU Memory (GiB) Instance Storage (GB) Network Bandwidth (Gbps) EBS Bandwidth (Gbps)
x2idn.16xlarge 64 1,024 1 x 1900 NVMe SSD 50 40
x2idn.24xlarge 96 1,536 2 x 1425 NVMe SSD 75 60
x2idn.32xlarge 128 2,048 2 x 1900 NVMe SSD 100 80
x2idn.metal 128 2,048 2 x 1900 NVMe SSD 100 80
Use Cases
In-memory databases (e.g., SAP HANA, Redis), traditional databases (e.g., Oracle DB, Microsoft SQL Server), and in-memory analytics (e.g., SAS, Aerospike).
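The 16:1 memory-to-vCPU ratio stated above can be checked directly against the table. A tiny sketch of that check (figures hard-coded from the table; the snippet is illustrative, not an AWS API):

```python
# X2idn sizes hard-coded from the table above: (name, vCPUs, memory in GiB).
X2IDN = [
    ("x2idn.16xlarge", 64, 1024),
    ("x2idn.24xlarge", 96, 1536),
    ("x2idn.32xlarge", 128, 2048),
    ("x2idn.metal", 128, 2048),
]

# Every size holds the stated 16:1 GiB-to-vCPU ratio.
for name, vcpus, mem_gib in X2IDN:
    assert mem_gib == 16 * vcpus, name
print("all X2idn sizes carry 16 GiB per vCPU")
```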
-
X2iedn
-
Amazon EC2 X2iedn instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all core turbo frequency up to 3.5 GHz and are a good choice for a wide range of large scale memory-intensive applications.
Features:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors (Ice Lake 8375C)
- 32:1 ratio of memory to vCPU on all sizes
- Up to 50% better price performance than X1 instances
- Up to 100 Gbps of networking speed
- Up to 80 Gbps of bandwidth to the Amazon Elastic Block Store
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster execution of cryptographic algorithms
Instance Size vCPU Memory (GiB) Instance Storage (GB) Network Bandwidth (Gbps)*** EBS Bandwidth (Gbps)
x2iedn.xlarge 4 128 1 x 118 NVMe SSD Up to 25 Up to 20
x2iedn.2xlarge 8 256 1 x 237 NVMe SSD Up to 25 Up to 20
x2iedn.4xlarge 16 512 1 x 475 NVMe SSD Up to 25 Up to 20
x2iedn.8xlarge 32 1,024 1 x 950 NVMe SSD 25 20
x2iedn.16xlarge 64 2,048 1 x 1900 NVMe SSD 50 40
x2iedn.24xlarge 96 3,072 2 x 1425 NVMe SSD 75 60
x2iedn.32xlarge 128 4,096 2 x 1900 NVMe SSD 100 80
x2iedn.metal 128 4,096 2 x 1900 NVMe SSD 100 80
Use Cases
Large scale in-memory databases (e.g., SAP HANA, Redis), traditional databases (e.g., Oracle DB, Microsoft SQL Server), and in-memory analytics (e.g., SAS, Aerospike).
-
X2iezn
-
Amazon EC2 X2iezn instances are powered by the fastest Intel Xeon Scalable processors (code named Cascade Lake) in the cloud, with an all-core turbo frequency up to 4.5 GHz and are a good choice for memory-intensive electronic design automation (EDA) workloads.
Features:
- Up to 4.5 GHz 2nd generation Intel Xeon Scalable processors (Cascade Lake 8252C)
- 32:1 ratio of memory to vCPU on all sizes
- Up to 55% better price performance than X1e instances
- Up to 100 Gbps of networking speed
- Up to 19 Gbps of bandwidth to the Amazon Elastic Block Store
- Supports Elastic Fabric Adapter on 12xlarge and metal sizes
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Instance Size vCPU Memory (GiB) Instance Storage (GB) Network Bandwidth (Gbps)*** EBS Bandwidth (Gbps)
x2iezn.2xlarge 8 256 EBS-Only Up to 25 3.17
x2iezn.4xlarge 16 512 EBS-Only Up to 25 4.75
x2iezn.6xlarge 24 768 EBS-Only 50 9.5
x2iezn.8xlarge 32 1,024 EBS-Only 75 12
x2iezn.12xlarge 48 1,536 EBS-Only 100 19
x2iezn.metal 48 1,536 EBS-Only 100 19
Use Cases
Electronic design automation (EDA) workloads like physical verification, static timing analysis, power signoff, and full chip gate-level simulation.
-
X1
-
Amazon EC2 X1 instances are optimized for enterprise-class databases and in-memory applications.
Features:
- High frequency Intel Xeon E7-8880 v3 (Haswell) processors
- One of the lowest prices per GiB of RAM
- Up to 1,952 GiB of DRAM-based instance memory
- SSD instance storage for temporary block-level storage and EBS-optimized by default at no additional cost
- Ability to control processor C-state and P-state configuration
Instance vCPU Mem (GiB) SSD Storage (GB) Dedicated EBS Bandwidth (Mbps) Network Performance (Gbps)
x1.16xlarge 64 976 1 x 1,920 7,000 10
x1.32xlarge 128 1,952 2 x 1,920 14,000 25
All instances have the following specs:
- 2.3 GHz Intel Xeon E7-8880 v3 (Haswell) processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
In-memory databases (e.g. SAP HANA), big data processing engines (e.g. Apache Spark or Presto), high performance computing (HPC). Certified by SAP to run Business Warehouse on HANA (BW), Data Mart Solutions on HANA, Business Suite on HANA (SoH), Business Suite S/4HANA.
-
X1e
-
Amazon EC2 X1e instances are optimized for large scale databases, in-memory databases, and other memory-intensive enterprise applications.
Features:
- High frequency Intel Xeon E7-8880 v3 (Haswell) processors
- One of the lowest prices per GiB of RAM
- Up to 3,904 GiB of DRAM-based instance memory
- SSD instance storage for temporary block-level storage and EBS-optimized by default at no additional cost
- Ability to control processor C-state and P-state configurations on x1e.32xlarge, x1e.16xlarge and x1e.8xlarge instances
Instance vCPU Mem (GiB) SSD Storage (GB) Dedicated EBS Bandwidth (Mbps) Networking Performance (Gbps)***
x1e.xlarge 4 122 1 x 120 500 Up to 10
x1e.2xlarge 8 244 1 x 240 1,000 Up to 10
x1e.4xlarge 16 488 1 x 480 1,750 Up to 10
x1e.8xlarge 32 976 1 x 960 3,500 Up to 10
x1e.16xlarge 64 1,952 1 x 1,920 7,000 10
x1e.32xlarge 128 3,904 2 x 1,920 14,000 25
All instances have the following specs:
- 2.3 GHz Intel Xeon E7-8880 v3 (Haswell) processor
- Intel AVX†, Intel AVX2†
- EBS Optimized
- Enhanced Networking†
In addition, x1e.16xlarge and x1e.32xlarge have Intel Turbo.
Use Cases
High performance databases, in-memory databases (e.g. SAP HANA) and memory intensive applications. x1e.32xlarge instance certified by SAP to run next-generation Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA on the AWS cloud.
-
z1d
-
Amazon EC2 z1d instances offer both high compute capacity and a high memory footprint. High frequency z1d instances deliver a sustained all core frequency of up to 4.0 GHz, the fastest of any cloud instance.
Features:
- Custom Intel® Xeon® Scalable processor (Skylake 8151) with a sustained all core frequency of up to 4.0 GHz with new Intel Advanced Vector Extension (AVX-512) instruction set
- Up to 1.8 TB of instance storage
- High memory with up to 384 GiB of RAM
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- With z1d instances, local NVMe-based SSDs are physically connected to the host server and provide block-level storage that is coupled to the lifetime of the z1d instance
Instance vCPU Mem (GiB) Networking Performance (Gbps)*** SSD Storage (GB)
z1d.large 2 16 Up to 10 1 x 75 NVMe SSD
z1d.xlarge 4 32 Up to 10 1 x 150 NVMe SSD
z1d.2xlarge 8 64 Up to 10 1 x 300 NVMe SSD
z1d.3xlarge 12 96 Up to 10 1 x 450 NVMe SSD
z1d.6xlarge 24 192 10 1 x 900 NVMe SSD
z1d.12xlarge 48 384 25 2 x 900 NVMe SSD
z1d.metal 48* 384 25 2 x 900 NVMe SSD
* z1d.metal provides 48 logical processors on 24 physical cores
All instances have the following specs:
- Up to 4.0 GHz Intel® Xeon® Scalable Processors
- Intel AVX, Intel AVX2, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Ideal for electronic design automation (EDA) and certain relational database workloads with high per-core licensing costs.
Each vCPU on Graviton-based Amazon EC2 instances is a core of the AWS Graviton processor.
Each vCPU on non-Graviton-based Amazon EC2 instances is a thread of an x86-based processor, except for R7a instances.
† AVX, AVX2, and Enhanced Networking are available only on instances launched with HVM AMIs.
*** Instances marked with "Up to" Network Bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.
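A small helper that reads these bandwidth cells the way the footnote describes, treating an "Up to" figure as a burstable baseline and a bare number as sustained (the function name and return shape are illustrative assumptions, not an AWS API):

```python
def parse_bandwidth(spec):
    """Interpret a bandwidth cell from the tables above.

    "Up to 12.5" -> (True, 12.5)   burstable: baseline with best-effort burst
    "25"         -> (False, 25.0)  sustained bandwidth
    """
    spec = spec.strip()
    if spec.lower().startswith("up to"):
        return (True, float(spec[5:].strip().replace(",", "")))
    return (False, float(spec.replace(",", "")))

print(parse_bandwidth("Up to 12.5"))  # (True, 12.5)
print(parse_bandwidth("25"))          # (False, 25.0)
```

The same parsing works for the EBS-bandwidth columns, which use the identical "Up to" convention.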
Accelerated Computing
Accelerated computing instances use hardware accelerators, or co-processors, to perform functions, such as floating point number calculations, graphics processing, or data pattern matching, more efficiently than is possible in software running on CPUs.
-
P5
-
P4
-
P3
-
P2
-
G6e
-
G6
-
G5g
-
G5
-
G4dn
-
G4ad
-
G3
-
Trn2
-
Trn1
-
Inf2
-
Inf1
-
DL1
-
DL2q
-
F2
-
F1
-
VT1
-
P5
-
Amazon EC2 P5 instances are the latest generation of GPU-based instances and provide the highest performance in Amazon EC2 for deep learning and high performance computing (HPC).
Features:
- Intel Sapphire Rapids CPU and PCIe Gen5 between the CPU and GPU in P5en instances; 3rd Gen AMD EPYC processors (AMD EPYC 7R13) and PCIe Gen4 between the CPU and GPU in P5 and P5e instances.
- Up to 8 NVIDIA H100 (in P5) or H200 (in P5e and P5en) Tensor Core GPUs
- Up to 3,200 Gbps network bandwidth with support for Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA (remote direct memory access)
- 900 GB/s peer-to-peer GPU communication with NVIDIA NVSwitch
Instance GPUs vCPUs Instance Memory (TiB) GPU Memory Network Bandwidth GPUDirect RDMA GPU Peer to Peer Instance Storage (TB) EBS Bandwidth (Gbps)
p5.48xlarge 8 H100 192 2 640 GB HBM3 3200 Gbps EFAv2 Yes 900 GB/s NVSwitch 8 x 3.84 NVMe SSD 80
p5e.48xlarge 8 H200 192 2 1128 GB HBM3 3200 Gbps EFAv2 Yes 900 GB/s NVSwitch 8 x 3.84 NVMe SSD 80
p5en.48xlarge 8 H200 192 2 1128 GB HBM3 3200 Gbps EFAv3 Yes 900 GB/s NVSwitch 8 x 3.84 NVMe SSD 100
Use Cases
Generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more.
HPC applications at scale in pharmaceutical discovery, seismic analysis, weather forecasting, and financial modeling.
-
P4
-
Amazon EC2 P4 instances provide high performance for machine learning training and high performance computing in the cloud.
Features:
- 3.0 GHz 2nd Generation Intel Xeon Scalable processors (Cascade Lake P-8275CL)
- Up to 8 NVIDIA A100 Tensor Core GPUs
- 400 Gbps instance networking with support for Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA (remote direct memory access)
- 600 GB/s peer-to-peer GPU communication with NVIDIA NVSwitch
- Deployed in Amazon EC2 UltraClusters consisting of more than 4,000 NVIDIA A100 Tensor Core GPUs, petabit-scale networking, and scalable low-latency storage with Amazon FSx for Lustre
Instance GPUs vCPUs Instance Memory (GiB) GPU Memory Network Bandwidth GPUDirect RDMA GPU Peer to Peer Instance Storage (GB) EBS Bandwidth (Gbps)
p4d.24xlarge 8 96 1152 320 GB HBM2 400 ENA and EFA Yes 600 GB/s NVSwitch 8 x 1000 NVMe SSD 19
p4de.24xlarge (in preview) 8 96 1152 640 GB HBM2e 400 ENA and EFA Yes 600 GB/s NVSwitch 8 x 1000 NVMe SSD 19
P4d instances have the following specs:
- 3.0 GHz 2nd Generation Intel Xeon Scalable processors
- Intel AVX, Intel AVX2, Intel AVX-512, and Intel Turbo
- EBS Optimized
- Enhanced Networking†
- Elastic Fabric Adapter (EFA)
Use Cases
Machine learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, and drug discovery.
-
P3
-
Amazon EC2 P3 instances deliver high performance compute in the cloud with up to 8 NVIDIA® V100 Tensor Core GPUs and up to 100 Gbps of networking throughput for machine learning and HPC applications.
Features:
- Up to 8 NVIDIA Tesla V100 GPUs, each pairing 5,120 CUDA Cores and 640 Tensor Cores
- High frequency Intel Xeon Scalable Processor (Broadwell E5-2686 v4) for p3.2xlarge, p3.8xlarge, and p3.16xlarge.
- High frequency 2.5 GHz (base) Intel Xeon Scalable Processor (Skylake 8175) for p3dn.24xlarge.
- Supports NVLink for peer-to-peer GPU communication
- Provides up to 100 Gbps of aggregate network bandwidth.
- EFA support on p3dn.24xlarge instances
Instance GPUs vCPU Mem (GiB) GPU Mem (GiB) GPU P2P Storage (GB) Dedicated EBS Bandwidth (Gbps) Networking Performance (Gbps)***
p3.2xlarge 1 8 61 16 - EBS-Only 1.5 Up to 10
p3.8xlarge 4 32 244 64 NVLink EBS-Only 7 10
p3.16xlarge 8 64 488 128 NVLink EBS-Only 14 25
p3dn.24xlarge 8 96 768 256 NVLink 2 x 900 NVMe SSD 19 100
All instances have the following specs:
- p3.2xlarge, p3.8xlarge, and p3.16xlarge have 2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 processors.
- p3dn.24xlarge has 2.5 GHz (base) and 3.1 GHz (sustained all-core turbo) Intel Xeon 8175M processors and supports Intel AVX-512.
- p3dn.24xlarge instances also support Elastic Fabric Adapter (EFA), which enables High Performance Computing (HPC) applications using the Message Passing Interface (MPI) and Machine Learning (ML) applications using the NVIDIA Collective Communications Library (NCCL) to scale to thousands of GPUs.
Use Cases
Machine/Deep learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, drug discovery.
-
P2
-
P2 instances are intended for general-purpose GPU compute applications.
Features:
- High frequency Intel Xeon Scalable Processor (Broadwell E5-2686 v4)
- High-performance NVIDIA K80 GPUs, each with 2,496 parallel processing cores and 12GiB of GPU memory
- Supports GPUDirect™ for peer-to-peer GPU communications
- Provides Enhanced Networking using Elastic Network Adapter (ENA) with up to 25 Gbps of aggregate network bandwidth within a Placement Group
- EBS-optimized by default at no additional cost
Instance GPUs vCPU Mem (GiB) GPU Memory (GiB) Network Performance (Gbps)
p2.xlarge 1 4 61 12 High
p2.8xlarge 8 32 488 96 10
p2.16xlarge 16 64 732 192 25
All instances have the following specs:
- 2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 Processor
- Intel AVX, Intel AVX2, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Machine learning, high performance databases, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other server-side GPU compute workloads.
-
G6e
-
Amazon EC2 G6e instances are designed to accelerate deep learning inference and spatial computing workloads.
Features:
- 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 8 NVIDIA L40S Tensor Core GPUs
- Up to 400 Gbps of network bandwidth
- Up to 7.6 TB of local NVMe storage
Instance Name vCPUs Memory (GiB) NVIDIA L40S Tensor Core GPUs GPU Memory (GiB) Network Bandwidth (Gbps)*** EBS Bandwidth (Gbps)
g6e.xlarge 4 32 1 48 Up to 20 Up to 5
g6e.2xlarge 8 64 1 48 Up to 20 Up to 5
g6e.4xlarge 16 128 1 48 20 8
g6e.8xlarge 32 256 1 48 25 16
g6e.16xlarge 64 512 1 48 35 20
g6e.12xlarge 48 384 4 192 100 20
g6e.24xlarge 96 768 4 192 200 30
g6e.48xlarge 192 1536 8 384 400 60
Use Cases
Inference workloads for large language models and diffusion models for image, audio, and video generation; single-node training of moderately complex generative AI models; 3D simulations, digital twins, and industrial digitization.
-
G6
-
Amazon EC2 G6 instances are designed to accelerate graphics-intensive applications and machine learning inference.
Features:
- 3rd generation AMD EPYC processors (AMD EPYC 7R13)
- Up to 8 NVIDIA L4 Tensor Core GPUs
- Up to 100 Gbps of network bandwidth
- Up to 7.52 TB of local NVMe storage
Instance Name vCPUs Memory (GiB) NVIDIA L4 Tensor Core GPUs GPU Memory (GiB) Network Bandwidth (Gbps)*** EBS Bandwidth (Gbps)
g6.xlarge 4 16 1 24 Up to 10 Up to 5
g6.2xlarge 8 32 1 24 Up to 10 Up to 5
g6.4xlarge 16 64 1 24 Up to 25 8
g6.8xlarge 32 128 1 24 25 16
g6.16xlarge 64 256 1 24 25 20
g6.12xlarge 48 192 4 96 40 20
g6.24xlarge 96 384 4 96 50 30
g6.48xlarge 192 768 8 192 100 60
Gr6 instances with a 1:8 vCPU:RAM ratio:
gr6.4xlarge 16 128 1 24 Up to 25 8
gr6.8xlarge 32 256 1 24 25 16
Use Cases
Deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming.
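Since each NVIDIA L4 carries 24 GiB, the GPU-memory column in the G6 table is simply the GPU count times 24. A tiny sketch of that arithmetic (the helper name and hard-coded counts are illustrative):

```python
# G6 GPU counts hard-coded from the table above; each NVIDIA L4 carries 24 GiB.
G6_GPU_COUNT = {"g6.xlarge": 1, "g6.12xlarge": 4, "g6.48xlarge": 8}

def total_gpu_mem_gib(name):
    """Total GPU memory for a G6 size: L4 count times 24 GiB per GPU."""
    return G6_GPU_COUNT[name] * 24

print(total_gpu_mem_gib("g6.48xlarge"))  # 192, matching the table's GPU memory column
```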
-
G5g
-
Amazon EC2 G5g instances are powered by AWS Graviton2 processors and feature NVIDIA T4G Tensor Core GPUs to provide the best price performance in Amazon EC2 for graphics workloads such as Android game streaming. They are the first Arm-based instances in a major cloud to feature GPU acceleration. Customers can also use G5g instances for cost-effective ML inference.
Features:
- Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores
- Up to 2 NVIDIA T4G Tensor Core GPUs
- Up to 25 Gbps of networking bandwidth
- EBS-optimized by default
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Instance Name vCPUs Memory (GiB) NVIDIA T4G Tensor Core GPUs GPU Memory (GiB) Network Bandwidth (Gbps)*** EBS Bandwidth (Gbps)
g5g.xlarge 4 8 1 16 Up to 10 Up to 3.5
g5g.2xlarge 8 16 1 16 Up to 10 Up to 3.5
g5g.4xlarge 16 32 1 16 Up to 10 Up to 3.5
g5g.8xlarge 32 64 1 16 12 9
g5g.16xlarge 64 128 2 32 25 19
g5g.metal 64 128 2 32 25 19
Use Cases
Android game streaming, machine learning inference, graphics rendering, and autonomous vehicle simulations.
-
G5
-
Amazon EC2 G5 instances are designed to accelerate graphics-intensive applications and machine learning inference. They can also be used to train simple to moderately complex machine learning models.
Features:
- 2nd generation AMD EPYC processors (AMD EPYC 7R32)
- Up to 8 NVIDIA A10G Tensor Core GPUs
- Up to 100 Gbps of network bandwidth
- Up to 7.6 TB of local NVMe storage
Instance Size GPU GPU Memory (GiB) vCPUs Memory (GiB) Instance Storage (GB) Network Bandwidth (Gbps)*** EBS Bandwidth (Gbps)
g5.xlarge 1 24 4 16 1 x 250 NVMe SSD Up to 10 Up to 3.5
g5.2xlarge 1 24 8 32 1 x 450 NVMe SSD Up to 10 Up to 3.5
g5.4xlarge 1 24 16 64 1 x 600 NVMe SSD Up to 25 8
g5.8xlarge 1 24 32 128 1 x 900 NVMe SSD 25 16
g5.16xlarge 1 24 64 256 1 x 1900 NVMe SSD 25 16
g5.12xlarge 4 96 48 192 1 x 3800 NVMe SSD 40 16
g5.24xlarge 4 96 96 384 1 x 3800 NVMe SSD 50 19
g5.48xlarge 8 192 192 768 2 x 3800 NVMe SSD 100 19
G5 instances have the following specs:
- 2nd Generation AMD EPYC processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Graphics-intensive applications such as remote workstations, video rendering, and cloud gaming to produce high fidelity graphics in real time. Training and inference deep learning models for machine learning use cases such as natural language processing, computer vision, and recommender engine use cases.
-
G4dn
-
Amazon EC2 G4dn instances are designed to help accelerate machine learning inference and graphics-intensive workloads.
Features:
- 2nd Generation Intel Xeon Scalable Processors (Cascade Lake P-8259CL)
- Up to 8 NVIDIA T4 Tensor Core GPUs
- Up to 100 Gbps of networking throughput
- Up to 1.8 TB of local NVMe storage
Instance GPUs vCPU Memory (GiB) GPU Memory (GiB) Instance Storage (GB) Network Performance (Gbps)*** EBS Bandwidth (Gbps)
g4dn.xlarge 1 4 16 16 1 x 125 NVMe SSD Up to 25 Up to 3.5
g4dn.2xlarge 1 8 32 16 1 x 225 NVMe SSD Up to 25 Up to 3.5
g4dn.4xlarge 1 16 64 16 1 x 225 NVMe SSD Up to 25 4.75
g4dn.8xlarge 1 32 128 16 1 x 900 NVMe SSD 50 9.5
g4dn.16xlarge 1 64 256 16 1 x 900 NVMe SSD 50 9.5
g4dn.12xlarge 4 48 192 64 1 x 900 NVMe SSD 50 9.5
g4dn.metal 8 96 384 128 2 x 900 NVMe SSD 100 19
All instances have the following specs:
- 2.5 GHz Cascade Lake 24C processors
- Intel AVX, Intel AVX2, Intel AVX-512, and Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Machine learning inference for applications like adding metadata to an image, object detection, recommender systems, automated speech recognition, and language translation. G4 instances also provide a very cost-effective platform for building and running graphics-intensive applications, such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud.
-
G4ad
-
Amazon EC2 G4ad instances provide the best price performance for graphics intensive applications in the cloud.
Features:
- 2nd Generation AMD EPYC Processors (AMD EPYC 7R32)
- AMD Radeon Pro V520 GPUs
- Up to 2.4 TB of local NVMe storage
Instance GPUs vCPU Memory (GiB) GPU Memory (GiB) Instance Storage (GB) Network Bandwidth (Gbps)*** EBS Bandwidth (Gbps)
g4ad.xlarge 1 4 16 8 1 x 150 NVMe SSD Up to 10 Up to 3
g4ad.2xlarge 1 8 32 8 1 x 300 NVMe SSD Up to 10 Up to 3
g4ad.4xlarge 1 16 64 8 1 x 600 NVMe SSD Up to 10 Up to 3
g4ad.8xlarge 2 32 128 16 1 x 1200 NVMe SSD 15 3
g4ad.16xlarge 4 64 256 32 1 x 2400 NVMe SSD 25 6
All instances have the following specs:
- Second generation AMD EPYC processors
- EBS Optimized
- Enhanced Networking†
Use Cases
Graphics-intensive applications, such as remote graphics workstations, video transcoding, photo-realistic design, and game streaming in the cloud.
-
G3
-
Amazon EC2 G3 instances are optimized for graphics-intensive applications.
Features:
- High frequency Intel Xeon Scalable Processors (Broadwell E5-2686 v4)
- NVIDIA Tesla M60 GPUs, each with 2048 parallel processing cores and 8 GiB of video memory
- Enables NVIDIA GRID Virtual Workstation features, including support for 4 monitors with resolutions up to 4096x2160. Each GPU included in your instance is licensed for one "Concurrent Connected User"
- Enables NVIDIA GRID Virtual Application capabilities for application virtualization software like Citrix XenApp Essentials and VMware Horizon, supporting up to 25 concurrent users per GPU
- Each GPU features an on-board hardware video encoder designed to support up to 10 H.265 (HEVC) 1080p30 streams and up to 18 H.264 1080p30 streams, enabling low-latency frame capture and encoding, and high-quality interactive streaming experiences
- Enhanced Networking using the Elastic Network Adapter (ENA) with 25 Gbps of aggregate network bandwidth within a Placement Group
Instance GPUs vCPU Mem (GiB) GPU Memory (GiB) Network Performance (Gbps)***
g3s.xlarge 1 4 30.5 8 Up to 10
g3.4xlarge 1 16 122 8 Up to 10
g3.8xlarge 2 32 244 16 10
g3.16xlarge 4 64 488 32 25
All instances have the following specs:
- 2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 Processor
- Intel AVX, Intel AVX2, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
3D visualizations, graphics-intensive remote workstation, 3D rendering, application streaming, video encoding, and other server-side graphics workloads.
-
Trn2
-
Amazon EC2 Trn2 instances, powered by AWS Trainium2 chips, are purpose built for high-performance generative AI training and inference of models with hundreds of billions to trillion+ parameters.
Features:
- 16 AWS Trainium2 chips
- Supported by AWS Neuron SDK
- 4th Generation Intel Xeon Scalable processor (Sapphire Rapids 8488C)
- Up to 12.8 Tbps third-generation Elastic Fabric Adapter (EFA) networking bandwidth
- Up to 8 TB local NVMe storage
- High-bandwidth, intra-instance, and inter-instance connectivity with NeuronLink
- Deployed in Amazon EC2 UltraClusters and available in EC2 UltraServers (in preview)
- Amazon EBS-optimized
- Enhanced networking
Instance Size | Available in EC2 UltraServers | Trainium2 Chips | Accelerator Memory (TB) | vCPUs | Memory (TB) | Instance Storage (TB) | Network Bandwidth (Tbps)*** | EBS Bandwidth (Gbps)
trn2.48xlarge | No | 16 | 1.5 | 192 | 2 | 4 x 1.92 NVMe SSD | 3.2 | 80
trn2u.48xlarge | Yes (Preview) | 16 | 1.5 | 192 | 2 | 4 x 1.92 NVMe SSD | 3.2 | 80
Use Cases
Training and inference of the most demanding foundation models including large language models (LLMs), multi-modal models, diffusion transformers and more to build a broad set of next-generation generative AI applications.
-
Trn1
-
Amazon EC2 Trn1 instances, powered by AWS Trainium chips, are purpose built for high-performance deep learning training while offering up to 50% cost-to-train savings over comparable Amazon EC2 instances.
Features:
- 16 AWS Trainium chips
- Supported by AWS Neuron SDK
- 3rd Generation Intel Xeon Scalable processor (Ice Lake SP)
- Up to 1600 Gbps second-generation Elastic Fabric Adapter (EFA) networking bandwidth
- Up to 8 TB local NVMe storage
- High-bandwidth, intra-instance connectivity with NeuronLink
- Deployed in EC2 UltraClusters that enable scaling up to 30,000 AWS Trainium accelerators, connected with a petabit-scale nonblocking network, and scalable low-latency storage with Amazon FSx for Lustre
- Amazon EBS-optimized
- Enhanced networking
Instance Size | Trainium Chips | Accelerator Memory (GB) | vCPUs | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps)
trn1.2xlarge | 1 | 32 | 8 | 32 | 1 x 500 NVMe SSD | Up to 12.5 | Up to 20
trn1.32xlarge | 16 | 512 | 128 | 512 | 4 x 2000 NVMe SSD | 800 | 80
trn1n.32xlarge | 16 | 512 | 128 | 512 | 4 x 2000 NVMe SSD | 1600 | 80
Use Cases
Deep learning training for natural language processing (NLP), computer vision, search, recommendation, ranking, and more
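The UltraCluster scale quoted above implies simple capacity arithmetic: at 16 Trainium chips per trn1.32xlarge, a 30,000-accelerator cluster corresponds to 1,875 instances. A minimal sketch (chip counts from the table above; the helper name is ours, not an AWS tool):

```python
import math

CHIPS_PER_TRN1_32XLARGE = 16  # per the trn1.32xlarge row above

def instances_for_chips(total_chips: int) -> int:
    """trn1.32xlarge instances needed to reach a given Trainium chip count."""
    return math.ceil(total_chips / CHIPS_PER_TRN1_32XLARGE)

print(instances_for_chips(30_000))  # 1875
```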
-
Inf2
-
Amazon EC2 Inf2 instances are purpose built for deep learning inference. They deliver high performance at the lowest cost in Amazon EC2 for generative artificial intelligence models, including large language models and vision transformers. Inf2 instances are powered by AWS Inferentia2. These instances offer 3x higher compute performance, 4x higher accelerator memory, up to 4x higher throughput, and up to 10x lower latency compared to Inf1 instances.
Features:
- Up to 12 AWS Inferentia2 chips
- Supported by AWS Neuron SDK
- Dual AMD EPYC processors (AMD EPYC 7R13)
- Up to 384 GB of shared accelerator memory (32 GB HBM per accelerator)
- Up to 100 Gbps networking
Instance Size | Inferentia2 Chips | Accelerator Memory (GB) | vCPU | Memory (GiB) | Local Storage | Inter-accelerator Interconnect | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
inf2.xlarge | 1 | 32 | 4 | 16 | EBS Only | NA | Up to 15 | Up to 10
inf2.8xlarge | 1 | 32 | 32 | 128 | EBS Only | NA | Up to 25 | 10
inf2.24xlarge | 6 | 192 | 96 | 384 | EBS Only | Yes | 50 | 30
inf2.48xlarge | 12 | 384 | 192 | 768 | EBS Only | Yes | 100 | 60
Use Cases
Natural language understanding (advanced text analytics, document analysis, conversational agents), translation, image and video generation, speech recognition, personalization, fraud detection, and more.
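Accelerator memory in the Inf2 table scales linearly with chip count (32 GB of HBM per Inferentia2 chip, per the features list). A quick consistency check, with figures copied from the table:

```python
HBM_PER_CHIP_GB = 32  # per-accelerator HBM stated in the features above

# (chips, accelerator memory GB) copied from the Inf2 size table
INF2_SIZES = {
    "inf2.xlarge": (1, 32),
    "inf2.8xlarge": (1, 32),
    "inf2.24xlarge": (6, 192),
    "inf2.48xlarge": (12, 384),
}

for name, (chips, mem_gb) in INF2_SIZES.items():
    # every size's accelerator memory equals chips x 32 GB
    assert mem_gb == chips * HBM_PER_CHIP_GB, name
print("accelerator memory is consistent across all sizes")
```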
-
Inf1
-
Amazon EC2 Inf1 instances are built from the ground up to support machine learning inference applications.
Features:
- Up to 16 AWS Inferentia Chips
- Supported by AWS Neuron SDK
- High frequency 2nd Generation Intel Xeon Scalable processors (Cascade Lake P-8259L)
- Up to 100 Gbps networking
Instance Size | Inferentia Chips | vCPUs | Memory (GiB) | Instance Storage | Inter-accelerator Interconnect | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps)
inf1.xlarge | 1 | 4 | 8 | EBS only | N/A | Up to 25 | Up to 4.75
inf1.2xlarge | 1 | 8 | 16 | EBS only | N/A | Up to 25 | Up to 4.75
inf1.6xlarge | 4 | 24 | 48 | EBS only | Yes | 25 | 4.75
inf1.24xlarge | 16 | 96 | 192 | EBS only | Yes | 100 | 19
Use Cases
Recommendation engines, forecasting, image and video analysis, advanced text analytics, document analysis, voice, conversational agents, translation, transcription, and fraud detection.
-
DL1
-
Amazon EC2 DL1 instances are powered by Gaudi accelerators from Habana Labs (an Intel company). They deliver up to 40% better price performance for training deep learning models compared to current generation GPU-based EC2 instances.
Features:
- 2nd Generation Intel Xeon Scalable Processor (Cascade Lake P-8275CL)
- Up to 8 Gaudi accelerators with 32 GB of high bandwidth memory (HBM) per accelerator
- 400 Gbps of networking throughput
- 4 TB of local NVMe storage
Instance Size | vCPU | Gaudi Accelerators | Instance Memory (GiB) | Instance Storage (GB) | Accelerator Peer-to-Peer Bidirectional (Gbps) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
dl1.24xlarge | 96 | 8 | 768 | 4 x 1000 NVMe SSD | 100 | 400 | 19
DL1 instances have the following specs:
- 2nd Generation Intel Xeon Scalable Processor
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Deep learning training, object detection, image recognition, natural language processing, and recommendation engines.
-
DL2q
-
Amazon EC2 DL2q instances, powered by Qualcomm AI 100 accelerators, can be used to cost-efficiently deploy deep learning (DL) workloads in the cloud or validate performance and accuracy of DL workloads that will be deployed on Qualcomm devices.
Features:
- 8 Qualcomm AI 100 accelerators
- Supported by Qualcomm Cloud AI Platform and Apps SDK
- 2nd Generation Intel Xeon Scalable Processors (Cascade Lake P-8259CL)
- Up to 128 GB of shared accelerator memory
- Up to 100 Gbps networking
Instance Size | Qualcomm AI 100 Accelerators | Accelerator Memory (GB) | vCPU | Memory (GiB) | Local Storage | Inter-accelerator Interconnect | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
dl2q.24xlarge | 8 | 128 | 96 | 768 | EBS Only | No | 100 | 19
Use Cases
Run popular DL and generative AI applications, such as content generation, image analysis, text summarization, and virtual assistants. Validate AI workloads before deploying them across smartphones, automobiles, robotics, and extended reality headsets.
-
F2
-
Amazon EC2 F2 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs).
Features:
- Up to 8 AMD Virtex UltraScale+ HBM VU47P FPGAs with 2.9 million logic cells and 9024 DSP slices
- 3rd generation AMD EPYC processor
- 64 GiB of DDR4 ECC-protected FPGA memory
- Dedicated FPGA PCI-Express x16 interface
- Up to 100 Gbps of networking bandwidth
- Supported by FPGA Developer AMI and FPGA Development Kit
Instance Name | FPGAs | vCPU | FPGA Memory HBM / DDR4 | Instance Memory (GB) | Local Storage (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
f2.12xlarge | 2 | 48 | 32 GiB / 128 GiB | 512 | 2 x 950 | 25 | 15
f2.48xlarge | 8 | 192 | 128 GiB / 512 GiB | 2,048 | 8 x 950 | 100 | 60
Use Cases
Genomics research, financial analytics, real-time video processing, big data search and analysis, and security.
-
F1
-
Amazon EC2 F1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs).
Instances Features:
- High frequency Intel Xeon Scalable Processors (Broadwell E5-2686 v4)
- NVMe SSD Storage
- Support for Enhanced Networking
FPGA Features:
- Xilinx Virtex UltraScale+ VU9P FPGAs
- 64 GiB of ECC-protected memory on 4x DDR4
- Dedicated PCI-Express x16 interface
- Approximately 2.5 million logic elements
- Approximately 6,800 Digital Signal Processing (DSP) engines
- FPGA Developer AMI
Instance | FPGAs | vCPU | Memory (GiB) | Instance Storage (GB) | Networking Performance (Gbps)***
f1.2xlarge | 1 | 8 | 122 | 1 x 470 | Up to 10
f1.4xlarge | 2 | 16 | 244 | 1 x 940 | Up to 10
f1.16xlarge | 8 | 64 | 976 | 4 x 940 | 25
For f1.16xlarge instances, the dedicated PCI-e fabric lets the FPGAs share the same memory space and communicate with each other across the fabric at up to 12 Gbps in each direction.
All instances have the following specs:
- 2.3 GHz (base) and 2.7 GHz (turbo) Intel Xeon E5-2686 v4 Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Genomics research, financial analytics, real-time video processing, big data search and analysis, and security.
-
VT1
-
Amazon EC2 VT1 instances are designed to deliver low cost real-time video transcoding with support for up to 4K UHD resolution.
Features:
- 2nd Generation Intel Xeon Scalable Processors (Cascade Lake P-8259CL)
- Up to 8 Xilinx U30 media accelerator cards with accelerated H.264/AVC and H.265/HEVC codecs
- Up to 25 Gbps of enhanced networking throughput
- Up to 19 Gbps of EBS bandwidth
Instance Size | U30 Accelerators | vCPU | Memory (GiB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) | 1080p60 Streams | 4Kp60 Streams
vt1.3xlarge | 1 | 12 | 24 | 3.125 | Up to 4.75 | 8 | 2
vt1.6xlarge | 2 | 24 | 48 | 6.25 | 4.75 | 16 | 4
vt1.24xlarge | 8 | 96 | 192 | 25 | 19 | 64 | 16
All instances have the following specs:
- 2nd Generation Intel Xeon Scalable Processors
- Intel AVX†, Intel AVX2†, Intel AVX-512, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Live event broadcast, video conferencing, and just-in-time transcoding.
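Stream capacity in the VT1 table scales with the number of U30 cards (8 x 1080p60 or 2 x 4Kp60 streams per card). A hypothetical sizing helper for live-event planning (figures from the table; function names are ours):

```python
import math

# streams per U30 card, derived from the VT1 size table above
STREAMS_PER_U30 = {"1080p60": 8, "4kp60": 2}
U30_PER_SIZE = {"vt1.3xlarge": 1, "vt1.6xlarge": 2, "vt1.24xlarge": 8}

def cards_needed(streams: int, resolution: str) -> int:
    """U30 cards required for a given concurrent stream count."""
    return math.ceil(streams / STREAMS_PER_U30[resolution])

def smallest_vt1(streams: int, resolution: str) -> str:
    """Smallest single VT1 size with enough accelerator cards."""
    cards = cards_needed(streams, resolution)
    for size, u30 in sorted(U30_PER_SIZE.items(), key=lambda kv: kv[1]):
        if u30 >= cards:
            return size
    raise ValueError("spread the streams across multiple instances")

print(smallest_vt1(12, "1080p60"))  # needs 2 cards -> vt1.6xlarge
```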
Each vCPU is a thread of either an Intel Xeon core or an AMD EPYC core, except for T2 and m3.medium.
† AVX, AVX2, AVX-512, and Enhanced Networking are only available on instances launched with HVM AMIs.
* This is the default and maximum number of vCPUs available for this instance type. You can specify a custom number of vCPUs when launching this instance type. For more details on valid vCPU counts and how to start using this feature, visit the Optimize CPUs documentation page.
*** Instances marked with "Up to" Network Bandwidth have a baseline bandwidth and can use a network I/O credit mechanism to burst beyond their baseline bandwidth on a best effort basis. For more information, see instance network bandwidth.
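When sustained throughput matters, the distinction above can be filtered programmatically. A minimal sketch that separates burst-ceiling sizes from those with dedicated bandwidth; the spec strings are copied from the Inf2 table in this section, and the helper name is ours:

```python
def is_burstable(bandwidth_spec: str) -> bool:
    """True when the listed bandwidth is a burst ceiling, not a sustained rate."""
    return bandwidth_spec.strip().lower().startswith("up to")

# network bandwidth specs copied from the Inf2 table above
specs = {
    "inf2.xlarge": "Up to 15",
    "inf2.8xlarge": "Up to 25",
    "inf2.24xlarge": "50",
    "inf2.48xlarge": "100",
}

# sizes whose listed bandwidth is sustained rather than best-effort burst
sustained = [name for name, spec in specs.items() if not is_burstable(spec)]
print(sustained)  # ['inf2.24xlarge', 'inf2.48xlarge']
```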
Storage Optimized
Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
-
I8g
-
I7ie
-
I4g
-
Im4gn
-
Is4gen
-
I4i
-
I3
-
I3en
-
D3
-
D3en
-
D2
-
H1
-
I8g
-
Amazon EC2 I8g instances are powered by AWS Graviton4 processors and 3rd generation AWS Nitro SSDs. They deliver the best compute and storage performance among storage-optimized Amazon EC2 instances.
Features:
- Powered by custom-built AWS Graviton4 processors
- Featuring up to 22.5 TB of local NVMe SSD instance storage with 3rd generation AWS Nitro SSDs.
- Features the latest DDR5-5600 memory
- Up to 56.25 Gbps of network bandwidth
- Up to 30 Gbps of bandwidth to Amazon Elastic Block Store (EBS)
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
i8g.large | 2 | 16 | 1 x 468 GB = 468 GB | Up to 10 | Up to 10
i8g.xlarge | 4 | 32 | 1 x 937 GB = 937 GB | Up to 10 | Up to 10
i8g.2xlarge | 8 | 64 | 1 x 1,875 GB = 1,875 GB | Up to 12 | Up to 10
i8g.4xlarge | 16 | 128 | 1 x 3,750 GB = 3,750 GB | Up to 25 | Up to 10
i8g.8xlarge | 32 | 256 | 2 x 3,750 GB = 7,500 GB | Up to 25 | 10
i8g.12xlarge | 48 | 384 | 3 x 3,750 GB = 11,250 GB | Up to 28.125 | 15
i8g.16xlarge | 64 | 512 | 4 x 3,750 GB = 15,000 GB | Up to 37.5 | 20
i8g.24xlarge | 96 | 768 | 6 x 3,750 GB = 22,500 GB | Up to 56.25 | 30
i8g.metal-24xl | 96 | 768 | 6 x 3,750 GB = 22,500 GB | Up to 56.25 | 30
Use Cases
I/O-intensive workloads that require real-time latency access to data, such as relational databases (MySQL, PostgreSQL), real-time databases, NoSQL databases (Aerospike, Apache Druid, Clickhouse, MongoDB), and real-time analytics such as Apache Spark.
-
I7ie
-
Amazon EC2 I7ie instances are powered by 5th generation Intel Xeon Scalable processors and 3rd generation AWS Nitro SSDs. They deliver the highest local NVMe storage density in the cloud.
Features:
- Powered by up to 3.2 GHz 5th generation Intel Xeon Scalable Processors (Emerald Rapids 8559C)
- Intel Advanced Matrix Extensions (AMX) accelerate matrix multiplication operations
- Featuring up to 120 TB of local NVMe SSD instance storage with 3rd generation AWS Nitro SSDs
- Two new virtual sizes: i7ie.18xlarge and i7ie.48xlarge
- Features the latest DDR5-5600 memory
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Up to 100 Gbps of network bandwidth
- Up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS)
- Supports Elastic Fabric Adapter (EFA) on i7ie.48xlarge
- Support for up to 128 EBS volume attachments per instance
- Powered by the AWS Nitro System, a combination of dedicated hardware and software
Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
i7ie.large | 2 | 16 | 1 x 1,250 GB = 1,250 GB | Up to 25 | Up to 10
i7ie.xlarge | 4 | 32 | 1 x 2,500 GB = 2,500 GB | Up to 25 | Up to 10
i7ie.2xlarge | 8 | 64 | 2 x 2,500 GB = 5,000 GB | Up to 25 | Up to 10
i7ie.3xlarge | 12 | 96 | 1 x 7,500 GB = 7,500 GB | Up to 25 | Up to 10
i7ie.6xlarge | 24 | 192 | 2 x 7,500 GB = 15,000 GB | Up to 25 | Up to 10
i7ie.12xlarge | 48 | 384 | 4 x 7,500 GB = 30,000 GB | Up to 50 | 15
i7ie.18xlarge | 72 | 576 | 6 x 7,500 GB = 45,000 GB | Up to 75 | 22.5
i7ie.24xlarge | 96 | 768 | 8 x 7,500 GB = 60,000 GB | Up to 100 | 30
i7ie.48xlarge | 192 | 1,536 | 16 x 7,500 GB = 120,000 GB | 100 | 60
All instances have the following specs:
- 5th generation Intel Xeon Scalable processors
- Optimized for Amazon EBS
- Enhanced networking†
Use Cases
Applications that require high throughput and real-time latency access to large amounts of data residing on instance storage such as NoSQL databases (e.g., Cassandra, MongoDB, Aerospike, HBase, RocksDB), distributed file systems, search engines, and analytics.
-
I4g
-
Amazon EC2 I4g instances are powered by AWS Graviton2 processors and provide the best price performance for storage-intensive workloads in Amazon EC2. I4g instances deliver up to 15% better compute performance compared to similar storage-optimized instances.
Features:
- Powered by AWS Graviton2 processors
- Featuring up to 15 TB of NVMe SSD instance storage with AWS Nitro SSDs that provide up to 60% lower I/O latency and up to 75% reduced latency variability compared to I3 and I3en instances and feature always-on encryption
- Optimized for workloads that map to 8 GB of memory per vCPU
- Up to 38 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based enhanced networking
- Supports Elastic Fabric Adapter (EFA) on i4g.16xlarge instances
- Up to 20 Gbps of bandwidth to the Amazon Elastic Block Store (EBS)
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for torn write prevention (TWP) to facilitate additional performance and reduce latencies with database workloads such as MySQL and MariaDB.
Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
i4g.large | 2 | 16 | 1 x 468 GB = 468 GB | Up to 10 | Up to 10
i4g.xlarge | 4 | 32 | 1 x 937 GB = 937 GB | Up to 10 | Up to 10
i4g.2xlarge | 8 | 64 | 1 x 1,875 GB = 1,875 GB | Up to 12 | Up to 10
i4g.4xlarge | 16 | 128 | 1 x 3,750 GB = 3,750 GB | Up to 25 | Up to 10
i4g.8xlarge | 32 | 256 | 2 x 3,750 GB = 7,500 GB | 18.75 | 10
i4g.16xlarge | 64 | 512 | 4 x 3,750 GB = 15,000 GB | 37.5 | 20
All instances have the following specs:
- Custom-built AWS Graviton2 processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking†
Use Cases
Amazon EC2 I4g instances are optimized for I/O intensive applications and are targeted to customers using transactional databases (Amazon DynamoDB, MySQL, and PostgreSQL), Amazon OpenSearch Service, and real-time analytics such as Apache Spark.
-
Im4gn
-
Amazon EC2 Im4gn instances are powered by AWS Graviton2 processors and provide the best price performance for storage-intensive workloads in Amazon EC2. They provide up to 40% better price performance and up to 44% lower cost per TB of storage than I3 instances.
Features:
- Powered by AWS Graviton2 processors
- Featuring up to 30 TB of NVMe SSD instance storage with AWS Nitro SSDs that provide up to 60% lower I/O latency and up to 75% reduced latency variability compared to I3 and I3en instances and feature always-on encryption
- Optimized for workloads that map to 4 GB of memory per vCPU
- 2x NVMe SSD storage density per vCPU compared to I3 instances
- Up to 100 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking
- Support for Elastic Fabric Adapter on im4gn.16xlarge
- Up to 38 Gbps of bandwidth to the Amazon Elastic Block Store
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for Torn Write Prevention (TWP) to enable additional performance and reduce latencies with database workloads such as MySQL and MariaDB.
Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps)
im4gn.large | 2 | 8 | 1 x 937 AWS Nitro SSD | Up to 25 | Up to 9.5
im4gn.xlarge | 4 | 16 | 1 x 1875 AWS Nitro SSD | Up to 25 | Up to 9.5
im4gn.2xlarge | 8 | 32 | 1 x 3750 AWS Nitro SSD | Up to 25 | Up to 9.5
im4gn.4xlarge | 16 | 64 | 1 x 7500 AWS Nitro SSD | 25 | 9.5
im4gn.8xlarge | 32 | 128 | 2 x 7500 AWS Nitro SSD | 50 | 19
im4gn.16xlarge | 64 | 256 | 4 x 7500 AWS Nitro SSD | 100 | 38
All instances have the following specs:
- Custom built AWS Graviton2 Processor
- EBS Optimized
- Enhanced Networking†
Use Cases
These instances maximize the number of transactions processed per second (TPS) for I/O intensive and business-critical workloads which have medium size data sets and can benefit from high compute performance and high network throughput such as relational databases (MySQL, MariaDB, and PostgreSQL), and NoSQL databases (KeyDB, ScyllaDB, and Cassandra). They are also an ideal fit for workloads that require very fast access to medium size data sets on local storage such as search engines and data analytics workloads.
-
Is4gen
-
Amazon EC2 Is4gen instances are powered by AWS Graviton2 processors and offer the lowest cost per TB of SSD storage and the highest density of SSD storage per vCPU in Amazon EC2 for storage-intensive workloads. These instances provide up to 15% lower cost per TB and up to 48% better compute performance per vCPU compared to I3en instances.
Features:
- Powered by AWS Graviton2 processors
- Featuring up to 30 TB of NVMe SSD instance storage with AWS Nitro SSDs that provide up to 60% lower I/O latency and up to 75% reduced latency variability compared to I3 and I3en instances and feature always-on encryption
- Optimized for workloads that map to 6 GB of memory per vCPU
- 50% more NVMe SSD storage per vCPU compared to I3en
- Up to 50 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking
- Up to 19 Gbps of bandwidth to the Amazon Elastic Block Store
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for Torn Write Prevention (TWP) to enable additional performance and reduce latencies with database workloads such as MySQL and MariaDB.
Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps)
is4gen.medium | 1 | 6 | 1 x 937 AWS Nitro SSD | Up to 25 | Up to 9.5
is4gen.large | 2 | 12 | 1 x 1875 AWS Nitro SSD | Up to 25 | Up to 9.5
is4gen.xlarge | 4 | 24 | 1 x 3750 AWS Nitro SSD | Up to 25 | Up to 9.5
is4gen.2xlarge | 8 | 48 | 1 x 7500 AWS Nitro SSD | Up to 25 | Up to 9.5
is4gen.4xlarge | 16 | 96 | 2 x 7500 AWS Nitro SSD | 25 | 9.5
is4gen.8xlarge | 32 | 192 | 4 x 7500 AWS Nitro SSD | 50 | 19
All instances have the following specs:
- Custom built AWS Graviton2 Processor with 64-bit Arm cores
- EBS Optimized
- Enhanced Networking†
Use Cases
These instances maximize the number of transactions processed per second (TPS) for I/O-demanding workloads such as NoSQL databases (KeyDB, MongoDB, ScyllaDB, and Cassandra) that have large datasets and can map to the highest NVMe storage density per vCPU. They are also an ideal fit for workloads that require higher storage density and very fast access to large data sets on local storage such as search engines (Splunk and Elasticsearch), data streaming, and large distributed file systems.
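The three Graviton2 storage families above differ mainly in their fixed memory-to-vCPU ratio (Im4gn 4 GB, Is4gen 6 GB, and I4g 8 GB per vCPU, per their feature lists). A hypothetical selector that picks a family from a workload's memory requirement; the function and its name are illustrative only:

```python
# GB of memory per vCPU for each Graviton2 storage family (from the feature lists above)
GB_PER_VCPU = {"im4gn": 4, "is4gen": 6, "i4g": 8}

def pick_family(vcpus_needed: int, memory_gb_needed: int) -> str:
    """Pick the family with the smallest fixed ratio that still covers the memory need."""
    ratio = memory_gb_needed / vcpus_needed
    for family, r in sorted(GB_PER_VCPU.items(), key=lambda kv: kv[1]):
        if r >= ratio:
            return family
    # nothing offers a higher ratio; take the largest and scale vCPUs up instead
    return "i4g"

print(pick_family(16, 96))  # 6 GB per vCPU -> is4gen
```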
-
I4i
-
Amazon EC2 I4i instances are powered by 3rd generation Intel Xeon Scalable processors (Ice Lake) and deliver the highest local storage performance within Amazon EC2 using AWS Nitro NVMe SSDs.
Features:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable Processors (Ice Lake 8375C)
- Up to 30% better compute price performance than I3 instances
- Up to 30 TB of NVMe storage from AWS Nitro SSDs that provide up to 60% lower storage I/O latency, and up to 75% lower storage I/O latency variability compared to I3 instances
- Up to 75 Gbps of networking speed
- Up to 40 Gbps of bandwidth to the Amazon Elastic Block Store
- A new instance size (32xlarge) with 128 vCPUs and 1,024 GiB of memory
- Supports Elastic Fabric Adapter on the 32xlarge size
- Support for always-on memory encryption using Intel Total Memory Encryption (TME)
- Built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Support for new Intel Advanced Vector Extensions (AVX 512) instructions for faster execution of cryptographic algorithms
- Support for Torn Write Prevention (TWP) to enable additional performance and reduce latencies with database workloads such as MySQL and MariaDB.
Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)*** | EBS Bandwidth (Gbps)
i4i.large | 2 | 16 | 1 x 468 AWS Nitro SSD | Up to 10 | Up to 10
i4i.xlarge | 4 | 32 | 1 x 937 AWS Nitro SSD | Up to 10 | Up to 10
i4i.2xlarge | 8 | 64 | 1 x 1875 AWS Nitro SSD | Up to 12 | Up to 10
i4i.4xlarge | 16 | 128 | 1 x 3750 AWS Nitro SSD | Up to 25 | Up to 10
i4i.8xlarge | 32 | 256 | 2 x 3750 AWS Nitro SSD | 18.75 | 10
i4i.12xlarge | 48 | 384 | 3 x 3750 AWS Nitro SSD | 28.12 | 15
i4i.16xlarge | 64 | 512 | 4 x 3750 AWS Nitro SSD | 37.5 | 20
i4i.24xlarge | 96 | 768 | 6 x 3750 AWS Nitro SSD | 56.25 | 30
i4i.32xlarge | 128 | 1,024 | 8 x 3750 AWS Nitro SSD | 75 | 40
i4i.metal | 128 | 1,024 | 8 x 3750 AWS Nitro SSD | 75 | 40
All instances have the following specs:
- Up to 3.5 GHz 3rd generation Intel Xeon Scalable processors
- EBS Optimized
- Enhanced Networking†
Use Cases
These instances are designed to maximize transactions per second (TPS) for I/O demanding workloads that require very fast access to small to medium sized data sets on local storage such as transactional databases (e.g. MySQL, Oracle DB, and Microsoft SQL Server), and NoSQL databases (e.g. MongoDB, Couchbase, Aerospike and Redis). I4i instances are also an ideal fit for workloads that can benefit from high compute performance per TB of storage such as data analytics and search engines.
-
I3
-
This instance family provides Non-Volatile Memory Express (NVMe) SSD-backed instance storage optimized for low latency, very high random I/O performance, high sequential read throughput, and high IOPS at a low cost. I3 also offers Bare Metal instances (i3.metal), powered by the Nitro System, for non-virtualized workloads, workloads that benefit from access to physical resources, or workloads that may have license restrictions.
Features:
- High Frequency Intel Xeon Scalable Processors (Broadwell E5-2686 v4) with base frequency of 2.3 GHz
- Up to 25 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking
- High Random I/O performance and High Sequential Read throughput
- Supports a bare metal instance size for workloads that benefit from direct access to the physical processor and memory
Instance | vCPU* | Mem (GiB) | Instance Storage (GB) | Networking Performance (Gbps)***
i3.large | 2 | 15.25 | 1 x 475 NVMe SSD | Up to 10
i3.xlarge | 4 | 30.5 | 1 x 950 NVMe SSD | Up to 10
i3.2xlarge | 8 | 61 | 1 x 1900 NVMe SSD | Up to 10
i3.4xlarge | 16 | 122 | 2 x 1900 NVMe SSD | Up to 10
i3.8xlarge | 32 | 244 | 4 x 1900 NVMe SSD | 10
i3.16xlarge | 64 | 488 | 8 x 1900 NVMe SSD | 25
i3.metal | 72** | 512 | 8 x 1900 NVMe SSD | 25
All instances have the following specs:
- 2.3 GHz Intel Xeon E5 2686 v4 Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Small to medium-scale NoSQL databases (e.g. Cassandra, MongoDB, Aerospike), in-memory databases (e.g. Redis), scale-out transactional databases, data warehousing, Elasticsearch, analytics workloads.
-
I3en
-
This instance family provides dense Non-Volatile Memory Express (NVMe) SSD instance storage optimized for low latency, high random I/O performance, high sequential disk throughput, and offers the lowest price per GB of SSD instance storage on Amazon EC2. I3en also offers Bare Metal instances (i3en.metal), powered by the Nitro System, for non-virtualized workloads, workloads that benefit from access to physical resources, or workloads that may have license restrictions.
Features:
- Up to 60 TB of NVMe SSD instance storage
- Up to 100 Gbps of network bandwidth using Elastic Network Adapter (ENA)-based Enhanced Networking
- High random I/O performance and high sequential disk throughput
- Up to 3.1 GHz Intel® Xeon® Scalable Processors (Skylake 8175M or Cascade Lake 8259CL) with new Intel Advanced Vector Extension (AVX-512) instruction set
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
- Supports a bare metal instance size for workloads that benefit from direct access to the physical processor and memory
- Support for Elastic Fabric Adapter on i3en.24xlarge
Instance | vCPU | Mem (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps)***
i3en.large | 2 | 16 | 1 x 1250 NVMe SSD | Up to 25
i3en.xlarge | 4 | 32 | 1 x 2500 NVMe SSD | Up to 25
i3en.2xlarge | 8 | 64 | 2 x 2500 NVMe SSD | Up to 25
i3en.3xlarge | 12 | 96 | 1 x 7500 NVMe SSD | Up to 25
i3en.6xlarge | 24 | 192 | 2 x 7500 NVMe SSD | 25
i3en.12xlarge | 48 | 384 | 4 x 7500 NVMe SSD | 50
i3en.24xlarge | 96 | 768 | 8 x 7500 NVMe SSD | 100
i3en.metal | 96 | 768 | 8 x 7500 NVMe SSD | 100
All instances have the following specs:
- 3.1 GHz all core turbo Intel® Xeon® Scalable (Skylake) processors
- Intel AVX†, Intel AVX2†, Intel AVX-512†, Intel Turbo
- EBS Optimized
- Enhanced Networking
Use cases
Small to large-scale NoSQL databases (e.g. Cassandra, MongoDB, Aerospike), in-memory databases (e.g. Redis), scale-out transactional databases, distributed file systems, data warehousing, Elasticsearch, analytics workloads.
-
D3
-
Amazon EC2 D3 instances are optimized for applications that require high sequential I/O performance and disk throughput. D3 instances represent an optimal upgrade path for workloads running on D2 instances that need additional compute and network performance at a lower price/TB.
Features:
- Up to 3.1 GHz 2nd Generation Intel® Xeon® Scalable Processors (Intel Cascade Lake 8259CL) with new Intel Advanced Vector Extension (AVX-512) instruction set
- Up to 48 TB of HDD instance storage
- Up to 45% higher read and write disk throughput than EC2 D2 instances
- Powered by the AWS Nitro System
Instance Size | vCPU | Memory (GiB) | Instance Storage (TB) | Aggregate Disk Throughput (MiB/s)* | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps)
d3.xlarge | 4 | 32 | 3 x 2 HDD | 580 | Up to 15 | 850
d3.2xlarge | 8 | 64 | 6 x 2 HDD | 1,100 | Up to 15 | 1,700
d3.4xlarge | 16 | 128 | 12 x 2 HDD | 2,300 | Up to 15 | 2,800
d3.8xlarge | 32 | 256 | 24 x 2 HDD | 4,600 | 25 | 5,000
*128k block sizes, sequential read and write (rounded to nearest 100 except for xlarge)
All instances have the following specs:
- Up to 3.1 GHz 2nd Generation Intel® Xeon® Scalable (Cascade Lake) processors
- Intel AVX†, Intel AVX2†, Intel AVX-512†, Intel Turbo
- Enhanced Networking
Use Cases
Distributed File Systems (e.g., HDFS, MapReduce File Systems), Big Data analytical workloads (e.g., Elastic MapReduce, Spark, Hadoop), Massively Parallel Processing (MPP) Data warehouse (e.g. Redshift, HP Vertica), Log or data processing applications (e.g., Kafka, Elastic Search)
-
D3en
-
Amazon EC2 D3en instances are optimized for applications that require high sequential I/O performance, disk throughput, and low cost storage for very large data sets. D3en instances offer the lowest dense storage costs amongst all cloud offerings.
Features:
- Up to 3.1 GHz 2nd Generation Intel® Xeon® Scalable Processors (Intel Cascade Lake 8259CL) with new Intel Advanced Vector Extension (AVX-512) instruction set
- Up to 336 TB of HDD instance storage
- Up to 75 Gbps of network bandwidth
- Up to 2x higher read and write disk throughput than EC2 D2 instances
- Powered by the AWS Nitro System
Instance Size | vCPU | Memory (GiB) | Instance Storage (TB) | Aggregate Disk Throughput (MiB/s)* | Network Bandwidth (Gbps)*** | EBS Bandwidth (Mbps)
d3en.xlarge | 4 | 16 | 2 x 14 HDD | 500 | Up to 25 | 850
d3en.2xlarge | 8 | 32 | 4 x 14 HDD | 1,000 | Up to 25 | 1,700
d3en.4xlarge | 16 | 64 | 8 x 14 HDD | 2,000 | 25 | 2,800
d3en.6xlarge | 24 | 96 | 12 x 14 HDD | 3,100 | 40 | 4,000
d3en.8xlarge | 32 | 128 | 16 x 14 HDD | 4,100 | 50 | 5,000
d3en.12xlarge | 48 | 192 | 24 x 14 HDD | 6,200 | 75 | 7,000
*128k block sizes, sequential read and write (rounded to nearest 100)
All instances have the following specs:
- 3.1 GHz all core turbo 2nd Generation Intel® Xeon® Scalable (Cascade Lake) processors
- Intel AVX†, Intel AVX2†, Intel AVX-512†, Intel Turbo
- Enhanced Networking
Use Cases
Multi-node file storage systems such as Lustre, BeeGFS, GPFS, VxCFS, and GFS2, and high-capacity data lakes with consistent sequential I/O performance.
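Total instance storage in the D3en table is simply the disk count times the 14 TB drive size; a quick consistency check against the "up to 336 TB" figure quoted above (disk counts copied from the table):

```python
# disk count per size, copied from the D3en table above
D3EN_DISKS = {
    "d3en.xlarge": 2,
    "d3en.2xlarge": 4,
    "d3en.4xlarge": 8,
    "d3en.6xlarge": 12,
    "d3en.8xlarge": 16,
    "d3en.12xlarge": 24,
}
TB_PER_DISK = 14  # each HDD is 14 TB

# total HDD capacity in TB for every size
totals = {size: n * TB_PER_DISK for size, n in D3EN_DISKS.items()}
print(totals["d3en.12xlarge"])  # 336, matching the stated maximum of 336 TB
```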
-
D2
-
Amazon EC2 D2 instances feature up to 48 TB of HDD-based local storage, deliver high disk throughput, and offer the lowest price per disk throughput performance on Amazon EC2.
Features:
- High-frequency Intel Xeon Scalable Processors (Haswell E5-2676 v3)
- HDD storage
- Consistent high performance at launch time
- High disk throughput
- Support for Enhanced Networking
Instance | vCPU* | Mem (GiB) | Instance Storage (GB) | Network Performance
d2.xlarge | 4 | 30.5 | 3 x 2000 HDD | Moderate
d2.2xlarge | 8 | 61 | 6 x 2000 HDD | High
d2.4xlarge | 16 | 122 | 12 x 2000 HDD | High
d2.8xlarge | 36 | 244 | 24 x 2000 HDD | 10 Gbps
All instances have the following specs:
- 2.4 GHz Intel Xeon E5-2676 v3 Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, distributed file systems, network file systems, log or data-processing applications.
-
H1
-
Amazon EC2 H1 instances feature up to 16 TB of HDD-based local storage and deliver high disk throughput and a balance of compute and memory.
Features:
- Powered by 2.3 GHz Intel Xeon Scalable Processor (Broadwell E5 2686 v4)
- Up to 16 TB of HDD storage
- High disk throughput
- ENA enabled Enhanced Networking up to 25 Gbps
Instance | vCPU* | Mem (GiB) | Networking Performance (Gbps)*** | Instance Storage (GB)
h1.2xlarge | 8 | 32 | Up to 10 | 1 x 2000 HDD
h1.4xlarge | 16 | 64 | Up to 10 | 2 x 2000 HDD
h1.8xlarge | 32 | 128 | 10 | 4 x 2000 HDD
h1.16xlarge | 64 | 256 | 25 | 8 x 2000 HDD
All instances have the following specs:
- 2.3 GHz Intel Xeon E5 2686 v4 Processor
- Intel AVX†, Intel AVX2†, Intel Turbo
- EBS Optimized
- Enhanced Networking†
Use Cases
MapReduce-based workloads, distributed file systems such as HDFS and MapR-FS, network file systems, log or data processing applications such as Apache Kafka, and big data workload clusters.
** i3.metal provides 72 logical processors on 36 physical cores
Looking for previous generation instances that were not listed here? Please see the Previous Generation Instances page.
HPC Optimized
High performance computing (HPC) instances are purpose built to offer the best price performance for running HPC workloads at scale on AWS. HPC instances are ideal for applications that benefit from high-performance processors such as large, complex simulations and deep learning workloads.
-
Hpc7g
-
Hpc7a
-
Hpc6id
-
Hpc6a
-
Hpc7g
-
Amazon EC2 Hpc7g instances are designed for compute-intensive high performance computing (HPC) workloads, such as computational fluid dynamics (CFD), weather forecasting, and molecular dynamics.
Features:
- Up to 64 cores of Graviton3E processors with 128 GiB of memory
- Elastic Fabric Adapter (EFA) is enabled for internode network bandwidth speeds of up to 200 Gbps, delivering increased performance for network-intensive applications
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | Physical Cores | Memory (GiB) | Instance Storage | EFA Network Bandwidth (Gbps) | Network Bandwidth (Gbps)* |
|---|---|---|---|---|---|
| hpc7g.4xlarge | 16 | 128 | EBS-Only | 200 | 25 |
| hpc7g.8xlarge | 32 | 128 | EBS-Only | 200 | 25 |
| hpc7g.16xlarge | 64 | 128 | EBS-Only | 200 | 25 |

*500 Mbps network bandwidth outside of the virtual private cloud (VPC) and Amazon Simple Storage Service (Amazon S3).
-
Hpc7a
-
Amazon EC2 Hpc7a instances feature 4th Gen AMD EPYC processors and are designed for tightly coupled, compute-intensive high performance computing (HPC) workloads such as computational fluid dynamics (CFD), weather forecasting, and multiphysics simulations.
Features:
- Up to 192 cores of 4th Gen AMD EPYC processors with 768 GiB of memory (AMD EPYC 9R14)
- Elastic Fabric Adapter (EFA) is enabled for internode network bandwidth speeds of up to 300 Gbps, delivering increased performance for network-intensive applications
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | Physical Cores | Memory (GiB) | Instance Storage | EFA Network Bandwidth (Gbps) | Network Bandwidth (Gbps)* |
|---|---|---|---|---|---|
| hpc7a.12xlarge | 24 | 768 | EBS-Only | 300 | 25 |
| hpc7a.24xlarge | 48 | 768 | EBS-Only | 300 | 25 |
| hpc7a.48xlarge | 96 | 768 | EBS-Only | 300 | 25 |
| hpc7a.96xlarge | 192 | 768 | EBS-Only | 300 | 25 |
*500 Mbps network bandwidth outside the virtual private cloud (VPC) and Amazon Simple Storage Service (Amazon S3).
-
Hpc6id
-
Amazon EC2 Hpc6id instances are designed for memory-bound and data-intensive high performance computing (HPC) workloads such as finite element analysis (FEA) for crash simulations, seismic reservoir simulations, and structural simulations.
Features:
- Up to 3.5 GHz all-core turbo frequency, 64 cores of Intel Xeon Scalable processors with 5 GB/s per vCPU of memory bandwidth and 1024 GiB of memory
- Elastic Fabric Adapter (EFA) is enabled for inter-node network bandwidth speeds of up to 200 Gbps, delivering increased performance for network-intensive applications
- Simultaneous multi-threading is disabled to optimize performance and cluster management
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | Cores | Memory (GiB) | SSD Storage (GiB) | Network Bandwidth (Gbps)* | EFA Network Bandwidth (Gbps) |
|---|---|---|---|---|---|
| hpc6id.32xlarge | 64 | 1024 | 4 x 3800 (NVMe SSD) | 25 | 200 |

*500 Mbps network bandwidth outside of the virtual private cloud (VPC) and Amazon Simple Storage Service (S3).
-
Hpc6a
-
Amazon EC2 Hpc6a instances are optimized for tightly coupled, compute-intensive, high performance computing (HPC) workloads to deliver cost-efficient performance. Hpc6a instances are designed for workloads such as computational fluid dynamics, molecular dynamics, and weather forecasting. They are also designed for workloads that can take advantage of improved network throughput and packet-rate performance.
Features:
- Up to 3.6 GHz third-generation AMD EPYC processors (AMD EPYC 7R13)
- Elastic Fabric Adapter (EFA) is enabled for inter-node network bandwidth speeds of up to 100 Gbps, delivering increased performance for network-intensive applications
- Simultaneous multithreading is disabled to optimize performance and cluster management
- Powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor
| Instance Size | Cores | Memory (GiB) | Network Bandwidth (Gbps)* | EFA Network Bandwidth (Gbps) |
|---|---|---|---|---|
| hpc6a.48xlarge | 96 | 384 | 25 | 100 |

*25 Gbps networking bandwidth outside of the virtual private cloud (VPC), Amazon Simple Storage Service (S3), or Amazon Elastic Block Store (EBS).
Instance Features
Amazon EC2 instances provide a number of additional features to help you deploy, manage, and scale your applications.
Amazon EC2 allows you to choose between Fixed Performance instance families (e.g. M6, C6, and R6) and Burstable Performance Instance families (e.g. T3). Burstable Performance Instances provide a baseline level of CPU performance with the ability to burst above the baseline.
T Unlimited instances can sustain high CPU performance for as long as a workload needs it. For most general-purpose workloads, T Unlimited instances will provide ample performance without any additional charges. The hourly T instance price automatically covers all interim spikes in usage when the average CPU utilization of a T instance is at or below the baseline over a 24-hour window. If the instance needs to run at higher CPU utilization for a prolonged period, it can do so at a flat additional charge of 5 cents per vCPU-hour.
T instances’ baseline performance and ability to burst are governed by CPU Credits. Each T instance receives CPU Credits continuously, the rate of which depends on the instance size. T instances accrue CPU Credits when they are idle, and use CPU credits when they are active. A CPU Credit provides the performance of a full CPU core for one minute.
For example, a t2.small instance receives credits continuously at a rate of 12 CPU Credits per hour. This capability provides baseline performance equivalent to 20% of a CPU core (20% x 60 mins = 12 mins). If the instance does not use the credits it receives, they are stored in its CPU Credit balance up to a maximum of 288 CPU Credits. When the t2.small instance needs to burst to more than 20% of a core, it draws from its CPU Credit balance to handle this surge automatically.
With T2 Unlimited enabled, the t2.small instance can burst above the baseline even after its CPU Credit balance is drawn down to zero. For a vast majority of general purpose workloads where the average CPU utilization is at or below the baseline performance, the basic hourly price for t2.small covers all CPU bursts. If the instance happens to run at an average 25% CPU utilization (5% above baseline) over a period of 24 hours after its CPU Credit balance is drawn to zero, it will be charged an additional 6 cents (5 cents/vCPU-hour x 1 vCPU x 5% x 24 hours).
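The credit arithmetic above can be sketched in a few lines of plain Python. This is only an illustrative model of the figures quoted in the text (the t2.small earn rate, baseline, credit cap, and the 5 cents per vCPU-hour surplus charge), not an AWS billing API:

```python
# Illustrative model of T-instance CPU Credit mechanics, using the
# t2.small figures quoted above. Not an AWS API; just the arithmetic.

CREDITS_PER_HOUR = 12                          # t2.small earn rate
MAX_CREDIT_BALANCE = 288                       # maximum banked credits
BASELINE_UTILIZATION = CREDITS_PER_HOUR / 60   # 12 min of a core per hour = 20%
SURPLUS_RATE_PER_VCPU_HOUR = 0.05              # flat charge once credits run out


def surplus_charge(avg_utilization, hours, vcpus=1):
    """Dollar charge for sustained usage above baseline after the
    CPU Credit balance is exhausted (T Unlimited mode)."""
    over = max(0.0, avg_utilization - BASELINE_UTILIZATION)
    return over * vcpus * hours * SURPLUS_RATE_PER_VCPU_HOUR


# The worked example from the text: averaging 25% CPU (5% above the
# 20% baseline) for 24 hours with an empty credit balance.
print(f"baseline: {BASELINE_UTILIZATION:.0%}")       # baseline: 20%
print(f"charge:   ${surplus_charge(0.25, 24):.2f}")  # charge:   $0.06
```

Running it reproduces the 6-cent figure from the example above.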
Many applications such as web servers, developer environments and small databases don’t need consistently high levels of CPU, but benefit significantly from having full access to very fast CPUs when they need them. T instances are engineered specifically for these use cases. If you need consistently high CPU performance for applications such as video encoding, high volume websites or HPC applications, we recommend you use Fixed Performance Instances. T instances are designed to perform as if they have dedicated high speed processor cores available when your application really needs CPU performance, while protecting you from the variable performance or other common side-effects you might typically see from over-subscription in other environments.
Amazon EC2 allows you to choose between multiple storage options based on your requirements. Amazon EBS is a durable, block-level storage volume that you can attach to a single, running Amazon EC2 instance. You can use Amazon EBS as a primary storage device for data that requires frequent and granular updates. For example, Amazon EBS is the recommended storage option when you run a database on Amazon EC2. Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance. Once a volume is attached to an instance, you can use it like any other physical hard drive.

Amazon EBS provides three volume types to best meet the needs of your workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. General Purpose (SSD) is the SSD-backed, general purpose EBS volume type that we recommend as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium sized databases, development and test environments, and boot volumes. Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance, and are designed for I/O-intensive applications such as large relational or NoSQL databases. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types, and are ideal for workloads where data is accessed infrequently and the lowest storage cost is important.
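The volume-type guidance above can be encoded as a simple lookup. This is a minimal sketch of the recommendations as stated in the text, not an AWS API; the workload labels are hypothetical names chosen for illustration:

```python
# Minimal sketch (not an AWS API) encoding the EBS volume-type guidance
# above. Workload labels are illustrative, not AWS terminology.

RECOMMENDED_VOLUME_TYPE = {
    "boot volume": "General Purpose (SSD)",
    "dev/test environment": "General Purpose (SSD)",
    "small-to-medium database": "General Purpose (SSD)",
    "large relational or NoSQL database": "Provisioned IOPS (SSD)",
    "infrequently accessed data": "Magnetic",
}


def recommend_volume_type(workload):
    # General Purpose (SSD) is the text's recommended default choice.
    return RECOMMENDED_VOLUME_TYPE.get(workload, "General Purpose (SSD)")


print(recommend_volume_type("large relational or NoSQL database"))
# Provisioned IOPS (SSD)
```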
Many Amazon EC2 instances can also include storage from devices that are located inside the host computer, referred to as instance storage. Instance storage provides temporary block-level storage for Amazon EC2 instances. The data on instance storage persists only during the life of the associated Amazon EC2 instance.
In addition to block level storage via Amazon EBS or instance storage, you can also use Amazon S3 for highly durable, highly available object storage. Learn more about Amazon EC2 storage options from the Amazon EC2 documentation.
For an additional, low, hourly fee, customers can launch selected Amazon EC2 instances types as EBS-optimized instances. EBS-optimized instances enable EC2 instances to fully use the IOPS provisioned on an EBS volume. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Megabits per second (Mbps) and 80 Gigabits per second (Gbps), depending on the instance type used. The dedicated throughput minimizes contention between Amazon EBS I/O and other traffic from your EC2 instance, providing the best performance for your EBS volumes. EBS-optimized instances are designed for use with all EBS volumes. When attached to EBS-optimized instances, Provisioned IOPS volumes can achieve single digit millisecond latencies and are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time. We recommend using Provisioned IOPS volumes with EBS-optimized instances or instances that support cluster networking for applications with high storage I/O requirements.
Select EC2 instances support cluster networking when launched into a common cluster placement group. A cluster placement group provides low-latency networking between all instances in the cluster. The bandwidth an EC2 instance can utilize depends on the instance type and its networking performance specification. Inter-instance traffic within the same Region can utilize up to 5 Gbps for single-flow traffic and up to 100 Gbps for multi-flow traffic in each direction (full duplex). Traffic to and from Amazon S3 buckets in the same Region can also utilize all available instance aggregate bandwidth. When launched in a placement group, instances can utilize up to 10 Gbps for single-flow traffic and up to 100 Gbps for multi-flow traffic. Network traffic to the internet is limited to 5 Gbps (full duplex). Cluster networking is ideal for high performance analytics systems and many science and engineering applications, especially those using the MPI library standard for parallel programming.
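The per-flow limits above can be summarized in a small helper. This is an illustrative encoding of the stated limits only (assuming an instance whose aggregate specification allows 100 Gbps), not an AWS API:

```python
# Illustrative helper (not an AWS API) encoding the per-flow bandwidth
# limits stated above: 5 Gbps single-flow (10 Gbps inside a cluster
# placement group), up to 100 Gbps aggregate multi-flow, and 5 Gbps
# to the internet. Assumes an instance with a 100 Gbps aggregate spec.

def max_flow_bandwidth_gbps(flows, in_placement_group=False, to_internet=False):
    """Upper bound on achievable bandwidth (Gbps) for `flows` parallel flows."""
    if to_internet:
        return 5
    single_flow_cap = 10 if in_placement_group else 5
    return min(flows * single_flow_cap, 100)


print(max_flow_bandwidth_gbps(1))                           # 5
print(max_flow_bandwidth_gbps(1, in_placement_group=True))  # 10
print(max_flow_bandwidth_gbps(20))                          # 20 flows cap at 100
```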
Amazon EC2 instances that feature an Intel processor may provide access to the following processor features:
- Intel AES New Instructions (AES-NI): Intel AES-NI encryption instruction set improves upon the original Advanced Encryption Standard (AES) algorithm to provide faster data protection and greater security. All current generation EC2 instances support this processor feature.
- Intel Advanced Vector Extensions (Intel AVX, Intel AVX2, and Intel AVX-512): Intel AVX and Intel AVX2 are 256-bit instruction set extensions, and Intel AVX-512 is a 512-bit instruction set extension, designed for applications that are Floating Point (FP) intensive. Intel AVX instructions improve performance for applications such as image and audio/video processing, scientific simulations, financial analytics, and 3D modeling and analysis. These features are only available on instances launched with HVM AMIs.
- Intel Turbo Boost Technology: Intel Turbo Boost Technology provides more performance when needed. The processor is able to automatically run cores faster than the base operating frequency to help you get more done faster.
- Intel Deep Learning Boost (Intel DL Boost): A new set of built-in processor technologies designed to accelerate AI deep learning use cases. The 2nd Gen Intel Xeon Scalable processors extend Intel AVX-512 with a new Vector Neural Network Instruction (VNNI/INT8) that significantly increases deep learning inference performance over previous generation Intel Xeon Scalable processors (with FP32), for image recognition/segmentation, object detection, speech recognition, language translation, recommendation systems, reinforcement learning and others. VNNI may not be compatible with all Linux distributions. Please check documentation before using.
Not all processor features are available in all instance types; see the instance type matrix for more detailed information on which features are available from which instance types.
Measuring Instance Performance
Amazon EC2 allows you to provision a variety of instance types, which provide different combinations of CPU, memory, disk, and networking. Launching new instances and running tests in parallel is easy, and we recommend measuring the performance of applications to identify appropriate instance types and validate application architecture. We also recommend rigorous load/scale testing to ensure that your applications can scale as you intend.
Amazon EC2 provides you with a large number of options across ten different instance types, each with one or more size options, organized into distinct instance families optimized for different types of applications. We recommend that you assess the requirements of your applications and select the appropriate instance family as a starting point for application performance testing. You should start evaluating the performance of your applications by (a) identifying how your application needs compare to different instance families (e.g. is the application compute-bound, memory-bound, etc.?), and (b) sizing your workload to identify the appropriate instance size. There is no substitute for measuring the performance of your full application since application performance can be impacted by the underlying infrastructure or by software and architectural limitations. We recommend application-level testing, including the use of application profiling and load testing tools and services. For more information, open a support case and ask for additional network performance specifications for the specific instance types that you are interested in.