How does Amazon EBS calculate the optimal I/O size I should use to improve performance on my gp2 or io1 volume?

Last updated: 2020-06-08

I want to improve the throughput performance of my gp2 or io1 Amazon Elastic Block Store (Amazon EBS) volume. How does Amazon EBS calculate the I/O size I should use to improve performance?


For gp2 and io1 EBS volumes, Amazon EBS calculates performance in terms of input/output operations per second (IOPS). gp2 and io1 volumes can process I/O operations up to a maximum size of 256 KiB.

The size of an I/O operation determines the throughput the EBS volume provides. Amazon EBS calculates throughput using the equation: Throughput = Number of IOPS * size per I/O operation.
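The equation above can be sketched in a few lines of Python. This is an illustrative helper, not an AWS API; the function name and units (KiB per operation, MiB/s out) are assumptions for the example.

```python
def throughput_mib_s(iops, io_size_kib):
    """Throughput (MiB/s) = number of IOPS * size per I/O operation (KiB / 1024)."""
    return iops * io_size_kib / 1024

# A gp2 volume driving its maximum 16,000 IOPS at 16 KiB per operation
# reaches its 250 MiB/s throughput limit exactly:
print(throughput_mib_s(16_000, 16))  # -> 250.0
```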

The size per I/O operation varies by volume type.

  • gp2 volumes
    Max IOPS = 16,000
    Max throughput = 250 MiB/s
  • io1 volumes
    Max IOPS = 64,000
    Max throughput = 1,000 MiB/s

Note: Amazon EBS guarantees the maximum throughput of 1,000 MiB/s only on Nitro-based instances. Other instance families support up to 500 MiB/s. An older io1 volume might not reach full performance until you perform a ModifyVolume action on it.

Amazon EBS calculates the optimal I/O size using the following equation: throughput / number of IOPS = optimal I/O size.

  • gp2 volume optimal I/O size: 250 MiB/s * 1024 / 16,000 IOPS = 16 KiB
  • io1 volume optimal I/O size: 1,000 MiB/s * 1024 / 64,000 IOPS = 16 KiB
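The same division can be written out as a small sketch. The helper name is an assumption for illustration; the limits are the gp2 and io1 figures from the table above.

```python
def optimal_io_size_kib(max_throughput_mib_s, max_iops):
    """Optimal I/O size (KiB) = max throughput / max IOPS, converting MiB to KiB."""
    return max_throughput_mib_s * 1024 / max_iops

print(optimal_io_size_kib(250, 16_000))   # gp2 -> 16.0
print(optimal_io_size_kib(1000, 64_000))  # io1 -> 16.0
```

Both volume types land on the same 16 KiB figure, which is why 16 KiB is the I/O size that lets you hit the IOPS limit and the throughput limit at the same time.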

Amazon EBS merges smaller, sequential I/O operations of 32 KiB or smaller to form a single I/O of up to 256 KiB before processing. Merging smaller, sequential I/O operations into larger, single I/O operations increases throughput.

For example, if the application is performing small I/O operations of 32 KiB:

  • Sequential operations: Amazon EBS merges sequential (physically contiguous) operations to the maximum I/O size of 256 KiB. In this scenario, Amazon EBS counts only 1 IOPS to perform 8 I/O operations submitted by the operating system.
  • Random operations: Amazon EBS counts random I/O operations separately. A single, random I/O operation of 32 KiB counts as 1 IOPS. In this scenario, Amazon EBS counts 8 random, 32 KiB I/O operations submitted by the OS as 8 IOPS.

Because Amazon EBS tries to merge smaller I/O operations into larger ones that consume more I/O bandwidth, you might reach the throughput limit before achieving maximum IOPS. To avoid this, keep your application's I/O operations small (around the optimal 16 KiB) and random enough that Amazon EBS counts each one as a single IOPS.
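To see which limit binds first at a given I/O size, you can take the minimum of the IOPS limit and the IOPS rate the throughput limit allows. The helper below is an assumption-laden sketch using the gp2 limits from this article.

```python
def effective_iops(io_size_kib, max_iops, max_throughput_mib_s):
    """IOPS achievable at a given I/O size, whichever limit binds first."""
    throughput_limited_iops = max_throughput_mib_s * 1024 / io_size_kib
    return min(max_iops, throughput_limited_iops)

# gp2: at the optimal 16 KiB size, the full 16,000 IOPS is reachable...
print(effective_iops(16, 16_000, 250))   # -> 16000
# ...but at the 256 KiB maximum I/O size, throughput caps you at 1,000 IOPS.
print(effective_iops(256, 16_000, 250))  # -> 1000.0
```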

Amazon EBS splits I/O operations larger than the maximum 256 KiB into smaller operations. For example, if the I/O size is 500 KiB, Amazon EBS splits the operation into 2 IOPS. The first one is 256 KiB and the second one is 244 KiB.
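The splitting rule is a straightforward chunking of the request at the 256 KiB boundary. A minimal sketch, assuming only the maximum I/O size stated above:

```python
def split_io(io_size_kib, max_io_kib=256):
    """Split one large I/O request into the chunks EBS counts separately."""
    chunks = []
    while io_size_kib > 0:
        chunk = min(io_size_kib, max_io_kib)
        chunks.append(chunk)
        io_size_kib -= chunk
    return chunks

print(split_io(500))  # -> [256, 244], i.e. counted as 2 IOPS
```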

Note: Instances or kernels that don't support indirect descriptors have an average I/O size at or near 44 KiB. Linux kernels 3.8 and earlier don't support indirect descriptors, so on those kernels you might find your I/O size capped at 44 KiB.