How do I benchmark network throughput between Amazon EC2 Linux instances in the same Amazon VPC?

Last updated: 2020-06-24

I want to measure the network bandwidth between Amazon Elastic Compute Cloud (Amazon EC2) Linux instances in the same Amazon Virtual Private Cloud (Amazon VPC). How can I do that?

Short description

Here are some factors that can affect Amazon EC2 network performance when the instances are in the same Amazon VPC:

  • The physical proximity of the EC2 instances. Instances in the same Availability Zone are geographically closest to each other. Instances in different Availability Zones in the same Region, instances in different Regions on the same continent, and instances in different Regions on different continents are progressively farther away from one another.
  • The EC2 instance maximum transmission unit (MTU). The MTU of a network connection is the largest permissible packet size (in bytes) that the connection can pass. All EC2 instance types support an MTU of 1500. All current generation Amazon EC2 instances support jumbo frames, as do the previous generation C3, G2, I2, M3, and R3 instances. Jumbo frames allow an MTU greater than 1500. However, there are scenarios where your instance is limited to an MTU of 1500 even with jumbo frames. For more information, see Jumbo frames (9001 MTU). A quick way to check or set the MTU on an instance is shown after this list.
  • The size of your EC2 instance. Larger instance sizes for an instance type typically provide better network performance than smaller instance sizes of the same type. For more information, see Amazon EC2 instance types.
  • Amazon EC2 enhanced networking support for Linux. Enhanced networking isn't available on T2 and M3 instance types. For more information, see Enhanced networking on Linux. For information on enabling enhanced networking on your instance, see How do I enable and configure enhanced networking on my EC2 instances?
  • Amazon EC2 high performance computing (HPC) support that uses placement groups. HPC provides full-bisection bandwidth and low latency, with support for up to 100-gigabit network speeds, depending on the instance type. To review network performance for each instance type, see Amazon Linux AMI instance type matrix. For more information, see Launching instances in a placement group.
  • The instance is a burstable performance instance (T3, T3a, or T2). When a burstable performance instance uses less than its network throughput baseline, it accrues credits that allow it to temporarily burst above that baseline. For more information, see Burstable performance instances.
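For example, a quick way to check and, where supported, raise the MTU from within an instance is the ip command. The interface name eth0 is an assumption; on newer AMIs the interface might be named ens5 or similar:

$ ip link show eth0 | grep mtu        # print the current MTU for the interface
$ sudo ip link set dev eth0 mtu 9001  # enable jumbo frames, if the instance and network path support them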

Because of the preceding factors, there can be significant network performance differences between cloud environments. It's a best practice to periodically evaluate and baseline the network performance of your environment to improve application performance. Testing network performance provides valuable insight for determining the EC2 instance types, sizes, and configurations that best suit your needs. You can run network performance tests on any combination of instances you choose.

For additional network performance specifications for the specific instance types that you're interested in, open a support case.

Resolution

Before beginning benchmark tests, launch and configure your EC2 Linux instances:

1.    Launch two Linux instances that you can run network performance testing between.

2.    Verify that the instances support enhanced networking for Linux, and that they are in the same Amazon VPC. A quick driver check is shown after these steps.

3.    (Optional) If you're performing network testing between instances that don't support jumbo frames, then follow the steps in Network maximum transmission unit (MTU) for your EC2 instance to check and set the MTU on your instance.

4.    Connect to the instances to verify that you can access them.
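One way to confirm enhanced networking from within an instance is to check which driver backs the network interface. On ENA-based instances, ethtool reports the ena driver (the interface name eth0 is an assumption; it might be ens5 on newer AMIs):

$ ethtool -i eth0 | grep driver   # "driver: ena" indicates the Elastic Network Adapter (enhanced networking)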

Install the iperf network benchmark tool on both instances

On some distributions, such as Amazon Linux, iperf is available from the Extra Packages for Enterprise Linux (EPEL) repository. To enable the EPEL repository, see How do I enable the EPEL repository for my Amazon EC2 instance running CentOS, RHEL, or Amazon Linux?

Connect to your Linux instances, and then run the following commands to install iperf.

To install iperf on RHEL 6 Linux hosts, run the following command:

# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm && yum -y install iperf

To install iperf on RHEL 7 Linux hosts, run the following command:

# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install iperf

To install iperf on Debian/Ubuntu hosts, run the following command:

# apt-get install -y iperf

To install iperf on CentOS 6/7 hosts, run the following command:

# yum -y install epel-release && yum -y install iperf

Test TCP network performance between the instances

By default, iperf communicates over port 5001 when testing TCP performance. However, you can configure that port by using the -p switch. Be sure to configure your security groups to allow communication over the port that iperf uses.
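For example, if both instances use the same security group, a single inbound rule that allows TCP over the iperf port from that security group is enough. The following AWS CLI sketch assumes a placeholder security group ID and the default port 5001:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5001 --source-group sg-0123456789abcdef0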

1.    Configure one instance as a server to listen on the default port, or specify an alternate listener port with the -p switch. Replace 5001 with your port, if different.

$ sudo iperf -s [-p 5001]

2.    Configure a second instance as a client, and run a test against the server with the desired parameters. For example, the following command initiates a TCP test against the specified server instance with two parallel connections:

# iperf -c 172.31.1.152 --parallel 2 -i 1 -t 2

Note: For a bidirectional test with iperf (version 2), use the -r option on the client side.
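For example, reusing the parameters from step 2, a bidirectional run against the same server looks like this:

# iperf -c 172.31.1.152 --parallel 2 -i 1 -t 2 -r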

Using the iperf parameters from step 2, the output displays the interval per client stream, the data transferred per client stream, and the bandwidth used by each client stream. The following iperf output shows test results for two c5n.18xlarge EC2 Linux instances. The total bandwidth transmitted across all connections was approximately 10 Gbits/second:

------------------------------------------------------------
Client connecting to 172.31.1.152, TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[ 3] local 172.31.8.116 port 43582 connected with 172.31.1.152 port 5001
[ 4] local 172.31.8.116 port 43584 connected with 172.31.1.152 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 1.0 sec 597 MBytes 5.01 Gbits/sec
[ 3] 0.0- 1.0 sec 598 MBytes 5.02 Gbits/sec
[SUM] 0.0- 1.0 sec 1.17 GBytes 10.0 Gbits/sec
[ 3] 1.0- 2.0 sec 597 MBytes 5.01 Gbits/sec
[ 3] 0.0- 2.0 sec 1.17 GBytes 5.01 Gbits/sec
[ 4] 1.0- 2.0 sec 596 MBytes 5.00 Gbits/sec
[SUM] 1.0- 2.0 sec 1.16 GBytes 10.0 Gbits/sec
[ 4] 0.0- 2.0 sec 1.17 GBytes 5.00 Gbits/sec
[SUM] 0.0- 2.0 sec 2.33 GBytes 10.0 Gbits/sec

Test UDP network performance between the instances

By default, iperf communicates over port 5001 when testing UDP performance. However, the port that you use is configurable using the -p switch. Be sure to configure your security groups to allow communication over the port that iperf uses.
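As with the TCP test, if both instances share a security group, one inbound rule that allows UDP over the iperf port from that group is sufficient. The same placeholder group ID and default port are assumed:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 5001 --source-group sg-0123456789abcdef0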

Note: By default, iperf limits UDP bandwidth to 1 Mbit per second unless you specify a different bandwidth with the -b option.

1.    Configure one instance as a server to listen on the default UDP port, or specify an alternate listener port with the -p switch. Replace 5001 with your port, if different.

$ sudo iperf -s -u [-p 5001]

2.    Configure a second instance as a client, and then run a test against the server with the desired parameters. For example, the following command initiates a UDP test against the specified server instance with the -b parameter set to 5g, which caps the test at 5 Gbit/s, approximately the single-flow limit for a c5n.18xlarge instance that isn't in a cluster placement group.

# iperf -c 172.31.1.152 -u -b 5g

The output shows the interval (time), the amount of data transferred, the bandwidth achieved, the jitter (the deviation in time for the periodic arrival of datagrams), and the number of lost UDP datagrams out of the total sent:

Output:
------------------------------------------------------------
Client connecting to 172.31.1.152, UDP port 5001
Sending 1470 byte datagrams, IPG target: 2.35 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 172.31.8.116 port 38942 connected with 172.31.1.152 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 5.70 GBytes 4.90 Gbits/sec
[ 3] Sent 4163756 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 5.68 GBytes 4.88 Gbits/sec 0.002 ms 14124/4163756 (0.34%)