I need to measure the network bandwidth between Amazon Elastic Compute Cloud (Amazon EC2) Linux instances in the same VPC. How can I do that?

This example applies only to EC2 instances in the same VPC. Here are some factors that can affect Amazon EC2 network performance:

  • The physical proximity of the EC2 instances. Instances in the same Availability Zone are geographically closest to each other. Instances in different Availability Zones in the same Region, in different Regions on the same continent, and in different Regions on different continents are progressively farther away from one another.
  • The EC2 instance maximum transmission unit (MTU). By default, the network interface of an EC2 instance uses jumbo frames (9001 MTU) if the instance is one of the sizes listed in Jumbo Frames (9001 MTU). (See the MTU sketch after this list.)
  • The size of your EC2 instance. Larger instance sizes for an instance type typically provide better network performance than smaller instance sizes of the same type.
  • EC2 enhanced networking support for Linux, which is available on all current instance types except T2 and M3.
  • EC2 high performance computing (HPC) support using placement groups. HPC provides full-bisection bandwidth and low latency, with support for up to 25-gigabit network speeds, depending on the instance type. To review network performance for each instance type, see Instance Types Matrix. For more information, see Launching Instances in a Placement Group.
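
For example, you can check whether an instance's network interface is using jumbo frames, and adjust the MTU if needed. The following is a minimal sketch that assumes the primary interface is named eth0 (on some distributions it might be ens5 or similar):

$ ip link show eth0 | grep -o 'mtu [0-9]*'     # display the current MTU
$ sudo ip link set dev eth0 mtu 9001           # enable jumbo frames, if the instance supports them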

Due to these factors, network performance can differ significantly between cloud environments. It's a best practice to periodically evaluate and baseline the network performance of your environment to improve application performance. Testing network performance can provide valuable insight for determining the EC2 instance types, sizes, and configurations that best suit your needs.

Before beginning benchmark tests, launch and configure your Linux EC2 instances:

  1. Launch two Linux instances from which you can run network performance testing.
  2. Be sure that the instances support enhanced networking for Linux and are in the same VPC. (See the verification sketch after this list.)
  3. (Optional) If you are performing network testing between instances that are in different placement groups, or that do not support jumbo frames, follow the steps in Network Maximum Transmission Unit (MTU) for Your EC2 Instance to check and set the MTU on your instance.
  4. Connect to the instances to verify that you can access them.
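
To verify that enhanced networking is enabled (step 2), check which driver the network interface is using. A minimal sketch, again assuming the interface is named eth0; a driver of ena or ixgbevf indicates that enhanced networking is in use:

$ ethtool -i eth0 | grep ^driver     # "ena" or "ixgbevf" means enhanced networking is active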

On some distributions, such as Amazon Linux, iperf3 is available from the EPEL repository. To enable the EPEL repository, see How do I enable the EPEL repository for my Amazon EC2 instance running CentOS, RHEL, or Amazon Linux?
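
For example, on Amazon Linux 2, you can typically enable the EPEL repository with the amazon-linux-extras tool (this sketch assumes Amazon Linux 2; earlier Amazon Linux AMIs use the EPEL release package as shown in the RHEL commands below):

$ sudo amazon-linux-extras install epel -y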

Connect to your Linux instances and run the following commands to install iperf3.

To install iperf3 on RHEL 6 Linux hosts, run commands similar to the following:

# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm && yum -y install iperf3

To install iperf3 on RHEL 7 Linux hosts, run commands similar to the following:

# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum -y install iperf3

To install iperf3 on Debian/Ubuntu hosts, run commands similar to the following:

# apt-get update && apt-get install -y iperf3

To install iperf3 on CentOS 6/7 hosts, run commands similar to the following:

# yum -y install epel-release && yum -y install iperf3

iperf3 communicates over port 5201 by default when testing TCP performance. However, the port you use is configurable using the -p switch. Be sure that security groups are configured to allow communication over the port that is used by iperf3.
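
If you manage security groups with the AWS CLI, a rule similar to the following would allow inbound TCP traffic on the default iperf3 port. The security group ID and source CIDR here are placeholders; substitute your own values:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5201 --cidr 10.0.0.0/16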

Configure one instance as a server to listen on the default port, or specify an alternate listener port with the -p switch:

$ sudo iperf3 -s [-p <port number>]
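
If you prefer to leave the server running in the background, iperf3 can also be started as a daemon with the -D switch:

$ sudo iperf3 -s -D [-p <port number>]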

Configure a second instance as a client, and run a test against the server with the desired parameters. For example, the following command initiates a TCP test against the specified server instance with 10 parallel connections (-P 10), one-second reporting intervals (-i 1), a 60-second test duration (-t 60), and verbose output (-V):

$ sudo iperf3 -c 192.0.2.0 -P 10 -i 1 -t 60 -V [-p <port number>]

Using these iperf3 parameters, the output displays, for each connection and reporting interval, the data transferred and the bandwidth used. The iperf3 output shown here was generated by testing two c4.8xlarge EC2 Linux instances colocated in the same placement group, providing HPC support. For ease of display, we changed -t 60 to -t 2 and -P 10 to -P 2. The total bandwidth transmitted across all connections was 9.6 Gbits/sec:

Output:
$ iperf3 -c 192.0.2.0 -t 2 -i 1 -P 2 -R
Connecting to host 192.0.2.0, port 5201
Reverse mode, remote host 192.0.2.0 is sending
[  4] local 198.51.100.0 port 47122 connected to 192.0.2.0 port 5201
[  6] local 198.51.100.0 port 47124 connected to 192.0.2.0 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   572 MBytes  4.80 Gbits/sec
[  6]   0.00-1.00   sec   572 MBytes  4.80 Gbits/sec
[SUM]   0.00-1.00   sec  1.12 GBytes  9.60 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[  4]   1.00-2.00   sec   573 MBytes  4.80 Gbits/sec
[  6]   1.00-2.00   sec   573 MBytes  4.80 Gbits/sec
[SUM]   1.00-2.00   sec  1.12 GBytes  9.61 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-2.00   sec  1.12 GBytes  4.82 Gbits/sec    0             sender
[  4]   0.00-2.00   sec  1.12 GBytes  4.81 Gbits/sec                  receiver
[  6]   0.00-2.00   sec  1.12 GBytes  4.81 Gbits/sec    0             sender
[  6]   0.00-2.00   sec  1.12 GBytes  4.81 Gbits/sec                  receiver
[SUM]   0.00-2.00   sec  2.24 GBytes  9.63 Gbits/sec    0             sender
[SUM]   0.00-2.00   sec  2.24 GBytes  9.63 Gbits/sec                  receiver

iperf Done.
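
Note that the client command shown in this output also includes the -R flag, which runs iperf3 in reverse mode: the server sends and the client receives. Running the test both with and without -R measures throughput in each direction:

$ iperf3 -c 192.0.2.0 -P 10 -i 1 -t 60         # client sends to server
$ iperf3 -c 192.0.2.0 -P 10 -i 1 -t 60 -R      # server sends to client (reverse mode)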

As with TCP, iperf3 communicates over port 5201 by default when testing UDP performance, and the port is configurable with the -p switch. Be sure that security groups are configured to allow communication over the port that iperf3 uses.

Note: For UDP tests, iperf3 defaults to a bandwidth of 1 Mbit per second unless you specify a different target bandwidth with the -b switch.

First, configure one instance as a server to listen on the default port, or specify an alternate listener port with the -p switch:

$ sudo iperf3 -s [-p <port number>]

Next, configure a second instance as a client, and run a test against the server with the desired parameters. For example, the following command initiates a UDP test against the specified server instance with a target bandwidth of 100 Mbits/sec:

$ sudo iperf3 -c 192.0.2.0 [-p <port number>] -u -b 100m

The output shows the interval (time), the amount of data transferred, the bandwidth achieved, the jitter (the deviation in time for the periodic arrival of datagrams), and the number of lost UDP datagrams out of the total sent:

[ ID] Interval        Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec   0.005 ms  0/85470 (0%)
[  3]  0.0-10.0 sec   1 datagrams received out-of-order
[  3] Sent 15113 datagrams


Published: 2015-08-06

Updated: 2018-08-20