I need to measure network bandwidth between Amazon EC2 Linux instances. How can I do that?

Here are some factors that can affect Amazon EC2 network performance:

  • The physical proximity of EC2 instances. Instances in the same Availability Zone are geographically closest to each other; instances in different Availability Zones in the same Region, in different Regions on the same continent, and in different Regions on different continents are progressively farther apart.
  • EC2 instance maximum transmission unit (MTU). The default interface configuration for EC2 instances uses jumbo frames (9001 MTU), which allows greater throughput in a single virtual private cloud (VPC). However, outside a single VPC, the maximum MTU is 1500 or less, requiring large packets to be fragmented by intermediate systems. Therefore, the default MTU value of 9001 might be inefficient, costing more in processing overhead than it saves in network throughput, especially when most of your network traffic is Internet facing. For a list of instances that support jumbo frames, see Jumbo Frames (9001 MTU). For more general information, see Networking and Storage Features.
  • The size of your EC2 instance. Larger instance sizes for an instance type typically provide better network performance than smaller instance sizes of the same type.
  • EC2 enhanced networking support for Linux, which is available on current instance types with the exception of the T2 and M3 instance types.
  • EC2 high performance computing (HPC) support using placement groups. Placement groups provide full-bisection bandwidth and low latency, with support for 10-gigabit network speeds. For more information, see Launching Instances into a Placement Group. To review network performance for each instance type, see the Instance Types Matrix.
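
You can check the interface MTU and the effective path MTU directly on an instance. The following is a quick sketch, assuming the primary interface is named eth0 (on newer AMIs it may be named ens5) and using a placeholder peer address:

$ # Show the MTU configured on the primary interface
$ ip link show eth0 | grep -o 'mtu [0-9]*'
$ # Discover the path MTU to a peer instance (placeholder address)
$ tracepath 10.0.2.176

If tracepath reports a path MTU of 1500 to a given destination while your interface is set to 9001, large packets to that destination are being fragmented along the way.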

Due to these factors, there can be a significant network performance difference between different cloud environments. It's a best practice to periodically evaluate and baseline the network performance of your environment to improve application performance. Testing network performance can provide valuable insight for determining the EC2 instance types, sizes, and configurations that best suit your needs.

Before beginning benchmark tests, launch and configure your Linux EC2 instances:

  1. Launch two Linux instances from which you can run network performance testing.
  2. Ensure that the instances support enhanced networking for Linux and are in the same VPC.
  3. (Optional) If you are performing network testing between instances in different placement groups, or that do not support jumbo frames, follow the steps in Network Maximum Transmission Unit (MTU) for Your EC2 Instance to check and set the MTU on your instance.
  4. Connect to the instances to verify that you can access them.
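
To confirm step 2, you can check which network driver an instance is using; enhanced networking appears as the ena or ixgbevf driver. A minimal sketch, again assuming the primary interface is named eth0:

$ # Enhanced networking is in use if the driver is "ena" or "ixgbevf"
$ ethtool -i eth0 | grep '^driver:'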

Connect to your Linux instances and run the following commands to install iperf3.

To install iperf3 on RHEL-based Linux hosts, run commands similar to the following:

$ sudo yum update
$ sudo yum install git gcc make
$ git clone https://github.com/esnet/iperf
$ cd iperf
$ ./configure
$ make
$ sudo make install
$ sudo ldconfig

To install iperf3 on Debian/Ubuntu hosts, run commands similar to the following:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install git gcc make
$ git clone https://github.com/esnet/iperf
$ cd iperf
$ ./configure
$ make
$ sudo make install
$ sudo ldconfig

For either host type, you can optionally free disk space by removing the build artifacts:

$ make clean

iperf3 communicates over port 5201 by default when testing TCP performance; however, this port is configurable using the -p switch. Ensure that your security groups are configured to allow communication over the port that iperf3 uses.
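
For example, if you run the server on the iperf3 default port, a rule like the following opens that port from within the VPC. This is a sketch only; the security group ID and CIDR range below are placeholders for your own values:

$ # Allow inbound TCP on the iperf3 default port (5201) from the VPC CIDR.
$ # sg-0123456789abcdef0 and 10.0.0.0/16 are placeholders; substitute your
$ # own security group ID and source range.
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5201 \
    --cidr 10.0.0.0/16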

Configure one instance as a server to listen on the default port, or specify an alternate listener port with the -p switch:

$ sudo iperf3 -s -p 80

Configure a second instance as a client, and run a test against the server with the desired parameters. For example, the following command initiates a TCP test against the specified server instance with 10 parallel connections on port 80:

$ sudo iperf3 -c 10.0.2.176 -i 1 -t 60 -V -p 80

With the parameters described above, the output displays the transfer interval (60 seconds), the data transferred, and the bandwidth achieved for each TCP connection. The iperf3 output shown here was generated by testing two c4.8xlarge EC2 Linux instances co-located in the same placement group. The total bandwidth transmitted across all connections was 9.60 Gbits/sec:

Output:

[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-60.0 sec  6.71 GBytes   960 Mbits/sec
[  4]  0.0-60.0 sec  6.69 GBytes   957 Mbits/sec
[  6]  0.0-60.0 sec  6.55 GBytes   937 Mbits/sec
[  7]  0.0-60.0 sec  6.84 GBytes   980 Mbits/sec
[  8]  0.0-60.0 sec  6.68 GBytes   956 Mbits/sec
[  9]  0.0-60.0 sec  6.76 GBytes   968 Mbits/sec
[ 10]  0.0-60.0 sec  6.55 GBytes   938 Mbits/sec
[ 12]  0.0-60.0 sec  6.77 GBytes   969 Mbits/sec
[ 11]  0.0-60.0 sec  6.70 GBytes   960 Mbits/sec
[ 13]  0.0-60.0 sec  6.80 GBytes   973 Mbits/sec
[SUM]  0.0-60.0 sec  67.0 GBytes  9.60 Gbits/sec
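
If you want to consume results in a script rather than reading the table, the aggregate line can be pulled out with standard shell tools (iperf3 also offers a -J switch for JSON output). A minimal sketch; the sample line below mirrors a [SUM] row, and in practice you would pipe the live iperf3 output instead:

```shell
# Extract the aggregate bandwidth from iperf3's human-readable output.
# The sample line stands in for live output piped from iperf3.
sample='[SUM]  0.0-60.0 sec  67.0 GBytes  9.60 Gbits/sec'
bandwidth=$(printf '%s\n' "$sample" | awk '/\[SUM\]/ { print $(NF-1), $NF }')
echo "$bandwidth"
```

The awk pattern keys on the [SUM] marker, so the same one-liner works whether the run used 1 or 10 parallel connections.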

iperf3 also communicates over port 5201 by default when testing UDP performance; as before, this port is configurable using the -p switch. Ensure that your security groups are configured to allow communication over the port that iperf3 uses.

First, configure one instance as a server to listen on the default port, or specify an alternate listener port with the -p switch:

$ sudo iperf3 -s -p 80

Next, configure a second instance as a client, and run a test against the server with the desired parameters. For example, the following command initiates a UDP test against the specified server instance with a bandwidth objective of 100 Mbits/sec on port 80:

$ sudo iperf3 -c 10.0.2.176 -p 80 -u -b 100m

The output shows the interval (time), the amount of data transferred, the bandwidth achieved, the jitter (the deviation in time for the periodic arrival of datagrams), and the loss/total of UDP datagrams:

[ ID] Interval        Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  3]  0.0-10.0 sec   120 MBytes   101 Mbits/sec   0.005 ms  0/85470 (0%)
[  3]  0.0-10.0 sec   1 datagrams received out-of-order
[  3] Sent 15113 datagrams
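
If you need the loss rate as a single number (for example, to compare runs in a script), you can compute it from the lost/total counts in the last column. A minimal sketch using the figures from the sample run above:

```shell
# Compute the UDP datagram loss rate from iperf3's lost/total counts.
# The counts below are taken from the sample output above.
lost=0
total=85470
loss_pct=$(awk -v l="$lost" -v t="$total" 'BEGIN { printf "%.2f", (l / t) * 100 }')
echo "${loss_pct}% datagram loss"
```
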




Published: 2015-08-06

Updated: 2017-02-10