What is RTT in Networking?
Round-trip time (RTT) in networking is the time it takes to get a response after you initiate a network request. When you interact with an application, like when you click a button, the application sends a request to a remote data server. Then it receives a data response and displays the information to you. RTT is the total time it takes for the request to travel over the network and for the response to travel back. You can typically measure RTT in milliseconds. A lower RTT improves the experience of using an application and makes the application more responsive.
What is the relationship between RTT and network latency?
Network latency is the delay in network communication. It shows the time that data takes to transfer across the network. Networks with a longer delay or lag have high latency, while those with fast response times have low latency. The term network latency usually refers to several factors that delay communication over a specific network and impact that network’s performance.
You measure network latency using the round-trip time (RTT) metric. Just as minutes are a unit for measuring time, RTT is the standard metric for measuring network latency.
How is RTT measured?
You can measure round-trip time (RTT) by using various network diagnostic tools, such as ping or traceroute. These tools send Internet Control Message Protocol (ICMP) echo request packets to the intended destination and report how long the packets take to reach the destination and return.
You can measure RTT by using the ping command as follows:
- Open the command prompt on your computer
- Type ping followed by the IP address or hostname of the destination you want to test
- Press Enter
The ping test sends data packets to the destination and reports the RTT for each one. Note that the measured RTT may vary depending on network conditions and the specific tools used to measure it. This is why estimating round-trip time is challenging.
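As a rough sketch of the same idea in code, the snippet below estimates RTT as the time taken to complete a TCP handshake with a host. Note this measures TCP connect time rather than ICMP echo, so values will differ slightly from ping; the function name is illustrative:

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Estimate RTT as the time to complete a TCP three-way handshake."""
    start = time.perf_counter()
    # create_connection blocks until the handshake succeeds or times out
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start
    return elapsed * 1000.0  # convert seconds to milliseconds
```

Like ping, repeated calls will return slightly different values as network conditions fluctuate, so tools typically report a minimum, average, and maximum over several probes.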
What is a good or optimal round-trip time?
A good round-trip time (RTT) should be below 100 milliseconds for optimal performance. An RTT of 100–200 milliseconds means performance is likely affected, but your users are still able to access the service. An RTT of 200 milliseconds or more means performance is degraded and your users experience long wait or page load times. An RTT of more than 375 milliseconds commonly results in a connection being terminated.
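These performance bands can be captured in a small helper. The thresholds below come straight from the figures above; the function name and labels are illustrative:

```python
def classify_rtt(rtt_ms: float) -> str:
    """Map an RTT measurement (in ms) to a performance band."""
    if rtt_ms < 100:
        return "good"                        # optimal performance
    if rtt_ms < 200:
        return "affected"                    # noticeable, but still usable
    if rtt_ms <= 375:
        return "degraded"                    # long waits and page loads
    return "at risk of disconnection"        # connections commonly terminate
```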
What factors influence round-trip time?
Several factors influence round-trip time (RTT), including the following.
Physical distance
Physical distance affects RTT because the further the destination host is from the source, the longer a response takes to arrive. One method to reduce RTT is therefore to move the two communication endpoints closer together. You can also use a content delivery network (CDN) to distribute content closer to your users.
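Distance imposes a hard lower bound on RTT from propagation delay alone. The sketch below assumes light in optical fiber covers roughly 200 km per millisecond (about two-thirds of the speed of light in vacuum); real RTTs are higher because of routing, queuing, and processing:

```python
# Light in optical fiber travels at roughly 2/3 the speed of light in
# vacuum, or about 200 km per millisecond (an approximation).
FIBER_SPEED_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on RTT from propagation delay alone (there and back)."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS
```

For example, two endpoints 4,000 km apart have a floor of about 40 ms of RTT before any other delay is added, which is why moving content closer to users matters so much.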
Transmission medium
Connection speed is affected by the transmission medium. For example, optical fiber connections generally deliver data faster than copper connections, and terrestrial wireless connections behave differently from satellite links.
Number of network hops
A network node is a network point of connection, such as a server or router that can send, receive, or forward data packets. The term network hop refers to the process of data packets moving from one network node to another as they move from source to destination.
As the number of network hops increases, RTT also increases. Every node takes some time to process the packet before forwarding it, adding to time delays.
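A minimal model of this effect: if each hop adds some processing delay in each direction, total RTT grows with the number of hops. The function below is a toy illustration, not a real traceroute:

```python
def path_rtt_ms(per_hop_delays_ms: list[float]) -> float:
    """Total RTT if each hop adds its delay in both directions
    (a simplified model that ignores propagation and queuing)."""
    return 2 * sum(per_hop_delays_ms)
```

Adding a hop with even a modest per-hop delay raises the total, which is why shorter routes with fewer intermediate nodes tend to have lower RTT.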
Traffic volumes
RTT increases when traffic volumes are high. When a network is congested, data packets queue at routers and other nodes before they can be forwarded. This slows traffic and delays user requests, increasing the latency between nodes and lengthening the round-trip time.
Server response time
Server response time directly impacts RTT. When the server receives a request, it often has to communicate with other servers, such as a database server or external APIs, to process the request. Too many simultaneous requests cause delays, as the server may place new requests in a queue while it resolves older ones.
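A toy single-worker queue illustrates why a backlog inflates response time: each queued request must wait for everything ahead of it to finish. All names here are illustrative:

```python
def queue_waits_ms(service_ms: float, pending: int) -> list[float]:
    """Waiting time each queued request accrues before its own service
    begins, assuming one worker that takes service_ms per request."""
    return [position * service_ms for position in range(pending)]
```

With 10 ms of service time per request, the third request in line already waits 20 ms before processing even starts, and that wait is added directly to its RTT.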
Local area network traffic
A corporate network is often made up of smaller interconnected local area networks (LANs). Data moves from your LAN to the external network and back. Internal traffic on your corporate network can cause bottlenecks, even if the external network has sufficient resources and works effectively.
For example, if multiple employees in an office access a streaming video service at once, it can impact RTT for other applications as well.
How can you reduce round-trip time?
You can use a content delivery network (CDN) to reduce round-trip time (RTT). CDNs are strategically placed servers that cache content and provide high availability by being closer to users.
CDNs reduce RTT through caching, load distribution, and scalability.
Caching is the process of storing multiple copies of the same data for faster data access. CDNs cache frequently accessed content closer to the end user.
When a geographically remote user makes the first request for content, the application server sends the response to the remote user and a response copy to the CDN. The next time this user (or any other user in that location) makes the same request, the CDN sends the response directly. This eliminates the need for a request to travel to the application server and reduces overall RTT.
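The cache-hit/cache-miss behavior described above can be sketched with a toy in-memory edge cache (the class and attribute names are hypothetical):

```python
class EdgeCache:
    """Toy CDN edge cache: the first request for a key goes to the
    origin server; repeat requests are served locally."""

    def __init__(self, origin):
        self.origin = origin       # callable: key -> content (simulated origin)
        self.store = {}            # locally cached responses
        self.origin_hits = 0       # round trips made to the origin

    def get(self, key):
        if key not in self.store:  # cache miss: one fetch from the origin
            self.store[key] = self.origin(key)
            self.origin_hits += 1
        return self.store[key]     # cache hit: no round trip to the origin
```

In this sketch, every request after the first is answered from the edge, which is exactly the RTT saving a CDN provides. Real CDNs also handle expiry and invalidation, which this omits.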
Load distribution in CDNs enables user requests to be distributed across a network of servers in an efficient and balanced way. CDNs determine which server is best suited for a request based on the request’s origin and the current load on the CDN’s server infrastructure.
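A minimal load-distribution sketch: route the request to the server with the lowest current load. Real CDNs also weigh geography and server health; the function and server names below are only illustrative:

```python
def pick_server(loads: dict[str, int]) -> str:
    """Choose the edge server with the lowest current load
    (a toy least-connections load balancer)."""
    return min(loads, key=loads.get)
```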
As a cloud-based service, CDNs are highly scalable and can process vast numbers of user requests. This helps to eliminate bottlenecks in content delivery and keep RTT to a minimum.
How can AWS help reduce the round-trip time of your applications?
Amazon CloudFront is a content delivery network (CDN) that reduces the round-trip time (RTT) of your applications by securely delivering content at high speeds. CloudFront reduces latency by caching information across more than 450 dispersed locations, supported by automated network mapping and intelligent routing.
Here's how you can benefit from CloudFront:
- Deliver fast and secure websites to global users in milliseconds
- Accelerate dynamic content delivery and APIs
- Stream live and on-demand video content quickly and reliably
- Distribute patches and updates at scale with high transfer rates
Get started with content delivery on Amazon Web Services (AWS) by creating an account today.