
Introducing configurable idle timeout for connection tracking

Introduction

In this post, we explain how Amazon EC2 interprets idle timeouts and how to customize this configuration to optimize for your traffic patterns and workloads. We also dive into some common use cases.

Earlier this year, Amazon Elastic Compute Cloud (Amazon EC2) announced the Conntrack Utilization Metric for EC2 instances, which offers you the ability to monitor tracked connections. We covered the Conntrack Utilization Metric launch in this post. The connection tracking metrics conntrack_allowance_available and conntrack_allowance_exceeded give you visibility into how close you are to your connection tracking limits and the number of packets dropped when you exceed the limit. These metrics help you manage EC2 instance capacity proactively and scale to meet your networking traffic needs.
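If you want to check these metrics directly from an instance, the ENA driver exposes them through ethtool. Here is a minimal sketch, assuming a recent ENA driver and an interface named eth0 (both are assumptions about your environment):

    # Read the connection tracking allowance metrics from the ENA driver.
    ethtool -S eth0 | grep conntrack
    # Example (illustrative) output:
    #     conntrack_allowance_available: 123456
    #     conntrack_allowance_exceeded: 0

Note that conntrack_allowance_available requires a Nitro-based instance and a sufficiently recent ENA driver version; if it is absent from the output, only the exceeded counter is available on your driver.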

In some cases, visibility into connection tracking isn’t enough. When running workloads with a high number of connections, the tracked connection table entries on an EC2 instance can be exhausted, and you may experience failures because new connections cannot be established. Oftentimes, these workloads use their connection tracking allowance on Amazon EC2 inefficiently because they have a high number of leaked or idle connections. For example, TCP or UDP workloads running on an EC2 instance have different idle timeouts for different states. If your workloads span multiple EC2 instances, services, and middleboxes, each of these can have its own static idle timeout, causing inconsistency in how long one connection is tracked before it is closed. If connections are not closed properly, these stale connections can build up until the instance’s tracked connection limit is reached. To help you manage this complexity, Amazon EC2 is offering a new feature that allows you to modify a number of these default timeout values.

Feature overview

Amazon EC2 now offers configurable idle timeouts for connection tracking. This feature allows you to configure optimal connection tracking timeouts, so you can use your EC2 instance’s connection tracking resources more efficiently. Prior to today, EC2 tracked all idle TCP and UDP connections for a pre-defined default period or, in the case of TCP connections, until they were closed. With this new feature, TCP established, UDP stream, and UDP non-stream session timeouts on EC2 instances are now configurable per Elastic Network Interface (ENI). By specifying ‘tcp-established’, ‘udp-stream’, and ‘udp-timeout’ idle timeout values for the ENIs attached to an instance, you instruct Amazon EC2 to purge these sessions at the specified timeout value. The Amazon EC2 configurable timeout is available in all AWS Commercial Regions for Nitro-based instances only, and it is supported for all tracked connections, including those automatically tracked by Amazon EC2.

Connection tracking and timeouts, a quick refresher

First, a quick refresher on connection tracking, starting with TCP. A TCP connection moves through a series of states during its lifetime. Each of these states has timeouts that vary by operating system, but here we consider the most widely used values on AWS. Most of the states have short timeouts between 30 and 180 seconds, except the established state, which can last up to five days.

As previously described, Amazon EC2 tracks connections through their lifetime to make sure not only that traffic flows as expected, but also that security is enforced, permitting only legitimate traffic for a connection. Once a connection is established and tracked, a timeout is set for it to expire if no further network traffic is seen on that connection. If a valid network packet belonging to a tracked connection is seen, the timeout resets. In other words, a tracked connection won’t expire as long as it stays busy; it expires only once no further traffic has been detected for the duration of the timeout.

When the layer 4 protocol is TCP and the connection is currently in the middle of the TCP three-way-handshake (that is, the tracked connection has been created due to a TCP SYN segment and is waiting for a TCP SYN+ACK on the reply direction), then the timeout is short – 120 seconds. However, once a TCP connection transitions to the established state, this timeout increases up to five days. Similarly, for UDP, when a UDP datagram is sent, that flow is tracked to permit return traffic and has an idle timeout of 30 seconds. If there is subsequent bidirectional traffic on the same flow, then it is considered a UDP stream and the idle timeout increases to 180 seconds.
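These values broadly mirror the defaults of the Linux netfilter connection tracker, which makes a handy reference point even though the Amazon EC2 tracker is a separate mechanism. Here is a sketch for inspecting the OS-level equivalents on a Linux host, assuming the nf_conntrack module is loaded (exact defaults vary by kernel version):

    # Inspect Linux netfilter's own conntrack idle timeouts (in seconds).
    sysctl net.netfilter.nf_conntrack_tcp_timeout_syn_sent      # typically 120
    sysctl net.netfilter.nf_conntrack_tcp_timeout_established   # typically 432000 (5 days)
    sysctl net.netfilter.nf_conntrack_udp_timeout               # typically 30
    sysctl net.netfilter.nf_conntrack_udp_timeout_stream        # typically 120 to 180

Keep in mind that changing these sysctls only affects the instance’s own kernel; the EC2 connection tracking timeouts described in this post are configured on the ENI, as shown in the next section.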

In some cases, these defaults are not desirable, so we’ve listened to your feedback and created a new API that allows you to modify a number of these default timeout values.

How can you configure timeouts?

There are four ways to configure connection tracking timeouts: using the AWS Command Line Interface (AWS CLI), AWS SDKs, the AWS Management Console user interface, or through AWS CloudFormation. Here is an example of using the Amazon EC2 Management Console to modify connection tracking timeouts when creating network interfaces:

Figure 1. Amazon EC2 console settings to modify idle connection tracking timeouts when creating an ENI, showing the three configurable states along with their default, minimum, and maximum timeout values
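The same operation can be performed from the AWS CLI. Here is a minimal sketch of creating an ENI with custom timeouts; the subnet ID and security group ID are placeholders, and the timeout values are illustrative:

    # Create an ENI with custom connection tracking idle timeouts (seconds).
    aws ec2 create-network-interface \
        --subnet-id subnet-0123456789abcdef0 \
        --groups sg-0123456789abcdef0 \
        --description "ENI with tuned conntrack timeouts" \
        --connection-tracking-specification \
            TcpEstablishedTimeout=86400,UdpStreamTimeout=60,UdpTimeout=30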

You can also modify the values for existing network interfaces:

Figure 2. Amazon EC2 console settings to change idle connection tracking timeouts for an existing ENI
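The equivalent CLI sketch for an existing ENI, with the ENI ID as a placeholder, plus a follow-up query to confirm what was applied:

    # Update connection tracking idle timeouts on an existing ENI.
    aws ec2 modify-network-interface-attribute \
        --network-interface-id eni-0123456789abcdef0 \
        --connection-tracking-specification \
            TcpEstablishedTimeout=3600,UdpStreamTimeout=60,UdpTimeout=30

    # Verify the configuration that is now in effect.
    aws ec2 describe-network-interfaces \
        --network-interface-ids eni-0123456789abcdef0 \
        --query 'NetworkInterfaces[0].ConnectionTrackingConfiguration'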

Common real-world scenarios and recommendations

Many of our customers rely on architectures with numerous networking components, including Elastic Load Balancers (ELB), firewalls, Transit Gateways, NAT Gateways, proxies, and EC2 instances, all of which may have different timeouts configured. In addition, every customer workload is different, and many run custom software and marketplace products. However, we have summarized a few common scenarios where you might want to adjust the connection tracking timeouts:

  • For connections through Gateway Load Balancers (GWLB), Network Load Balancers (NLB), NAT Gateways, and VPC Endpoints, all connections are tracked. For these AWS services, the idle timeout for TCP flows is 350 seconds and, for GWLB and NLB, the idle timeout for UDP flows is 120 seconds, both of which differ from the default interface-level timeout values. With configurable connection tracking timeouts at the interface level, you now have the flexibility to align your instance timeouts with these services.
  • DNS, SIP, SNMP, Syslog, RADIUS, and other services that primarily use UDP to serve requests. In these cases, the ‘udp-stream’ timeout can be reduced from 180 seconds to 60 seconds. This gives you better utilization of connection tracking entries and helps prevent gray failures that can happen due to connection tracking exhaustion.
  • Running workloads that are expected to handle very high numbers of TCP connections, such as firewalls, load balancers, or proxies. For these workloads, configuring security groups to avoid connection tracking can help, as can setting the connection tracking idle timeout to match your network appliances and best suit your workload. Typically, load balancers or firewalls have TCP established idle timeouts in the range of 60 to 90 minutes. Configuring a similar timeout on Amazon EC2 is advisable to clear out inactive connection tracking entries and maximize utilization.
  • If you run UDP-based authentication protocols that have delays between a request and a reply, a server might take some time to respond, but not within the 30-second timeout window. In this case, increasing the ‘udp-timeout’ value from 30 to 60 seconds can prevent the tracked connection from expiring before the reply arrives. The CLI sketch after this list shows how these adjustments look in practice.
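As an illustration, here is a hedged sketch that applies the recommendations above to a single ENI; the ENI ID is a placeholder, and the values are examples to adapt, not prescriptions:

    # Align tcp-established with the 350-second NLB/GWLB idle timeout,
    # shorten udp-stream for query/response services such as DNS, and
    # raise udp-timeout for slow UDP authentication replies.
    aws ec2 modify-network-interface-attribute \
        --network-interface-id eni-0123456789abcdef0 \
        --connection-tracking-specification \
            TcpEstablishedTimeout=350,UdpStreamTimeout=60,UdpTimeout=60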

Let’s look more closely at the first two scenarios listed above and how reducing the connection tracking idle timeouts on EC2 can help your application to scale.

Scenario #1: TCP connections through AWS Services (NLB)

To demonstrate the impact and benefit that this feature can bring to your workload, let’s look at a common networking setup:

Figure 3. Common network topology to demonstrate the impact of idle timeouts for connection tracking. Clients connect to the customer’s application running in AWS: traffic is first received and inspected by a third-party firewall appliance running on EC2, then sent to a Network Load Balancer and load balanced across a number of API servers running on EC2 instances.

In this topology, Internet-based clients connect to your API servers, and all traffic is inspected by a firewall and load balanced by an NLB. Your firewall tracks the state of all connections passing through it, as does EC2, to ensure security groups are enforced. In this scenario, your firewall is dependent on the client or server closing unused idle connections. If connections are not closed, the firewall and EC2 continue to track them, consuming resources unnecessarily. Most firewalls can be configured with idle timeouts, allowing you to determine when a connection should be removed. It’s also important to note that almost all firewalls silently remove idle connections from their state and do not initiate a close (send a TCP FIN or RST) to the client or server. The NLB has a fixed idle timeout of 350 seconds for TCP flows; once the idle timeout is reached or a TCP connection is closed, it is removed from the NLB’s connection state table. In this scenario, we’ll assume you have an idle timeout of 600 seconds (10 minutes) on the API server and 3600 seconds (1 hour) on the firewall.

Let’s see what happens when a client establishes a TCP connection to an API server, makes a request, and then remains idle:

  1. At time t=0, the client initiates a TCP connection, which passes through the firewall and the NLB and finally to the API server. At this time, the firewall, NLB, and API server are all tracking this connection and it is in the established state.
  2. The client makes an API request and the API server responds.
  3. At t=350 seconds, the Network Load Balancer removes the connection from its connection state.
  4. At t=600 seconds, the API server attempts to close the TCP connection by sending a TCP FIN segment. As the connection had already been removed from the NLB, the NLB does not see this as a valid connection, so it sends a TCP RST in response to the API server. NLB does not send a TCP RST back towards the client as there was no traffic initiated on the idle connection from the client-side after the timeout.
  5. At t=3600 seconds, the firewall silently removes the idle connection from its state.

At this point, while the third-party firewall has removed the connection from its state, the underlying EC2 host of the firewall still considers the connection to be in the established state, as it has seen no indication that the connection should be removed. The client did not close the connection and the firewall never saw the TCP FIN from the API server or the TCP RST from the NLB.

The Amazon EC2 connection tracking entry for the connection on the firewall is in the established state and has a timeout of five days. If the client tries to send data on this connection, the NLB sends a TCP RST back to it, clearing out the connection tracking entry on the firewall. What happens if the client never closes the connection? You may have clients running on mobile devices that sporadically drop their connections, for example, when a phone’s battery is discharged, the user moves into an area with no reception, or the device moves between base stations and its traffic is now translated to a different IP address. With a large enough number of clients and a single static firewall, the stale connections can build up until the firewall reaches the maximum number of tracked connections supported, leading to subsequent connection failures.

To avoid scenarios like this, some customers configure their EC2 instance security groups so that connections are not tracked. However, in this case, it is not possible, as connections through NLB require automatic connection tracking. To resolve the issue, we would recommend implementing two changes:

  1. Adjust the idle timeouts of the components so that the client and server can gracefully close the connection when complete. If the firewall serves other traffic that requires a 1 hour (3600 seconds) idle timeout, you can configure the EC2 TCP established idle timeout to 3600 seconds to match.
  2. Implement TCP keepalives on the client and server so that transactions requiring more than 350 seconds can complete. A sketch of both changes follows this list.
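Here is a minimal sketch of both changes, assuming a Linux client and server and a placeholder ENI ID; the keepalive values are illustrative and should stay comfortably below the 350-second NLB idle timeout:

    # 1. Match the firewall's 1 hour idle timeout on the firewall's ENI.
    aws ec2 modify-network-interface-attribute \
        --network-interface-id eni-0123456789abcdef0 \
        --connection-tracking-specification TcpEstablishedTimeout=3600

    # 2. On the Linux client and API server, send keepalive probes well
    #    before the NLB's 350-second idle timeout expires.
    sudo sysctl -w net.ipv4.tcp_keepalive_time=300    # first probe after 300s idle
    sudo sysctl -w net.ipv4.tcp_keepalive_intvl=60    # interval between probes
    sudo sysctl -w net.ipv4.tcp_keepalive_probes=5    # probes before declaring the peer dead

Note that these kernel settings only apply to sockets that enable SO_KEEPALIVE; the client and server applications must request keepalives on their sockets for the probes to be sent.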

Scenario #2: DNS-heavy workloads

If you are running a DNS service, it is likely you have a large number of clients sending short-lived queries, each on a separate UDP flow. If a single client makes multiple DNS requests on a UDP flow, or makes a DNS request where the reply spans two datagrams, that flow is considered a UDP stream and is tracked by EC2 for 180 seconds, even if there is no subsequent traffic. With a large number of clients rapidly sending requests, connection tracking can become a limiting factor due to an extremely high number of connections per second. In this scenario, you could use your EC2 instances more efficiently by reducing the ‘udp-stream’ idle timeout to 60 seconds, which you can also apply at launch time, as sketched below.
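A hedged sketch of applying this at launch for a fleet of DNS servers; the AMI ID, subnet ID, and instance type are placeholders, and this assumes the ConnectionTrackingSpecification field of the run-instances network interface specification in a current AWS CLI:

    # Launch a DNS server with a shortened udp-stream idle timeout on eth0.
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type c6gn.xlarge \
        --network-interfaces '[{"DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",
            "ConnectionTrackingSpecification": {
                "UdpStreamTimeout": 60, "UdpTimeout": 30}}]'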

Conclusion

This post demonstrates how to configure connection tracking timeouts for specific TCP and UDP states. By leveraging configurable connection tracking timeouts, you can achieve high availability in scenarios where you expect a large number of TCP connections that can last for days if not properly closed, or a large number of short-lived UDP connections where the existing timeout values are too long, causing gray failures due to connection tracking exhaustion. As the scenarios illustrated in this post show, configuring the timeout can alleviate potential network issues and gray failures. To learn more about configurable connection tracking, including CLI commands and example use cases, you can visit the connection tracking user guide.


David Schliemann

David Schliemann is a Principal Cloud Support Engineer at AWS Support, based out of Sydney, Australia. He has spent the past decade helping AWS customers resolve complex issues across multiple technical domains, innovating along the way to enhance the customer experience. He’s passionate about training and empowering Support engineers and customers to perform in-depth troubleshooting, with a focus on incident-response automation, monitoring and observability.


Jasmeet Sawhney

Jasmeet Sawhney is a Senior Product Manager at AWS on the VPC product team, based in California. Jasmeet focuses on enhancing the AWS customer experience for instance networking and Nitro encryption. Before joining AWS, she developed products and solutions for hybrid cloud, network virtualization, and cloud infrastructure to meet customers’ changing networking requirements. When not working, she loves golfing, biking, and traveling with her family.