
Introducing NLB TCP configurable idle timeout

This post guides you through configuring AWS Network Load Balancer (NLB) idle timeouts for Transmission Control Protocol (TCP) flows.

NLB is part of the Amazon Web Services (AWS) Elastic Load Balancing family, operating at Layer 4 of the Open Systems Interconnection (OSI) model. It manages client connections over TCP or User Datagram Protocol (UDP), distributing them across a set of load balancer targets.

NLB tracks a connection from its establishment until it’s closed or times out due to inactivity (idle timeout). By default, the idle timeout for TCP connections is 350 seconds, while UDP connections have a 120-second timeout.

With the new configurable idle timeout for TCP, you can now modify this attribute for existing and new NLBs, and determine how long NLB should wait before terminating an inactive connection.

Understanding TCP connection setup

Before diving in, we briefly review how TCP operates. For a deeper understanding, you can refer to the TCP RFC.

Figure 1. Stages of a TCP connection establishment

TCP connections go through several stages, such as establishment, data transfer, and graceful closure. A short sketch after the following list shows how you can observe these stages on a Linux host.

  1. Half open: The client sends a SYN and the server responds with a SYN-ACK, but the client doesn’t complete the handshake with the final ACK.
  2. Established: The three-way handshake is completed.
  3. Data transferred: After the handshake, data can be exchanged between the client and server. Note that this section of the diagram is simplified to make it easier to read.
  4. Closed: The client initiates the closure with a FIN packet, leading to a graceful shutdown.
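
If you want to watch these stages on a live system, the following minimal sketch works on a Linux host. It assumes the OpenBSD variant of netcat (nc) and the ss utility are available, and uses 8080 as an arbitrary test port.

# Start a throwaway TCP listener on port 8080 (arbitrary test port)
nc -l 8080 &

# Open a client connection to it from the same host
nc 127.0.0.1 8080 &

# Both ends of the connection should now report the ESTAB (established) state
ss -tan '( sport = :8080 or dport = :8080 )'

# Stop the client (assumed to be background job %2 in this shell); its socket
# passes through the closing states (FIN-WAIT, TIME-WAIT) before disappearing
kill %2
ss -tan '( sport = :8080 or dport = :8080 )'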

NLB TCP connection handling

The NLB acts as a Layer 4 proxy, keeping track of each established connection in a flow table. Connections that are half-open, gracefully closed, or reset by the client or server are not tracked.

A single connection is defined by a 5-tuple, which includes the protocol (TCP), source IP address, source port, destination IP address, and destination port.
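
As an illustration, a client connection to the NLB in the sample architecture shown in Figure 2 might be identified by a 5-tuple like the following (the addresses and ports are made-up values):

        Protocol:            TCP
        Source IP address:   198.51.100.10   (client)
        Source port:         54321           (ephemeral port chosen by the client)
        Destination IP:      203.0.113.25    (IP address of the NLB listener)
        Destination port:    443             (NLB listener port)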

Figure 2. Sample architecture for NLB deployment

By default, if there’s no traffic between the client and the target for 350 seconds, then the connection is removed from the NLB flow table. If a client attempts to send traffic after the connection is no longer tracked, then NLB responds with a TCP RST, signaling that a new connection needs to be established.

For many applications, a connection timing out might be fine, but in some cases it can cause problems. For example, Internet-of-Things (IoT) devices that send data regularly may transfer only small amounts each time. Reopening a connection, especially an encrypted one, every time data is sent can be resource-intensive and costly.

To prevent connections from timing out, you can set up TCP keepalives, which send a probe over an established connection at a predefined interval. Although this probe contains no data, it is enough to reset the idle timer on intermediary systems, such as the NLB. To learn more about setting up TCP keepalives, refer to our previous post.
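
For reference, the following is a minimal sketch of enabling more aggressive keepalives system-wide on a Linux client or target. The specific values are assumptions and should be tuned so that probes are sent before the NLB idle timeout (350 seconds by default) expires. Note that these kernel parameters only apply to sockets on which the application has enabled the SO_KEEPALIVE option.

# Start sending keepalive probes after 300 seconds of idle time (below the 350 second NLB default)
sudo sysctl -w net.ipv4.tcp_keepalive_time=300

# Send follow-up probes every 30 seconds if the first one goes unanswered
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=30

# Consider the connection dead after 5 unanswered probes
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5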

If your application needs long-lasting, persistent TCP connections and you can’t use TCP keepalives, then you can modify the TCP idle timeout on the NLB.

Considerations when updating TCP idle timeout

You can adjust the TCP idle timeout for each NLB listener to any value between 60 and 6000 seconds. This change only affects new TCP connections, not the ones already in progress.

Before setting the idle timeout value, make sure that you understand your application’s needs and consider whether TCP keepalive could be an alternative. It’s best to set the NLB TCP idle timeout higher than your application’s TCP idle timeout, so that your application, rather than the NLB, handles connection management and timeouts.

Setting the idle timeout too high increases the risk of filling up the flow table. If the table fills up, the NLB silently rejects new connections. You should monitor rejected connections using the new Amazon CloudWatch metrics covered in the monitoring section. Seeing rejected connections indicates that you should decrease the TCP idle timeout.

Steps to configure TCP idle timeout using AWS APIs/CLI

AWS is introducing new APIs with the launch of TCP idle timeout for NLB. The following examples show the APIs in action.

Describe the NLB listener to find out the current value for TCP idle timeout

Input:

aws elbv2 describe-listener-attributes \
          --listener-arn arn:aws:elasticloadbalancing:us-east-1:000011112222:listener/network/NLBTest/123/123

Output:

        {
            "Attributes": [         
                {
                   "Value": "350",
                   "Key": "tcp.idle_timeout.seconds"
                }
            ]
        }

Modify the value of the TCP idle timeout

Input:

aws elbv2 modify-listener-attributes \
          --listener-arn arn:aws:elasticloadbalancing:us-east-1:000011112222:listener/network/NLBTest/123/123 \
          --attributes \
              Key=tcp.idle_timeout.seconds,Value=600 

Output:

        {
            "Attributes": [       
                {
                   "Value": "600",
                   "Key": "tcp.idle_timeout.seconds"
                }
            ]
        }

Steps to configure TCP idle timeout using the AWS Management Console

The following steps show how to change the timeout value using the AWS Management Console.

1. Locate the NLB TCP listener.

Figure 3. NLB TCP listener

2. View the current TCP idle timeout value in the Attributes section.

Figure 4. NLB listener attributes

3. Enter the new TCP idle timeout value in the Edit listener attributes section.


Figure 5. Idle timeout setting

Monitoring

The launch of NLB TCP idle timeout introduces two new metrics: RejectedFlowCount (total flows rejected due to the flow table being full) and RejectedFlowCount_TCP (TCP flows rejected for the same reason). These metrics help you monitor the impact of your idle timeout settings.

We recommend setting up CloudWatch alarms to notify you when NLB starts rejecting flows. An increase in RejectedFlowCount indicates the need to decrease the timeout, allowing NLB to clear flows sooner and prevent the flow table from filling up.
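
For example, the following AWS CLI sketch creates such an alarm on RejectedFlowCount. The load balancer dimension, threshold, evaluation period, and Amazon SNS topic ARN are placeholder values that you would replace with your own.

aws cloudwatch put-metric-alarm \
          --alarm-name NLBTest-rejected-flows \
          --namespace AWS/NetworkELB \
          --metric-name RejectedFlowCount \
          --dimensions Name=LoadBalancer,Value=net/NLBTest/123 \
          --statistic Sum \
          --period 60 \
          --evaluation-periods 5 \
          --threshold 0 \
          --comparison-operator GreaterThanThreshold \
          --treat-missing-data notBreaching \
          --alarm-actions arn:aws:sns:us-east-1:000011112222:nlb-alerts

With these settings, the alarm fires when any flows are rejected in five consecutive one-minute periods.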

Existing NLB metrics, such as NewFlowCount, NewFlowCount_TCP, ActiveFlowCount, and ActiveFlowCount_TCP, remain unchanged.

Conclusion

Configuring TCP idle timeouts in NLB offers greater control over connection management, especially for applications with long-lasting connections. By adjusting the idle timeout and monitoring the relevant metrics, you can optimize your NLB performance and prevent potential connection issues.

About the Authors


Milind Kulkarni

Milind is a Principal Product Manager at Amazon Web Services (AWS). He has over 20 years of experience in networking, data center architectures, SDN/NFV, and cloud computing. He is a co-inventor of nine US patents and has co-authored three IETF standards.


Tom Adamski

Tom is a Principal Solutions Architect specializing in Networking. He has over 15 years of experience building networks and security solutions across various industries – from telcos and ISPs to small startups. He has spent the last 4 years helping AWS customers build their network environments in the AWS Cloud. In his spare time Tom can be found hunting for waves to surf around the California coast.