Networking & Content Delivery

Integrate your custom logic or appliance with AWS Gateway Load Balancer

We recently launched AWS Gateway Load Balancer (GWLB), a new service that helps customers deploy, scale, and manage third-party virtual network appliances such as firewalls, intrusion detection and prevention systems, analytics, visibility, and others. A new addition to the Elastic Load Balancer family, AWS Gateway Load Balancer (GWLB) combines a transparent network gateway (that is, a single entry and exit point for all traffic) and a load balancer that distributes traffic and scales your virtual appliances with demand.

This was a major milestone, because the Gateway Load Balancer opens up new frontiers to insert custom logic or third-party functions in networking, security, analytics, telecom, Internet of Things (IoT) and more into any networking path. This capability, along with offloading the problems of scale, availability, service delivery and stickiness of flows, enables partners to focus on their core expertise and innovate faster.

This write-up explains in detail how to integrate your virtual appliances or customized functions with GWLB. We will use the word “appliance” to mean either new custom logic or an existing virtual appliance. If you are interested in the basics, you might want to take a look at the GWLB page or another blog post, GWLB Supported architecture patterns. If you want to see all of the blogs we’ve published on GWLB, this page will show you the running list. So, let’s dive in.

What type of network appliances work with GWLB?

GWLB processes each packet at layer 3 and is agnostic to appliance states. With this behavior, any custom logic or third-party appliance can be deployed in a fleet behind GWLB, as long as the appliance supports Geneve encapsulation/decapsulation and GWLB metadata, which is explained in detail later.

How does GWLB work?

As shown in the figure below, a GWLB is connected to Gateway Load Balancer Endpoint (GWLBE) in another VPC. One GWLB can be connected to one or more GWLBEs.

GWLB has two sides. The side that connects to GWLBEs is called the GWLB Frontend. The side that connects to target appliances is called the GWLB Backend. On the backend, GWLB operates as a load balancer, routing each traffic flow through one of multiple equivalent target appliances. GWLB ensures stickiness of flows in both directions to target appliances and also reroutes flows if the selected appliance becomes unhealthy. This write-up focuses on the backend functionality – specifically on communication between GWLB and appliances.

Packets sent from source to destination do not contain the GWLB IP as the destination IP address, but they will be routed to GWLB due to route table configurations. To achieve transparent forwarding behavior (i.e. to keep the original packet contents as-is), GWLB encapsulates the original packet using Geneve encapsulation and sends (/receives) packets to (/from) appliances. Appliances in turn need to decapsulate the Geneve header, and parse its type-length-value (TLV) options, to process the original packet.

GWLB is a packet-in/packet-out service. It does not maintain any application states and does not perform TLS/SSL decryption/encryption. These functions are performed by the appliances themselves.

What changes must be made to the appliances for them to work with GWLB?

In order to work with GWLB, appliances need to:

  1. Support Geneve protocol to exchange traffic with GWLB. Geneve encapsulation is required for transparent routing of packets between GWLB and appliances, and for sending extra information (aka metadata, explained below).
  2. Support encoding/decoding of GWLB-related Geneve type-length-value (TLV) options.
  3. Respond to TCP/HTTP/HTTPS health checks from GWLB.

We have seen users complete the above tasks in one day for prototype appliances, while sophisticated appliances take a couple of days or weeks depending on various factors. After completing these tasks you can test interoperability of your appliances with GWLB. An outline of testing and troubleshooting is described later in this write-up.

Why do the appliances need to support Geneve encapsulation?

The need to keep the original packet intact is the fundamental requirement for transparent behavior, which is a key function provided by GWLB. Encapsulating the original packet into a new L3 packet is the only feasible solution for routing packets between GWLB and appliances. That is because the source/destination IPs on such packets will not be the same as the IPs on GWLB or the appliances, so normal VPC routing based on those IPs would result in packets bypassing the GWLB or the appliances.

In addition, to support multi-tenant appliances with overlapping CIDRs, appliances need to know the source of the traffic. GWLB also needs to keep track of flows and avoid intermixing of user traffic. GWLB achieves this by sending extra information (such as the GWLBE ENI ID, Attachment ID, and Flow Cookie) using Type-Length-Value (TLV) triplets with every packet to the appliance.

The Geneve protocol (RFC 8926) is flexible and allows passing this extra information. This extensible and customizable Layer 3 encapsulation mechanism supports a broad set of use cases and simplifies the customer experience, because it requires zero changes to source and destination devices. See the Geneve TLV format later in this doc. As alternatives to Geneve encapsulation, VXLAN and GRE were evaluated, but they couldn’t meet the above requirements due to their fixed-size fields.

How does GWLB select the appliance it needs to send the flow to?

When GWLB receives a new TCP/UDP flow, it selects a healthy appliance from a target group using a 5-tuple flow hash – Source IP, Destination IP, Transport Protocol, Source Port, Destination Port. Subsequently, GWLB routes all packets of that flow (in both forward and reverse directions) to the same appliance (stickiness). For non-TCP/UDP flows, GWLB uses a 3-tuple (Source IP, Destination IP, Transport Protocol) to make the forwarding decision.
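As an illustration of this stickiness property, here is a minimal Python sketch of symmetric flow hashing. This is not GWLB’s actual algorithm – the hash function, IPs, ports, and target names below are all made up – it only shows why a direction-independent hash maps both directions of a flow to the same appliance:

```python
import hashlib

def select_target(src_ip, dst_ip, proto, src_port, dst_port, healthy_targets):
    """Pick a target deterministically from the 5-tuple, so every packet
    of the same flow (in either direction) maps to the same appliance."""
    # Sort the endpoint pair so forward and reverse directions hash alike.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:8], "big") % len(healthy_targets)
    return healthy_targets[index]

targets = ["appliance-1", "appliance-2", "appliance-3"]
fwd = select_target("10.0.1.5", "10.0.2.9", 6, 443, 51515, targets)
rev = select_target("10.0.2.9", "10.0.1.5", 6, 51515, 443, targets)
assert fwd == rev  # both directions stick to the same appliance
```

If the selected appliance becomes unhealthy, GWLB reroutes the flow to another healthy target, which a real implementation would handle with a flow table rather than a pure hash.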

How does GWLB perform health checks for appliances?

GWLB runs health checks periodically, based on a user-defined time interval. GWLB performs these health checks by sending TCP/HTTP/HTTPS packets to the appliance. The appliance needs to respond to TCP/HTTP/HTTPS packets as described below:

  • TCP: Establishing the connection is considered pass.
  • HTTP: The GWLB will send an HTTP request to the appliance over a new TCP connection, with the path specified by the user. The appliance must establish the TCP connection and reply with a status code in the range 200 to 399. If the TCP connection fails to establish, if the appliance replies with some other status code, or if the appliance simply doesn’t reply, the health check fails.
  • HTTPS: Same as the HTTP behavior, but over TLS. GWLB doesn’t do host name verification on the certificate, so any valid (non-expired) certificate, including a self-signed one, will work.

Appliances must finish the entire check within GWLB timeouts. These checks assume that an appliance that responded correctly to TCP/HTTP/HTTPS packets, usually from its control plane, is also capable of forwarding packets to destinations via its data plane. Note that the health check packets are not Geneve encapsulated. See this documentation on GWLB health checks.
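To satisfy the HTTP health-check requirement, the appliance only needs a small listener, typically in its control plane. Below is a minimal Python sketch using the standard library; the `/healthz` path and port 8080 are assumptions and must match whatever you configure on the GWLB target group:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthCheckHandler(BaseHTTPRequestHandler):
    # GWLB considers any status code in the 200-399 range a pass.
    def do_GET(self):
        if self.path == "/healthz":  # hypothetical path; must match the target group config
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK\n")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep appliance logs quiet

def serve_health_checks(host="0.0.0.0", port=8080):
    """Blocking listener; port 8080 is an example and must match the
    health-check port configured on the target group."""
    HTTPServer((host, port), HealthCheckHandler).serve_forever()
```

A real appliance would tie the returned status code to its data-plane readiness, rather than unconditionally replying 200 as this sketch does.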

How does the packet flow between GWLB and appliance?

The figure below shows the flow of a packet as it traverses from source (IP address A.B.C.D) to destination (IP address E.F.G.H). The packet from the source is sent to the GWLBE using the next hop in the route table.

Step 1: When GWLBE receives the packet from the source, it sends the packet to GWLB using the underlying PrivateLink technology. The packet stays on AWS network and reaches GWLB.

Step 2: GWLB uses the 5-tuple (Src IP, Dest IP, Src Port, Dest Port, Protocol) of the incoming packet and chooses a specific appliance as a target. GWLB then encapsulates the original packet (shown in yellow) using a Geneve header and embeds the metadata in the form of Type, Length, Value triplets, also known as TLVs (shown in green). TLVs are explained in Step 4. In this example, the GWLB and appliance IP addresses are as shown in the figure.

Step 3: GWLB forwards the encapsulated packet to the selected appliance. GWLB will stick that 5-tuple flow to that specific appliance in both directions of traffic for the life of the flow.

Step 4: The appliance must be configured with an IP interface that can accept UDP/IP packets; GWLB delivers all encapsulated packets on that interface (Geneve uses UDP port 6081, as shown in the packet format later).

The appliance should parse and use the information in TLVs for any decision making. The packet may contain one or more of the following Geneve TLVs:

  • GWLBE ENI ID: This is the 64-bit ENI ID assigned to the GWLBE. Appliances may use this identifier, for example, to associate packets with their configuration. This GWLBE ENI ID should be used to determine the source VPC of the traffic; each GWLBE can belong to only one VPC. Vending the GWLBE-to-VPC mappings to appliances is the responsibility of the appliance partner’s management software.
  • Attachment ID: In cases where GWLB interfaces with TGW or VPNGW or Direct Connect, a 64-bit Attachment ID TLV is sent to the appliance to determine the sending source VPC ID. This is an optional TLV and will be present only when there is an attachment to another AWS gateway device.
  • Flow Cookie: The Flow Cookie is a 32-bit random number generated for a flow when GWLB initially creates a flow entry in its flow table. All packets that belong to the same flow carry the same cookie. Appliances must pass this cookie back “as-is”. When GWLB receives a packet from the appliance, the packet is forwarded further only if the cookie in the packet received from the appliance matches the one assigned to that flow. If the cookie does not match, or if there is no cookie, the packet is dropped.

After processing the TLVs, the appliance may choose to drop the packet or allow it to go forward.
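As a concrete sketch of what parsing these TLVs involves, the following Python function walks the Geneve option area and extracts the three GWLB TLVs described above. Offsets follow the packet format shown later in this post; the function and field names are our own:

```python
import struct

GENEVE_HDR_LEN = 8           # fixed Geneve header is 8 bytes
AWS_OPTION_CLASS = 0x0108    # option class used by GWLB TLVs
TLV_ENI_ID, TLV_ATTACHMENT_ID, TLV_FLOW_COOKIE = 1, 2, 3

def parse_gwlb_tlvs(geneve_payload: bytes) -> dict:
    """Extract GWLB metadata from the option area of a Geneve header.
    `geneve_payload` starts at the Geneve header (after the outer UDP header)."""
    opt_len = (geneve_payload[0] & 0x3F) * 4   # option area length in 4-byte words
    tlvs, offset, end = {}, GENEVE_HDR_LEN, GENEVE_HDR_LEN + opt_len
    while offset < end:
        opt_class, opt_type, flags_len = struct.unpack_from("!HBB", geneve_payload, offset)
        value_len = (flags_len & 0x1F) * 4     # value length in 4-byte words
        value = geneve_payload[offset + 4: offset + 4 + value_len]
        if opt_class == AWS_OPTION_CLASS:
            if opt_type == TLV_ENI_ID:
                tlvs["eni_id"] = int.from_bytes(value, "big")
            elif opt_type == TLV_ATTACHMENT_ID:
                tlvs["attachment_id"] = int.from_bytes(value, "big")
            elif opt_type == TLV_FLOW_COOKIE:
                tlvs["flow_cookie"] = int.from_bytes(value, "big")
        offset += 4 + value_len
    return tlvs
```

A production appliance would also validate the Geneve version, protocol type, and option bounds before trusting these fields.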

When the appliance intends to forward the packet, it must do the following:

  1. encapsulate the original packet inside Geneve header
  2. swap the source and destination IP addresses in outer IPv4 header (i.e. Source IP = appliance IP address. Destination IP = GWLB IP address)
  3. preserve the original ports and must not swap the source and destination ports in the outer UDP header
  4. update the IP checksum in outer IPv4 header
  5. return the packet to GWLB with the TLVs intact for the given 5-tuple of the original inside packet.
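Assuming the appliance manipulates raw packet bytes, steps 2 and 4 above can be sketched in Python like this (a simplified illustration that touches only the outer IPv4 header and leaves the UDP ports, Geneve header, TLVs, and inner packet untouched):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Standard one's-complement sum over 16-bit words."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def swap_outer_ips(packet: bytes) -> bytes:
    """Return the packet with the outer source/destination IPv4 addresses
    swapped and the outer header checksum recomputed."""
    ihl = (packet[0] & 0x0F) * 4               # outer IPv4 header length in bytes
    hdr = bytearray(packet[:ihl])
    src, dst = hdr[12:16], hdr[16:20]
    hdr[12:16], hdr[16:20] = dst, src          # swap source and destination
    hdr[10:12] = b"\x00\x00"                   # zero the checksum field
    struct.pack_into("!H", hdr, 10, ipv4_checksum(bytes(hdr)))
    return bytes(hdr) + packet[ihl:]
```

Everything after the outer IPv4 header passes through unchanged, which is exactly what “return the packet to GWLB with the TLVs intact” requires.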

Step 5: The appliance encapsulates original packet (shown in yellow color) using Geneve header and embeds the same metadata (shown in green color) it originally received for that flow.

Step 6: Upon receiving the packet from the appliance, GWLB removes the Geneve encapsulation. It then validates the 5-tuple and flow cookie. Only when both the 5-tuple and the flow cookie match does GWLB forward the packet to the GWLBE.
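The flow-cookie validation in this step can be pictured with a small sketch (our own simplification, not GWLB internals):

```python
import secrets

class FlowTable:
    """Sketch of GWLB-style flow validation: each new flow gets a random
    32-bit cookie; a packet returned by an appliance is forwarded only
    if its cookie matches the one recorded for that flow."""
    def __init__(self):
        self.flows = {}  # 5-tuple -> cookie

    def cookie_for(self, five_tuple):
        # Assign a random 32-bit cookie when the flow entry is created.
        if five_tuple not in self.flows:
            self.flows[five_tuple] = secrets.randbits(32)
        return self.flows[five_tuple]

    def validate_return(self, five_tuple, cookie):
        # Forward only when the returned cookie matches the stored one.
        expected = self.flows.get(five_tuple)
        return expected is not None and expected == cookie
```

This is why appliances must echo the Flow Cookie TLV back unchanged: a missing or modified cookie fails the lookup and the packet is dropped.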

Step 7: Packet traverses to GWLBE using the underlying PrivateLink technology. The packet stays on AWS network and reaches GWLBE, which delivers it to destination using route-table next hop.

What is the packet format that the GWLB and appliance exchange?

The packet format below shows the packet received at the appliance using Geneve encapsulation. For explanation of Geneve headers, please refer to Geneve protocol (RFC 8926).

Outer IPv4 Header:
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|Version|  IHL  |Type of Service|          Total Length         |
|         Identification        |Flags|      Fragment Offset    |
|  Time to Live |Protocol=17 UDP|         Header Checksum       |
|                     Outer Source IPv4 Address                 |
|                   Outer Destination IPv4 Address              |
|       Source Port = xxxx      |    Dest Port = 6081 Geneve    |
|          UDP length           |         UDP Checksum          |

Outer Geneve Header:
|V=0|Opt Len = 8|O|C|    Rsvd.  |      Protocol Type = 0x0800   |
|    Virtual Network Identifier (VNI) = 0       |    Reserved   |

Outer Geneve Options: AWS Gateway Load Balancer TLVs
|    Option Class = 0x0108 (AWS)|    Type = 1   |R|R|R| Len = 2 |
|                                                               |
|                      64-bit GWLBE ENI ID                      |
|    Option Class = 0x0108 (AWS)|    Type = 2   |R|R|R| Len = 2 |
|                                                               |
|              64-bit Customer Visible Attachment ID            |
|    Option Class = 0x0108 (AWS)|    Type = 3   |R|R|R| Len = 1 |
|                     32-bit Flow Cookie                        |

Original IPv4 Packet in Inner Ethernet Packet follows…
| Ethertype of Original Payload |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               |
|                                  Original Ethernet Payload    |
|                                                               |
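For testing, it can be handy to synthesize packets in this format. The following Python sketch builds the Geneve header plus the three GWLB TLVs exactly as laid out in the diagram above (Opt Len = 8 four-byte words, VNI = 0); it is our own helper, not an AWS library:

```python
import struct

AWS_OPTION_CLASS = 0x0108

def build_gwlb_geneve_header(eni_id: int, attachment_id: int, flow_cookie: int) -> bytes:
    """Build the 8-byte Geneve header followed by the GWLBE ENI ID,
    Attachment ID, and Flow Cookie options (40 bytes total)."""
    def option(opt_type: int, value: bytes) -> bytes:
        # The length field counts the value in 4-byte words.
        return struct.pack("!HBB", AWS_OPTION_CLASS, opt_type, len(value) // 4) + value

    options = (option(1, eni_id.to_bytes(8, "big"))
             + option(2, attachment_id.to_bytes(8, "big"))
             + option(3, flow_cookie.to_bytes(4, "big")))
    opt_len_words = len(options) // 4                 # = 8, matching the diagram
    # Byte 0: Ver (0) in the top 2 bits, Opt Len in the low 6 bits.
    header = struct.pack("!BBHI", opt_len_words, 0, 0x0800, 0)
    return header + options
```

Prepending an outer IPv4/UDP header (destination port 6081) and appending an inner Ethernet frame to this output yields a packet in the format the appliance receives.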

What is the testing and debugging process for appliance integration?

Once the appliance software supports the Geneve protocol, encodes/decodes GWLB TLVs, and responds to health checks, it is time to test.

Create the VPCs and required components: a GWLB, a GWLBE, and a target group containing the appliance. You can start with a single appliance as the simplest test case. Check whether the appliance is responding to health checks; you can turn on packet capture on the appliance to observe the packet flow and verify that packets are in the format shown in the previous section. Once health checks pass, enable VPC Flow Logs on the GWLB subnet in the appliance VPC and on the GWLB endpoint subnet in the customer VPC, then check the incoming and outgoing packets in each direction via the flow logs.


Our goal is for customers and partners to use Gateway Load Balancer as an easy way to insert new functions into any networking path. Offload the problems of scale, availability, service delivery and stickiness of flows to AWS, so that you can focus on core expertise and innovate faster. We hope to open up new possibilities for security, analytics, telecom, and Internet of Things (IoT) use cases, as well as entirely new applications.

This post has focused on how to integrate your virtual appliances or customized functions with GWLB. I hope we have sparked your imagination about what can be done!


Milind Kulkarni

Milind is a Senior Product Manager at Amazon Web Services (AWS). He has over 20 years of experience in networking, data center architectures, SDN/NFV, and cloud computing. He is a co-inventor of nine US Patents and has co-authored three IETF Standards.