Networking & Content Delivery

Scaling network traffic inspection using AWS Gateway Load Balancer

Updated “Cross-zone load balancing and appliance failures” section on 25th March, 2021

Organizations use next-generation firewalls (NGFW) and intrusion prevention systems (IPS) as part of their defense-in-depth strategy. In an on-premises network, these often take the form of dedicated hardware, or of software or virtual “appliances.” As companies move to the cloud, they want to add virtual appliances to their AWS environments. While spinning up these appliances from the AWS Marketplace is a relatively straightforward process, architecting for high availability and scalability is not always easy. The new AWS Gateway Load Balancer (GWLB) service is designed specifically to address these architectural challenges and make deploying, scaling, and running virtual appliances easier.

Gateway Load Balancer is a new type of load balancer that operates at layer 3 of the OSI model and is built on Hyperplane, which is capable of handling several thousands of connections per second. In conjunction with a new type of VPC endpoint, referred to as a Gateway Load Balancer endpoint, Gateway Load Balancer exhibits characteristics of both a router and a load balancer. Gateway Load Balancer maintains stickiness and flow symmetry for traffic sent through it, performs health checks, and allows Auto Scaling groups as targets. This means that security teams and independent software vendors (ISVs) can, for example, expose network security as a service without the heavy lifting of creating or operating complex mechanisms to manage the availability and scaling of the appliance fleets they use for inspection. Customers can now focus on building out their applications and security policies without all the overhead. As an additional benefit for partners and third-party appliance vendors, Gateway Load Balancer enables new ways to deliver Network Function Virtualization (NFV) solutions as a managed service.
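To make these building blocks concrete, here is a minimal sketch, using Python (Boto3), of provisioning a Gateway Load Balancer, a GENEVE target group with health checks, and a listener. All names, IDs, and health check settings below are assumed placeholders, not values from this post.

# Minimal Gateway Load Balancer provisioning sketch (placeholder names and IDs).
import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

# Gateway Load Balancer: type "gateway", deployed into the appliance VPC subnets.
gwlb = elbv2.create_load_balancer(
    Name="appliance-gwlb",
    Type="gateway",
    Subnets=["subnet-0123456789abcdef0", "subnet-0123456789abcdef1"],
)
gwlb_arn = gwlb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group for the virtual appliances: GENEVE on port 6081, with a TCP
# health check against a port the appliance is assumed to expose.
tg = elbv2.create_target_group(
    Name="appliance-targets",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="80",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# A Gateway Load Balancer listener has no protocol or port of its own;
# it simply forwards all traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=gwlb_arn,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)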

Industry-leading partners

Because virtual appliances are intrinsically linked to the value Gateway Load Balancer provides, a number of industry-leading AWS Partners worked with us during its beta testing. These companies gave us valuable feedback and have built solutions that run behind a Gateway Load Balancer today; many have shared their experiences in their own blog posts and videos.

In our previous blog, How to integrate third-party firewall appliances into an AWS environment, we looked at multiple architectures for deploying or implementing firewalls running as virtual appliances. In this post, we introduce new options made possible by Gateway Load Balancer. Specifically, we show how to scale these virtual appliances horizontally with Gateway Load Balancer to inspect traffic from and to your Amazon Virtual Private Cloud (VPC). By definition, these use cases include AWS partner products that work in conjunction with VPC constructs like VPC Security Groups and network access control lists (NACLs).

In this blog, we go over the details of a distributed architecture comprising Gateway Load Balancer and Gateway Load Balancer endpoints. Gateway Load Balancer and virtual appliances are deployed into a centralized appliance VPC. Gateway Load Balancer endpoints are configured in spoke VPCs that originate traffic to, or receive traffic from, the Internet. This architecture allows you to perform inline inspection of traffic from multiple spoke VPCs in a simplified and scalable fashion while still centralizing your virtual appliances.

Before we discuss building this architecture, check out the blog post from the AWS News team, Introducing AWS Gateway Load Balancer, easy deployment, scalability, and high-availability for partner appliances, to get a perspective on what Gateway Load Balancer is, and how it helps customers. For a broad introduction to the service, Introduction to AWS Gateway Load Balancer might also be of interest.

Architecture overview:


Figure 1: Distributed Architecture using Gateway Load Balancer and Gateway Load Balancer endpoints

In this architecture, Gateway Load Balancer endpoints are deployed into each Availability Zone (AZ) in the spoke VPC. The virtual appliances, which can be next-generation firewalls, intrusion prevention systems, and so on, are deployed behind the Gateway Load Balancer in the centralized appliance VPC. The appliance VPC can be in the same AWS account as the spoke VPC or in a different account. Virtual appliances can be placed in Auto Scaling groups and are registered automatically with the Gateway Load Balancer, allowing metric-based scaling of the security layer. These virtual appliances can be managed by accessing their management interfaces through an Internet Gateway (IGW) or by using a bastion host set up in the appliance VPC.
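As a rough illustration of the automatic registration described above, the following sketch attaches an existing Auto Scaling group of appliance instances to the Gateway Load Balancer’s target group, so instances launched by scale-out events are registered as targets automatically. The Auto Scaling group name and target group ARN are assumed placeholders.

# Attach an appliance Auto Scaling group to the GWLB target group (assumed names).
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")

autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="appliance-asg",  # placeholder Auto Scaling group name
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/appliance-targets/0123456789abcdef"
    ],
)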

When using interface VPC endpoints, customers use a Domain Name System (DNS)-based mechanism to route traffic to them. With the launch of Gateway Load Balancer endpoints, customers can now set a Gateway Load Balancer endpoint as a route target in a VPC subnet route table. To route traffic to the security fleet behind the Gateway Load Balancer, you edit the appropriate route tables of the spoke VPC to point to the Gateway Load Balancer endpoint as the next hop. Traffic routed through the Gateway Load Balancer endpoint is delivered securely and privately to the Gateway Load Balancer using AWS PrivateLink.
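The following is a hedged sketch of one way to wire this up with Boto3: expose the Gateway Load Balancer as a PrivateLink endpoint service, create a Gateway Load Balancer endpoint in a spoke VPC subnet, and point the spoke subnet’s default route at that endpoint. All ARNs, IDs, and CIDRs are placeholders, and a same-account setup is assumed.

# PrivateLink plumbing sketch (placeholder IDs; same-account setup assumed).
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# 1. Endpoint service backed by the Gateway Load Balancer in the appliance VPC.
#    (Cross-account use would also require endpoint service permissions.)
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/gwy/appliance-gwlb/0123456789abcdef"
    ],
    AcceptanceRequired=False,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# 2. Gateway Load Balancer endpoint in the spoke VPC (one per AZ in practice).
vpce = ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName=service_name,
    VpcId="vpc-0aaaabbbbccccdddd0",
    SubnetIds=["subnet-0aaaabbbbccccdddd1"],
)
vpce_id = vpce["VpcEndpoint"]["VpcEndpointId"]

# 3. Default route in the application subnet's route table to the endpoint.
ec2.create_route(
    RouteTableId="rtb-0aaaabbbbccccdddd2",
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId=vpce_id,
)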

We will now dive into how each of the route tables is configured and walk through the life of a packet as it travels from a source in the spoke VPC to the Internet, is inspected by the appliances, and returns.

Application packet flow to the Internet from the spoke VPC:


Figure 2: Gateway Load Balancer Distributed Architecture – outbound packet flow

  • (1) An application in AZ1 (usw2-az1) wants to communicate with a resource on the Internet. It uses the default route (0.0.0.0/0) to send traffic to the Gateway Load Balancer endpoint: vpce-1xxx
  • (2) Once the traffic hits the Gateway Load Balancer endpoint, it is delivered securely and privately to the Gateway Load Balancer using AWS PrivateLink, without the need to configure any other route tables
  • (3) The Gateway Load Balancer uses the 3-tuple or 5-tuple of an IP packet to pick an appliance for the life of that flow. This allows virtual appliances to maintain state
  • (4) The Gateway Load Balancer encapsulates the original IP traffic with a GENEVE header and forwards it to the appliance over UDP port 6081
    • This encapsulation allows all IP traffic to be delivered to the appliances for inspection, without specifying listeners for every port and protocol
    • Gateway Load Balancer supports a maximum transmission unit (MTU) size of 8500 bytes, the same as AWS Transit Gateway (TGW). Gateway Load Balancer’s GENEVE encapsulation adds 64 bytes over the original IP header and doesn’t count towards the overall MTU limit.
  • (5) The virtual appliance (in this case, we assume a firewall or an IPS device) behind the Gateway Load Balancer decapsulates the GENEVE header, inspects the original packet, and, depending on the configured security policy, decides to either forward or drop the packet
  • (6) Assuming the traffic is allowed, the virtual appliance then re-encapsulates the packet with a GENEVE header and forwards it to the Gateway Load Balancer
  • (7) The Gateway Load Balancer removes the GENEVE header and forwards traffic to the appropriate Gateway Load Balancer endpoint
    • The Gateway Load Balancer uses metadata in the GENEVE headers to identify the Gateway Load Balancer endpoint the traffic was sourced from
  • (8) Once at the Gateway Load Balancer endpoint, traffic destined to the Internet uses the route table associated with the Gateway Load Balancer endpoint subnet (gwlbe-rtb1) to egress through the Internet Gateway

In this example, the application instance is associated with an Elastic IP address (EIP), which is the IP address that the client/service on the Internet uses to send response traffic.

Return packet flow from the Internet to the application in the spoke VPC:


Figure 3: Gateway Load Balancer Distributed Architecture – return packet flow

  • (1) When the return packet arrives back at the Internet Gateway, Amazon VPC Ingress Routing steers traffic destined for application-subnet1 toward the Gateway Load Balancer endpoint: vpce-1xxx, as shown in the route table associated with the IGW (igw-rtb); a configuration sketch of this ingress routing setup follows this walkthrough
  • (2) Once the traffic hits the Gateway Load Balancer endpoint, it is delivered securely and privately to the Gateway Load Balancer using AWS PrivateLink
  • (3) Since this return packet is associated with an existing flow, the Gateway Load Balancer encapsulates the original IP traffic with a GENEVE header and forwards it to the virtual appliance it had chosen for this flow
  • (4) The virtual appliance behind the Gateway Load Balancer decapsulates the GENEVE header, inspects the original packet, and, depending on the configured security policy, decides to either forward or drop the packet
  • (5) Assuming the traffic is allowed, the virtual appliance then re-encapsulates the packet with a GENEVE header and forwards it to the Gateway Load Balancer
  • (6) The Gateway Load Balancer removes the GENEVE header and forwards traffic to the appropriate Gateway Load Balancer endpoint
  • (7) Once at the Gateway Load Balancer endpoint, since the destination of the packet is within the VPC CIDR range, traffic makes it back to the application instance that initiated the flow
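As referenced in step 1, here is a minimal sketch of the Amazon VPC Ingress Routing configuration: create a route table, associate it with the Internet Gateway (an edge association), and add a route that steers traffic destined for the application subnet to the Gateway Load Balancer endpoint in the same AZ. The IDs and CIDR below are assumptions.

# Ingress routing sketch: IGW edge route table steering return traffic to the
# Gateway Load Balancer endpoint (placeholder IDs and CIDR).
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Route table used as the IGW edge route table (igw-rtb in Figure 3).
igw_rtb = ec2.create_route_table(VpcId="vpc-0aaaabbbbccccdddd0")
igw_rtb_id = igw_rtb["RouteTable"]["RouteTableId"]

# Edge association: attach the route table to the Internet Gateway itself.
ec2.associate_route_table(
    RouteTableId=igw_rtb_id,
    GatewayId="igw-0123456789abcdef0",
)

# Return traffic for the application subnet CIDR goes to the GWLB endpoint.
ec2.create_route(
    RouteTableId=igw_rtb_id,
    DestinationCidrBlock="10.0.1.0/24",      # assumed application-subnet1 CIDR
    VpcEndpointId="vpce-0123456789abcdef0",  # GWLB endpoint in the same AZ
)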

Cross-zone load balancing and appliance failures:

By default, each load balancer node deployed in an AZ distributes traffic across the registered targets within the same AZ only. This is called AZ affinity. If you enable cross-zone load balancing, Gateway Load Balancer distributes traffic across all registered and healthy targets in all enabled Availability Zones. If all targets across all AZs are unhealthy, Gateway Load Balancer fails open: while there are no healthy targets, GWLB picks a target at random based on the 5-tuple/3-tuple flow hash and forwards traffic to that target/appliance for the life of the flow, until the flow is reset or times out. Since traffic is forwarded to an unhealthy target, it can be dropped or discarded at the target. It is the target owner’s responsibility to restore the unhealthy target(s).
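Cross-zone load balancing is controlled by a load balancer attribute. Below is a minimal sketch of enabling it with Boto3; the load balancer ARN is a placeholder.

# Enable cross-zone load balancing on the Gateway Load Balancer (placeholder ARN).
import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/gwy/appliance-gwlb/0123456789abcdef",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)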

As you start to build out the architecture above, there are a few things to consider regarding cross-zone load balancing and how Gateway Load Balancer handles virtual appliance failures. Gateway Load Balancer endpoints and Gateway Load Balancers are zonal entities, and as such, traffic from each Availability Zone (AZ) is maintained within the same AZ. However, as we deploy virtual appliances behind the load balancer, we need to consider how appliance failures impact traffic flows and how best to leverage cross-zone load balancing.

The following Table 1 and Table 2 show how traffic is handled under different scenarios where the Gateway Load Balancer and Gateway Load Balancer endpoints are deployed across AZs and one or more appliances are registered as targets in each AZ.

Cross-zone load balancing disabled:

Scenario: a target has failed in AZ1, but there are still healthy targets (>0) in both AZ1 and AZ2
  • GWLB behavior for traffic received by the GWLB endpoint in AZ1: GWLB sends existing flows to the same, now unhealthy, target. New flows are sent to healthy target(s) in the same AZ (AZ1).
  • Recovery: It is the customer’s responsibility to restore the unhealthy target; existing flows will need to reconnect.
  • Traffic at the target: Existing flows need to be reset by the client or time out. New flows are sent to healthy targets in AZ1.

Scenario: no healthy targets (0) in AZ1, healthy targets (>0) in AZ2
  • GWLB behavior for traffic received by the GWLB endpoint in AZ1: There are no healthy targets in AZ1, so GWLB fails open. While there are no healthy targets, GWLB picks a target at random and forwards traffic to that target/appliance for the life of the flow, until the flow is reset or times out. Since traffic is forwarded to an unhealthy target, it is dropped/discarded at the target.
  • Recovery: Restore targets in AZ1 to a healthy state.
  • Traffic at the target: Traffic may be dropped/discarded by the target(s) until targets are restored.

Table 1 – Cross-zone load balancing disabled. Target(s) = Virtual Appliance(s).

 

Cross-zone load balancing enabled:

Scenario: no healthy targets (0) in AZ1, healthy targets (>0) in AZ2
  • GWLB behavior for traffic received by the GWLB endpoint in AZ1: GWLB distributes traffic across the healthy targets in all enabled AZs – AZ1 and AZ2 (in this case, only AZ2 has healthy targets).
  • Recovery: With cross-zone load balancing enabled, GWLB distributes traffic across all targets in all enabled AZs. Once the target in AZ1 has recovered, it is included in the pool of healthy targets. Note: traffic sent by GWLB to other AZs incurs inter-AZ data transfer charges.
  • Traffic at the target: Existing flows need to be reset by the client or time out. New flows are distributed to healthy targets in AZ2.

Scenario: a target has failed in AZ1, but there are still healthy targets (>0) in both AZ1 and AZ2
  • GWLB behavior for traffic received by the GWLB endpoint in AZ1: GWLB distributes traffic across the healthy virtual appliances in all enabled AZs – AZ1 and AZ2.
  • Recovery: With cross-zone load balancing enabled, GWLB distributes traffic across all targets in all enabled AZs. Once the target in AZ1 has recovered, it is included in the pool of healthy targets. Note: traffic sent by GWLB to other AZs incurs inter-AZ data transfer charges.
  • Traffic at the target: Existing flows need to be reset by the client or time out. New flows are distributed across the healthy targets in all enabled AZs – AZ1 and AZ2.

Scenario: no healthy targets (0) in either AZ1 or AZ2
  • GWLB behavior for traffic received by the GWLB endpoint in AZ1: There are no healthy targets in AZ1 or AZ2, so GWLB fails open. As in the single-AZ scenario with cross-zone load balancing disabled, GWLB picks and forwards traffic to any of the targets behind it. Since traffic is forwarded to an unhealthy target, it is dropped/discarded at the target.
  • Recovery: Restore targets to a healthy state.
  • Traffic at the target: Traffic may be dropped/discarded by the target(s) until at least one target is restored.

Table 2 – Cross-zone load balancing enabled. Target(s) = Virtual Appliance(s).
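Because restoring unhealthy targets is the customer’s responsibility in every scenario above, it helps to monitor target health. The short sketch below lists each registered appliance and its current health state; the target group ARN is a placeholder.

# List each appliance target and its health state (placeholder target group ARN).
import boto3

elbv2 = boto3.client("elbv2", region_name="us-west-2")

health = elbv2.describe_target_health(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:111122223333:targetgroup/appliance-targets/0123456789abcdef"
)
for desc in health["TargetHealthDescriptions"]:
    print(desc["Target"]["Id"], desc["TargetHealth"]["State"])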

Code samples

We have created a GitHub repository for code examples that can help accelerate your development of AWS Gateway Load Balancer. The repository has samples for AWS CloudFormation, Python (Boto3), Go, and the CLI.

Summary:

AWS Gateway Load Balancer and Gateway Load Balancer endpoints are new additions to the Elastic Load Balancing (ELB) and VPC endpoint families and make appliance fleets easier to deploy and scale. Combined with other networking services, such as AWS Transit Gateway, building centralized inspection capabilities into your network is greatly simplified – an architecture that we will detail in our next post. We have just scratched the surface of what Gateway Load Balancer can do to help you operationalize scalable and high-performing network security and visibility for your AWS environments.

To learn more, check out the Gateway Load Balancer page, information on partner solutions, and the documentation. You might also want to watch this video demo that walks through five steps of setup and testing. Be on the lookout for more posts on other use cases for Gateway Load Balancer in the future!

 


Sameer Kumar Vasanthapuram

Sameer is a Partner Solutions Architect at AWS. He works with security partners to build solutions and capabilities that help customers as they move to the cloud. Prior to AWS, Sameer designed secure managed networks for carriers and MSPs, implemented content delivery mechanisms for media companies, and helped build and operate distributed networks for large enterprises.

Pratik R. Mankad

Pratik is a Solutions Architect at AWS with a background in network engineering. He is passionate about network technologies and loves to innovate to help solve customer problems. He enjoys architecting solutions and providing technical guidance to help partners and customers achieve their business objectives.