Networking & Content Delivery

Centralized inspection architecture with AWS Gateway Load Balancer and AWS Transit Gateway

In our conversations with customers, we are often asked about the best way to architect centralized inspection architectures. Since the launch of AWS Gateway Load Balancer (GWLB), those discussions increasingly revolve around how to use AWS Transit Gateway, Gateway Load Balancer and Gateway Load Balancer Endpoints (GWLBE) together. In this post, we explain how to use Transit Gateway to send network traffic to a scalable fleet of virtual appliances that are configured as targets behind a Gateway Load Balancer.

In our previous post, Scaling network traffic inspection using AWS Gateway Load Balancer, we discussed how to architect a distributed network security architecture for Amazon Virtual Private Cloud (VPC)-to-Internet traffic using third-party virtual appliances, GWLB, and GWLBE. We showed you how these virtual appliances can be delivered as a service. We also briefly mentioned how GWLB can be integrated with Transit Gateway. Visit this page to view all of the posts we have published on GWLB so far.

In this post, we take a closer look at a centralized architecture for East-West (VPC-to-VPC) and/or North-South (Internet egress, on-premises) traffic. We explain in detail how to integrate virtual appliances, GWLB, and GWLBE with Transit Gateway. The post walks you through:

  • Architecture Overview
  • Life of a packet when an application in a VPC communicates with a resource on the Internet
  • Maintaining flow symmetry using Transit Gateway appliance mode

Architecture Overview:

AWS Transit Gateway is a regional, highly available, and scalable service that enables customers to connect multiple VPCs with each other, as well as with on-premises networks over Site-to-Site VPN and/or Direct Connect, using a single centralized gateway. Customers can use Transit Gateway to centralize traffic inspection and egress control between VPCs, between VPCs and the Internet, and between VPCs and on-premises networks. VPCs can be in the same or different AWS accounts.

Prior to GWLB, network and security appliances were deployed behind a Transit Gateway using either VPC attachments or Virtual Private Network (VPN) attachments, which allowed for inspection of all traffic. As described in the Advanced Architectures with AWS Transit Gateway – AWS Online Tech Talks, when using VPC attachments, customers need mechanisms in place to detect virtual appliance failures and modify route tables. VPN attachments provide the capability to detect and handle failures, but Internet Protocol Security (IPsec) adds overhead and has bandwidth limits.

AWS Gateway Load Balancer is designed specifically to address these architectural challenges and to make deploying, scaling, and running virtual appliances easier. Since GWLB Endpoints are a routable target, you can route traffic moving to and from Transit Gateway to a fleet of virtual appliances that are configured as targets behind a GWLB.
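For readers who want to see what this looks like in practice, the GWLB, its GENEVE target group, and a GWLB Endpoint can be provisioned with a handful of CLI calls. The following is a minimal sketch, assuming hypothetical names, ARNs, and a TCP health check on port 80; replace these example values with your own:

# Create the Gateway Load Balancer in the Appliance Subnet
aws elbv2 create-load-balancer \
    --name appliance-gwlb \
    --type gateway \
    --subnets subnet-0752213d59EXAMPLE

# GWLB target groups use the GENEVE protocol on port 6081;
# health checks run on a separate port and protocol
aws elbv2 create-target-group \
    --name appliance-targets \
    --protocol GENEVE \
    --port 6081 \
    --vpc-id vpc-07e8ffd50f49335df \
    --health-check-protocol TCP \
    --health-check-port 80

# Forward all traffic arriving at the GWLB to the target group
aws elbv2 create-listener \
    --load-balancer-arn gwlb-arn \
    --default-actions Type=forward,TargetGroupArn=target-group-arn

# Expose the GWLB through an endpoint service, then create the
# GWLB Endpoint (GWLBE) that subnet route tables can point to
aws ec2 create-vpc-endpoint-service-configuration \
    --gateway-load-balancer-arns gwlb-arn \
    --no-acceptance-required

aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type GatewayLoadBalancer \
    --service-name gwlbe-service-name \
    --vpc-id vpc-07e8ffd50f49335df \
    --subnet-ids subnet-0752213d59EXAMPLE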


Figure 1: Centralized inspection architecture using AWS Gateway Load Balancer and AWS Transit Gateway

As shown in Figure 1:

  • Spoke VPCs that need their network traffic inspected are connected to the Transit Gateway using a VPC attachment. In each AZ, a Spoke VPC consists of two subnets: one for the application and one for the Transit Gateway attachment. These Spoke VPCs have a default route with the Transit Gateway as the next hop.
  • The Transit Gateway consists of two route tables (see the CLI sketch after this list):
    1. The Egress Route Table, associated with the Spoke VPCs. It has a default route with the Appliance VPC attachment as the next hop.
    2. The Transit Route Table, associated with the Appliance VPC. It has routes for the Spoke VPCs’ network addresses with the appropriate Spoke VPC attachment as the next hop.
  • GWLBE, GWLB, virtual appliances and NAT gateways are deployed in a centralized Appliance VPC which is connected to Transit Gateway using VPC attachment.
  • The Appliance VPC, in each AZ, consists of:
    1. A Transit Gateway Subnet, associated with the Transit Gateway Route Table, for the Transit Gateway attachment.
    2. An Appliance Subnet, associated with the Appliance Route Table, for the GWLBE, GWLB, and virtual appliances.
    3. A NAT Gateway Subnet, associated with the NAT Gateway Route Table, for the NAT gateway.
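
As a concrete illustration, the Transit Gateway route tables described above map to a few CLI calls. The following is a minimal sketch with hypothetical resource IDs and an assumed Spoke1 CIDR of 10.0.1.0/24; replace these example values with your own:

# Default route in the Egress Route Table toward the Appliance VPC attachment
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-egressEXAMPLE \
    --destination-cidr-block 0.0.0.0/0 \
    --transit-gateway-attachment-id tgw-attach-applianceEXAMPLE

# Route in the Transit Route Table back to the Spoke1 VPC
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-transitEXAMPLE \
    --destination-cidr-block 10.0.1.0/24 \
    --transit-gateway-attachment-id tgw-attach-spoke1EXAMPLE

# Associate the Spoke VPC attachment with the Egress Route Table
aws ec2 associate-transit-gateway-route-table \
    --transit-gateway-route-table-id tgw-rtb-egressEXAMPLE \
    --transit-gateway-attachment-id tgw-attach-spoke1EXAMPLE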

Life of a Packet from Spoke VPC to the Internet and Back:

Now that we understand the architecture, let’s walk through the life of a packet when an application in a Spoke VPC communicates with a resource on the Internet.

Application traffic flow to the Internet:

Figure 2: Application traffic flow to the Internet

  • Step 1: An application in the Spoke1 VPC wants to communicate with a resource on the Internet. The application uses the default route (0.0.0.0/0) in Spoke1 VPC Route Table A to send traffic to the Transit Gateway.
  • Step 2: Since the Spoke1 VPC is associated with the Egress Route Table, the Transit Gateway uses the default route in the Egress Route Table to send traffic to the Appliance VPC.
  • Step 3: In the Appliance VPC, Transit Gateway Subnet A uses the default route in Transit Gateway Route Table A to send traffic to GWLBE A (vpce-az-a-id) in the same Availability Zone (AZ).
  • Step 4: GWLBE A, using AWS PrivateLink, routes traffic to the GWLB. Traffic is routed securely over the Amazon network without any additional configuration.
  • Step 5: The GWLB uses the 5-tuple or 3-tuple of an IP packet to pick an appliance for the life of that flow. This provides session stickiness to a single appliance for the life of the flow, which is required for stateful appliances like firewalls.
  • Step 5(a): The GWLB encapsulates the original IP traffic with a GENEVE header and forwards it to the appliance over UDP port 6081.
    • This encapsulation allows all IP traffic to be delivered to the appliances for inspection, without specifying listeners for every port and protocol.
  • Step 6: The virtual appliance behind the GWLB (in this case, we assume a firewall or an IPS device) decapsulates the GENEVE header and decides whether to allow the traffic based on the configured security policy.
  • Step 7: The virtual appliance then re-encapsulates the traffic and forwards it to the GWLB.
  • Step 8: The GWLB, based on the GENEVE TLV, selects GWLBE A, removes the GENEVE header, and forwards the traffic to GWLBE A.
  • Step 9: GWLBE A uses the default route in Appliance Route Table A to route traffic to NAT Gateway A (nat-az-a-id).
  • Step 10: NAT Gateway A uses the default route in NAT Gateway Route Table A, performs source IP address translation, and routes traffic to the Internet Gateway (igw-id). From there, traffic egresses out to the Internet. (The route entries for this path are sketched below.)
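
The subnet routes used in Steps 3, 9, and 10 are ordinary VPC route table entries. Here is a minimal sketch, assuming hypothetical route table IDs alongside the endpoint, NAT gateway, and Internet Gateway IDs from the figure:

# Transit Gateway Route Table A: default route to GWLBE A in the same AZ
aws ec2 create-route \
    --route-table-id rtb-tgw-az-aEXAMPLE \
    --destination-cidr-block 0.0.0.0/0 \
    --vpc-endpoint-id vpce-az-a-id

# Appliance Route Table A: default route to NAT Gateway A
aws ec2 create-route \
    --route-table-id rtb-appliance-az-aEXAMPLE \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-az-a-id

# NAT Gateway Route Table A: default route to the Internet Gateway
aws ec2 create-route \
    --route-table-id rtb-natgw-az-aEXAMPLE \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-id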

Return traffic from the Internet to the application in the Spoke VPC:

Figure 3: Return traffic flow to the application in the Spoke VPC from the Internet

  • Step 1: When the return traffic arrives at the Internet Gateway, since NAT Gateway A had translated the source IP address, the Internet Gateway routes the traffic back to NAT Gateway A.
  • Step 2: NAT Gateway A uses the route for Spoke1 VPC’s network address in NAT Gateway Route Table A and sends traffic to GWLBE A.
  • Step 3: GWLBE A, using AWS PrivateLink, routes traffic to the GWLB securely over the Amazon network.
  • Step 4: Since this return packet is associated with an existing flow, the GWLB encapsulates the original IP traffic with a GENEVE header and forwards it to the virtual appliance it had chosen for this flow.
  • Step 5: The virtual appliance behind the GWLB decapsulates the GENEVE header, inspects the traffic, and, depending on the security policy configured, decides how to handle the traffic.
    • The addition of GENEVE headers doesn’t count towards the overall MTU limit of GWLB.
  • Step 6: Assuming the traffic is allowed, the virtual appliance then re-encapsulates it with GENEVE headers and forwards the traffic to the GWLB.
  • Step 7: The GWLB, based on the GENEVE TLV, selects GWLBE A, removes the GENEVE header, and forwards the traffic to GWLBE A.
  • Step 8: GWLBE A uses the route for Spoke1 VPC’s network address in Appliance Route Table A and routes the traffic to the Transit Gateway.
  • Step 9: Since the Appliance VPC is associated with the Transit Route Table, the Transit Gateway uses the route for Spoke1 VPC’s network address in the Transit Route Table to send traffic to the Spoke1 VPC.
  • Step 10: Finally, once the traffic is at the Spoke1 VPC, the destination of the packet is within the VPC CIDR range, and the local route is used to deliver the traffic to the application instance that sourced it. (The return-path route entries are sketched below.)
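
The return-path hops in Steps 2 and 8 map to VPC routes in the same way. A minimal sketch, again assuming hypothetical route table IDs and a Spoke1 CIDR of 10.0.1.0/24:

# NAT Gateway Route Table A: Spoke1 VPC’s network address back through GWLBE A
aws ec2 create-route \
    --route-table-id rtb-natgw-az-aEXAMPLE \
    --destination-cidr-block 10.0.1.0/24 \
    --vpc-endpoint-id vpce-az-a-id

# Appliance Route Table A: Spoke1 VPC’s network address back to the Transit Gateway
aws ec2 create-route \
    --route-table-id rtb-appliance-az-aEXAMPLE \
    --destination-cidr-block 10.0.1.0/24 \
    --transit-gateway-id tgw-0262a0e521EXAMPLE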

Maintaining flow symmetry using Transit Gateway appliance mode:

A previous post, Introducing AWS Gateway Load Balancer: Supported architecture patterns, discussed how Transit Gateway appliance mode addresses asymmetric routing issues that could cause problems for stateful devices like firewalls.

GWLB’s ability to use the 5-tuple or 3-tuple of an IP packet to select a specific appliance behind it for the life of that flow, combined with Transit Gateway appliance mode, provides session stickiness irrespective of the source and destination AZs. This includes the AZs that the Transit Gateway attachments and GWLB are deployed in, while still providing auto scaling and automatic health checks. In addition, this means firewalls no longer need to perform source IP address translation (SNAT) to maintain flow symmetry.

Let’s understand how this works when you have instances deployed in two different VPCs in two different AZs, and they are trying to communicate with each other through a Transit Gateway with attachments that are not in the same AZs.

Figure 4: AWS Transit Gateway appliance mode

As shown in Figure 4, traffic is sourced from an instance in AZ A of the Spoke1 VPC. The destination instance is in AZ C of the Spoke2 VPC, and a scalable fleet of virtual appliances is in AZ A and AZ B of the Appliance VPC. The Transit Gateway, using VPC attachments, is attached to the Spoke1 VPC through AZ A and AZ B, the Spoke2 VPC through AZ A and AZ C, and the Appliance VPC through AZ A and AZ B. You will notice that the route table configuration remains the same. However, we have AZ misalignment between the VPCs.

Without Transit Gateway appliance mode, when traffic is routed between VPC attachments, the Transit Gateway keeps the traffic in the same AZ it originated in until it reaches its destination. Traffic crosses AZs between attachments only if there is an AZ failure or if there are no subnets associated with a VPC attachment in that AZ. As a result, in Figure 4, traffic from AZ A arrives at the Transit Gateway in AZ A while being forwarded to a destination in AZ C, but, since there is no subnet associated with the Appliance VPC in AZ C, return traffic may arrive at the Transit Gateway attachment in AZ B.

To ensure flow symmetry, Transit Gateway appliance mode is enabled on the Appliance VPC’s attachment. Transit Gateway appliance mode can be set up during attachment creation or by modifying an existing attachment. Replace the example resource IDs with appropriate values:

# Enable appliance mode when creating the attachment
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0262a0e521EXAMPLE \
    --vpc-id vpc-07e8ffd50f49335df \
    --subnet-ids subnet-0752213d59EXAMPLE \
    --options ApplianceModeSupport=enable

# Enable appliance mode on an existing attachment
aws ec2 modify-transit-gateway-vpc-attachment \
    --transit-gateway-attachment-id tgw-attach-0253EXAMPLE \
    --options ApplianceModeSupport=enable

When appliance mode is enabled, the Transit Gateway uses the 4-tuple of an IP packet to select a single Transit Gateway ENI in the Appliance VPC for the life of a flow. Once at the Transit Gateway ENI, traffic is routed to the GWLBE and then on to the GWLB in the same AZ, which provides stickiness to the flows as described above. For return traffic, the Transit Gateway ensures symmetry by using the same selected Transit Gateway ENI. This ensures the bi-directional flow is processed by the same appliance behind the GWLB, irrespective of the AZs of all three entities: source, destination, and appliances. This also removes the extraneous effort customers previously put into aligning their Transit Gateway deployments across AZs, whether across accounts or in cases where Spoke VPCs are deployed only in specific AZs.
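
To confirm that appliance mode is active on an attachment, you can inspect its options. A quick check, assuming the example attachment ID used above:

# Returns "enable" when appliance mode is on
aws ec2 describe-transit-gateway-vpc-attachments \
    --transit-gateway-attachment-ids tgw-attach-0253EXAMPLE \
    --query 'TransitGatewayVpcAttachments[0].Options.ApplianceModeSupport'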

Customers have used Transit Gateway to connect multiple VPCs with each other, as well as with on-premises networks, using a single centralized gateway. They have used Transit Gateway route tables to achieve the desired traffic segmentation. With the addition of GWLBE as a routable target for the Transit Gateway attachment in the subnet route table, and with GWLB handling the scaling, customers now have a better mechanism to easily scale virtual appliances behind a Transit Gateway deployment.

Code samples:

We have created a GitHub repository with code examples that can help accelerate your development with AWS Gateway Load Balancer. The repository has samples for AWS CloudFormation, Python (Boto3), Go, and the CLI. Visit this page to launch the solution described in this post using AWS CloudFormation.

Conclusion:

In this post, we took a closer look at a centralized inspection architecture. We explained in detail how to integrate virtual appliances, Gateway Load Balancer, and Gateway Load Balancer Endpoints with Transit Gateway. While the write-up walked you through the life of a packet from a Spoke VPC to the Internet and back, the architecture can easily be extended to create patterns to inspect traffic between VPCs and between VPCs and on-premises resources. The post also discussed configuring Transit Gateway appliance mode to maintain flow symmetry.

Gateway Load Balancer combined with Gateway Load Balancer Endpoints provides customers with a highly available next hop for Transit Gateway VPC attachments in the Appliance VPC.

Gateway Load Balancer’s ability to check appliance health, use Auto Scaling groups as targets, and remain transparent to network traffic makes it easier to centralize and scale fleets of firewalls and other virtual appliances. As a result, customers no longer need to create complex configurations and scaling mechanisms, or rely on manual health checks.

To learn more, check out the Gateway Load Balancer page, information on partner solutions, and the documentation. You might also want to check out this video demo that walks through five steps of setup and testing.

Pratik R. Mankad

Pratik is a Partner Solutions Architect at AWS with a background in network engineering. He is passionate about network technologies and loves to innovate to help solve customer problems. He enjoys architecting solutions and providing technical guidance to help partners and customers achieve their business objectives.


Sameer Kumar Vasanthapuram

Sameer is a Partner Solutions Architect at AWS. He works with security partners to build solutions and capabilities that help customers as they move to the cloud. Prior to AWS, Sameer designed secure managed networks for carriers and MSPs, implemented content delivery mechanisms for media companies, and helped build and operate distributed networks for large enterprises.