Integrating sub-1 Gbps hosted connections with AWS Transit Gateway
AWS Transit Gateway enables you to connect multiple VPCs and VPN connections, and scales up to 5,000 attachments. It simplifies management and reduces the operational cost of networks within your AWS environments and of connectivity from on-premises networks.
AWS has added support for integrating AWS Transit Gateway with AWS Direct Connect gateways. AWS Direct Connect gateway integration requires a new type of virtual interface called a transit virtual interface. Transit virtual interfaces are available only over dedicated connections or hosted connections with speeds of 1 Gbps or greater. They are not available for hosted AWS Direct Connect connections with speeds of 500 Mbps or below, also known as sub-1 Gbps hosted AWS Direct Connect connections.
In this post, I explain how to integrate a sub-1 Gbps hosted Direct Connect connection with AWS Transit Gateway without a transit virtual interface by using the following methods:
- A site-to-site VPN over a public virtual interface
- A private virtual interface for a sub-1 Gbps hosted Direct Connect connection provisioned over an MPLS L3 VPN
Readers of this blog post should be familiar with Border Gateway Protocol (BGP) and the following AWS services:
- Amazon EC2
- Amazon VPC
- AWS Managed VPN
- AWS Direct Connect
- AWS Transit VPC
- AWS Marketplace Amazon Machine Images (AMI) for Network Infrastructure
For this walkthrough, you should have the following:
- An AWS account
- Existing AWS Direct Connect Hosted Connection
- EC2 router AMI of your choice
Integrating a sub-1 Gbps hosted Direct Connect connection and AWS Transit Gateway using a public virtual interface
This method is similar to attaching a VPN to AWS Transit Gateway. In this case, you establish a VPN to AWS Transit Gateway over AWS Direct Connect. The following diagram depicts the scenario and the solution.
Figure 1: Connecting to transit gateway over a public VIF
Because this is a hosted connection, you don’t have to create a connection in your AWS Direct Connect console. Your AWS Direct Connect Partner creates and delegates the connection to your account.
- Create a public virtual interface on the hosted connection. Enter your customer gateway device's public IP address and the on-premises public network prefixes you want to advertise. Over a public virtual interface, you automatically receive all AWS public IP prefixes, including the site-to-site VPN endpoint addresses. I recommend applying filters on the edge router connecting to AWS Direct Connect so that it accepts only the prefixes of the targeted endpoints, along with the necessary ports. In this case, use only the two public addresses of the AWS Transit Gateway VPN endpoints and the customer gateway.
- Create a new VPN attachment for AWS Transit Gateway. Use the same customer gateway public IP address that you used in the previous step. Configure the customer gateway to use Border Gateway Protocol (BGP) with your Autonomous System Number (ASN).
- Configure your router (customer gateway) using the sample configuration available in the AWS Management Console. Download it by selecting the vendor, platform, and software version that correspond to your customer gateway device or software in the Download Configuration dialog box of the Site-to-Site VPN Connections section.
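Steps 2 and 3 above map to two EC2 API calls. The following is a minimal sketch of the parameters as they would be passed to boto3 (`ec2.create_customer_gateway` and `ec2.create_vpn_connection`); the IP address, ASN, and resource IDs shown are hypothetical examples.

```python
# Sketch of the API parameters for creating the customer gateway and the
# TGW VPN attachment. All concrete values below are placeholders.

def customer_gateway_params(public_ip: str, bgp_asn: int) -> dict:
    """Parameters for ec2.create_customer_gateway(**params)."""
    return {
        "Type": "ipsec.1",       # site-to-site VPN type
        "PublicIp": public_ip,   # on-premises router's public IP, reachable over the public VIF
        "BgpAsn": bgp_asn,       # on-premises BGP ASN
    }

def tgw_vpn_attachment_params(customer_gateway_id: str, transit_gateway_id: str) -> dict:
    """Parameters for ec2.create_vpn_connection(**params), attached to a transit gateway."""
    return {
        "Type": "ipsec.1",
        "CustomerGatewayId": customer_gateway_id,
        "TransitGatewayId": transit_gateway_id,   # attach the VPN to the TGW, not a VGW
        "Options": {"StaticRoutesOnly": False},   # dynamic (BGP) routing
    }

cgw = customer_gateway_params("203.0.113.10", 65001)
vpn = tgw_vpn_attachment_params("cgw-0123456789abcdef0", "tgw-0123456789abcdef0")
```

Setting `TransitGatewayId` (rather than `VpnGatewayId`) is what makes the VPN connection a transit gateway attachment.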
This architecture works well for point-to-point connections between AWS and the customer’s on-premises network. However, it proves suboptimal for scenarios where the customer’s network consists of multiple sites connected over an MPLS network in a fully meshed manner.
MPLS L3 VPN provides the flexibility of connecting multiple sites privately. To integrate an MPLS network with AWS, you must use a Provider Edge (PE) router managed by the MPLS service provider. Due to the scale and multi-tenant nature of these PE routers, VPN tunnels are generally not configured on them, as that increases complexity and poses operational risks to this layer.
Integrating AWS Transit Gateway with MPLS L3 VPNs
You can use dynamic or static routing for integrating MPLS L3 VPNs to AWS Transit Gateway. Both methods are explained below.
Option 1: Using dynamic routing protocol between AWS Transit Gateway and MPLS L3 VPN
The following diagram depicts the architecture used to integrate AWS Transit Gateway to a sub-1 Gbps hosted Direct Connect connection using a dynamic routing protocol.
Figure 2: Connecting to AWS Transit Gateway over a private virtual interface via L3 MPLS using BGP
The following steps provide end-to-end connectivity between on-premises networks and VPCs behind an AWS Transit Gateway. This architecture also manages the failover dynamically using BGP. Multiple sets of BGP peering enable prefix exchanges between VPCs and on-premises networks. The following list outlines the BGP peerings:
- Between the EC2 instances running the router AMI and AWS Transit Gateway. These peerings advertise VPC prefixes to the edge transit VPC, and vice versa.
- Between the PE routers and the floating virtual private gateway. These peerings advertise on-premises prefixes to AWS, and vice versa.
- Between the floating virtual private gateway and the EC2 instances running the router Amazon Machine Image (AMI). These peerings advertise the prefixes received from AWS Transit Gateway towards on-premises by way of the edge transit VPC, and vice versa.
In the procedure that follows, you don’t have to create a connection in your AWS Direct Connect console for this hosted connection. The AWS Direct Connect Partner creates and delegates the connection to your account.
- Set up a transit gateway and attach a few VPCs to it. For steps, see Use an AWS Transit Gateway to Simplify Your Network Architecture on the AWS News Blog.
- Create a virtual private gateway (VGW) and do not attach it to any VPC. This unattached gateway serves as a floating virtual private gateway.
- Create a private virtual interface. During creation, choose the floating virtual private gateway created in step 2. At this point, if you configured your PE router correctly, the BGP peering comes up automatically between the PE router and the floating virtual private gateway. This peering exchanges prefixes between your on-premises network and AWS. The following diagram illustrates these steps in detail.
Figure 3: Connecting a private virtual interface to a floating virtual private gateway
- Create a transit VPC with two public subnets in two different Availability Zones. Attach an internet gateway to the VPC and deploy an EC2 instance running a router AMI of your choice from AWS Marketplace in each subnet. This is your edge transit VPC. Add rules to the security group of these EC2 instances to allow ISAKMP (UDP 500), NAT Traversal (UDP 4500), and ESP (IP protocol 50) from the public IP addresses of the VPN gateway and the AWS Transit Gateway VPN endpoints.
- Create two customer gateways using the Elastic IP addresses of the EC2 instances launched in the previous step. Follow a process similar to the steps found in Create a Customer Gateway.
- Create a dynamic route-based VPN between the floating virtual private gateway and the customer gateways you created in the previous step. For steps, see How do I create a secure connection between my office network and Amazon Virtual Private Cloud?
- Create a VPN attachment for the transit gateway using the customer gateways created in the previous step. This VPN must be a dynamic route-based VPN. For steps, see Transit Gateway VPN Attachments. The following diagram depicts these steps in detail.
Figure 4: Connecting floating virtual private gateway to AWS Transit Gateway using BGP via Edge Transit VPC
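The security group rules from the edge transit VPC step can be expressed as boto3-style `IpPermissions` for `ec2.authorize_security_group_ingress`. A minimal sketch, assuming hypothetical peer addresses (in practice, the VPN gateway and AWS Transit Gateway VPN endpoint public IPs):

```python
# Ingress rules for the EC2 router instances: allow ISAKMP, NAT-Traversal,
# and ESP from the VPN peers. Peer CIDRs here are placeholders.

def vpn_ingress_rules(peer_cidrs: list) -> list:
    """Build IpPermissions for ec2.authorize_security_group_ingress."""
    ranges = [{"CidrIp": cidr} for cidr in peer_cidrs]
    return [
        {"IpProtocol": "udp", "FromPort": 500,  "ToPort": 500,  "IpRanges": ranges},  # ISAKMP
        {"IpProtocol": "udp", "FromPort": 4500, "ToPort": 4500, "IpRanges": ranges},  # NAT-T
        {"IpProtocol": "50", "IpRanges": ranges},  # ESP (IP protocol 50, no ports)
    ]

rules = vpn_ingress_rules(["198.51.100.1/32", "198.51.100.2/32"])
```

Note that ESP is addressed by its IP protocol number, not by port, so that rule carries no port range.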
At this point, you have end-to-end connectivity between VPCs and on-premises networks.
Option 2: Using static routing between transit gateway and MPLS L3 VPN
This option uses static routing between the EC2 instances running a router AMI in the edge transit VPC and the transit gateway, instead of dynamic routing. It does not use VPN tunnels between those EC2 instances and AWS Transit Gateway, allowing you to reduce cost by avoiding charges for those VPN connections. This option still uses VPN tunnels between the floating virtual private gateway and the EC2 instances running a router AMI. The following diagram depicts this scenario and the steps to deploy it.
Figure 5: Connecting AWS Transit Gateway over a private VIF via L3 MPLS using static routes in transit gateway
To deploy this architecture, follow steps 1 through 6 under Option 1: Using dynamic routing protocol between transit gateway and MPLS L3 VPN, and then follow these additional steps.
- Attach the edge transit VPC to AWS Transit Gateway. For steps, see Transit Gateway Attachments to a VPC.
- Modify the edge transit VPC route tables to use the elastic network interface (ENI) of the EC2 instances running the router AMI as the target for on-premises prefixes. Also, for the VPC IP address ranges behind AWS Transit Gateway, set the target as the transit gateway endpoint in the respective Availability Zone.
- In the route tables of AWS Transit Gateway, add static routes that point to the edge transit VPC attachment for the destination prefixes of on-premises networks. The following diagram depicts these steps in detail.
Figure 6: Connecting floating virtual private gateway to AWS Transit Gateway via Edge Transit VPC using static routes
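Steps 2 and 3 above translate into two kinds of route entries. A minimal sketch of the boto3 parameters (for `ec2.create_route` and `ec2.create_transit_gateway_route`), with hypothetical resource IDs and prefixes:

```python
# Static-route parameters for Option 2. All IDs and prefixes are placeholders.

def vpc_route_to_eni(route_table_id: str, onprem_prefix: str, router_eni_id: str) -> dict:
    """Parameters for ec2.create_route: send on-premises traffic to the router ENI."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": onprem_prefix,
        "NetworkInterfaceId": router_eni_id,
    }

def tgw_static_route(tgw_route_table_id: str, onprem_prefix: str, vpc_attachment_id: str) -> dict:
    """Parameters for ec2.create_transit_gateway_route: point on-premises
    prefixes at the edge transit VPC attachment."""
    return {
        "TransitGatewayRouteTableId": tgw_route_table_id,
        "DestinationCidrBlock": onprem_prefix,
        "TransitGatewayAttachmentId": vpc_attachment_id,
    }

vpc_route = vpc_route_to_eni("rtb-0123456789abcdef0", "10.10.0.0/16", "eni-0123456789abcdef0")
tgw_route = tgw_static_route("tgw-rtb-0123456789abcdef0", "10.10.0.0/16", "tgw-attach-0123456789abcdef0")
```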
In this option, you must manage failover manually, or use automation to detect a failover and then switch the paths by modifying the route tables programmatically.
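As an illustration of such automation, the following sketch selects the route target based on a health-check result; the health check itself (here just a boolean) is an assumption, and in practice the returned parameters would be applied with boto3's `ec2.replace_route`.

```python
# Failover sketch for Option 2: repoint the VPC route at the standby router's
# ENI when the primary fails its health check. All resource IDs are placeholders.

def failover_route_params(route_table_id: str, prefix: str,
                          primary_healthy: bool,
                          primary_eni: str, standby_eni: str) -> dict:
    """Build ec2.replace_route parameters targeting the healthy router's ENI."""
    target = primary_eni if primary_healthy else standby_eni
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": prefix,
        "NetworkInterfaceId": target,
    }

# Primary up: route stays on the primary ENI.
steady = failover_route_params("rtb-0abc", "10.10.0.0/16", True, "eni-primary", "eni-standby")
# Primary down: route moves to the standby ENI.
failed = failover_route_params("rtb-0abc", "10.10.0.0/16", False, "eni-primary", "eni-standby")
```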
In this post, I described ways of integrating AWS Transit Gateway with your on-premises networks over a sub-1 Gbps hosted AWS Direct Connect connection, using both a point-to-point public virtual interface and an MPLS L3 VPN.
I hope this blog post helps you use AWS Transit Gateway with a sub-1 Gbps hosted AWS Direct Connect connection and simplify connectivity with your AWS environment. If you have any questions or feedback, please leave a comment.
- Blog: Using AWS Client VPN to securely access AWS and on-premises resources
- Learn about AWS VPN services
- Watch re:Invent 2019: Connectivity to AWS and hybrid AWS network architectures