Integrating AWS Transit Gateway with AWS PrivateLink and Amazon Route 53 Resolver
I want to take some time to dive more deeply into a use case outlined in NET301 Best Practices for AWS PrivateLink. The use case involves using AWS Transit Gateway, along with Amazon Route 53 Resolver, to share AWS PrivateLink interface endpoints between multiple connected Amazon Virtual Private Clouds (VPCs) and an on-premises environment. We’ve seen a lot of interest from customers in this architecture. It can greatly reduce the number of VPC endpoints, simplify VPC endpoint deployment, and lower costs when you deploy at scale.
Architecture overview
For VPC endpoints that you use to connect to endpoint services (services you create using AWS PrivateLink behind a Network Load Balancer), the architecture is fairly straightforward. Because the DNS entries for the VPC endpoint are public, you just need layer-three connectivity between a VPC and its destination using VPC peering, a transit gateway, or a VPN. The architecture becomes more complex when you want to share endpoints for AWS services and AWS PrivateLink SaaS offerings.
When you create a VPC endpoint to an AWS service or an AWS PrivateLink SaaS offering, you can enable Private DNS. When enabled, this setting creates an AWS-managed Route 53 private hosted zone (PHZ) for you. The managed PHZ works great for resolving the service DNS name within the VPC; however, it does not work outside of the VPC. This is where PHZ sharing and Route 53 Resolver come into play to give us unified name resolution for shared VPC endpoints. We’ll now dig into how you can make this name resolution work from VPC to VPC and from on premises.
Custom PHZ
In both the VPC-to-VPC and on-premises scenarios our first step is to disable private DNS on the VPC endpoint. From the VPC console, we’ll choose Endpoints and select the endpoint. For Enable Private DNS Name, we’ll clear the check box.
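If you prefer to script this step, here is a minimal sketch using boto3; the endpoint ID is a placeholder for your own interface endpoint.

```python
import boto3

# Hypothetical ID of the existing interface endpoint.
ENDPOINT_ID = "vpce-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name="us-east-1")

# Clear the Private DNS setting so the AWS-managed PHZ no longer
# answers for the service name inside this VPC.
ec2.modify_vpc_endpoint(
    VpcEndpointId=ENDPOINT_ID,
    PrivateDnsEnabled=False,
)
```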
After we disable Private DNS for the VPC endpoint, we create a Route 53 PHZ with the full service endpoint name. In this example I’m creating an endpoint for AWS CodeBuild in us-east-1, so the endpoint is codebuild.us-east-1.amazonaws.com. For the full list of endpoint names, see AWS Regions and Endpoints.
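The zone can also be created programmatically. The following sketch uses a placeholder ID for the VPC that hosts the interface endpoint; associating a VPC at creation time is what makes the zone private.

```python
import time
import boto3

route53 = boto3.client("route53")

VPC_ID = "vpc-0123456789abcdef0"  # placeholder: VPC that hosts the endpoint
ZONE_NAME = "codebuild.us-east-1.amazonaws.com"

response = route53.create_hosted_zone(
    Name=ZONE_NAME,
    # Associating a VPC at creation time makes this a private hosted zone.
    VPC={"VPCRegion": "us-east-1", "VPCId": VPC_ID},
    CallerReference=str(time.time()),
    HostedZoneConfig={
        "Comment": "Custom PHZ for the shared CodeBuild endpoint",
        "PrivateZone": True,
    },
)
hosted_zone_id = response["HostedZone"]["Id"]
```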
We then create an alias record to the regional VPC endpoint (the record that has the region name in it), as shown in the following screenshot.
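To create the same alias record from code, you can read the endpoint’s DNS entries and point an alias at the regional one. This sketch assumes the regional entry (the one containing the Region name) is listed first, which is worth verifying for your endpoint; the zone and endpoint IDs are placeholders from the previous steps.

```python
import boto3

route53 = boto3.client("route53")
ec2 = boto3.client("ec2", region_name="us-east-1")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"   # placeholder PHZ ID
ENDPOINT_ID = "vpce-0123456789abcdef0"  # placeholder endpoint ID

# Look up the regional DNS name and hosted zone ID of the interface endpoint.
endpoint = ec2.describe_vpc_endpoints(VpcEndpointIds=[ENDPOINT_ID])["VpcEndpoints"][0]
regional_entry = endpoint["DnsEntries"][0]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "codebuild.us-east-1.amazonaws.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": regional_entry["HostedZoneId"],
                        "DNSName": regional_entry["DnsName"],
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)
```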
VPC to VPC
To share a VPC endpoint with other VPCs, those VPCs need layer-three connectivity to the endpoint’s VPC through a transit gateway or VPC peering. Once the VPCs have layer-three connectivity to the VPC endpoint, the PHZ we created for the service needs to be shared with them. For VPCs within the same account, this can be done directly through the Route 53 console. The screenshot below shows this configuration.
If the VPCs reside in different accounts, the PHZ can be shared through the AWS CLI or SDK. After this sharing is set up, the associated VPCs will resolve the AWS service name or the AWS PrivateLink SaaS name to the VPC endpoint’s private IP addresses, as depicted in the diagram below.
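Here’s a minimal sketch of the cross-account flow with boto3. The profile names, hosted zone ID, and VPC ID are placeholders: the account that owns the PHZ first authorizes the association, and the account that owns the VPC then completes it.

```python
import boto3

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"  # placeholder: PHZ owned by account A
REMOTE_VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0fedcba9876543210"}  # VPC owned by account B

# Account A (owns the PHZ): authorize the remote VPC to associate with the zone.
account_a = boto3.Session(profile_name="account-a").client("route53")
account_a.create_vpc_association_authorization(
    HostedZoneId=HOSTED_ZONE_ID,
    VPC=REMOTE_VPC,
)

# Account B (owns the VPC): complete the association.
account_b = boto3.Session(profile_name="account-b").client("route53")
account_b.associate_vpc_with_hosted_zone(
    HostedZoneId=HOSTED_ZONE_ID,
    VPC=REMOTE_VPC,
)
```

Once the association is in place, the pending authorization in the zone-owning account can be deleted.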
On premises
To share a VPC endpoint with an on-premises environment, we use Route 53 Resolver to facilitate hybrid DNS. We again need layer-three connectivity, this time through AWS VPN or AWS Direct Connect to a transit gateway (note that this could also be a virtual private gateway). Then we create an inbound Route 53 Resolver endpoint in the same VPC as the VPC endpoint. Because the VPC endpoint and the inbound endpoint reside in the same VPC, the private IP addresses of the VPC endpoint are returned for the service. Our final step is to create a conditional forwarder in the on-premises DNS server for the service name that points to the inbound Route 53 Resolver endpoint IP addresses. The following diagram depicts this architecture.
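Creating the inbound endpoint can be scripted as well. In this sketch, the security group and subnet IDs are placeholders; the two subnets sit in different Availability Zones of the VPC that hosts the interface endpoint, and the IP addresses that Route 53 Resolver assigns to them are the targets for your on-premises conditional forwarder.

```python
import uuid
import boto3

resolver = boto3.client("route53resolver", region_name="us-east-1")

SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder: allows DNS (UDP/TCP 53) from on premises
SUBNET_IDS = ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]  # placeholders, two AZs

response = resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name="shared-vpc-endpoint-inbound",
    SecurityGroupIds=[SECURITY_GROUP_ID],
    Direction="INBOUND",
    IpAddresses=[{"SubnetId": subnet_id} for subnet_id in SUBNET_IDS],
)

# The endpoint ID can be used to look up the assigned IP addresses later.
print(response["ResolverEndpoint"]["Id"])
```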
Design Considerations
As a best practice, you should try to avoid any single points of failure. VPC endpoints should use two or more Availability Zones (AZs) for high availability. The same is true for the Route 53 Resolver inbound endpoint: it should also use two or more AZs.
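For example, if you create the interface endpoint programmatically, you can spread it across two AZs (and leave Private DNS disabled) in a single call. The IDs and service name below are placeholders for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"                                       # placeholder
SUBNET_IDS = ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]  # one subnet per AZ
SECURITY_GROUP_ID = "sg-0123456789abcdef0"                             # placeholder

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.codebuild",
    SubnetIds=SUBNET_IDS,                  # two AZs for high availability
    SecurityGroupIds=[SECURITY_GROUP_ID],
    PrivateDnsEnabled=False,               # DNS is handled by the custom PHZ
)
```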
We strongly recommend continuing to use the .2 resolver (AmazonProvidedDNS) as the DNS resolver for all Amazon EC2 instances. The .2 VPC resolver provides the highest level of availability and scalability with the lowest latency.
As with any architecture, there are limits and pitfalls to keep in mind.
Limits to Keep in Mind
- 10,000 DNS queries per second per Route 53 inbound endpoint elastic network interface
  - Note that this can scale by adding additional elastic network interfaces (one per AZ) to an endpoint
- 100 associated VPCs per PHZ (you can request this limit be increased)
- 20 VPC endpoints per VPC (you can increase this to 40)
- 10 Gbps throughput per VPC endpoint elastic network interface, although this can burst higher
Pitfalls to Avoid
- Pointing Route 53 Resolver rules at the “.2 resolver”
  - The “.2 resolver” can only handle 1024 queries per second
- Using Route 53 Resolver endpoints to forward DNS queries between VPCs
- Using Route 53 Resolver for VPC-to-PHZ name resolution
- Pointing Route 53 inbound and outbound endpoints at each other
- Configuring EC2 instances to use inbound endpoints for DNS resolution
Conclusion
In this post, I’ve laid out a scalable architecture for sharing interface VPC endpoints between VPCs and on-premises environments. Hopefully you have found this post informative. I look forward to any comments, and happy architecting on AWS!
Related resources:
- Blog: Using AWS Client VPN to securely access AWS and on-premises resources
- Learn about AWS VPN services
- Watch re:Invent 2019: Connectivity to AWS and hybrid AWS network architectures