Networking & Content Delivery

Streamline DNS management for AWS PrivateLink deployment with Amazon Route 53 Profiles

by Ankush Goyal, Kunj Thacker, and Salman Ahmed

Introduction

For large enterprises adopting AWS PrivateLink interface endpoints, the key challenges revolve around streamlining deployment, minimizing the number of endpoints, and optimizing costs at scale. A proven approach to address these challenges is using AWS Transit Gateway alongside Amazon Route 53 Resolver, enabling the efficient sharing of AWS PrivateLink interface endpoints across multiple Amazon Virtual Private Clouds (Amazon VPCs) and on-premises environments. This approach allows enterprises to minimize the number of required interface endpoints, resulting in cost savings and lower operational overhead.

PrivateLink facilitates private connectivity between your VPC and supported AWS services, software as a service (SaaS) applications, or third-party services hosted on AWS or on-premises. PrivateLink uses VPC Interface Endpoints, which establish secure connections between your VPC and the target service. However, as organizations expand and introduce more VPCs and accounts, deploying these Interface Endpoints across thousands of VPCs, especially in multi-account environments, can become increasingly complex and costly.

Amazon Route 53 Profiles provides a new opportunity to revisit this architecture and enhance it further. Integrating Route 53 Profiles allows you to simplify and centralize DNS management across a large number of VPCs spanning multiple AWS accounts, making your PrivateLink deployment more scalable.

In this post, we show you how PrivateLink enables secure, private connectivity between AWS services and your VPCs, whether those VPCs are within the same account, spread across multiple accounts, or integrated with on-premises environments. Whether you’re scaling your infrastructure or optimizing your architecture, this post provides a practical, step-by-step guide to mastering PrivateLink deployments.

Solution overview

Adopting a centralized deployment of PrivateLink in a hub and spoke model addresses the challenges associated with scaling PrivateLink across numerous VPCs and accounts. In the setup shown in Figure 1, PrivateLink VPC endpoints are centralized and deployed within a Shared Services VPC. Spoke VPCs in Dev and Prod accounts can access these centralized endpoints by connecting to the Shared Services VPC through a Transit Gateway or AWS Cloud WAN. An on-premises data center can access these centralized PrivateLink VPC endpoints by establishing hybrid connectivity with the AWS environment through AWS Direct Connect or AWS Site-to-Site VPN.


Figure 1: Centralized VPC endpoint in a Shared Services VPC

DNS management is a critical component when implementing a centralized deployment model. When creating a VPC interface endpoint for any PrivateLink enabled service, you have the option to enable private DNS by choosing the Enable DNS name option during the endpoint setup process. Enabling this feature creates an AWS-managed private hosted zone (PHZ), which resolves the public DNS name of the AWS service to the private IP address of the VPC endpoint. However, this managed PHZ is only accessible within the hub VPC that hosts the VPC endpoint and can’t be shared with other spoke VPCs. To overcome this, we use a custom PHZ, which we discuss in the following section.

Custom PHZ for PrivateLink DNS resolution

For VPC-to-VPC and on-premises connectivity, we start by disabling private DNS for the VPC endpoint.

  1. In the VPC console, choose Endpoints and choose the endpoint.
  2. Choose Actions and then choose Modify private DNS name.
  3. Under Modify private DNS name settings, clear Enable for this endpoint.
  4. Choose Save changes.


Figure 2: Modify private DNS name
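If you prefer to script this change, the same setting can be toggled through the Amazon EC2 API. The following is a minimal boto3 sketch; the endpoint ID and Region are placeholders.

import boto3

# Disable private DNS on an existing interface endpoint (equivalent to
# clearing "Enable for this endpoint" in the console). Placeholder values.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_vpc_endpoint(
    VpcEndpointId="vpce-0123456789abcdef0",  # hypothetical endpoint ID
    PrivateDnsEnabled=False,
)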

After you have disabled private DNS names, you can create a Route 53 PHZ. Name the zone after the service’s public DNS name and configure an alias record that points to the DNS name of the AWS service’s VPC endpoint.


Figure 3: Create Route 53 alias record

In this example, we are creating an endpoint for AWS Lambda in the us-east-1 AWS Region, thus the endpoint ends with lambda.us-east-1.vpce.amazonaws.com.
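The zone and alias record can also be created programmatically. The following is a minimal boto3 sketch for the Lambda example above; the endpoint ID and hub VPC ID are placeholders.

import uuid
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
route53 = boto3.client("route53")

# Look up the endpoint's Regional DNS entry (DnsName plus its hosted zone ID).
endpoint = ec2.describe_vpc_endpoints(
    VpcEndpointIds=["vpce-0123456789abcdef0"]  # hypothetical Lambda endpoint
)["VpcEndpoints"][0]
regional_dns = endpoint["DnsEntries"][0]

# Create the PHZ named after the service's public DNS name and associate it
# with the hub (Shared Services) VPC.
zone = route53.create_hosted_zone(
    Name="lambda.us-east-1.amazonaws.com",
    CallerReference=str(uuid.uuid4()),
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0aaaabbbbccccdddd"},  # hub VPC
    HostedZoneConfig={"PrivateZone": True, "Comment": "PrivateLink alias for Lambda"},
)

# Add an alias record at the zone apex that targets the endpoint's DNS name.
route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "lambda.us-east-1.amazonaws.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": regional_dns["HostedZoneId"],
                    "DNSName": regional_dns["DnsName"],
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)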

After this custom PHZ is created and associated with the hub VPC, you can associate it with other spoke VPCs. This approach makes sure that all spoke VPCs can resolve the AWS service’s public DNS name to the private IP address of the endpoint, enabling seamless connectivity across multiple VPCs.

Typically, to enable DNS resolution for VPC Endpoints across multiple VPCs, you would need to manually associate the PHZ for each VPC Endpoint with every spoke VPC. If both the hub and spoke VPCs reside within the same AWS account, then this association can be performed through the AWS Management Console. However, if the VPCs are in different accounts, then you would need to use the AWS Command Line Interface (AWS CLI) or SDK to complete the association. This process is described in the Route 53 developer guide.
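For reference, the following is a minimal boto3 sketch of that cross-account association; the hosted zone ID, spoke VPC ID, and the named credential profiles ("hub" and "spoke") are placeholders.

import boto3

ZONE_ID = "Z0123456789EXAMPLE"  # hypothetical PHZ ID owned by the hub account
SPOKE_VPC = {"VPCRegion": "us-east-1", "VPCId": "vpc-0spoke1111111111"}

# Step 1 (hub account): authorize the spoke VPC to associate with the zone.
hub_route53 = boto3.Session(profile_name="hub").client("route53")
hub_route53.create_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=SPOKE_VPC)

# Step 2 (spoke account): complete the association from the VPC side.
spoke_route53 = boto3.Session(profile_name="spoke").client("route53")
spoke_route53.associate_vpc_with_hosted_zone(HostedZoneId=ZONE_ID, VPC=SPOKE_VPC)

# Step 3 (hub account, optional): remove the authorization once the association exists.
hub_route53.delete_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=SPOKE_VPC)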


Figure 4: Centralized VPC endpoint in a Shared Services VPC using cross account PHZ association

To streamline this process and make it more scalable, you can use Route 53 Profiles. In the following section, we explore how Route 53 Profiles can be used to enhance the existing solution.

VPC to VPC PrivateLink DNS resolution using Route 53 Profiles

The architecture diagram in Figure 5 shows a single-Region workload. We have deployed a Dev VPC in a Dev account and a Prod VPC in a Prod account. As stated previously, these VPCs are connected using either Transit Gateway or AWS Cloud WAN. This architecture lets Amazon Elastic Compute Cloud (Amazon EC2) instances residing in either the Dev VPC or the Prod VPC privately access Amazon Kinesis and Lambda through the VPC endpoints in the Shared Services VPC.


Figure 5: Centralized VPC endpoint in a Shared Services VPC using Route 53 Profiles for DNS resolution

The following steps walk through the deployment and show how Route 53 Profiles streamline it; a scripted sketch of steps 3 through 5 follows the list.

  1. In the Shared Services VPC, we have created the VPC Interface endpoints to securely access Kinesis and Lambda using PrivateLink.
  2. We configure a PHZ for each of these endpoints.
  3. We create a Route 53 Profile in the Shared Services Account. After the profile is created, we must associate it with the Shared Services VPC.
  4. We associate both the Kinesis PHZ and the Lambda PHZ with this Route 53 Profile.
  5. To extend this newly created Route 53 Profile to the Dev and Prod accounts, we share the profile with both accounts using AWS Resource Access Manager (AWS RAM).
  6. After the profile has been shared, you can navigate to the Dev and Prod accounts and associate the Route 53 Profile with each VPC in those respective accounts.
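The following is a minimal boto3 sketch of steps 3 through 5, run with Shared Services account credentials. The Profile name, hosted zone IDs, VPC ID, and account IDs are placeholders, and we assume the Route 53 Profiles API (the route53profiles client) is available in your SDK version.

import uuid
import boto3

profiles = boto3.client("route53profiles", region_name="us-east-1")
ram = boto3.client("ram", region_name="us-east-1")

# Step 3: create the Profile and associate it with the Shared Services VPC.
profile = profiles.create_profile(
    ClientToken=str(uuid.uuid4()),
    Name="shared-endpoints-profile",
)["Profile"]

profiles.associate_profile(
    Name="shared-services-vpc",
    ProfileId=profile["Id"],
    ResourceId="vpc-0sharedsvc111111",  # Shared Services VPC (placeholder)
)

# Step 4: attach the Kinesis and Lambda PHZs to the Profile.
for name, zone_id in [("kinesis-phz", "Z0KINESISEXAMPLE"), ("lambda-phz", "Z0LAMBDAEXAMPLE")]:
    profiles.associate_resource_to_profile(
        Name=name,
        ProfileId=profile["Id"],
        ResourceArn=f"arn:aws:route53:::hostedzone/{zone_id}",
    )

# Step 5: share the Profile with the Dev and Prod accounts through AWS RAM.
ram.create_resource_share(
    name="route53-profile-share",
    resourceArns=[profile["Arn"]],
    principals=["111111111111", "222222222222"],  # Dev and Prod account IDs (placeholders)
)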

With the VPC endpoints for Kinesis and Lambda in place, all the VPCs can resolve the public DNS names for these services to the private IP addresses of their respective VPC endpoints. Therefore, all resources within these spoke VPCs can now access Kinesis and Lambda securely through either Transit Gateway or AWS Cloud WAN and the VPC endpoints in the Shared Services VPC, without the need to traverse the public internet.

Moving forward, when you create a new VPC endpoint for any other supported AWS service, the only step necessary is to associate that endpoint’s PHZ with the centralized Route 53 Profile. When this association is established, all the VPCs linked to this Route 53 Profile can resolve the DNS names for the newly created VPC endpoint.

Similarly, when you provision new VPCs in existing or new accounts, you associate those VPCs with the shared Route 53 Profile and provide Layer 3 connectivity to the Shared Services VPC using Transit Gateway or AWS Cloud WAN. As a result, the new VPCs automatically become associated with all the PHZs in the Shared Services account, providing seamless DNS resolution to the respective VPC endpoints.
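The following is a minimal boto3 sketch of these two day-two operations described in the previous two paragraphs. The Profile ID, hosted zone ARN, VPC ID, and the named credential profile for the spoke account are placeholders.

import boto3

PROFILE_ID = "rp-0123456789example"  # hypothetical ID of the shared Route 53 Profile

# New VPC endpoint: associate its PHZ with the existing shared Profile
# (run in the Shared Services account).
hub = boto3.client("route53profiles", region_name="us-east-1")
hub.associate_resource_to_profile(
    Name="new-service-phz",  # hypothetical PHZ for the new endpoint
    ProfileId=PROFILE_ID,
    ResourceArn="arn:aws:route53:::hostedzone/Z0NEWSERVICEEXAMPLE",
)

# New spoke VPC: associate it with the shared Profile (run in the account that
# owns the VPC, here using a hypothetical named credential profile).
spoke = boto3.Session(profile_name="dev").client("route53profiles", region_name="us-east-1")
spoke.associate_profile(
    Name="dev-vpc-2",
    ProfileId=PROFILE_ID,
    ResourceId="vpc-0newdevvpc2222222",
)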

PrivateLink DNS resolution with on-premises networks

In the scenario shown in Figure 6, we establish Layer 3 connectivity between the AWS environment and an on-premises network. On-premises resources need to reach AWS services such as Kinesis and Lambda, so we must implement a solution for on-premises DNS resolution.


Figure 6: Centralized VPC endpoint in a Shared Services VPC using Route 53 Profiles for DNS resolution with on-premises

  1. Layer 3 connectivity is established to the existing Transit Gateway or AWS Cloud WAN by using either Direct Connect or Site-to-Site VPN.
  2. A Route 53 Resolver inbound endpoint is deployed in the Shared Services VPC (a scripted sketch follows this list).
  3. The on-premises DNS resolver is configured with forwarding rules that send DNS queries for Kinesis and Lambda to the IP addresses of the Route 53 Resolver inbound endpoint.
  4. The PHZs directly associated with the Shared Services VPC, where the Route 53 Resolver inbound endpoint is created, take precedence over the Route 53 Profile associated with the VPC when resolving a query.
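The inbound endpoint in step 2 can be created with a few API calls. The following is a minimal boto3 sketch; the subnet and security group IDs are placeholders, and the security group must allow DNS traffic (TCP and UDP port 53) from the on-premises network.

import uuid
import boto3

resolver = boto3.client("route53resolver", region_name="us-east-1")

# Create an inbound endpoint with IP addresses in two AZs of the Shared Services VPC.
response = resolver.create_resolver_endpoint(
    CreatorRequestId=str(uuid.uuid4()),
    Name="shared-services-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0inbounddns111111"],     # hypothetical security group
    IpAddresses=[
        {"SubnetId": "subnet-0az1aaaaaaaaaaaaa"},  # AZ 1
        {"SubnetId": "subnet-0az2bbbbbbbbbbbbb"},  # AZ 2
    ],
)

# The allocated IP addresses are the targets for the on-premises forwarding rules.
endpoint_id = response["ResolverEndpoint"]["Id"]
ips = resolver.list_resolver_endpoint_ip_addresses(ResolverEndpointId=endpoint_id)
print([ip["Ip"] for ip in ips["IpAddresses"]])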

Considerations and best practices

  • The VPC endpoint should span multiple Availability Zones (AZs), ideally two or more, to achieve high availability. Similarly, the Route 53 Resolver inbound endpoint should be configured to operate across multiple AZs, thereby mitigating the potential impact of an AZ-level failure.
  • When PHZs are associated with the Route 53 Profile and the Profile is associated with your VPCs, you don’t need to explicitly associate each PHZ with those VPCs.
  • You can share Route 53 Profiles with a specific Organizational Unit (OU) or your entire organization, rather than sharing them with individual accounts. If your Route 53 Profile is shared with an OU or your entire organization, then each existing and newly created account within that scope automatically has access to the Route 53 Profile. This eliminates the need to manually share the Route 53 Profile with each individual AWS account.
  • As described on the Route 53 pricing page, Route 53 Profiles are charged an hourly rate for each Profile-VPC association. Creating a large number of Profiles can result in higher costs.
  • If a VPC is associated with both a PHZ and a Route 53 Profile, then Route 53 Resolver prioritizes the direct PHZ association first. This behavior is outlined in the documentation on how Route 53 Profile settings are prioritized.
  • Each interface endpoint currently supports a specific bandwidth for sustained and burst traffic per AZ, as documented in the AWS PrivateLink quotas documentation. Consider these limits when using this solution for workloads with higher bandwidth requirements.

Conclusion

In this post, we discussed how Amazon Route 53 Profiles can be integrated to simplify DNS management in a centralized AWS PrivateLink deployment that uses AWS Transit Gateway or AWS Cloud WAN. To get started, visit the AWS PrivateLink and Amazon Route 53 Profiles pages.

About the authors

Kunj Thacker

Kunj is a Technical Account Manager at AWS based out of Vancouver, Canada. Prior to this role, he built an extensive background in network and infrastructure engineering. He is passionate about new technologies and enjoys helping customers build, implement, and optimize their cloud infrastructure on AWS.

Salman Ahmed

Salman Ahmed is a Senior Technical Account Manager in AWS Enterprise Support. He enjoys helping customers in the travel and hospitality industry to design, implement, and support cloud infrastructure. With a passion for networking services and years of experience, he helps customers adopt various AWS networking services. Outside of work, Salman enjoys photography, traveling, and watching his favorite sports teams.

Ankush Goyal

Ankush Goyal is a Senior Technical Account Manager at AWS Enterprise Support, specializing in helping customers in the travel and hospitality industries optimize their cloud infrastructure. With over 20 years of IT experience, he focuses on leveraging AWS networking services to drive operational efficiency and cloud adoption. Ankush is passionate about delivering impactful solutions and enabling clients to streamline their cloud operations.