Networking & Content Delivery

Centralize access to AWS services across multiple VPCs using VPC interface endpoints

Security and cost are always top priorities for AWS customers when designing their networks. Amazon Virtual Private Cloud (Amazon VPC) and its related networking components offer many tools for implementing network connectivity. One such tool is VPC endpoints. Powered by AWS PrivateLink, VPC endpoints are private connections between your VPC and another AWS service that don't send traffic over the internet, through a NAT instance, a VPN connection, or AWS Direct Connect. In this blog post, I present a hub-and-spoke design where all the spoke VPCs use an interface VPC endpoint provisioned inside the hub (shared services) VPC. This architecture may help reduce the cost and maintenance of multiple interface VPC endpoints across different VPCs.

Since their launch in 2015, VPC endpoints have been used to privately access AWS services, AWS API endpoints, and SaaS applications. VPC endpoints are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks. With VPC endpoints, your VPCs don't need an internet gateway or NAT gateway for EC2 instances to access AWS services and endpoints. There are two types of VPC endpoints: gateway endpoints and interface endpoints. Gateway endpoints can be used to access regional Amazon S3 buckets and DynamoDB tables, and interface endpoints can be used to access AWS service endpoints or VPC endpoint services. As the number of VPCs in your account grows, centralizing the interface endpoints might be a cost-efficient solution. Review the information on AWS PrivateLink pricing to learn more about how this is priced and whether it's a good fit for you.

Architecture

Centralized access using a VPC interface endpoint (architecture diagram)

There are a few things to consider when evaluating VPC connectivity options and DNS resolution.

Considerations for this architecture

How do you connect spoke VPCs to an interface endpoint inside the hub VPC?

For this, we use VPC peering connections. (As an alternative, you could use AWS Transit Gateway, which is described in the blog post Integrating AWS Transit Gateway with AWS PrivateLink and Amazon Route 53 Resolver.)

How do we resolve DNS for the AWS service endpoint from the spoke VPCs?

We can enable private DNS for an interface endpoint, which lets us resolve the AWS service endpoint DNS name (for example, sqs.us-east-1.amazonaws.com) from within the same VPC. However, the AWS service endpoint does not resolve from the peered VPCs. To address this, we can create a Private Hosted Zone (for example, sqs.us-east-1.amazonaws.com) and associate it with the peered VPCs.

How do you access the interface endpoints from outside the VPC?

You can also access the interface endpoints over an AWS Site-to-Site VPN or AWS Direct Connect connection; however, the connection must terminate in the shared services (hub) VPC.

Setup steps (Part 1)

  1. Assuming the workload (spoke) VPCs already exist, create a new hub VPC and a private subnet to host an interface VPC endpoint.
  2. Create an interface VPC endpoint for the required AWS service (for example, Amazon SQS) and select the subnet created in Step 1. In the security group, make sure to add an inbound rule allowing HTTPS traffic from the spoke VPC CIDRs. No changes are required for the security group outbound rules because traffic is never initiated by the interface endpoint elastic network interface (ENI).
  3. For the VPC endpoint policy, we select the default policy; however, in a production environment you may want to restrict access to specific AWS principals (see the example after this list).
  4. Set up a VPC peering connection between the spoke and hub VPCs.
  5. Configure the route tables in the hub and spoke VPCs, adding a route that sends traffic for the corresponding VPC CIDRs through the VPC peering connection.
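For example, the following sketch applies a more restrictive endpoint policy that allows only a single AWS account to use the endpoint. The endpoint ID and account ID are placeholders for illustration:

aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-07f0f0000000000000 \
    --policy-document '{
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sqs:*",
        "Resource": "*"
      }]
    }'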

Example spoke VPC route table (example CIDRs: spoke 10.10.0.0/16, hub 10.0.0.0/16):

    Destination       Target
    10.10.0.0/16      local
    10.0.0.0/16       pcx-xxxxxxxx (VPC peering connection to hub VPC)

Example hub VPC route table:

    Destination       Target
    10.0.0.0/16       local
    10.10.0.0/16      pcx-xxxxxxxx (VPC peering connection to spoke VPC)
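As a minimal sketch of Steps 2, 4, and 5 using the AWS CLI (all IDs and CIDRs below are placeholders for your own values):

# Step 2: create the interface endpoint for Amazon SQS in the hub VPC.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0hub0000000000000 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.sqs \
    --subnet-ids subnet-0hub0000000000000 \
    --security-group-ids sg-0endpoint000000000

# Allow HTTPS from a spoke VPC CIDR in the endpoint security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0endpoint000000000 \
    --protocol tcp --port 443 --cidr 10.10.0.0/16

# Step 4: peer a spoke VPC with the hub VPC and accept the request.
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0spoke000000000000 \
    --peer-vpc-id vpc-0hub0000000000000
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-00000000000000000

# Step 5: add routes in both directions over the peering connection.
aws ec2 create-route --route-table-id rtb-0spoke000000000000 \
    --destination-cidr-block 10.0.0.0/16 \
    --vpc-peering-connection-id pcx-00000000000000000
aws ec2 create-route --route-table-id rtb-0hub0000000000000 \
    --destination-cidr-block 10.10.0.0/16 \
    --vpc-peering-connection-id pcx-00000000000000000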

In the preceding configuration, the spoke VPCs should be able to reach the AWS service endpoint, but they cannot resolve its public DNS name. If you are using the AWS CLI or an SDK from within a spoke VPC, you must use the --endpoint-url parameter to override the AWS service endpoint and use the interface VPC endpoint DNS name, as shown in the following example:

[ec2-user@ip-10-10-1-69 ~]$ aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/xxxxxxxxxxxx/demo-queue --message-body "Test1" --region us-east-1 --endpoint-url https://vpce-07f0fxxxxxxxxxxxx-xxxxxxxx.sqs.us-east-1.vpce.amazonaws.com

Output:
{
    "MD5OfMessageBody": "e1b849f9631ffc1829b2e31402373e3c",
    "MessageId": "498e2df4-012d-42ed-9516-e49f9f612519"
}

Setup steps (Part 2)

If you want to resolve the AWS service endpoint natively from within the spoke VPCs, you must perform these additional steps:

  1. Disable private DNS for the interface VPC endpoint in the hub VPC (if it's enabled).
  2. Create a Private Hosted Zone with the same name as the AWS service endpoint (for example, sqs.us-east-1.amazonaws.com) and create an alias A record that points to the interface VPC endpoint DNS name.
Route 53 alias record for Amazon SQS pointing to the interface endpoint DNS name

  3. Associate the Private Hosted Zone with all spoke VPCs (a CLI sketch follows below).

Associating spoke VPCs with the Private Hosted Zone
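A minimal AWS CLI sketch of these three steps, assuming placeholder endpoint, hosted zone, and VPC IDs:

# Step 1: disable private DNS on the interface endpoint in the hub VPC.
aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-07f0f0000000000000 \
    --no-private-dns-enabled

# Step 2: create the Private Hosted Zone, associated with the hub VPC.
aws route53 create-hosted-zone \
    --name sqs.us-east-1.amazonaws.com \
    --caller-reference centralized-sqs-endpoint-1 \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0hub0000000000000 \
    --hosted-zone-config PrivateZone=true

# Create an alias A record at the zone apex pointing to the interface
# endpoint. DNSName and the alias HostedZoneId come from the DnsEntries
# returned by: aws ec2 describe-vpc-endpoints
aws route53 change-resource-record-sets \
    --hosted-zone-id Z0PHZ0EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "sqs.us-east-1.amazonaws.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z00ENDPOINTEXAMPLE",
            "DNSName": "vpce-07f0f0000000000000-xxxxxxxx.sqs.us-east-1.vpce.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'

# Step 3: associate the zone with each spoke VPC in the same account.
aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z0PHZ0EXAMPLE \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0spoke000000000000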

Note: If the spoke VPC is in a different account, you must use a programmatic method, such as the AWS CLI, to associate it with the Private Hosted Zone.
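For example, again with placeholder IDs, the cross-account association takes two calls:

# From the account that owns the Private Hosted Zone: authorize the spoke VPC.
aws route53 create-vpc-association-authorization \
    --hosted-zone-id Z0PHZ0EXAMPLE \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0spokeacctb0000000

# From the account that owns the spoke VPC: complete the association.
aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z0PHZ0EXAMPLE \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0spokeacctb0000000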

With this configuration, you should be able to resolve the AWS service endpoint without additional parameters (for example, you don't need to use the --endpoint-url parameter with the AWS CLI).

[ec2-user@ip-10-10-1-69 ~]$ aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/xxxxxxxxxxxx/demo-queue --message-body "Test2" --region us-east-1

Output:
{
    "MD5OfMessageBody": "c454552d52d55d3ef56408742887362b",
    "MessageId": "c7dccfab-9649-4fc7-892d-db41d150ba60"
}
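To confirm that resolution now goes through the Private Hosted Zone, you can run a quick check from a spoke instance; the service name should resolve to the endpoint's private IP addresses in the hub VPC, not public AWS addresses (example output, assuming the placeholder hub CIDR 10.0.0.0/16):

[ec2-user@ip-10-10-1-69 ~]$ dig +short sqs.us-east-1.amazonaws.com
10.0.1.25
10.0.2.40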

Note: You must have AWS CLI version 2. Older CLI versions use the legacy endpoints (for example, queue.amazonaws.com instead of sqs.us-east-1.amazonaws.com). With an earlier CLI version, the preceding command will not work in the us-east-1 (N. Virginia) Region.

Design considerations

  • To improve the resiliency of your design, your interface VPC endpoints should use subnets in two or more Availability Zones (AZs).
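For example, with a placeholder endpoint ID and subnet ID, you can add a subnet in a second AZ to an existing endpoint:

aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-07f0f0000000000000 \
    --add-subnet-ids subnet-0hubsecondaz000000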

Limits to consider for large-scale setup

  • 50 active VPC peering connections per VPC (this limit can be increased to 125).
  • 100 associated VPCs per Private Hosted Zone (you can request that this limit be increased).
  • 50 interface VPC endpoints per VPC (you can request that this limit be increased).
  • 10-Gbps throughput per VPC endpoint elastic network interface, although this can burst higher.

Sample code

To assist in deploying this solution, we have provided AWS Cloud Development Kit (CDK) code in the aws-samples GitHub repository, https://github.com/aws-samples/amazon-centralize-vpc-interface-endpoints, together with a step-by-step guide for deploying the code.

Cleanup

Be sure to clean up the resources you created to avoid ongoing charges to your account.

  • Terminate the EC2 instances in the spoke VPCs that you launched for this setup.
  • Delete the interface endpoint in the shared services (hub) VPC.
  • Delete all VPC peering connections.
  • Delete the Route 53 Private Hosted Zone.
  • Delete the AWS service resource you created (for example, the Amazon SQS queue).

Conclusion

In this blog post, I have shown how a hub-and-spoke VPC model can be used to centralize access to an interface VPC endpoint so that spoke VPCs can reach AWS service endpoints privately. This helps secure access and reduces the cost of running multiple interface endpoints. For advanced use cases where you must also access interface endpoints from an on-premises network, you can implement a similar solution using AWS Transit Gateway and Amazon Route 53 Resolver. For details, refer to the blog post Integrating AWS Transit Gateway with AWS PrivateLink and Amazon Route 53 Resolver.


Chetan Agrawal

Chetan is a Solutions Architect with AWS and supports global automotive customers. He has 15 years of industry experience in development, DevOps, and cloud. His specializations include AWS networking, security, and application modernization. He likes to help customers build solutions that are well architected.