AWS Architecture Blog

Reduce Cost and Increase Security with Amazon VPC Endpoints

This blog explains the benefits of using Amazon VPC endpoints and highlights a self-paced workshop that will help you learn more about them. Amazon Virtual Private Cloud (Amazon VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center. The additional benefit is the ability to use the scalable infrastructure of AWS.

A VPC endpoint allows you to privately connect your VPC to supported AWS services without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Endpoints are virtual devices: horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic. VPC endpoints use IP addresses allocated from within your VPC address space, so you can create an isolated VPC that is closed off from the public internet. You can simplify your network design by removing the internet gateway from your architecture, avoid the costs associated with NAT gateway access, and reduce the number of firewalls you need to maintain. VPC endpoints also give you much finer control over how users and applications access AWS services.
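As a concrete sketch, a gateway endpoint can be provisioned with a single AWS CLI call; the VPC and route table IDs below are placeholders, and the S3 example shown next assumes the US East (N. Virginia) Region:

```shell
# Create a gateway VPC endpoint for Amazon S3 and associate it with a route table.
# vpc-0abc123 and rtb-0abc123 are hypothetical IDs for illustration only.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc123 \
  --service-name com.amazonaws.us-east-1.s3 \
  --vpc-endpoint-type Gateway \
  --route-table-ids rtb-0abc123
```

Once created, the associated route tables gain a route to the service's prefix list, so traffic to Amazon S3 stays on the AWS network.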

There are two types of VPC endpoints: interface endpoints and gateway endpoints. Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB are accessed using gateway endpoints. You can configure resource policies on both the gateway endpoint and the AWS resource that the endpoint provides access to. A VPC endpoint policy is an AWS Identity and Access Management (IAM) resource policy that you attach to an endpoint. It is a separate policy for controlling access from the endpoint to the specified service, enabling granular access control alongside private network connectivity from within a VPC. For example, you could create a policy that restricts access to a specific DynamoDB table, allowing only certain users or groups to access the table through the VPC endpoint.
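To make the DynamoDB example concrete, an endpoint policy along these lines would permit only read operations against a single table through the endpoint. The table name, account ID, and the specific actions are hypothetical placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadsToOneTable",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders"
    }
  ]
}
```

Because the endpoint policy is evaluated in addition to IAM identity policies and any resource policies, access must be allowed by all applicable policies for a request to succeed.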

Figure 1: Accessing Amazon S3 via a Gateway VPC endpoint

Interface endpoints allow you to connect to services powered by AWS PrivateLink. These include a large number of AWS services, as well as services hosted by other AWS customers and AWS Partner Network (APN) partners in their own VPCs. By using AWS Partner services through AWS PrivateLink, you no longer have to rely on access to the public internet. Data transfer charges for traffic from Amazon EC2 to the internet vary based on volume: after the first 1 GB per month (at $0.00 per GB), transfers are charged at $0.09/GB in the US East (N. Virginia) Region. Like gateway endpoints, interface endpoints can be secured using resource policies on both the endpoint itself and the resource that the endpoint provides access to. Interface endpoints also allow the use of security groups to restrict access to the endpoint.
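To put that pricing in perspective, here is a quick sketch of the data transfer arithmetic using the tiers quoted above; check current AWS pricing before relying on these numbers:

```python
def monthly_egress_cost(gb_out: float, free_gb: float = 1.0, rate_per_gb: float = 0.09) -> float:
    """Estimate monthly EC2-to-internet data transfer cost in USD.

    Assumes the tiers cited in this post: the first 1 GB/month is free,
    and subsequent gigabytes are billed at $0.09/GB (US East, N. Virginia).
    """
    billable = max(gb_out - free_gb, 0.0)
    return round(billable * rate_per_gb, 2)

# For example, 500 GB of internet egress in a month (499 billable GB at $0.09/GB):
print(monthly_egress_cost(500))
```

Traffic that instead flows to a partner service over an interface endpoint never leaves the AWS network, so this internet egress charge does not apply (PrivateLink has its own endpoint-hour and data processing charges).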

Figure 2: Accessing QLDB via an Interface VPC endpoint

In larger multi-account AWS environments, network design can vary considerably. Consider an organization that has built a hub-and-spoke network with AWS Transit Gateway. VPCs have been provisioned into multiple AWS accounts, perhaps to facilitate network isolation or to enable delegated network administration. For distributed architectures, you can build a “shared services” VPC, which provides access to services required by workloads in each of the VPCs. This might include directory services or VPC endpoints. Sharing resources from a central location instead of building them in each VPC may reduce administrative overhead and cost. This approach was outlined by my colleague Bhavin Desai in their blog post, Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit Gateway.

Figure 3: Centralized VPC endpoints (multiple VPCs)

Alternatively, an organization may have centralized its network and chosen to use VPC sharing to enable multiple AWS accounts to create application resources. Such an approach allows aggregating Amazon EC2 instances, Amazon Relational Database Service (Amazon RDS) databases, and AWS Lambda functions into a shared, centrally managed network. With either pattern, establishing a granular set of controls to limit access to resources is critical to supporting organizational security and compliance objectives, while also helping to maintain operational efficiency.

Figure 4: Centralized VPC endpoints (shared VPC)

Learn how with the VPC Endpoint Workshop

Understanding how to appropriately restrict access to endpoints and the services they connect with can be confusing. Learn more by taking the VPC Endpoint Workshop. Improve the security posture of your cloud workloads by using network controls and VPC endpoint policies to manage access to your AWS resources.

Nigel Harris

Nigel Harris is an Enterprise Solutions Architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance on AWS architectures.

Marcin Bednarz

Marcin Bednarz is a Senior Solutions Architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance, helping them improve the value of their solutions when using AWS.