Reduce Cost and Increase Security with Amazon VPC Endpoints
This blog post explains the benefits of using Amazon VPC endpoints and highlights a self-paced workshop that will help you learn more about them. Amazon Virtual Private Cloud (Amazon VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual network that you’ve defined. This virtual network resembles a traditional network that you’d operate in your own data center, with the additional benefit of using the scalable infrastructure of AWS.
A VPC endpoint allows you to privately connect your VPC to supported AWS services. It doesn’t require you to deploy an internet gateway, network address translation (NAT) device, Virtual Private Network (VPN) connection, or AWS Direct Connect connection. Endpoints are virtual devices that are horizontally scaled, redundant, and highly available VPC components. VPC endpoints allow communication between instances in your VPC and services, without imposing availability risks or bandwidth constraints on your network traffic.
You can optimize the network path by avoiding traffic through internet gateways, and avoid the costs associated with NAT gateways, NAT instances, or maintaining firewalls. VPC endpoints also give you much finer control over how users and applications access AWS services. There are three types of VPC endpoints: Gateway Load Balancer endpoints, gateway endpoints, and interface endpoints. Let’s take a look at each type of endpoint and how it is used.
The first type of endpoint, a Gateway Load Balancer endpoint, allows you to intercept traffic and route it to a network or security service that you’ve configured using a Gateway Load Balancer. Gateway Load Balancers enable you to deploy, scale, and manage virtual appliances, such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems. Our colleague Justin Davies has written an excellent blog post on supported architectural patterns using Gateway Load Balancers.
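As a minimal sketch of how this is provisioned, the boto3 call below creates a Gateway Load Balancer endpoint in a single subnet. The VPC ID, subnet ID, and endpoint service name are hypothetical placeholders; the service name would come from the VPC endpoint service you create in front of your Gateway Load Balancer.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a Gateway Load Balancer endpoint. The service name points at the
# VPC endpoint service fronting your Gateway Load Balancer (placeholder here).
response = ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],  # one subnet per endpoint
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```

After the endpoint is created, you would update your route tables so that the traffic you want inspected is routed through the endpoint to the appliances behind the Gateway Load Balancer.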
The second type of endpoint, a gateway endpoint, allows you to provide access to Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. You can configure resource policies on both the gateway endpoint and the AWS resource that the endpoint provides access to. A VPC endpoint policy is an AWS Identity and Access Management (IAM) resource policy that you attach to the endpoint to control access from the endpoint to the specified service. This enables granular access control and private network connectivity from within a VPC. For example, you could create a policy that restricts access to a specific DynamoDB table, so that only certain users or groups can reach the table through the VPC endpoint, as in the sketch below.
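Here is a minimal sketch of that pattern, assuming a hypothetical Orders table and placeholder account, VPC, and route table IDs:

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Endpoint policy that only allows access to one DynamoDB table through
# this endpoint. The account ID and table name are hypothetical.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "dynamodb:*",
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        }
    ],
}

# Gateway endpoints attach to route tables rather than subnets.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],  # hypothetical route table
    PolicyDocument=json.dumps(endpoint_policy),
)
```

You could tighten the Principal and Action fields further, for example limiting the statement to specific IAM roles or to read-only actions.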
The third type of endpoint, an interface endpoint, allows you to connect to services powered by AWS PrivateLink. This includes a large number of AWS services. It can also include services hosted by other AWS customers and AWS Partner Network (APN) partners in their own VPCs. By using AWS partner services through AWS PrivateLink, you no longer have to rely on access to the public internet. Data transfer charges for traffic from Amazon EC2 to the internet vary based on volume. After the first 1 GB/month ($0.00 per GB), transfers are charged at a rate of $0.09/GB in the US East (N. Virginia) Region. Like gateway endpoints, interface endpoints can be secured using resource policies on the endpoint itself and on the resource that the endpoint provides access to. Interface endpoints also allow you to use security groups to restrict access to the endpoint.
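For illustration, here is a sketch that creates an interface endpoint for Amazon SQS, restricted by a security group and with private DNS enabled. The service name is the standard one for SQS in us-east-1; the VPC, subnet, and security group IDs are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an interface endpoint for Amazon SQS. Private DNS lets clients in
# the VPC keep using the standard sqs.us-east-1.amazonaws.com hostname.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=[
        "subnet-0123456789abcdef0",             # one subnet per Availability Zone
        "subnet-0fedcba9876543210",
    ],
    SecurityGroupIds=["sg-0123456789abcdef0"],  # allow inbound 443 from your workloads
    PrivateDnsEnabled=True,
)
```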
An organization’s existing network design may influence where VPC endpoints are deployed. In larger multi-account AWS environments, network design can vary considerably. Consider an organization that has built a hub-and-spoke network with AWS Transit Gateway. VPCs have been provisioned into multiple AWS accounts, perhaps to facilitate network isolation or to enable delegated network administration.
For distributed architectures, you can build a “shared services” VPC, which provides centralized access to shared services required by workloads in each of the VPCs. These shared services may include resources such as directory services or VPC endpoints. Sharing resources from a central location instead of building them in each VPC may reduce administrative overhead and cost.
This approach was outlined by our colleague Bhavin Desai in his blog post, Centralized DNS management of hybrid cloud with Amazon Route 53 and AWS Transit Gateway. Instead of centralizing VPC endpoint deployment, a network designer may choose to deploy an endpoint within a spoke VPC to keep it proximate to the single workload that will use it. This may support workload-specific security or performance considerations. Each approach, centralized and decentralized, offers benefits, and it is common for organizations to use both to meet their specific requirements.
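One common mechanic for the centralized pattern, sketched here rather than prescribed by the post, is to disable private DNS on the shared interface endpoint, create a private hosted zone in the shared services VPC whose alias record points the service hostname at the central endpoint, and then associate that zone with each spoke VPC. The hosted zone and VPC IDs below are hypothetical.

```python
import boto3

route53 = boto3.client("route53")

# Associate a private hosted zone (created in the shared services VPC, with
# an alias record resolving the service hostname to the central interface
# endpoint) with a spoke VPC so its workloads resolve to the shared endpoint.
route53.associate_vpc_with_hosted_zone(
    HostedZoneId="Z0123456789EXAMPLE",  # hypothetical private hosted zone
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0spoke123456789ab"},
)
```

When the spoke VPC lives in a different account, the zone owner must first authorize the association with create_vpc_association_authorization before the spoke account can complete it.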
Alternatively, an organization may have centralized its network and chosen to leverage VPC sharing to enable multiple AWS accounts to create application resources. Such an approach allows aggregating Amazon EC2 instances, Amazon Relational Database Service (Amazon RDS) databases, and AWS Lambda functions into a shared, centrally managed network. With either pattern, establishing a granular set of controls to limit access to resources is critical to supporting organizational security and compliance objectives while maintaining operational efficiency.
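As one example of such a granular control, a sketch with a hypothetical bucket name and endpoint ID, a bucket policy can deny any Amazon S3 access that does not arrive through a specific VPC endpoint:

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny all access to the bucket unless the request arrives through the
# named VPC endpoint. Bucket name and endpoint ID are hypothetical.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-workload-bucket",
                "arn:aws:s3:::example-workload-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

s3.put_bucket_policy(
    Bucket="example-workload-bucket",
    Policy=json.dumps(bucket_policy),
)
```

Be careful with broad Deny statements like this one: they can lock out administrative access from outside the VPC, so you may want to exempt specific principals.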
Learn how with the VPC Endpoint Workshop
Understanding how to appropriately restrict access to endpoints and the services they connect with can be confusing. Learn more by taking the VPC Endpoint Workshop. Improve the security posture of your cloud workloads by using network controls and VPC endpoint policies to manage access to your AWS resources.