How to securely publish Internet applications at scale using Application Load Balancer and AWS PrivateLink

If you have applications spread across multiple Virtual Private Clouds (VPCs) and want to expose those applications to the Internet, you can choose from different approaches. One option is to give each VPC its own dedicated connectivity to the Internet through an attached Internet gateway. Another approach is to centralize access from the Internet through a single VPC. This VPC acts as a buffer/DMZ between the Internet and the other application VPCs that don’t have Internet gateways attached to them. Here we focus on the second approach.

Traditionally, deploying the centralized model required that you run and manage proxies hosted on EC2. The proxies were deployed in the Internet-facing VPC and connected to the application-hosting VPCs over VPC peering. With this model, apart from having to manage the proxy servers, you had to ensure that communication over VPC peering was locked down to only approved services. You also had to make sure that the VPCs did not use overlapping IP address ranges.

The introduction of AWS PrivateLink provides a new way to expose applications between VPCs and removes these restrictions.

In this post we explore how you can combine PrivateLink with Application Load Balancer to publish web applications to the Internet without the need for proxies hosted on EC2 or VPC peering.

 

Architecture Overview

Let’s have a look at the target architecture and how PrivateLink can be combined with an Internet-facing Application Load Balancer. In this setup I use three different VPCs.

The first two VPCs host web-based apps named blue and green. Those apps have a Network Load Balancer as the frontend and are PrivateLink service providers. Let’s name the VPCs after the service they host – blue and green, respectively. To keep it simple, I use EC2 instances to host my applications, but they could also be container-based. The blue and green VPCs do not have an Internet gateway attached to them, so they can’t communicate directly with the Internet.

The third VPC, called the DMZ VPC, is a PrivateLink service consumer hosting the PrivateLink interface VPC endpoints. That VPC has an Internet gateway attached to it and a single Internet-facing Application Load Balancer. External clients connect to the Application Load Balancer, which in turn forwards each request to the PrivateLink elastic network interfaces for the appropriate service. This allows for end-to-end connectivity from a client on the Internet to the applications inside the blue and green VPCs. I configure the Application Load Balancer to route traffic based on the URL path: if users request the /blue path, they are directed to the blue service, and if they request /green, traffic is sent to the green service in the green VPC.

The diagram below shows all of the components:

 

Using PrivateLink in this scenario provides a number of benefits:

  • The VPC CIDR ranges can overlap between any of the VPCs. In comparison, VPC peering can’t be established between VPCs with overlapping IP address ranges.
  • PrivateLink exposes only the specific service for which it was created: for a web application listening on port 80 (HTTP), the PrivateLink elastic network interface accepts only HTTP connections. No other traffic is allowed from the consumer VPC to the service VPC.
  • Connections can be initiated in only one direction, from the consumer VPC to the service VPC. Applications in the service VPC can’t initiate connections to the consumer (DMZ) VPC.

The Internet-facing Application Load Balancer acts as an intelligent reverse proxy, which means there is no need for an additional proxy layer hosted on EC2. The Application Load Balancer should also be combined with AWS WAF to protect web applications from common web exploits.

 

Enabling PrivateLink

I’ve already created internal Network Load Balancers in the blue and green VPCs to front my EC2 application servers, and named them blue-nlb and green-nlb, respectively. See the Elastic Load Balancing documentation to find out more about setting up Network Load Balancers.
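If you prefer to script this prerequisite rather than use the console, the following is a minimal sketch using boto3. The subnet IDs are placeholders for private subnets in the blue VPC, and the target group and listener that point the NLB at the EC2 instances are omitted for brevity.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internal Network Load Balancer for the blue application.
# The subnet IDs are placeholders for private subnets in the blue VPC.
response = elbv2.create_load_balancer(
    Name="blue-nlb",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-0aaa000000000001a", "subnet-0aaa000000000002b"],
)
blue_nlb_arn = response["LoadBalancers"][0]["LoadBalancerArn"]
print(blue_nlb_arn)
```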

Since I have the Network Load Balancers already available, I can start creating PrivateLink services for my blue and green applications. I can also deploy the PrivateLink endpoints for them into the DMZ VPC.

The first step is to create an endpoint service in the VPC configuration. When I create a new endpoint service, I get the option to select the Network Load Balancer that already fronts my application. Let’s follow the steps for creating an endpoint service for the blue application first:

 

I’ve left the Acceptance required check box selected because I want to approve each new endpoint that gets created for my service.
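The same step can be scripted against the EC2 API. Here’s a sketch, assuming the ARN of blue-nlb is available (the value below is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# ARN of blue-nlb created earlier (placeholder value).
blue_nlb_arn = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/blue-nlb/0123456789abcdef"

# Create a PrivateLink endpoint service fronted by blue-nlb.
# AcceptanceRequired=True means each new endpoint must be approved manually.
response = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[blue_nlb_arn],
    AcceptanceRequired=True,
)
service = response["ServiceConfiguration"]
print(service["ServiceName"])  # needed in the next step to create endpoints
print(service["ServiceId"])
```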

The PrivateLink service is now ready and I can start creating endpoints for it. Before I move to the next step I need to copy the service name, which uniquely identifies my service.

I can now go into the Endpoints section in the VPC configuration and create a new VPC endpoint within the DMZ VPC.

First, I need to identify which service I’m creating an endpoint for. I’ve previously copied the service name for my blue PrivateLink service. I need to enter this information into the service name search box.

I can also choose which VPC should be hosting the PrivateLink endpoint. This could be a VPC in another AWS account, but in this case I select the DMZ VPC in the same account. This VPC has an Internet gateway attached to it, and it will have an Application Load Balancer deployed at a later stage.
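If you’re scripting this step instead, here’s a minimal sketch with boto3; the service name, DMZ VPC ID, subnet IDs, and security group ID are all placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create an interface endpoint for the blue service inside the DMZ VPC.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0aaa000000000000a",  # DMZ VPC (placeholder)
    ServiceName="com.amazonaws.vpce.eu-west-1.vpce-svc-0123456789abcdef0",  # blue service (placeholder)
    SubnetIds=["subnet-0aaa000000000001a", "subnet-0aaa000000000002b"],
    SecurityGroupIds=["sg-0aaa0000000000002"],  # endpoint security group (placeholder)
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```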

Before the endpoint becomes available, I need to go into the Endpoint Services section and accept the request to create it in the DMZ VPC.
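On the service-provider side, the approval can also be done with a single API call; the service and endpoint IDs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Accept the pending connection request for the endpoint created in the DMZ VPC.
ec2.accept_vpc_endpoint_connections(
    ServiceId="vpce-svc-0123456789abcdef0",     # blue endpoint service (placeholder)
    VpcEndpointIds=["vpce-0123456789abcdef0"],  # endpoint in the DMZ VPC (placeholder)
)
```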

Once the service is ready, I go into the configuration of the endpoint. There I can find out what IP addresses were allocated for my blue service PrivateLink endpoints in the DMZ VPC. I can find the information I need in the Subnets section. I make a note of these IP addresses as I will need them later to set up my Application Load Balancer.
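These IP addresses can also be retrieved programmatically by listing the endpoint’s network interfaces, as in this sketch (the endpoint ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Find the private IP address of each network interface the blue endpoint
# created in the DMZ VPC. These are the IPs the ALB will target later.
endpoint = ec2.describe_vpc_endpoints(
    VpcEndpointIds=["vpce-0123456789abcdef0"]  # placeholder
)["VpcEndpoints"][0]

enis = ec2.describe_network_interfaces(
    NetworkInterfaceIds=endpoint["NetworkInterfaceIds"]
)["NetworkInterfaces"]

blue_endpoint_ips = [eni["PrivateIpAddress"] for eni in enis]
print(blue_endpoint_ips)
```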

I repeat the exact same process for my green service to create a PrivateLink endpoint for it in my DMZ VPC.

Deploying the Application Load Balancer

Now that my PrivateLink endpoints for both the blue and green applications are ready in the DMZ VPC, I can create an Internet-facing Application Load Balancer there and use those endpoints as targets.

I start off by creating my target groups for the blue and green applications. Below is the process for the blue application; the setup for the green application follows the same steps.

 

I need to make sure I’m using ip as the target type and not instance. For my IP targets I add the IP addresses of the blue endpoints in my DMZ VPC that I copied earlier. This way the Application Load Balancer service can forward traffic to my blue application in the blue VPC.
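As a rough boto3 equivalent, the target group and IP targets could be created like this; the DMZ VPC ID and the endpoint IP addresses are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an IP-based target group for the blue application in the DMZ VPC.
blue_tg = elbv2.create_target_group(
    Name="blue-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0aaa000000000000a",  # DMZ VPC (placeholder)
    TargetType="ip",                # must be "ip", not "instance"
)["TargetGroups"][0]

# Register the private IPs of the blue PrivateLink endpoint as targets.
blue_endpoint_ips = ["10.0.1.10", "10.0.2.10"]  # placeholders noted earlier
elbv2.register_targets(
    TargetGroupArn=blue_tg["TargetGroupArn"],
    Targets=[{"Id": ip, "Port": 80} for ip in blue_endpoint_ips],
)
```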

I repeat the same process for my green service.

Once that’s complete, I can create my Internet-facing Application Load Balancer in the DMZ VPC. It’s configured to listen on port 80 for web requests from clients anywhere on the Internet. To find out more about setting up an Application Load Balancer, see the Elastic Load Balancing documentation.
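Here’s a sketch of the equivalent API calls, assuming public DMZ subnets and an ALB security group already exist (all IDs are placeholders). The choice of a fixed 404 response as the default action is my own simplification; the path-based rules for the two applications are added in the next step.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create the Internet-facing Application Load Balancer in the DMZ VPC.
alb = elbv2.create_load_balancer(
    Name="dmz-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa000000000001a", "subnet-0aaa000000000002b"],  # public DMZ subnets (placeholders)
    SecurityGroups=["sg-0aaa0000000000001"],                           # ALB security group (placeholder)
)["LoadBalancers"][0]

# Listener on port 80. Unmatched requests get a fixed 404 response.
listener = elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "fixed-response",
        "FixedResponseConfig": {
            "StatusCode": "404",
            "ContentType": "text/plain",
            "MessageBody": "Not found",
        },
    }],
)["Listeners"][0]
print(listener["ListenerArn"])
```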

When the Application Load Balancer is created, I can adjust its routing rules for the port 80 (web) listener. I add two additional entries to specify that users connecting to the /blue path should be sent to the blue PrivateLink endpoints (and effectively the blue application), and that users connecting to the /green path should be sent to the green PrivateLink endpoints (green application).
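The two path-based rules map to create_rule calls like the following; the listener and target group ARNs are placeholders carried over from the previous steps:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs from the earlier steps.
listener_arn = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/dmz-alb/0123456789abcdef/0123456789abcdef"
blue_tg_arn = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/blue-tg/0123456789abcdef"
green_tg_arn = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/green-tg/0123456789abcdef"

# Route /blue* to the blue target group and /green* to the green target group.
for priority, path, tg_arn in [(10, "/blue*", blue_tg_arn), (20, "/green*", green_tg_arn)]:
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [path]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```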

Before testing, let’s make sure that our security groups are configured correctly for each component in the path – the Application Load Balancer, the PrivateLink endpoints, and the application EC2 instances.

Here is what each security group should be allowing in the inbound direction:

  • Application Load Balancer (DMZ VPC) – HTTP access from the Internet
  • PrivateLink endpoints (DMZ VPC) – HTTP access from the Application Load Balancer security group only
  • EC2 application instances (blue/green VPCs) – HTTP access from the IP address range of their local Network Load Balancer
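Here’s a sketch of how the two DMZ VPC rules could be added with boto3; the security group IDs are placeholders, and the instance security groups in the blue and green VPCs would be updated the same way using the CIDR ranges of their local Network Load Balancer subnets:

```python
import boto3

ec2 = boto3.client("ec2")

ALB_SG = "sg-0aaa0000000000001"       # ALB security group in the DMZ VPC (placeholder)
ENDPOINT_SG = "sg-0aaa0000000000002"  # PrivateLink endpoint security group (placeholder)

# ALB: allow HTTP from anywhere on the Internet.
ec2.authorize_security_group_ingress(
    GroupId=ALB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# PrivateLink endpoints: allow HTTP only from the ALB security group.
ec2.authorize_security_group_ingress(
    GroupId=ENDPOINT_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": ALB_SG}],
    }],
)
```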

 

Now that everything is ready, we can test whether it works as expected. I’ve configured my web app to return the name of the service it belongs to as well as some additional metadata about the incoming connection.

PrivateLink by default doesn’t preserve the real source IP address of the client. This means that my blue and green applications don’t see the real IP address of the client in the packet; instead, they see the IP address of their local Network Load Balancer. However, the Internet-facing Application Load Balancer in my DMZ VPC can see the real IP of the client and adds it to the X-Forwarded-For HTTP header that is sent to the end application.
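The test app itself isn’t shown in this post, but a minimal stand-in that reports both values could look like the following, using only the Python standard library and assuming it listens on port 80 behind the Network Load Balancer:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The TCP peer address is the local NLB IP, not the real client.
        source_ip = self.client_address[0]
        # The ALB adds the original client IP to the X-Forwarded-For header.
        forwarded_for = self.headers.get("X-Forwarded-For", "not set")
        body = (
            "service: blue\n"
            f"packet source ip: {source_ip}\n"
            f"x-forwarded-for: {forwarded_for}\n"
        )
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

HTTPServer(("0.0.0.0", 80), EchoHandler).serve_forever()
```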

Both applications show the source IP address of the connection as seen in the IP packet (the local Network Load Balancer IP) as well as the real client IP carried in the X-Forwarded-For HTTP header. Here is the result:

Conclusion

In multi-VPC environments, using Application Load Balancers and PrivateLink together can help you securely publish your applications to the Internet, especially in VPC environments that have overlapping IP address ranges. In this blog post, I’ve shown an example of two web applications hosted in VPCs that do not have Internet gateways attached. By using Application Load Balancer and PrivateLink, I was able to publish these applications to the Internet through a centralized VPC. This approach can easily be applied to multiple applications and multiple Application Load Balancers. In scenarios where multiple teams manage the environment, it allows for clear separation of duties: one team could be responsible for managing the front end (Internet access), and other teams could be responsible for back-end application development.

This is the second in a series of PrivateLink blog posts. See the previous post to learn how to use AWS PrivateLink to secure and scale web filtering using explicit proxy.
