
AWS PrivateLink ECR cross account Fargate deployment

AWS PrivateLink is a networking technology designed to enable access to AWS services in a highly available and scalable manner. It keeps all the network traffic within the AWS network. When you create AWS PrivateLink endpoints for Amazon Elastic Container Registry (ECR) and Amazon Elastic Container Service (ECS), these service endpoints appear as elastic network interfaces with private IP addresses in your VPC.

Before the release of AWS PrivateLink, your Amazon EC2 instances had to use an internet gateway to download Docker images stored in ECR or communicate with the ECS control plane. Instances in a public subnet with a public IP address used the internet gateway directly. Instances in a private subnet used a network address translation (NAT) gateway hosted in a public subnet. The NAT gateway would then use the internet gateway to talk to ECR and ECS.

Now, instances in both public and private subnets can use AWS PrivateLink to get private connectivity to download images from ECR. Instances can also communicate with the ECS control plane via AWS PrivateLink endpoints without needing an internet gateway or NAT gateway.
In this post, we show how AWS PrivateLink can be used while deploying an AWS Fargate service that downloads Docker container images shared by another account in the same region. We use two accounts to demonstrate how a multi-account strategy can provide isolation between development and production and serve as a security measure; the implementation would be identical if you operate out of a single account. The result is an example of private container development and deployment over AWS PrivateLink to a private AWS Fargate cluster that is accessible through a public-facing Application Load Balancer.

Architecture Diagram

Image 1: Architecture Diagram

Above, AWS PrivateLink endpoints are deployed into our public subnets. Private DNS for the endpoints is enabled so that you can make requests to each service using its default DNS hostname.

We are creating the following endpoints to allow for private communication with ECS and ECR:

  • Gateway VPC endpoint for Amazon S3. This allows instances to download the image layers from the underlying private Amazon S3 buckets that host them.
  • AWS PrivateLink interface endpoints. This example requires two ECR interface endpoints to pull images privately, and containers with logging also need the Amazon CloudWatch Logs endpoint (an illustrative CLI command for creating one of these endpoints follows this list). The VPC interface endpoints being deployed are:
    • com.amazonaws.region.ecr.api
    • com.amazonaws.region.ecr.dkr
    • com.amazonaws.region.logs
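
The CloudFormation templates in this walkthrough create these endpoints for you; purely as an illustration, a single interface endpoint with private DNS could be created with the AWS CLI roughly as follows (the VPC, subnet, and security group IDs are placeholders):

# Hypothetical IDs shown for illustration only; the networking stack creates these endpoints for you.
aws --profile accountb ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-2.ecr.dkr \
  --subnet-ids subnet-0123456789abcdef0 subnet-0123456789abcdef1 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled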

Optionally, we are also creating AWS PrivateLink endpoints to allow for connectivity to an EC2 instance that will be deployed into one of our private subnets. We use AWS Systems Manager to connect to this instance and show the internal DNS resolution for the ECR endpoint (a quick check is sketched after the list below).

  • AWS PrivateLink endpoints for AWS Systems Manager (SSM). These allow Session Manager to reach the instance without leaving the AWS network. The VPC interface endpoints being deployed are:
    • com.amazonaws.region.ssm
    • com.amazonaws.region.ssmmessages
    • com.amazonaws.region.ec2
    • com.amazonaws.region.ec2messages
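
Once the optional instance is running and you have connected to it with Session Manager, you could confirm that the ECR endpoint resolves privately with a quick lookup such as the following (the account ID is a placeholder; the lookup should return private IP addresses from your VPC rather than public addresses):

nslookup 111111111111.dkr.ecr.us-east-2.amazonaws.com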

Getting started

Prerequisites

In order to follow along, you need two AWS accounts, and you must switch between them during the setup process. For simplicity, we refer to them as Account A and Account B.

Account A contains our ECR repository, and we publish an example container to this registry for our demo.

Account B contains our VPC infrastructure, including our AWS PrivateLink configurations.

You will also need a working Docker environment to execute Docker commands.

Implementation

Clone the GitHub repository at github.com/aws-samples/amazon-ecr-privatelink-blog. We use the contents of this repository to set up both accounts using AWS CloudFormation and to build our example Docker container.

git clone git@github.com:aws-samples/amazon-ecr-privatelink-blog.git

Before proceeding, configure AWS CLI profiles for Account A and Account B, and be sure to set the same region in both profiles. For this walkthrough, set the region to us-east-2, as all of the AWS CloudFormation launch links point to us-east-2. For more information on setting account profiles, see the documentation.

It is assumed that you are using profiles with sufficient privileges to perform all of the calls involved.

aws configure --profile accounta

Do the same for Account B (accountb).
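
As a quick sanity check, you can confirm that both profiles point at the same region:

aws configure get region --profile accounta
aws configure get region --profile accountb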

Account A setup

The following three steps apply to Account A.

Step one: Create an Amazon Elastic Container Registry repository in Account A

Create an ECR repository that has permissions set for Account B. Click the launch link below, provide the repository name and account ID for the deployment, then click through and deploy the stack.

For this stack, use the following parameter information, and ensure that you are logged in to the Account A console.

ServiceAccount:   Account ID for Fargate Service deployment (Account B)

 

Create ECR stack
Image 2: Specify CloudFormation Stack Details
Account A ECR Stack: Launch Stack

The following repository permissions are granted to Account B, allowing it to list and retrieve container images (an illustrative equivalent repository policy follows this list).

  • ecr:GetDownloadUrlForLayer
  • ecr:BatchGetImage
  • ecr:BatchCheckLayerAvailability
  • ecr:ListImages

Once this AWS CloudFormation stack completes, open the newly created repository in the console and make note of the repository URI. We use this to tag our Docker image when pushing to our Amazon ECR repository.

ECR Repository ecrprivatelink

Image 3: ECR repository creation

Next, using the AWS CLI and Docker, we will log in, build, tag, and push a sample container into our newly created repository.

Step two: Docker log in for Amazon Elastic Container Registry repositories (Docker)

First, log in to the registry within Account A. The command below logs you in to the account's Amazon Elastic Container Registry repository area. Use your account number from Account A for the registry ID.

aws --profile accounta ecr get-login --no-include-email --registry-ids 111111111111 | bash
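
If you are running AWS CLI version 2, which does not include the get-login subcommand, an equivalent login can be performed with get-login-password:

aws --profile accounta ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-east-2.amazonaws.com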

Step three: Build, tag, and push sample NGINX container

Using the Docker command line interface, build our sample container (NGINX) and push it to the repository for use in our same-region, cross-account demo. Our example container is based on nginx:mainline-alpine. Move into the docker folder within the cloned repository:

cd docker
docker build -t hello-world .

Proceed once you see the indication of a successful local container build:

Successfully built affc176e1b60
Successfully tagged hello-world:latest

Next, we tag and push this image to our repository. Here we are adding the ‘v1’ tag.

docker tag hello-world 111111111111.dkr.ecr.us-east-2.amazonaws.com/ecrprivatelink:v1
docker push 111111111111.dkr.ecr.us-east-2.amazonaws.com/ecrprivatelink:v1

You can see the newly published image version in the console by clicking on the repository.

ECR container v1 uploaded

Image 5: Container pushed to ECR
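
You can also verify the push from the command line:

aws --profile accounta ecr describe-images --repository-name ecrprivatelink --image-ids imageTag=v1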

Account B setup

The following steps are applicable to Account B. The account setup is broken out into three CloudFormation templates: networking, service, and an optional EC2 instance with SSM so that we can verify that ECR DNS resolution uses internal addressing. Ensure that you have switched to Account B before launching the stacks below.

Step one: Deploy AWS CloudFormation for networking

Account B Networking Stack: Launch Stack

This AWS CloudFormation stack defaults to the name accountb-networking-stack.

This generates the VPC architecture shown in Image 1: two public subnets, two private subnets, the AWS PrivateLink endpoints, and a public-facing Application Load Balancer that serves the application running in our private subnets. Make note of the outputs for this stack, specifically ExternalUrl. We use this URL to verify that our Fargate service is up and running.
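
If you prefer the command line, the ExternalUrl output can also be read back once the stack is complete:

aws --profile accountb cloudformation describe-stacks \
  --stack-name accountb-networking-stack \
  --query "Stacks[0].Outputs[?OutputKey=='ExternalUrl'].OutputValue" \
  --output text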

Step two: Deploy AWS CloudFormation for Fargate Service

Account B Service Stack: Launch Stack

This AWS CloudFormation stack defaults to the name accountb-service-stack, and its AccountNetworkingStackName parameter defaults to accountb-networking-stack.

Provide the container image URL from Account A as the ImageUrl parameter.

ImageUrl: 111111111111.dkr.ecr.us-east-2.amazonaws.com/ecrprivatelink:v1

Once we have deployed these two stacks, our demo service should be up and running, with two tasks in a Fargate cluster load balanced across the two private subnets. You can observe this in the ECS console.

Fargate ECS cluster in action, 1 service and 2 tasks

Image 6: ECS Fargate cluster – deployed
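
Alternatively, the cluster and service state can be checked from the CLI; the cluster and service names come from the service stack, so substitute the values returned by the list commands:

aws --profile accountb ecs list-clusters
aws --profile accountb ecs list-services --cluster <cluster-arn-from-previous-output>
aws --profile accountb ecs describe-services --cluster <cluster-arn> --services <service-arn>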

You can now navigate to the Application Load Balancer public endpoint (the ExternalUrl output from the accountb-networking-stack) to see the service running:

Fargate service deployed priv networking, public access

 

Image 7: Sample NGINX application deployed
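
The same check can be made from a terminal by substituting the ExternalUrl output value from the networking stack:

curl -s <ExternalUrl-from-accountb-networking-stack-outputs>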

Conclusion

In this post, I showed you how to add AWS PrivateLink endpoints to your VPC for ECS and ECR, including an S3 gateway for ECR layer downloads.

The tasks in your Fargate cluster residing in private subnets can communicate directly with the ECS control plane. They can download Docker images without making any connections outside of your VPC through an internet gateway or NAT gateway, and all container orchestration traffic stays inside the VPC. We also showed the ability to privately develop and deploy containers while giving the public access to the end application. Additionally, the outline above could be made entirely private by using an internal ALB and no public subnets.

If you have questions or suggestions, please comment below.

Darren Ball

Darren Ball is a Solutions Architect at AWS. His current focus is on helping greenfield customers with their cloud migration journey, as well as building solutions at the crossroads of all things cloud and on-premises, with an emphasis on DevOps and containers.