A Guide to Locally Testing Containers with Amazon ECS Local Endpoints and Docker Compose
This post is contributed by Wesley Pettit, Software Engineer at AWS.
As more companies adopt containers, developers need easy, powerful ways to test their containerized applications locally, before they deploy to AWS. Today, the containers team is releasing the first tool dedicated to this: Amazon ECS Local Container Endpoints. This is part of an ongoing open source project designed to improve the local development process for Amazon Elastic Container Service (ECS) and AWS Fargate. This first step allows you to locally simulate the ECS Task Metadata V2 and V3 endpoints and IAM Roles for Tasks.
In this post, I will walk you through the following testing scenarios enabled by Amazon ECS Local Endpoints and Docker Compose:
- Testing a container that needs credentials to interact with AWS Services
- Testing a container which uses Task Metadata
- Testing a multi-container app which uses the awsvpc or host network mode on Docker for Mac and Docker for Windows (in Linux containers mode)
- Testing multiple containerized applications using local service discovery
Your local testing toolkit consists of Docker, Docker Compose, and awslabs/amazon-ecs-local-container-endpoints. To follow along with the scenarios in this post, you will need to have the Docker daemon, the Docker command line, and Docker Compose installed locally.
Once you have the dependencies installed, create a Docker Compose file called `docker-compose.yml`. The Compose file defines the settings needed to run your application. If you have never used Docker Compose before, check out Docker's Getting Started with Compose tutorial. This example file defines a web application:
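A minimal sketch of such a file (the image and ports here are placeholders; substitute your own application's image and port mappings):

```yaml
version: "2"
services:
  # "app" is the container name the rest of this post assumes
  app:
    image: nginx  # placeholder; replace with your web application's image
    ports:
      - "8080:80"
```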
Make sure to save your `docker-compose.yml` file: it will be needed for the rest of the scenarios.
Our First Scenario: Testing a container which needs credentials to interact with AWS Services
Say I want to locally test a container that needs AWS credentials. I could accomplish this by providing credentials as environment variables on the container, but that would be a bad practice. Instead, I can use Amazon ECS Local Endpoints to safely vend credentials to a local container.
The following Docker Compose override file template defines a single container that will use credentials. It should be used along with the `docker-compose.yml` file you created in the setup section. Name this file `docker-compose.override.yml` (Docker Compose will know to automatically use both of the files). The `docker-compose.override.yml` file should look like this:
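A sketch of the override file, modeled on the project's README (the `amazon/amazon-ecs-local-container-endpoints` image name and the `/creds` path come from that project; adjust the region and profile for your setup):

```yaml
version: "2"
networks:
  # This special network lets the endpoints container bind the IP address
  # that the AWS SDKs and AWS CLI expect in production
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
          gateway: 169.254.170.1
services:
  # This container vends credentials and metadata to your containers
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      # Mount /var/run so the endpoints container can talk to the Docker daemon
      - /var/run:/var/run
      # Mount your AWS CLI configuration and credentials
      - $HOME/.aws/:/home/.aws/
    environment:
      HOME: "/home"          # credentials are read from $HOME/.aws
      AWS_PROFILE: "default" # change this to use a different AWS CLI profile
    networks:
      credentials_network:
        # The IP address the AWS SDKs and AWS CLI query for credentials
        ipv4_address: "169.254.170.2"
  app:
    depends_on:
      - ecs-local-endpoints
    networks:
      credentials_network:
        ipv4_address: "169.254.170.3"
    environment:
      AWS_DEFAULT_REGION: "us-east-1"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
```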
To test your container locally, run:
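With both files saved in the same directory, the standard Compose command starts everything:

```shell
docker-compose up
```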
Your container will now be running and will be using temporary credentials obtained from your default AWS Command Line Interface Profile.
NOTE: You should not use your production credentials locally. If you provide the ecs-local-endpoints with an AWS Profile that has access to your production account, then your application will be able to access/modify production resources from your local testing environment. We recommend creating separate development and production accounts.
How does this work?
In this example, we have created a user-defined Docker bridge network, which allows the Local Container Endpoints to listen at the IP address `169.254.170.2`. We have also defined the environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` on our application container. The AWS SDKs and AWS CLI are all designed to retrieve credentials by making HTTP requests to `169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`. When containers run in production on ECS, the ECS Agent vends credentials to containers via this endpoint; this is how IAM Roles for Tasks is implemented.

Amazon ECS Local Container Endpoints vends credentials to containers the same way the ECS Agent does in production. In this case, it vends temporary credentials obtained from your default AWS CLI profile. It can do that because it mounts your `.aws` folder, which contains credentials and configuration for the AWS CLI.
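The lookup the SDKs perform can be sketched in a few lines (a hypothetical helper for illustration; `/creds` is the path the local endpoints container uses for profile credentials, not part of the SDK contract):

```python
# Sketch: how an AWS SDK builds the container credentials URL.
# In a real SDK, the relative URI is read from os.environ.
CREDENTIALS_HOST = "http://169.254.170.2"

def credentials_url(env):
    """Append the relative URI from the environment to the fixed link-local host."""
    relative_uri = env.get("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI", "")
    return CREDENTIALS_HOST + relative_uri

print(credentials_url({"AWS_CONTAINER_CREDENTIALS_RELATIVE_URI": "/creds"}))
# → http://169.254.170.2/creds
```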
Gotchas: Things to Keep in Mind when using ECS Local Container Endpoints and Docker Compose
- Make sure every container in the `credentials_network` has a unique IP address. If you don't do this, Docker Compose can incorrectly try to assign `169.254.170.2` (the `ecs-local-endpoints` container's IP) to one of the application containers. This will cause your Compose project to fail to start.
- On Windows, replace `$HOME/.aws/` in the volumes declaration for the endpoints container with the correct location of the AWS CLI configuration directory, as explained in the documentation.
- Notice that the application container is named `app` in both of the example file templates. You must make sure the container names match between your `docker-compose.yml` and `docker-compose.override.yml`. When you run `docker-compose up`, the files are merged. The settings in each file for each container will be merged, so it's important to use consistent container names between the two files.
Scenario Two: Testing using Task IAM Role credentials
The endpoints container image can also vend credentials from an IAM Role; this allows you to test your application locally using a Task IAM Role.
NOTE: You should not use your production Task IAM Role locally. Instead, create a separate testing role, with equivalent permissions scoped to testing resources. Modifying the trust boundary of a production role will expand its scope.
In order to use a Task IAM Role locally, you must modify its trust policy. First, get the ARN of the IAM user defined by your `default` AWS CLI profile (replace `default` with a different profile name if needed):
aws --profile default sts get-caller-identity
Then modify your Task IAM Role so that its trust policy includes the following statement. You can find instructions for modifying IAM Roles in the IAM Documentation.
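The statement to add should look roughly like this (the `Principal` ARN placeholder stands for the output of the `sts get-caller-identity` call above):

```json
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "<ARN of your IAM user>"
  },
  "Action": "sts:AssumeRole"
}
```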
To use your Task IAM Role in your Docker Compose file for local testing, simply change the value of the `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` environment variable on your application container:
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/role/<name of your role>"
For example, if your role is named `ecs_task_role`, then the environment variable should be set to `"/role/ecs_task_role"`. That is all that is required; the `ecs-local-endpoints` container will now vend credentials obtained from assuming the task role. You can use this to validate that the permissions set on your Task IAM Role are sufficient to run your application.
Scenario Three: Testing a Container that uses Task Metadata endpoints
The Task Metadata endpoints are useful; they allow a container running on ECS to obtain information about itself at runtime. This enables many use cases; my favorite is that it allows you to obtain container resource usage metrics, as shown by this project.
With Amazon ECS Local Container Endpoints, you can locally test applications that use the Task Metadata V2 or V3 endpoints. If you want to use the V2 endpoint, the Docker Compose template shown at the beginning of this post is sufficient. If you want to use V3, simply add another environment variable to each of your application containers:
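For example, in the override file's `app` service (the `/v3` path is what the local endpoints container serves in bridge mode):

```yaml
services:
  app:
    environment:
      ECS_CONTAINER_METADATA_URI: "http://169.254.170.2/v3"
```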
This is the environment variable defined by the V3 metadata spec.
Scenario Four: Testing an Application that uses the AWSVPC network mode
Thus far, all of our examples have involved testing containers in a `bridge` network. But what if you have an application that uses the `awsvpc` network mode? Can you test it locally?
Your local development machine will not have Elastic Network Interfaces. If your ECS Task consists of a single container, then the bridge network used in previous examples will suffice. However, if your application consists of multiple containers that need to communicate, then awsvpc differs significantly from bridge. As noted in the AWS Documentation:
“containers that belong to the same task can communicate over the localhost interface.”
This is one of the benefits of awsvpc; it makes inter-container communication easy. To simulate this locally, a different approach is needed.
If your local development machine is running Linux, then you are in luck. You can test your containers using the host network mode, which will allow them to all communicate over localhost. Instructions for setting up iptables rules to allow your containers to receive credentials and metadata are documented in the ECS Local Container Endpoints project README.
If you are like me, and do most of your development on Windows or Mac machines, then this option will not work; Docker only supports host mode on Linux. Luckily, this section describes a workaround that will allow you to locally simulate awsvpc on Docker for Mac or Docker for Windows. It also partly simulates the host network mode, in the sense that all of your containers will be able to communicate over localhost. (From a local testing standpoint, host and awsvpc are functionally the same; the key requirement is that all containers share a single network interface.)
On ECS, awsvpc is implemented by first launching a single container, which we call the 'pause container'. This container is attached to the Elastic Network Interface, and then all of the containers in your task are launched into the pause container's network namespace. For the local simulation of awsvpc, a similar approach will be used.
First, create a Dockerfile with the following contents for the ‘local’ pause container.
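A sketch consistent with the project README (the DNAT target and port 51679 match the behavior described in this section; treat this as a starting point rather than the canonical file):

```dockerfile
FROM amazonlinux:latest
RUN yum install -y iptables
# Redirect traffic destined for the credentials/metadata IP to the port
# the local endpoints container will listen on, then sleep forever so the
# container (and its network namespace) stays alive
CMD iptables -t nat -A PREROUTING -p tcp -d 169.254.170.2 --dport 80 -j DNAT --to-destination 127.0.0.1:51679 \
 && iptables -t nat -A OUTPUT -d 169.254.170.2 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679 \
 && /bin/bash -c 'while true; do sleep 30; done'
```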
This Dockerfile defines a container image which sets some iptables rules and then sleeps forever. The routing rules will allow requests to the credentials and metadata service to be forwarded from 169.254.170.2:80 to localhost:51679, which is the port ECS Local Container Endpoints will listen at in this setup.
Build the image:
docker build -t local-pause:latest .
Now, edit your `docker-compose.override.yml` file so that it looks like the following:
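A sketch, assuming the `app` container from the earlier examples and the `local-pause` container built in the previous step:

```yaml
version: "2"
services:
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      - /var/run:/var/run
      - $HOME/.aws/:/home/.aws/
    environment:
      HOME: "/home"
      AWS_PROFILE: "default"
      # Listen on the port targeted by the pause container's iptables rules
      ECS_LOCAL_METADATA_PORT: "51679"
    # Share the network namespace of the already-running local-pause container
    network_mode: container:local-pause
  app:
    depends_on:
      - ecs-local-endpoints
    network_mode: container:local-pause
    environment:
      ECS_CONTAINER_METADATA_URI: "http://169.254.170.2/v3/containers/app"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
```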
Several important things to note:
- `ECS_LOCAL_METADATA_PORT` is set to 51679; this is the port that was used in the iptables rules.
- `network_mode` is set to `container:local-pause` for all of the containers, which means that they will use the networking stack of the container named `local-pause`.
- `ECS_CONTAINER_METADATA_URI` is set to `http://169.254.170.2/v3/containers/app`. This is important. In bridge mode, the local endpoints container can determine which container a request came from using the IP address in the request. In simulated `awsvpc`, this will not work, since all of the containers share the same IP address. Thus, the endpoints container supports using the container name in the request URI so that it can identify which container the request came from. In this case, the container is named `app`, so `app` is appended to the value of the environment variable. If you copy the `app` container configuration to add more containers to this Compose file, make sure you update the value of `ECS_CONTAINER_METADATA_URI` for each new container.
- Remove any port declarations from your `docker-compose.yml` file. These are not valid with the `network_mode` settings that you will be using. The text below explains how to expose ports in this simulated awsvpc network mode.
Before you run the Compose files, you must launch the `local-pause` container. This container cannot be defined in the Docker Compose file, because in Compose there is no way to specify that one container must be running before all the others. You might think that the `depends_on` setting would work, but that setting only determines the order in which containers are started; it is not a robust solution for this case.
One key thing to note: any ports used by your application containers must be defined on the `local-pause` container. You cannot define ports directly on your application containers because their network mode is set to `container:local-pause`. This is a limitation imposed by Docker.
Assuming that your application containers need to expose ports 8080 and 3306 (replace these with the actual ports used by your applications), run the local pause container with this command:
docker run -d -p 8080:8080 -p 3306:3306 --name local-pause --cap-add=NET_ADMIN local-pause
Then, simply run the Docker Compose files, and you will have containers that share a single network interface and have access to credentials and metadata!
Scenario Five: Testing multiple applications with local Service Discovery
Thus far, all of the examples have focused on running a single containerized application locally. But what if you want to test multiple applications which run as separate Tasks in production?
Docker Compose allows you to set up DNS aliases for your containers. This allows them to talk to each other using a hostname.
For this example, return to the Compose override file with a bridge network shown in scenarios one through three. Here is a `docker-compose.override.yml` file which implements a simple scenario: there are two applications, frontend and backend, and frontend needs to make requests to backend.
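A sketch (the frontend and backend image names are placeholders; with the alias below, frontend can reach backend at `http://backend`):

```yaml
version: "2"
networks:
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
          gateway: 169.254.170.1
services:
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      - /var/run:/var/run
      - $HOME/.aws/:/home/.aws/
    environment:
      HOME: "/home"
      AWS_PROFILE: "default"
    networks:
      credentials_network:
        ipv4_address: "169.254.170.2"
  frontend:
    image: my-frontend:latest  # placeholder
    networks:
      credentials_network:
        ipv4_address: "169.254.170.3"
    environment:
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
  backend:
    image: my-backend:latest  # placeholder
    networks:
      credentials_network:
        ipv4_address: "169.254.170.4"
        # DNS alias: other containers on this network can resolve "backend"
        aliases:
          - backend
    environment:
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
```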
In this tutorial, you have seen how to use Docker Compose and awslabs/amazon-ecs-local-container-endpoints to test your Amazon ECS and AWS Fargate applications locally before you deploy.
You have learned how to:
- Construct docker-compose.yml and docker-compose.override.yml files.
- Test a container locally with temporary credentials from a local AWS CLI Profile.
- Test a container locally using credentials from an ECS Task IAM Role.
- Test a container locally that uses the Task Metadata Endpoints.
- Locally simulate the awsvpc network mode.
- Use Docker Compose service discovery to locally test multiple dependent applications.
To follow along with new developments to the local development project, you can head to the public AWS Containers Roadmap on GitHub. If you have questions, comments, or feedback, you can let the team know there!