AWS Compute Blog

A Guide to Locally Testing Containers with Amazon ECS Local Endpoints and Docker Compose

This post is contributed by Wesley Pettit, Software Engineer at AWS.

As more companies adopt containers, developers need easy, powerful ways to test their containerized applications locally, before they deploy to AWS. Today, the containers team is releasing the first tool dedicated to this: Amazon ECS Local Container Endpoints. This is part of an ongoing open source project designed to improve the local development process for Amazon Elastic Container Service (ECS) and AWS Fargate.  This first step allows you to locally simulate the ECS Task Metadata V2 and V3 endpoints and IAM Roles for Tasks.

In this post, I will walk you through the following testing scenarios enabled by Amazon ECS Local Endpoints and Docker Compose:

  • Testing a container that needs credentials to interact with AWS Services
  • Testing a container that uses Task Metadata
  • Testing a multi-container app that uses the awsvpc or host network mode on Docker for Mac and Docker for Windows (in Linux mode)
  • Testing multiple containerized applications using local service discovery

Setup

Your local testing toolkit consists of Docker, Docker Compose, and awslabs/amazon-ecs-local-container-endpoints.  To follow along with the scenarios in this post, you will need to have locally installed the Docker Daemon, the Docker Command Line, and Docker Compose.

Once you have the dependencies installed, create a Docker Compose file called docker-compose.yml. The Compose file defines the settings needed to run your application. If you have never used Docker Compose before, check out Docker’s Getting Started with Compose tutorial. This example file defines a web application:

version: "2"
services:
  app:
    build:
      # Build an image from the Dockerfile in the current directory
      context: .
    ports:
      - 8080:80
    environment:
      PORT: "80"

Make sure to save your docker-compose.yml file: it will be needed for the rest of the scenarios.
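
The app service builds from a Dockerfile in the current directory. If you need a placeholder application to follow along with, here is a minimal sketch in Python; the file name app.py and the server itself are illustrative assumptions, not part of the toolkit. It simply reads the PORT environment variable that the Compose file sets:

# app.py - a hypothetical placeholder application for following along
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from the app container\n")

if __name__ == "__main__":
    # The Compose file sets PORT=80 inside the container
    port = int(os.environ.get("PORT", "80"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()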

Our First Scenario: Testing a container which needs credentials to interact with AWS Services

Say I have a container that I want to test locally, and it needs AWS credentials. I could accomplish this by providing credentials as environment variables on the container, but that would be a bad practice. Instead, I can use Amazon ECS Local Endpoints to safely vend credentials to a local container.

The following Docker Compose override file template defines a single container that will use credentials. It should be used along with the docker-compose.yml file you created in the setup section. Name this file docker-compose.override.yml (Docker Compose automatically uses both files).

Your docker-compose.override.yml file should look like this:

version: "2"
networks:
    # This special network is configured so that the local metadata
    # service can bind to the specific IP address that ECS uses
    # in production
    credentials_network:
        driver: bridge
        ipam:
            config:
                - subnet: "169.254.170.0/24"
                  gateway: 169.254.170.1
services:
    # This container vends credentials to your containers
    ecs-local-endpoints:
        # The Amazon ECS Local Container Endpoints Docker Image
        image: amazon/amazon-ecs-local-container-endpoints
        volumes:
          # Mount /var/run so we can access docker.sock and talk to Docker
          - /var/run:/var/run
          # Mount the shared configuration directory, used by the AWS CLI and AWS SDKs
          # On Windows, this directory can be found at "%UserProfile%\.aws"
          - $HOME/.aws/:/home/.aws/
        environment:
          # define the home folder; credentials will be read from $HOME/.aws
          HOME: "/home"
          # You can change which AWS CLI Profile is used
          AWS_PROFILE: "default"
        networks:
            credentials_network:
                # This special IP address is recognized by the AWS SDKs and AWS CLI 
                ipv4_address: "169.254.170.2"
                
    # Here we reference the application container that we are testing
    # You can test multiple containers at a time: simply duplicate this section,
    # customize it for each container, and give it a unique IP in 'credentials_network'.
    app:
        depends_on:
            - ecs-local-endpoints
        networks:
            credentials_network:
                ipv4_address: "169.254.170.3"
        environment:
          AWS_DEFAULT_REGION: "us-east-1"
          AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"

To test your container locally, run:

docker-compose up

Your container will now be running and will be using temporary credentials obtained from your default AWS Command Line Interface Profile.

NOTE: You should not use your production credentials locally. If you provide the ecs-local-endpoints with an AWS Profile that has access to your production account, then your application will be able to access/modify production resources from your local testing environment. We recommend creating separate development and production accounts.

How does this work?

In this example, we have created a User Defined Docker Bridge Network which allows the Local Container Endpoints to listen at the IP Address 169.254.170.2. We have also defined the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI on our application container. The AWS SDKs and AWS CLI are all designed to retrieve credentials by making HTTP requests to http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI. When containers run in production on ECS, the ECS Agent vends credentials to containers via this endpoint; this is how IAM Roles for Tasks is implemented.

Amazon ECS Local Container Endpoints vends credentials to containers the same way as the ECS Agent does in production. In this case, it vends temporary credentials obtained from your default AWS CLI Profile. It can do that because it mounts your .aws folder, which contains credentials and configuration for the AWS CLI.
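
If you want to see this flow for yourself, you can query the credentials endpoint directly from inside your application container. The following is a minimal sketch using only the Python standard library; it assumes the environment from the override file above:

# check_creds.py - fetch temporary credentials the same way the AWS SDKs do
import json
import os
import urllib.request

# The SDKs and CLI build the URL from the well-known IP plus the relative URI
relative_uri = os.environ["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]
url = "http://169.254.170.2" + relative_uri

with urllib.request.urlopen(url) as response:
    creds = json.load(response)

# Print the key ID and expiry; avoid logging the secret key or session token
print("AccessKeyId:", creds["AccessKeyId"])
print("Expiration:", creds["Expiration"])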

Gotchas: Things to Keep in Mind when using ECS Local Container Endpoints and Docker Compose

  • Make sure every container in the credentials_network has a unique IP Address. If you don’t do this, Docker Compose can incorrectly try to assign 169.254.170.2 (the ecs-local-endpoints container IP) to one of the application containers. This will cause your Compose project to fail to start.
  • On Windows, replace $HOME/.aws/ in the volumes declaration for the endpoints container with the correct location of the AWS CLI configuration directory, as explained in the documentation.
  • Notice that the application container is named ‘app’ in both of the example file templates. The container names must match between your docker-compose.yml and docker-compose.override.yml: when you run docker-compose up, the two files are merged, and the settings for each container are combined by name.

Scenario Two: Testing using Task IAM Role credentials

The endpoints container image can also vend credentials from an IAM Role; this allows you to test your application locally using a Task IAM Role.

NOTE: You should not use your production Task IAM Role locally. Instead, create a separate testing role, with equivalent permissions scoped to testing resources. Modifying the trust boundary of a production role will expand its scope.

In order to use a Task IAM Role locally, you must modify its trust policy. First, get the ARN of the IAM user defined by your default AWS CLI Profile (replace default with a different Profile name if needed):

aws --profile default sts get-caller-identity

Then modify your Task IAM Role so that its trust policy includes the following statement. You can find instructions for modifying IAM Roles in the IAM Documentation.

    {
      "Effect": "Allow",
      "Principal": {
        "AWS": <ARN of the user found with get-caller-identity>
      },
      "Action": "sts:AssumeRole"
    }
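
If you prefer to script this change, here is a sketch using boto3. The role name and user ARN are hypothetical placeholders. Note that update_assume_role_policy replaces the role's entire trust policy, so only use an approach like this on a dedicated testing role whose full trust policy you control:

# update_trust.py - set a local-testing trust policy on a testing Task IAM Role
# The role name and user ARN below are hypothetical placeholders.
import json
import boto3

ROLE_NAME = "ecs-local-testing-role"
USER_ARN = "arn:aws:iam::123456789012:user/developer"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": USER_ARN},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam = boto3.client("iam")
# Replaces the entire trust policy on the role
iam.update_assume_role_policy(
    RoleName=ROLE_NAME,
    PolicyDocument=json.dumps(trust_policy),
)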

To use your Task IAM Role in your Docker Compose file for local testing, simply change the value of the AWS container credentials relative URI environment variable on your application container:

AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/role/<name of your role>"

For example, if your role is named ecs_task_role, then the environment variable should be set to "/role/ecs_task_role". That is all that is required; the ecs-local-endpoints container will now vend credentials obtained from assuming the task role. You can use this to validate that the permissions set on your Task IAM Role are sufficient to run your application.
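
One quick way to confirm that the role credentials are being picked up is to call STS from inside your application container. This sketch assumes boto3 is installed in the container; boto3 discovers the vended credentials automatically through AWS_CONTAINER_CREDENTIALS_RELATIVE_URI:

# whoami.py - check which identity this container's credentials resolve to
import boto3

# boto3 reads AWS_CONTAINER_CREDENTIALS_RELATIVE_URI on its own, so this
# call is signed with the credentials vended by ecs-local-endpoints
identity = boto3.client("sts").get_caller_identity()

# With a Task IAM Role configured, the Arn should reference an assumed role,
# for example: arn:aws:sts::<account>:assumed-role/ecs_task_role/...
print(identity["Arn"])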

Scenario Three: Testing a Container that uses Task Metadata endpoints

The Task Metadata endpoints are useful; they allow a container running on ECS to obtain information about itself at runtime. This enables many use cases; my favorite is that it allows you to obtain container resource usage metrics, as shown by this project.

With Amazon ECS Local Container Endpoints, you can locally test applications that use the Task Metadata V2 or V3 endpoints. If you want to use the V2 endpoint, the Docker Compose template shown at the beginning of this post is sufficient. If you want to use V3, simply add another environment variable to each of your application containers:

ECS_CONTAINER_METADATA_URI: "http://169.254.170.2/v3"

This is the environment variable defined by the V3 metadata spec.
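
As a quick example, a container can inspect its own task's metadata at startup. This sketch uses only the Python standard library; the Family and Containers fields come from the V3 task metadata response:

# metadata.py - read this container's Task Metadata (V3)
import json
import os
import urllib.request

base = os.environ["ECS_CONTAINER_METADATA_URI"]

# The base URI returns metadata for this container;
# appending /task returns metadata for the whole task
with urllib.request.urlopen(base + "/task") as response:
    task = json.load(response)

print("Task family:", task.get("Family"))
print("Containers:", [c.get("Name") for c in task.get("Containers", [])])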

Scenario Four: Testing an Application that uses the AWSVPC network mode

Thus far, all of our examples have involved testing containers in a bridge network. But what if you have an application that uses the awsvpc network mode? Can you test these applications locally?

Your local development machine will not have Elastic Network Interfaces. If your ECS Task consists of a single container, then the bridge network used in previous examples will suffice. However, if your application consists of multiple containers that need to communicate, then awsvpc differs significantly from bridge. As noted in the AWS Documentation:

“containers that belong to the same task can communicate over the localhost interface.”

This is one of the benefits of awsvpc; it makes inter-container communication easy. To simulate this locally, a different approach is needed.

If your local development machine is running Linux, then you are in luck. You can test your containers using the host network mode, which will allow them to all communicate over localhost. Instructions for setting up iptables rules that allow your containers to receive credentials and metadata are documented in the ECS Local Container Endpoints Project README.

If you are like me and do most of your development on Windows or Mac machines, then this option will not work; Docker only supports host mode on Linux. Luckily, this section describes a workaround that allows you to locally simulate awsvpc on Docker for Mac or Docker for Windows. This also partly serves as a simulation of the host network mode, in the sense that all of your containers will be able to communicate over localhost (from a local testing standpoint, host and awsvpc are functionally the same; the key requirement is that all containers share a single network interface).

In ECS, awsvpc is implemented by first launching a single container, which we call the ‘pause container’. This container is attached to the Elastic Network Interface, and then all of the containers in your task are launched into the pause container’s network namespace. For the local simulation of awsvpc, a similar approach will be used.

First, create a Dockerfile with the following contents for the ‘local’ pause container.

FROM amazonlinux:latest
RUN yum install -y iptables

CMD iptables -t nat -A PREROUTING -p tcp -d 169.254.170.2 --dport 80 -j DNAT --to-destination 127.0.0.1:51679 \
 && iptables -t nat -A OUTPUT -d 169.254.170.2 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679 \
 && iptables-save \
 && /bin/bash -c 'while true; do sleep 30; done;'

This Dockerfile defines a container image that sets some iptables rules and then sleeps forever. The routing rules allow requests to the credentials and metadata service to be forwarded from 169.254.170.2:80 to localhost:51679, the port on which ECS Local Container Endpoints listens in this setup.

Build the image:

docker build -t local-pause:latest .

Now, edit your docker-compose.override.yml file so that it looks like the following:

version: "2"
services:
    ecs-local-endpoints:
        image: amazon/amazon-ecs-local-container-endpoints
        volumes:
          - /var/run:/var/run
          - $HOME/.aws/:/home/.aws/
        environment:
          ECS_LOCAL_METADATA_PORT: "51679"
          HOME: "/home"
        network_mode: container:local-pause

    app:
        depends_on:
            - ecs-local-endpoints
        network_mode: container:local-pause
        environment:
          ECS_CONTAINER_METADATA_URI: "http://169.254.170.2/v3/containers/app"
          AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"

Several important things to note:

  • ECS_LOCAL_METADATA_PORT is set to 51679; this is the port that was used in the iptables rules.
  • network_mode is set to container:local-pause for all the containers, which means that they will use the networking stack of a container named local-pause.
  • ECS_CONTAINER_METADATA_URI is set to http://169.254.170.2/v3/containers/app. This is important. In bridge mode, the local endpoints container can determine which container a request came from using the IP Address in the request. In simulated awsvpc, this will not work, since all of the containers share the same IP Address. Thus, the endpoints container supports using the container name in the request URI so that it can identify which container the request came from. In this case, the container is named app, so app is appended to the value of the environment variable. If you copy the app container configuration to add more containers to this compose file, make sure you update the value of ECS_CONTAINER_METADATA_URI for each new container.
  • Remove any port declarations from your docker-compose.yml file. These are not valid with the network_mode settings that you will be using. The text below explains how to expose ports in this simulated awsvpc network mode.

Before you run the compose file, you must launch the local-pause container. This container cannot be defined in the Docker Compose file, because in Compose there is no way to specify that one container must be running before all the others start. You might think that the depends_on setting would work, but that setting only determines the order in which containers are started; it is not a robust solution for this case.

One key thing to note: any ports used by your application containers must be defined on the local-pause container. You cannot define ports directly on your application containers because their network mode is set to container:local-pause. This is a limitation imposed by Docker.

Assuming that your application containers need to expose ports 8080 and 3306 (replace these with the actual ports used by your applications), run the local pause container with this command:

docker run -d -p 8080:8080 -p 3306:3306 --name local-pause --cap-add=NET_ADMIN local-pause

Then, simply bring up your Compose project, and you will have containers that share a single network interface and have access to credentials and metadata!
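
To convince yourself that the containers really do share a network interface, you can have one container reach another over localhost. This sketch assumes a peer container is listening on port 3306, one of the ports published on the local-pause container above:

# localhost_check.py - verify inter-container communication over localhost
import socket

# In the simulated awsvpc mode, every container shares the local-pause
# container's network stack, so peers are reachable at 127.0.0.1
with socket.create_connection(("127.0.0.1", 3306), timeout=5):
    print("Connected to peer container on localhost:3306")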

Scenario Five: Testing multiple applications with local Service Discovery

Thus far, all of the examples have focused on running a single containerized application locally. But what if you want to test multiple applications which run as separate Tasks in production?

Docker Compose allows you to set up DNS aliases for your containers. This allows them to talk to each other using a hostname.

For this example, return to the compose override file with a bridge network shown in scenarios one through three. Here is a docker-compose.override.yml file which implements a simple scenario. There are two applications, frontend and backend. Frontend needs to make requests to backend.

version: "2"
networks:
    credentials_network:
        driver: bridge
        ipam:
            config:
                - subnet: "169.254.170.0/24"
                  gateway: 169.254.170.1
services:
    # This container vends credentials to your containers
    ecs-local-endpoints:
        # The Amazon ECS Local Container Endpoints Docker Image
        image: amazon/amazon-ecs-local-container-endpoints
        volumes:
          - /var/run:/var/run
          - $HOME/.aws/:/home/.aws/
        environment:
          HOME: "/home"
          AWS_PROFILE: "default"
        networks:
            credentials_network:
                ipv4_address: "169.254.170.2"
                aliases:
                    - endpoints
    # Here are the settings for the containers that you are testing
    frontend:
        image: amazonlinux:latest
        command: /bin/bash -c 'while true; do sleep 30; done;'
        depends_on:
            - ecs-local-endpoints
        networks:
            credentials_network:
                ipv4_address: "169.254.170.3"
        environment:
          AWS_DEFAULT_REGION: "us-east-1"
          AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
    backend:
        image: nginx
        networks:
            credentials_network:
                # define an alias for service discovery
                aliases:
                    - backend
                ipv4_address: "169.254.170.4"

With these settings, the frontend container can find the backend container by making requests to http://backend.
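
For example, you could verify discovery from inside the frontend container with a short script like this sketch; the hostname backend resolves through the alias defined on the credentials_network:

# discover.py - run inside the frontend container to reach the backend
import urllib.request

# "backend" resolves via Docker's embedded DNS using the network alias
with urllib.request.urlopen("http://backend") as response:
    print(response.status, response.reason)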

Conclusion

In this tutorial, you have seen how to use Docker Compose and awslabs/amazon-ecs-local-container-endpoints to test your Amazon ECS and AWS Fargate applications locally before you deploy.

You have learned how to:

  • Construct docker-compose.yml and docker-compose.override.yml files.
  • Test a container locally with temporary credentials from a local AWS CLI Profile.
  • Test a container locally using credentials from an ECS Task IAM Role.
  • Test a container locally that uses the Task Metadata Endpoints.
  • Locally simulate the awsvpc network mode.
  • Use Docker Compose service discovery to locally test multiple dependent applications.

To follow along with new developments to the local development project, you can head to the public AWS Containers Roadmap on GitHub. If you have questions, comments, or feedback, you can let the team know there!