Migrating from Docker Swarm to Amazon ECS with Docker Compose

Introduction

By leveraging Docker Compose for Amazon Elastic Container Service (Amazon ECS), applications defined in a Compose file can be deployed onto Amazon ECS. Compose is an open specification, and one of its goals is to be infrastructure and cloud service agnostic, allowing developers to define an application once for development and then use that same workload definition all the way through to production. Compose has been around for many years, and many organizations already leverage it to deploy workloads onto local machines and Docker Swarm clusters.

In this blog post, we show Compose’s flexibility by using it as a migration tool from Docker Swarm to Amazon ECS. To do this, we will first deploy an application onto Docker Swarm using an existing Compose file. Then, by leveraging Docker Compose for Amazon ECS, we will take the same Compose file, change the Docker CLI deployment context, and deploy the same workload to Amazon ECS.

Background

Optional reading

As the number of containers deployed by an organization grows, the complexity of managing these workloads also increases, often with manual or home-grown tools being built to track containerized workloads and to deploy containers onto remote machines. Container orchestration tools became the next logical step to manage this complexity, with scheduling, service discovery, and high availability built in. Kubernetes, Docker Swarm, and Amazon ECS have emerged as leading orchestrators to handle the scheduling of containers across a fleet of servers.

Through their design, each container orchestrator is opinionated. They each have opinions on what they schedule (containers vs. tasks vs. pods), how those workloads should communicate (overlay networks, network address translation, and microsegmentation), and how the workload is defined (Helm charts, manifest files, task definitions, stack files, etc.). These differences add a layer of opinion on top of the container image standard, as a workload definition needs to be written separately for each orchestrator.

In April 2020, Docker announced the Compose Specification, a specification broken out of the popular Docker Compose tool. “The Compose Specification is a developer-focused standard for defining cloud and platform agnostic container-based applications”[1]. With one of the specification’s goals being to abstract away the container platform, an application defined in Compose could achieve the portability between container orchestrators that is often associated with container images.

Also, in mid-2020, Docker and AWS co-announced that applications defined in the Compose Specification could be deployed to Amazon ECS through Docker Compose for Amazon ECS. In previous AWS blogs, we explored Docker Compose for Amazon ECS, and then automated its deployment via a pipeline. In this blog post, we will look at Docker Compose for Amazon ECS again, but this time in the context of migrations and portability, specifically from Docker Swarm to Amazon ECS.

Workloads for the container orchestrator Docker Swarm are commonly defined in a Compose file, specifically the v3 file format of Docker Compose. A v3 Docker Compose file is deployed to a Docker Swarm cluster through the docker stack deploy command. This command creates a Docker Swarm stack, logically grouping together all of the services defined within that Compose file, and deploys them onto a Docker Swarm cluster.

The recently announced Compose Specification merges the schemas of the Compose v2 and Compose v3 file formats into a single Compose schema. Workloads defined in Compose v3 for Docker Swarm should be conformant to the Compose Specification, and can therefore be deployed to Amazon ECS through the Docker Compose CLI with minimal to no modifications to the workload definition file.

Deploying the voting app on Docker Swarm

As the source of our migration is Docker Swarm, the first step is to deploy a sample application onto a Docker Swarm cluster. The workload that will be migrated during this walkthrough from Docker Swarm to Amazon ECS is the popular voting app from Docker.

If you intend to migrate workloads from Docker Swarm to Amazon ECS, it is likely that you are running a production Docker Swarm environment deployed across a number of EC2 instances (or other infrastructure). However, for the purposes of this blog, we will deploy a single-node Docker Swarm cluster to mimic a production endpoint.

To create a single-node Docker Swarm cluster, the only prerequisite is access to a single Docker Engine. Docker Swarm is embedded within the Docker Engine, so a macOS or Windows 10 workstation running Docker Desktop, or a Linux machine with the Docker Engine, is all that is required for a fully functioning single-node Docker Swarm cluster.

1) Create a Docker Swarm cluster.

# Initiate a Docker Swarm Cluster
$ docker swarm init
Swarm initialized: current node (p6jrlxky3iksu81njsjj69xed) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5qjrn0chobljke6ml1o9qri8cj1q5qcq3bydtlwndm2hdqxff5-61z08dqaye1axgwbvu40hngxv 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# Verify that the Swarm has been created correctly
$ docker node list
ID                            HOSTNAME         STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
p6jrlxky3iksu81njsjj69xed *   docker-desktop   Ready     Active         Leader           20.10.6

2) Clone the sample application.

To deploy the sample application onto the Docker Swarm cluster, we can use a Compose v3 file. One has already been created in the upstream repository, and is stored alongside the source code for the sample application. The upstream repository is also used for a wide range of demonstrations within the Docker ecosystem; therefore, to avoid confusion, we are going to remove all but one of the Compose files currently within that repository.

# Clone the repository
$ git clone https://github.com/dockersamples/example-voting-app
Cloning into 'example-voting-app'...

# Navigate into the Repository
$ cd example-voting-app

# Remove all existing workload definitions 
$ rm ./docker-compose*
$ rm ./*/docker-compose*
$ rm ./docker-stack-windows*
$ rm ./docker-stack.yml

# Ensure there is only 1 Compose file in the root of the directory
$ find . -name 'docker-compose*' -o -name 'docker-stack*'
./docker-stack-simple.yml

# Rename the remaining Compose file
$ mv docker-stack-simple.yml docker-compose.yml

3) Build the container images.

Before we deploy the workload, we need to build the container images for each microservice in the voting app and push the resulting images to Amazon Elastic Container Registry (Amazon ECR).

The following commands assume that the AWS CLI v2 has been installed and configured on the local system, and that an IAM user with permissions to create ECR repositories has been configured. If these prerequisites have not been met, the AWS CLI v2 can be installed following this documentation and configured through this guide.
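
Before continuing, it can be useful to confirm which identity the AWS CLI is using:

# Verify the configured AWS credentials
$ aws sts get-caller-identity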

# Create a new ECR repository
$ export AWS_REGION="eu-west-1"
$ export ECR_URI=$(aws ecr create-repository \
  --repository-name voting-app \
  --region $AWS_REGION \
  --query 'repository.repositoryUri' \
  --output text)

# Login to ECR
$ aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_URI

# Build and Push Images
for SERVICE in vote result worker;
do
  docker image build -t $ECR_URI:$SERVICE $SERVICE/
  docker image push $ECR_URI:$SERVICE
done

# Verify all of the images are now successfully pushed to the ECR repository
$ aws ecr list-images --repository-name voting-app | jq '.imageIds | .[].imageTag'
"vote"
"worker"
"result"

The existing Compose file needs to be updated to point to the new images that have just been pushed to Amazon ECR. Docker Compose supports the use of environment variables, therefore we will replace all references to the previous container images with an environment variable referencing our ECR repository.

# For Linux Users
$ sed -i 's#dockersamples/examplevotingapp_worker#${ECR_URI}:worker#g' docker-compose.yml
$ sed -i 's#dockersamples/examplevotingapp_result:before#${ECR_URI}:result#g' docker-compose.yml
$ sed -i 's#dockersamples/examplevotingapp_vote:before#${ECR_URI}:vote#g' docker-compose.yml

# For macOS Users
$ sed -i "" 's#dockersamples/examplevotingapp_worker#${ECR_URI}:worker#g' docker-compose.yml
$ sed -i "" 's#dockersamples/examplevotingapp_result:before#${ECR_URI}:result#g' docker-compose.yml
$ sed -i "" 's#dockersamples/examplevotingapp_vote:before#${ECR_URI}:vote#g' docker-compose.yml

We can now deploy the voting app to our local Docker Swarm cluster.

$ docker stack deploy \
  --with-registry-auth \
  --compose-file docker-compose.yml \
  voting-app
  
Creating network voting-app_backend
Creating network voting-app_frontend
Creating service voting-app_db
Creating service voting-app_vote
Creating service voting-app_result
Creating service voting-app_worker
Creating service voting-app_redis

After a few minutes, the Swarm services should have successfully started on your local machine.

$ docker service list
ID             NAME                MODE         REPLICAS   IMAGE                                                            PORTS
fiexmedo7vms   voting-app_db       replicated   1/1        postgres:9.4
kamhy0edlulo   voting-app_redis    replicated   1/1        redis:alpine                                                     *:30000->6379/tcp
rf2fskw2apvs   voting-app_result   replicated   1/1        111122223333.dkr.ecr.eu-west-1.amazonaws.com/voting-app:result   *:5001->80/tcp
uopmoan7vvd8   voting-app_vote     replicated   1/1        111122223333.dkr.ecr.eu-west-1.amazonaws.com/voting-app:vote     *:5000->80/tcp
klis3kck8pi0   voting-app_worker   replicated   1/1        111122223333.dkr.ecr.eu-west-1.amazonaws.com/voting-app:worker

Opening up a web browser to http://localhost:5000 and http://localhost:5001, we should now see the voting and results pages associated with the web application.
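
The same check can also be performed from the terminal, assuming curl is installed:

# Expect an HTTP 200 response from both the vote and result services
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000
$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5001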

We have now proven the sample application runs successfully on Docker Swarm, and have a workload definition file that can be used as the source of our migration to Amazon ECS.

Mapping Docker Swarm concepts to Amazon ECS

Leveraging security groups for workload segmentation

Docker Swarm implements network segmentation via overlay networks, with all communication on an overlay allowed. To segment a particular workload, e.g. a database, the workload should be placed on its own overlay network, with only the microservices that require a connection to that workload attached to the same overlay. In the voting app’s Compose file, these overlay networks have been defined and labelled “frontend” and “backend”.

$ cat docker-compose.yml
<snippet>
networks:
  frontend:
  backend:

In AWS, security groups are used instead to isolate workloads. Docker Compose for Amazon ECS can provision security groups on our behalf and attach them to the ECS tasks. A security group is mapped to the concept of a network in the Compose Specification, so the Compose syntax is the same as it is for a Docker Swarm service. When the deployment target changes from Docker Swarm to Amazon ECS, the Compose CLI creates a security group and attaches the tasks associated with the “vote” and “redis” services to that security group. An example of this can be seen here:

services:
  vote:
    image: ${ECR_URI}:vote
    networks:
      - frontend

  redis:
    image: redis:alpine
    networks:
      - frontend

networks:
  frontend:

Exposing the voting app on Amazon ECS

A second networking component exposed through the Compose schema is how workloads are made available to end users. In the Compose file, a published and a target port are defined as part of a service definition, with the published port referring to an externally accessible port and the target port referring to the port within the container where the application is running.

# Port 5000 is the published port, port 80 is the target port.
services:
  vote:
    image: ${ECR_URI}:vote
    ports:
      - 5000:80

Within Docker Swarm, when a workload is published, the published port is exposed on every node in the cluster. An internal overlay network, called the ingress network, routes traffic from the node that received the request to the node the container is running on. This is often referred to as Swarm’s routing mesh.
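
As a sketch of this behavior, assuming a hypothetical multi-node cluster with nodes at 10.0.0.11 and 10.0.0.12, a request to the published port on either node would reach the vote service, even if its task is only scheduled on one of them:

# Both nodes answer on the published port via the routing mesh
$ curl http://10.0.0.11:5000
$ curl http://10.0.0.12:5000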

Amazon ECS, on the other hand, takes a more traditional approach to exposing services. As each ECS task has a network interface attached to the AWS VPC, it is common to front a group of tasks with an Elastic Load Balancer. Docker Compose for Amazon ECS abstracts the creation of load balancers, listeners, and target groups away from the end user. If ports are defined within the service definition, the Compose CLI will create an Elastic Load Balancer, create a listener for the published port, and map that listener to a target group for the target port.

At the time of writing, Docker Compose for Amazon ECS requires that the published port of a Compose service match the target port. Therefore, before we deploy the voting app onto Amazon ECS, the published ports in the Compose file need to be updated to match the target ports. Additionally, the Redis service does not need to be, and should not be, exposed externally; therefore, the published ports currently defined in the docker-compose.yml file for that service can be removed.

<snippet>
  vote:
    image: ${ECR_URI}:vote
    ports:
      - 80:80

  redis:
    image: redis:alpine
    networks:
      - frontend

<snippet>
  result:
    image: ${ECR_URI}:result
    ports:
      - 80:80

There is a second consideration when exposing applications in a Compose file. Docker Compose for Amazon ECS reuses the same Elastic Load Balancer for all services defined within the Compose file, so if two services are exposed on the same port there will be a conflict. To work around this issue when migrating the voting app from Docker Swarm, a custom CloudFormation resource can be defined in the Compose file to override the ELB listener port through the use of Compose overlays. In this case, we are overriding the value of the ELB listener for the result service, exposing it on port 8080. The following x-aws-cloudformation block can be placed at the bottom of the Compose file.

<snippet>
x-aws-cloudformation:
  Resources:
    ResultTCP80Listener:
      Properties:
        Port: 8080
    Backend8080Ingress:
      Type: AWS::EC2::SecurityGroupIngress
      Properties:
        CidrIp: 0.0.0.0/0
        Description: result:8080/tcp on backend network
        GroupId:
          Ref: BackendNetwork
        IpProtocol: TCP
        ToPort: 8080
        FromPort: 8080
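
To discover the logical names of the resources the Compose CLI generates (such as ResultTCP80Listener above), the CloudFormation template can be rendered locally without deploying anything, once an ECS Docker context has been created (see the deployment section below):

# Render the generated CloudFormation template to stdout
$ docker compose \
  --project-name voting-app \
  --file docker-compose.yml \
  convert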

Service placement and rollout behavior

The final adjustment required to the voting app’s Compose file is to remove a number of the Docker Swarm service scheduling flags. Each container orchestrator has variables to control how a service is rolled out; for example, variables configured in the voting app’s Compose file include the number of replicas to update in parallel during a rolling update, as well as pinning the database service to specific nodes in the cluster to ensure the data volumes are available.

Docker Compose for Amazon ECS does not share all of the same Compose keys as Docker Swarm; therefore, the voting app’s deploy section for each microservice should be updated to resemble the following example.

# The Docker Swarm deploy variables
services:
  redis:
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure

# The Amazon ECS deploy variables
services:
  redis:
    deploy:
      replicas: 1
      update_config:
        parallelism: 1

The changes made above to the Redis service will also need to be made to all five services (redis, db, vote, result, and worker) defined in the docker-compose.yml.

Stateful workloads

Container overlay filesystems are ephemeral; when a container is stopped, any data stored in its filesystem is lost. Therefore, to provide state to an application container, data volumes can be used to mount an external directory into the container’s filesystem.

Volumes are a concept in a Compose file that abstracts away the underlying storage technology. When leveraging Docker Swarm, the underlying storage technology is a Docker Swarm data volume. The default Docker Swarm volume driver creates a directory on the underlying container host, and this directory is then mounted into the Swarm service. The directory on the host will not be removed if the container is stopped, as its lifecycle is managed by the Docker Swarm volume, not the Docker Swarm service.
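
While the voting app stack is running on the local Swarm, this can be observed by inspecting the volume; note that the stack name is prefixed to the volume name:

# Show the host directory backing the db-data volume
$ docker volume inspect voting-app_db-data --format '{{ .Mountpoint }}'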

In the voting app, the Postgres database is a stateful service. A volume has been defined in the Compose file and attached to the db service. When this is deployed to a Docker Swarm cluster, the db-data volume is a data volume on the container host, and that underlying data volume is mounted into the Postgres container at /var/lib/postgresql/data. This can be seen in the existing Compose file, with the db-data volume being defined as a top-level object towards the bottom of the file. The volumes subkey is then used within the db service to specify a target directory within the container.

services:
<snippet>
  db:
    image: postgres:9.4
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      replicas: 1
      update_config:
        parallelism: 1

<snippet>
volumes:
  db-data:

When deploying the same Compose file to Amazon ECS, the volumes key is not mapped to a directory on the underlying container host; instead, it is mapped to Amazon Elastic File System (Amazon EFS). EFS file systems can be mounted into containers through NFS and locked down with POSIX permissions. Through Docker Compose for Amazon ECS, all of the EFS concepts are abstracted away from the user, so no changes are required to the Compose file. When the voting app’s Compose file is deployed to Amazon ECS, the Compose CLI will create an EFS file system for the voting app, create mount targets in each Availability Zone, and create EFS access points.
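
Once the application has been deployed to Amazon ECS (covered later in this post), the file system created by the Compose CLI can be verified with the AWS CLI, using the same volume name referenced in the cleanup section below:

# List the EFS file system created for the db-data volume
$ aws efs describe-file-systems | jq -r '.FileSystems | .[] | select(.Name=="voting-app_db-data").FileSystemId'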

Deploying a Postgres database in a container backed by EFS storage is done for demonstration purposes only and is not necessarily a best practice. You should consider the best deployment model for your stateful workloads based on your specific needs, use cases, and performance requirements.

The final Compose file

After applying the various changes in the previous sections and removing the version line at the start of the Compose file (as the Compose Specification does not have a version key), the voting app’s Compose file should now look like this:

$ cat docker-compose.yml
services:

  redis:
    image: redis:alpine
    networks:
      - frontend
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
  db:
    image: postgres:9.4
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
  vote:
    image: ${ECR_URI}:vote
    ports:
      - 80:80
    networks:
      - frontend
    depends_on:
      - redis
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
  result:
    image: ${ECR_URI}:result
    ports:
      - 80:80
    networks:
      - backend
    depends_on:
      - db
    deploy:
      replicas: 1
      update_config:
        parallelism: 1

  worker:
    image: ${ECR_URI}:worker
    networks:
      - frontend
      - backend
    depends_on:
      - db
      - redis
    deploy:
      replicas: 1
      update_config:
        parallelism: 1

networks:
  frontend:
  backend:

volumes:
  db-data:

x-aws-cloudformation:
  Resources:
    ResultTCP80Listener:
      Properties:
        Port: 8080
    Backend8080Ingress:
      Type: AWS::EC2::SecurityGroupIngress
      Properties:
        CidrIp: 0.0.0.0/0
        Description: result:8080/tcp on backend network
        GroupId:
          Ref: BackendNetwork
        IpProtocol: TCP
        ToPort: 8080
        FromPort: 8080

Deploying the Compose file to Amazon ECS

1) Configure the Docker context

Docker Compose for Amazon ECS is configured via a Docker context. A Docker context allows the Docker command line client to point to different endpoints, with the default context being the Docker Engine running on the local workstation or server.

# Create a Docker Context for Amazon ECS
$ docker context create ecs demo_ecs_context

Through the Docker context wizard, you are directed to select the AWS credentials to use when provisioning resources in the AWS Cloud. For more information and troubleshooting when creating the Docker context, see the Docker documentation.
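
To confirm the new context exists, list the available contexts:

# The ECS context should now appear alongside the default context
$ docker context ls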

2) Docker Compose up

Deploy the Compose file to Amazon ECS through the Docker Compose CLI:

# Select the ECS Context
$ docker context use demo_ecs_context

# Deploy the Docker Compose File
$ docker compose \
  --project-name voting-app \
  --file docker-compose.yml \
  up
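
In a second terminal, the state of the services can be monitored while the deployment progresses:

# Watch the ECS tasks backing each Compose service
$ docker compose \
  --project-name voting-app \
  ps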

It will take some time for all of the resources to be created and configured. Once the deployment is finished, the DNS name of the load balancer fronting the voting app can be retrieved with the following command. It may take a few minutes for the ECS tasks to pass the ELB health checks and become available.

$ aws elbv2 describe-load-balancers | jq -r '.LoadBalancers | .[] | select(.DNSName|test("vot.")).DNSName'
votin-LoadB-DYQ1QS2R3PU0-290533005.eu-west-1.elb.amazonaws.com

On port 80, the “vote” microservice should now be reachable.

On port 8080, the “result” microservice should now be reachable.
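
Both listeners can also be checked from the terminal, assuming curl is installed:

# Expect an HTTP 200 response from both listeners
$ ELB_DNS=$(aws elbv2 describe-load-balancers | jq -r '.LoadBalancers | .[] | select(.DNSName|test("vot.")).DNSName')
$ curl -s -o /dev/null -w "%{http_code}\n" http://$ELB_DNS
$ curl -s -o /dev/null -w "%{http_code}\n" http://$ELB_DNS:8080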

Once an application has been migrated from Docker Swarm to Amazon ECS, there will be additional changes required to complete the migration of a production workload. CI/CD pipelines may need to be updated to point to the new deployment target or testing endpoint. DNS entries or a CDN may need to be switched from the Docker Swarm endpoint to the AWS ELB endpoint. Rollouts of this nature are normally carried out in a phased approach, with traffic being migrated gradually to the new environment.

Once all production traffic is being served from the ECS environment, the Docker Swarm cluster can be removed. In this blog, we were only running a single-node Docker Swarm cluster; to clean up the environment, run the following commands:

# Switch back to the default context pointing at the local Docker Engine
$ docker context use default

# Remove the Docker Stack
$ docker stack remove voting-app

# Ensure there are no more services running
$ docker service list
ID        NAME      MODE      REPLICAS   IMAGE     PORTS

# Remove the existing Docker Swarm Cluster
$ docker swarm leave --force
Node left the swarm.

Cleanup

To clean up all of the resources deployed onto AWS through this blog, execute the following commands:

# Switch back to the ECS Docker Context
$ docker context use demo_ecs_context

# Clean up the Voting App
$ docker compose \
  --project-name voting-app \
  down

# Remove the EFS file system
$ FS_ID=$(aws efs describe-file-systems | jq -r '.FileSystems | .[] | select(.Name=="voting-app_db-data").FileSystemId')
$ aws efs delete-file-system \
   --file-system-id $FS_ID

# Clean up the ECR Repository
$ aws ecr delete-repository \
  --repository-name voting-app \
  --force

Conclusion

In this blog post, we have shown that by leveraging Docker Compose for Amazon ECS, applications can be migrated between container orchestrators without rewriting the application definition. We have taken Docker’s sample voting app and pushed local copies of the container images to Amazon ECR. We have then:

  • Deployed the application to Docker Swarm using a Compose v3 file.
  • Adjusted a few keys in the Compose file to define how the application should run in the AWS Cloud.
  • Deployed the voting app to Amazon ECS through Docker Compose for Amazon ECS.

For additional information on Docker Compose for Amazon ECS, see the Docker documentation. To report issues and create feature requests, please use the ‘Issues’ tab on the Compose CLI GitHub repository. For the Docker Compose for Amazon ECS roadmap, see the Docker roadmap on GitHub.

Olly Pomeroy

Olly is a Developer Advocate at Amazon Web Services. He is part of the containers team working on AWS Fargate, container runtimes and containerd related projects.

Jesus Escudero Lopez

Jesus is a Solutions Architect at AWS based in Madrid (Spain), focused on helping DNB customers overcome challenges and leverage the full potential of the cloud.