AWS Security Blog

How to Govern Your Application Deployments by Using Amazon EC2 Container Service and Docker

by Michael Capicotto | in How-to guides

Governance among IT teams has become increasingly challenging, especially when dealing with application deployments that involve many different technologies. For example, consider the case of trying to collocate multiple applications on a shared operating system. Accidental conflicts can stem from the applications themselves, or from the underlying libraries and network ports they rely on. The likelihood of conflicts is heightened even further when security functionality is involved, such as intrusion prevention or access logging. Such concerns have typically resulted in security functions being relegated to their own independent operating systems via physical or virtual hardware isolation (for example, an inline firewall device).

In this blog post, I will show you how to eliminate these potential conflicts while also deploying your applications in a continuous and secure manner at scale. We will do this by collocating different applications on the same operating system through the use of Amazon EC2 Container Service (ECS) and Docker.

Let’s start with a brief overview of Docker.

Docker explained

Simply put, Docker allows you to create containers, which wrap your applications into a complete file system and contain everything that this software needs to run. This means you can transport that container onto any environment running Docker, and it will run the same, while staying isolated from other containers and the host operating system. This isolation between containers eliminates any potential conflicts the applications may have with each other, because they are each running in their own separate run-time environments.

The hands-on portion of this post will focus on creating two Docker containers: one containing a simple web application (“application container”), and the other containing a reverse proxy with throttling enabled (“proxy container”), which is used to protect the web application. These containers will be collocated on the same underlying Amazon EC2 instance using ECS; however, all network traffic between the web application and the outside world will be forced through the proxy container, as shown in the following diagram. This tiered network access can be the basis of a security overlay solution in which a web application is not directly reachable from the network to which its underlying instance is connected. All inbound application requests are forced through a proxy container that throttles requests. In practice, this container could also perform activities such as filtering, logging, and intrusion detection.

Diagram showing network isolation using Docker containers

Figure 1. Network isolation using Docker containers

Create your Docker containers

To start, let’s create a Docker container that contains a simple PHP web application. This Docker container will represent the application container in the previous diagram. For a more detailed guide to the following steps, see Docker Basics.

Install Docker

  1. Launch an instance with the Amazon Linux AMI. For more information, see Launching an Instance in the Amazon EC2 User Guide for Linux Instances.
  2. Connect to your instance. For more information, see Connect to Your Linux Instance.
  3. Update the installed packages and package cache on your instance:
    [ec2-user ~]$ sudo yum update -y
  4. Install Docker:
    [ec2-user ~]$ sudo yum install -y docker
  5. Start the Docker service:
     [ec2-user ~]$ sudo service docker start
  6. Add the ec2-user to the Docker group so that you can execute Docker commands without using sudo:
    [ec2-user ~]$ sudo usermod -a -G docker ec2-user
  7. Log out and log back in again to pick up the new Docker group permissions.
  8. Verify that the ec2-user can run Docker commands without sudo:
    [ec2-user ~]$ docker info

Sign up for a Docker Hub account

Docker uses images to launch containers, and these images are stored in repositories. The most common Docker image repository (and the default repository for the Docker daemon) is Docker Hub. Although you don’t need a Docker Hub account to use ECS or Docker, having a Docker Hub account gives you the freedom to store your modified Docker images so that you can use them in your ECS task definitions. Docker Hub offers public and private registries. You can create a private registry on Docker Hub and configure private registry authentication on your ECS container instances to use your private images in task definitions.

For more information about Docker Hub and to sign up for an account, go to https://hub.docker.com.

Create a Docker image containing a simple PHP application

  1. Install git and use it to clone the simple PHP application from our GitHub repository onto your system:
    [ec2-user ~]$ sudo yum install -y git
    
    [ec2-user ~]$ git clone https://github.com/awslabs/ecs-demo-php-simple-app
  2. Change directories to the ecs-demo-php-simple-app folder:
    [ec2-user ~]$ cd ecs-demo-php-simple-app
  3. Examine the Dockerfile in this folder:
    [ec2-user ecs-demo-php-simple-app]$ cat Dockerfile
    A Dockerfile is a manifest that contains instructions for building your Docker image. For more information about Dockerfiles, go to the Dockerfile Reference.
  4. Build the Docker image from the Dockerfile. Replace the placeholder user name with your Docker Hub user name (be sure to include the blank space and period at the end of the command):
    [ec2-user ecs-demo-php-simple-app]$ docker build -t my-dockerhub-username/amazon-ecs-sample .
  5. Run docker images to verify that the image was created correctly and that the image name contains a repository that you can push to (in this example, your Docker Hub user name):
    [ec2-user ecs-demo-php-simple-app]$ docker images
  6. Upload the Docker image to your Docker Hub account.
    1. Log in to Docker Hub:
      [ec2-user ecs-demo-php-simple-app]$ docker login
    2. Check to ensure your login worked:
      [ec2-user ecs-demo-php-simple-app]$ docker info
    3. Push your image to Docker Hub:
      [ec2-user ecs-demo-php-simple-app]$ docker push my-dockerhub-username/amazon-ecs-sample

Now that you’ve created this first Docker image, you can move on to create your second Docker image, which will be deployed into the proxy container.

Create a reverse proxy Docker image

For our second Docker image, you will build a reverse proxy using NGINX and enable throttling. This will allow you to simulate security functionality for the purpose of this blog post. In practice, this proxy container could contain any security-related software you desire, and could be produced by a security team and delivered to the team responsible for deployments as a standalone artifact.

  1. Using SSH, connect to the Amazon Linux instance you used in the last section.
  2. Ensure that the Docker service is running and you are logged in to your Docker Hub account (instructions in previous section).
  3. Create a local directory called proxy-image, and switch into it.
  4. In this directory, you will create two files. You can copy and paste the contents for each as follows.
    1. First, create a file called Dockerfile, which Docker uses to build an image according to your specifications. Copy and paste the following contents into the file. This Dockerfile starts from a base Ubuntu image, runs an update command and installs NGINX (your reverse proxy), copies the nginx.conf file from your local machine into the image, configures NGINX to run in the foreground, exposes port 80 for HTTP traffic, and starts NGINX when a container launches.
      FROM ubuntu
      RUN apt-get update && apt-get install -y nginx
      COPY nginx.conf /etc/nginx/nginx.conf
      RUN echo "daemon off;" >> /etc/nginx/nginx.conf
      EXPOSE 80
      CMD service nginx start
    2. Next, create a supporting file called nginx.conf. You want to overwrite the standard NGINX configuration file to ensure it is configured as a reverse proxy for all HTTP traffic. Throttling has been left out for the time being.
      user www-data;
      worker_processes 4;
      pid /var/run/nginx.pid;
       
      events {
           worker_connections 768;
           # multi_accept on;
      }
       
      http {
        server {
          listen               80;
      
          # Proxy pass to servlet container
      
          location / {
            proxy_pass                http://application-container:80;
          }
      
        }
      }
  5. Now you are ready to build your proxy image. Run the following command with your specific Docker Hub information to instruct Docker to do so (be sure to include the blank space and period at the end of the command):
    docker build -t my-dockerhub-username/proxy-image .
  6. When the build completes, push the built image to Docker Hub:
    docker push my-dockerhub-username/proxy-image

You have now successfully built both of your Docker images and pushed them to Docker Hub. You can now move on to deploying Docker images with Amazon ECS.

Deploy your Docker images with Amazon ECS

Amazon ECS is a container management service that allows you to manage and deploy Docker containers at scale. In this section, we will use ECS to deploy your two containers on a single instance. All inbound and outbound traffic from your application will be funneled through the proxy container, allowing you to enforce security measures on your application without modifying the application container in which it lives.

In the following diagram, you can see a visual representation of the ECS architecture we will be using. An ECS cluster is simply a logical grouping of container instances (we are just using one) that you will deploy your containers onto. A task definition specifies one or more container definitions, including which Docker image to use, port mappings, and more. This task definition allows you to model your application by having different containers work together. An instantiation of this task definition is called a task.

Visual representation showing the ECS architecture described in this post

Figure 2. The ECS architecture described in this blog post

Use ECS to deploy your containers

Now that we have both Docker images built and stored in the Docker Hub repository, we can use ECS to deploy these containers.

Create your ECS task definition

  1. Navigate to the AWS Management Console, and then to the EC2 Container Service page.
    1. If you haven’t used ECS before, you should see a page with a Get started button. Click this button, and then click Cancel at the bottom of the page. If you have used ECS before, skip to the next step.
    2. Click Task Definitions in the left sidebar, and then click Create a New Task Definition.
    3. Give your task definition a name, such as SecurityExampleTask.
    4. Click Add Container, and set up the first container definition with the following parameters, using the path to your proxy image in Docker Hub (in other words, username/proxy-image) in the Image field, and the name of your web application container in the Links box. Don’t forget to click Advanced container configuration and complete all the fields.
      Container Name: proxy-container
      Image: username/proxy-image
      Memory: 256
      Port Mappings
         Host port: 80
         Container port: 80
         Protocol: tcp
      CPU: 256
      Links: application-container
    5. After you have populated the fields, click Add. Then, repeat the same process for the application container according to the following specifications. Note that the application container does not need a link back to the proxy container—doing this one way will suffice for this example.
      Container Name: application-container
      Image: username/amazon-ecs-sample
      Memory: 256
      CPU: 256
    6. After you have populated the fields, click Add. Now, click the Configure via JSON tab to see the task definition that you have created. When you are done viewing this, click Create.
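
For reference, the JSON shown on the Configure via JSON tab should look roughly like the following sketch. This is an approximation: the container names and values match the definitions above, the console may add default fields, and your image paths will contain your own Docker Hub user name.

```json
{
  "family": "SecurityExampleTask",
  "containerDefinitions": [
    {
      "name": "proxy-container",
      "image": "my-dockerhub-username/proxy-image",
      "cpu": 256,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "application-container"
      ]
    },
    {
      "name": "application-container",
      "image": "my-dockerhub-username/amazon-ecs-sample",
      "cpu": 256,
      "memory": 256
    }
  ]
}
```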

Now that you have created your task definition, you can move on to the next step.

Deploy an ECS container instance

  1. In the ECS console, click Clusters in the left sidebar. If a cluster called default does not already exist, click Create Cluster and create a cluster called default (case sensitive).
  2. Launch an instance with an ECS-optimized Amazon Machine Image (AMI), ensuring it has a public IP address and a path to the Internet. For more information, see Launching an Amazon ECS Container Instance. This is the instance onto which you’ll deploy your Docker images.
  3. When your instance is up and running, navigate to the ECS section of the AWS Management Console, and click Clusters.
  4. Click the cluster called default. You should see your instance under the ECS Instances tab. After you have verified this, you can move on to the next step.

Run your ECS task

  1. Navigate to the Task Definitions tab on the left of the AWS Management Console, and select the check box next to the task definition you created. Click Actions, and then select Run Task.
  2. On the next page, ensure the cluster is set to default and the number of tasks is 1, and then click Run Task.
  3. After the process completes, click the Clusters tab on the left of the AWS Management Console, select the default cluster, and then click the Tasks tab. Here, you can see your running task. It should have a green Running status. After you have verified this, you can proceed to the next step. If you see a Pending status, the task is still being deployed.
  4. Click the ECS Instances tab, where you should see the container instance that you created earlier. Click the container instance to get more information, including its public IP address. If you copy and paste this public IP address into your browser’s address bar, you should see your sample PHP website!
    1. If you do not see your PHP website, first ensure you have built your web application correctly by following the steps above in “Create a Docker image containing a simple PHP application,” including pushing the image to Docker Hub. Then, ensure your task is in the green Running state.
  5. Try refreshing the page a couple of times, and you will notice that no throttling is currently taking place. To fix this, make a slight modification. First, sign back in to the Amazon Linux instance where you built the two Docker images, and navigate to the proxy-image directory. Change the nginx.conf file to look like the following example. Notice that two lines have been added to throttle requests to 3 per minute. This is an extremely low rate, used only to demonstrate a working solution in this example.
    user www-data;
    worker_processes 4;
    pid /var/run/nginx.pid;
     
    events {
         worker_connections 768;
         # multi_accept on;
    }
     
    http {
      limit_req_zone  $binary_remote_addr  zone=one:10m   rate=3r/m;
      server {
        listen               80;
     
        # Proxy pass to servlet container
     
        location / {
          proxy_pass                http://application-container:80;
          limit_req zone=one burst=5 nodelay;
        }
      }
    }
  6. Following the same steps you followed earlier in “Create a reverse proxy Docker image” (specifically steps 5 and 6), rebuild the proxy image and push it to Docker Hub. Now, stop the task that is currently running in the ECS console, and deploy it again by selecting Run New Task and choosing the same task definition as before. This will pick up the new image that you pushed to Docker Hub.
  7. When you see a status of Running next to the task in the ECS console, paste the container instance’s public IP address into your browser’s address bar again. You should see the sample PHP website. Refresh the page, wait for it to load, and repeat this a few times. On the fourth refresh, an error will be shown and the page will not be displayed. This means your throttling is functioning correctly.
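
The limit_req directive you added implements a leaky-bucket rate limiter. The following sketch approximates its behavior with nodelay enabled; it is an illustration only, not nginx’s actual implementation, and the RequestLimiter class and its parameter names are hypothetical (they mirror the rate and burst values in the configuration above).

```python
# Simplified sketch of nginx's limit_req leaky bucket with "nodelay".
# rate is in requests per second; burst is the bucket depth.
import time

class RequestLimiter:
    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate      # bucket drains at this many requests/second
        self.burst = burst    # excess requests tolerated before rejection
        self.clock = clock
        self.excess = 0.0     # current bucket level
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Drain the bucket for the time elapsed since the last request.
        self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess + 1 > self.burst:
            return False      # nginx would return an HTTP error here
        self.excess += 1
        return True
```

With rate=3/60 requests per second and burst=5, calling allow() repeatedly in quick succession succeeds for the first few requests and then fails until the bucket drains, which mirrors the rejected refreshes you see in the browser.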
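
If you prefer to check the throttling from the command line instead of the browser, a small probe along the following lines reports the HTTP status of repeated requests (a hedged sketch; the IP address in the usage comment is a placeholder for your container instance’s public IP). A 200 means the request was served; an error status means the proxy rejected it.

```python
# Sketch: probe a URL repeatedly and record each HTTP status code.
import urllib.error
import urllib.request

def fetch_status(url):
    """Return the HTTP status code for a single GET request."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. the error status when the request is throttled

def probe(url, attempts=6):
    """Fetch the URL several times in quick succession."""
    return [fetch_status(url) for _ in range(attempts)]

# Example usage (replace with your instance's public IP):
# print(probe("http://203.0.113.10/"))
```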

Congratulations on completing this walkthrough!

Closing Notes

In this example, we performed Docker image creation and manual task definition creation. (Also see Set up a build pipeline with Jenkins and Amazon ECS on the AWS Application Management Blog for a walkthrough of how to automate Docker task definition creation using Jenkins.) This separation between image definition and deployment configuration can be leveraged for governance purposes. The web and security tiers can be owned by different teams that produce Docker images as artifacts, and a third team can ensure that the two tiers are deployed alongside one another and in this specific tiered-network configuration.

We hope that you find this example of leveraging a deployment service for security and governance purposes useful. You can find additional examples of application configuration options in the AWS Application Management Blog.

We look forward to hearing about your use of this strategy in your organization’s deployment process. If you have questions or comments, post them below or on the EC2 forum.

– Michael