AWS DevOps Blog

Dockerizing a Python Web App

A few weeks ago Elastic Beanstalk announced support for deploying and managing Docker containers in the AWS cloud. In this post we’ll walk through Dockerizing a simple signup form web app originally written for the Elastic Beanstalk Python environment.

About the Signup Form App

We built and blogged about this app a few months ago. There’s a four part video series and a post that drills into the nuts and bolts of using DynamoDB and SNS with the app. Today we’re going to build on that content and talk about how we take that application and make it work with Docker and Elastic Beanstalk. We’re going to knock this out in 4 phases.

The Source, for Reference

The source code for the original (i.e., non-Dockerized) Python application is available on GitHub in the master branch of the eb-py-flask-signup repository. The Dockerized version is in the docker branch of the same repository.

And if you prefer code and diffs to prose and blog posts, you can use GitHub's nifty compare view to look at the differences between those two branches (compare master to docker). You can check out every file added and every line changed to Dockerize this sample.

Dockerization Phase 1: Add a Dockerfile

Let’s start by pulling down the source from GitHub:

$> git clone <eb-py-flask-signup repository URL>
$> cd eb-py-flask-signup
$> git checkout master

Looking at the contents of the directory we see this is a simple Python web app that uses the Flask framework, Boto for interacting with DynamoDB and SNS, and a few other dependencies declared in requirements.txt.
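To make the moving parts concrete, here's a stdlib-only sketch of the request/response shape such an app exposes. This is a hypothetical stand-in, not the real app: the actual code uses Flask routes and Boto calls to DynamoDB and SNS, none of which appear below.

```python
from wsgiref.util import setup_testing_defaults

def signup_app(environ, start_response):
    """Toy WSGI stand-in for the Flask signup app (illustrative only;
    the real app defines Flask routes and persists signups to DynamoDB)."""
    if environ.get("PATH_INFO", "/") == "/":
        status, body = "200 OK", b"Signup form"
    else:
        status, body = "404 Not Found", b"Not found"
    start_response(status, [("Content-Type", "text/plain")])
    return [body]

# Drive the app directly, no server needed
environ = {}
setup_testing_defaults(environ)   # fills in PATH_INFO="/", etc.
captured = {}
def start_response(status, headers):
    captured["status"] = status
body = b"".join(signup_app(environ, start_response))
print(captured["status"])
```

Calling the WSGI callable directly like this is also a handy way to smoke-test request handling without binding a port.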

Simple enough, so we create a Dockerfile that will build an image suitable for running this application. The Dockerfile is placed in the directory with the rest of the app source (i.e., alongside requirements.txt, etc.):

FROM ubuntu:12.10

# Install Python Setuptools
RUN apt-get install -y python-setuptools

# Install pip
RUN easy_install pip

# Add and install Python modules
ADD requirements.txt /src/requirements.txt
RUN cd /src; pip install -r requirements.txt

# Bundle app source
ADD . /src

# Expose
EXPOSE  5000

# Run the app (application.py is the Elastic Beanstalk Python entry point)
CMD ["python", "/src/application.py"]

Dockerization Phase 2: Test Locally

Although this app requires a DynamoDB table and SNS topic to be completely functional, we can test it without them:

First, build the Docker image:

$> docker build -t eb-py-sample .

Last (and straight to profit!), run a container from the image (mapping container port 5000 to host port 8080, and setting a few env vars discussed below):

$> docker run -d \
     -e APP_CONFIG=application.config.example \
     -e AWS_ACCESS_KEY_ID=<your key id> \
     -e AWS_SECRET_ACCESS_KEY=<your secret key> \
     -p 8080:5000 eb-py-sample
On OS X I can open http://localhost:8080 and there’s my app!

Sidebar: We used the -e options to pass in a few environment variables:

  1. APP_CONFIG: The app expects this environment variable to point to its configuration file. We point to the default config file bundled with the app. You could create a DynamoDB table and SNS topic and add them to this conf file to make the app work perfectly in your local dev env.
  2. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY: The app uses Boto to connect to DynamoDB and SNS, and Boto uses these environment variables to sign requests to those services. This is for local dev only. When we deploy to Elastic Beanstalk we’ll use IAM Roles.
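The APP_CONFIG pattern is easy to mimic locally. Below is a stdlib-only sketch of a loader that reads a Python-syntax config file named by an environment variable, roughly what Flask's config.from_envvar does under the hood; the load_config helper and the sample settings are illustrative, not the app's actual code.

```python
import os
import tempfile

def load_config(envvar="APP_CONFIG"):
    """Load a Python-syntax config file named by an environment variable,
    roughly the way Flask's config.from_envvar works (illustrative helper)."""
    path = os.environ[envvar]
    config = {}
    with open(path) as f:
        exec(f.read(), {}, config)
    # Keep only UPPERCASE names, mirroring Flask's config convention
    return {k: v for k, v in config.items() if k.isupper()}

# Demo: write a throwaway config file and point APP_CONFIG at it
with tempfile.NamedTemporaryFile("w", suffix=".config", delete=False) as f:
    f.write("AWS_REGION = 'us-east-1'\nFLASK_DEBUG = False\n")
    config_path = f.name
os.environ["APP_CONFIG"] = config_path

settings = load_config()
print(settings["AWS_REGION"])
```

Pointing APP_CONFIG at different files is what lets the same image run unchanged in local dev and on Elastic Beanstalk.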

Dockerization Phase 3: Modify .ebextensions

Our app has a special .ebextensions directory with a setup.config file. We use the file to tell Elastic Beanstalk to create the DynamoDB table and SNS topic our app needs, and also to create an app config file – /var/app/app.config – that includes the names of the DynamoDB table and SNS topic that were just created.

The file also specifies a few things that are specific to the Python (as opposed to Docker) environment type in Elastic Beanstalk. We need to remove those bits now:

Modify the files member to remove the owner and group keys so it looks like:

files:
  "/var/app/app.config":
    mode: "000444"
    content: |
      AWS_REGION = '`{ "Ref" : "AWS::Region"}`'
      STARTUP_SIGNUP_TABLE = '`{ "Ref" : "StartupSignupsTable"}`'
      NEW_SIGNUP_TOPIC = '`{ "Ref" : "NewSignupTopic"}`'

Modify option_settings to remove the static files mapping so it looks like:

     "AlarmEmail" : ""
    "APP_CONFIG": "/var/app/app.config"
    "FLASK_DEBUG": "false"
    "THEME": "flatly"  

Check out the earlier post for more info on setup.config, or look at the Dockerized setup.config on GitHub.

Dockerization Phase 4: Deploy to Elastic Beanstalk

I’ve built and tested my container locally, removed a few .ebextensions that were specific to the Elastic Beanstalk Python environment, and now I’m ready to deploy it – with confidence!

I create a file named Dockerrun.aws.json in the same place I created the Dockerfile. This file tells Elastic Beanstalk how to run the Docker container, and it looks like this (see the sidebar below for more details on this file):

   "AWSEBDockerrunVersion": "1",
   "Volumes": [
       "ContainerDirectory": "/var/app",
       "HostDirectory": "/var/app"
   "Logging": "/var/eb_log"

Sidebar about Dockerrun.aws.json

The Volumes member maps /var/app on the EC2 instance to /var/app in the container. This lets the app running in the Docker container access the app.config file created by .ebextensions/setup.config. The Logging member tells Elastic Beanstalk that our Dockerized app will write logs to /var/eb_log in the container. Beanstalk will automatically pull logs from this directory whenever you click Snapshot Logs in the console, or if you enable automatic log rotation.
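On the app side, all that's required is writing log files under /var/eb_log inside the container. A minimal illustration using Python's logging module; the logger name and message are made up, and a temp directory stands in for /var/eb_log so the sketch runs anywhere:

```python
import logging
import os
import tempfile

# Stand-in for /var/eb_log (the directory Elastic Beanstalk snapshots)
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "application.log")

logger = logging.getLogger("signup")
handler = logging.FileHandler(log_path)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("new signup received")
handler.flush()

with open(log_path) as f:
    print(f.read().strip())
```

In the container you'd point the FileHandler at /var/eb_log/application.log, and Beanstalk takes care of collecting whatever lands there.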

I’ll commit my changes and use git archive to make a zip to deploy to Elastic Beanstalk (or you can use the zip tool or Finder or Windows Explorer to achieve the same):

$> git add Docker* && git commit -am "Dockerized"
$> git archive --format=zip HEAD > app.zip
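If you'd rather script the archive step than rely on git or a GUI, you can approximate the same result with Python's zipfile module. A sketch; the zip_tree helper and the demo file names are illustrative:

```python
import os
import tempfile
import zipfile

def zip_tree(src_dir, zip_path):
    """Zip a source tree, pruning .git, similar in spirit to
    `git archive --format=zip HEAD` (illustrative helper)."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(src_dir):
            dirs[:] = [d for d in dirs if d != ".git"]  # skip repo metadata
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, src_dir))

# Demo on a throwaway tree
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, ".git"))
with open(os.path.join(src, "Dockerfile"), "w") as f:
    f.write("FROM ubuntu:12.10\n")
with open(os.path.join(src, ".git", "config"), "w") as f:
    f.write("[core]\n")

zip_path = os.path.join(tempfile.mkdtemp(), "app.zip")
zip_tree(src, zip_path)
with zipfile.ZipFile(zip_path) as zf:
    names = zf.namelist()
print(sorted(names))
```

Note that unlike `git archive`, this zips the working directory as-is, including uncommitted changes.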

And then I deploy the zip via the Elastic Beanstalk Management Console.

When my environment is green, I can, of course, access it and verify it works.

I’ll also snapshot the environment’s logs.

Because I added the Logging member to Dockerrun.aws.json earlier, logs in the /var/eb_log dir on the container will be rotated to S3 and I can access them in the browser.

Coming Up

In the next post, we’ll use the eb command line tool to deploy this Dockerized app directly from the command line, no browser or management console necessary!