AWS Compute Blog

Cloudmicro for AWS: Speeding up serverless development at The Coca‑Cola Company

We have a guest blog post today from our friend Patrick Brandt at The Coca‑Cola Company. Patrick and his team have open-sourced an innovative use of Docker containers to encourage rapid local development and testing for applications that use AWS Lambda and Amazon DynamoDB.


Using Cloudmicro to build AWS Lambda and DynamoDB applications on your laptop

My team at The Coca‑Cola Company recently began work on a proximity-marketing platform using AWS Lambda and DynamoDB. We’re gathering beacon sighting events via API Gateway, layering in additional data with a Lambda function, and then storing these events in DynamoDB.

In an effort to shorten the development cycle-time of building and deploying Lambda functions, we created a local runtime of Lambda and DynamoDB using Docker containers. Running our Lambda functions locally in containers removed the overhead of having to deploy code to debug it, greatly increasing the speed at which we could build and tweak new features. I’ve since launched an open-source organization called Cloudmicro with the mission of assembling Docker-ized versions of AWS services to encourage rapid development and easy experimentation.


Getting started with Cloudmicro

The Cloudmicro project I’m working with is a local runtime for Python-based Lambda functions that integrate with DynamoDB. The only prerequisite for this project is that you have Docker installed and running in your local environment.


Cloning the lambda-dynamodb-local project and running the hello function

In these examples, you run commands using Docker on a Mac. The instructions for running Docker commands using Windows are slightly different and can be found in the project Readme.

Run the following commands in your terminal window to clone the lambda-dynamodb-local project and execute the hello Lambda function:

> git clone https://github.com/cloudmicro/lambda-dynamodb-local.git
> cd lambda-dynamodb-local
> docker-compose up -d
> docker-compose run --rm -e FUNCTION_NAME=hello lambda-python

Your output will look like this:

executing hello function locally:
[root - INFO - 2016-02-29 14:55:30,382] Event: {u'first_name': u'Umberto', u'last_name': u'Boccioni'}
[root - INFO - 2016-02-29 14:55:30,382] START RequestId: 11a94c54-d0fe-4a87-83de-661692edc440
[root - INFO - 2016-02-29 14:55:30,382] END RequestId: 11a94c54-d0fe-4a87-83de-661692edc440
[root - INFO - 2016-02-29 14:55:30,382] RESULT:
{'message': 'Hello Umberto Boccioni!'}
[root - INFO - 2016-02-29 14:55:30,382] REPORT RequestId: 11a94c54-d0fe-4a87-83de-661692edc440 Duration: 0.11 ms

The output is identical to what you would see if you had run this same function using AWS.


Understanding how the hello function runs locally

We’ll look at three files and the docker-compose command to understand how the hello function executes with its test event.

The docker-compose.yml file
The docker-compose.yml file defines three docker-compose services:

lambda-python:
  build: .
  container_name: python-lambda-local
  volumes:
    - ./:/usr/src
  links:
    - dynamodb
  working_dir: /usr/src

dynamodb:
  container_name: dynamodb-local
  image: modli/dynamodb
  expose:
    - "8000"

init:
  image: node:latest
  container_name: init-local
  environment:
    - DYNAMODB_ENDPOINT=http://dynamodb:8000
  volumes:
    - ./db_gen:/db_gen
  links:
    - dynamodb
  working_dir: /db_gen
  command: /bin/bash
  1. lambda-python contains the local version of the Python-based Lambda runtime that executes the Lambda function handler in lambda_functions/hello/hello.py.
  2. dynamodb contains an instance of the dynamodb-local application (a fully functional version of DynamoDB).
  3. init contains an application that initializes the dynamodb service with any number of DynamoDB tables and optional sample data for those tables.

The hello function only uses the lambda-python service. You’ll look at an example that uses dynamodb and init a little later.

The lambda_functions/hello/hello.py file
The hello function code is identical to the Lambda code found in the AWS documentation for Python handler functions:

import logging
logger = logging.getLogger()

def hello_handler(event, context):
    message = 'Hello {} {}!'.format(event['first_name'],
                                    event['last_name'])
    return {
        'message': message
    }
Like the hello function, your Lambda functions will live in a subdirectory of lambda_functions. The pattern you’ll follow is lambda_functions/{function name}/{function name}.py and the function handler in your Python file will be named {function name}_handler.

You can also include a requirements.txt file in your function directory that will include any external Python library dependencies required by your Lambda function.
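Putting these conventions together, a function named getWords (used as an example later in this post) would be laid out like this:

lambda_functions/
    getWords/
        getWords.py        (defines getWords_handler)
        requirements.txt   (optional external Python dependencies)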

The local_events/hello.json file
The test event for the hello function has two fields:

{
  "first_name": "Umberto",
  "last_name": "Boccioni"
}

All test events live in the local_events directory. By convention, the file names for each test event must match the name of the corresponding Lambda function in the lambda_functions directory.

The docker-compose command
Running the docker-compose command will instantiate containers for all of the services outlined in the docker-compose.yml file and execute the hello function.

docker-compose run --rm -e FUNCTION_NAME=hello lambda-python
  • The docker-compose run command will bring up the lambda-python service and the dynamodb linked service defined in the docker-compose.yml file.
  • The --rm argument instructs docker-compose to destroy the container running the Lambda function once the function completes.
  • The -e FUNCTION_NAME=hello argument defines an environment variable that the Lambda function container uses to run a specific function in the lambda_functions directory (-e FUNCTION_NAME=hello will run the hello function).
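To make the FUNCTION_NAME convention concrete, here is a minimal sketch of how a dispatcher inside the container might use it to locate and invoke a handler. This is hypothetical (the actual Cloudmicro entry point may differ); it only assumes the naming conventions described above.

```python
import importlib
import json
import os

def invoke_local(function_name, event):
    """Hypothetical dispatcher: import lambda_functions/<name>/<name>.py
    and call its <name>_handler function with the given event."""
    module = importlib.import_module(
        "lambda_functions.{0}.{0}".format(function_name))
    handler = getattr(module, "{}_handler".format(function_name))
    return handler(event, None)

if __name__ == "__main__":
    function_name = os.environ.get("FUNCTION_NAME")
    if function_name:
        # By convention, the test event lives in local_events/<name>.json.
        with open("local_events/{}.json".format(function_name)) as f:
            event = json.load(f)
        print(invoke_local(function_name, event))
```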


Using DynamoDB

Now we’ll look at how you use the init service to create DynamoDB tables and seed them with sample data. Then we’ll tie it all together and create a Lambda function that reads data from a table in the DynamoDB container.

Creating tables and populating them with data
The init service uses two subdirectories in the db_gen directory to set up the tables in the container created by the dynamodb service:

  • db_gen/tables/ contains JSON files that define each DynamoDB table.
  • db_gen/table_data/ contains optional JSON files that define a list of items to be inserted into each table.

The file names in db_gen/table_data/ must match those in db_gen/tables/ in order to load tables with data.

You’ll need to follow a couple of steps to allow the init service to automatically create your DynamoDB tables and populate them with sample data. In this example, you’ll be creating a table that stores English words.

  1. Add a file named “words.json” to db_gen/tables:

    {
        "AttributeDefinitions": [
            {
                "AttributeName": "language_code",
                "AttributeType": "S"
            },
            {
                "AttributeName": "word",
                "AttributeType": "S"
            }
        ],
        "GlobalSecondaryIndexes": [
            {
                "IndexName": "language_code-index",
                "Projection": {
                    "ProjectionType": "ALL"
                },
                "ProvisionedThroughput": {
                    "WriteCapacityUnits": 5,
                    "ReadCapacityUnits": 5
                },
                "KeySchema": [
                    {
                        "KeyType": "HASH",
                        "AttributeName": "language_code"
                    }
                ]
            }
        ],
        "ProvisionedThroughput": {
            "WriteCapacityUnits": 5,
            "ReadCapacityUnits": 5
        },
        "TableName": "words",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "word"
            }
        ]
    }
  2. Add a file named “words.json” to db_gen/table_data.
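The exact item format expected by the init script isn’t shown here; assuming items are expressed as DynamoDB attribute-value maps (matching the low-level style of the table definition above), db_gen/table_data/words.json might look like this:

    [
        {
            "word": {"S": "hello"},
            "language_code": {"S": "en"}
        },
        {
            "word": {"S": "world"},
            "language_code": {"S": "en"}
        }
    ]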

Your DynamoDB database can be re-created with the init service by running this command:

docker-compose run --rm init

This will rebuild your DynamoDB container with your table definitions and table data.

You can use the describe-table command in the AWS CLI as a handy way to create your DynamoDB table definitions: first use the AWS console to create a DynamoDB table in your AWS account, and then use describe-table to return a JSON representation of that table. If you use this shortcut, be aware that you’ll need to massage the CLI response: remove the outer “Table” field and move the JSON it contains up one level in the hierarchy. You’ll also need to remove several read-only fields from the response before it can be used to create your DynamoDB table. The validation errors returned by running the following command will tell you which fields to remove:

docker-compose run --rm init
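As a rough sketch of this massaging step, the following hypothetical helper moves the “Table” contents up a level and strips read-only fields. The field list here is illustrative only; treat the validation errors from the command above as the authoritative guide.

```python
# Illustrative list of read-only fields that CreateTable rejects;
# extend it based on the validation errors you actually see.
READ_ONLY_FIELDS = [
    "TableStatus",
    "CreationDateTime",
    "TableSizeBytes",
    "ItemCount",
    "TableArn",
]

def to_table_definition(describe_table_response):
    """Turn an `aws dynamodb describe-table` response into a table
    definition usable by db_gen/tables/."""
    # describe-table wraps the definition in a "Table" field;
    # move its contents up one level.
    table = dict(describe_table_response["Table"])
    for field in READ_ONLY_FIELDS:
        table.pop(field, None)
    return table
```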

Retrieving data from DynamoDB
Now you’re going to write a Lambda function that scans the words table in DynamoDB and returns the output.

  1. Create a getWords Lambda function in lambda_functions/getWords/getWords.py:
     from lambda_utils import *
     import logging
     logger = logging.getLogger()

     @import_config
     def getWords_handler(event, context, config):
        dynamodb = dynamodb_connect(config)
        words_table = dynamodb.Table(config.Dynamodb.wordsTable)
        words = words_table.scan()
        return words["Items"]
  2. Create local_events/getWords.json and add an empty JSON object.
  3. Ensure that the table name is referenced in config/__init__.py:
    class Dynamodb:
       wordsTable = "words"
       endpoint = "http://dynamodb:8000"
    class Session:
       region = "us-east-1"
       access_key = "Temp"
       secret_key = "Temp"
  4. Now you can run your new function and see the results of a word table scan.
    docker-compose run --rm -e FUNCTION_NAME=getWords lambda-python

You may have noticed the @import_config decorator applied to the Lambda function handler in the prior example. This is a utility that imports configuration information from the config directory and injects it into the function handler’s parameter list. You should update the config/__init__.py file with your DynamoDB table names and then reference those table names via the config parameter in your Lambda function handlers.
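The real lambda_utils implementation may differ, but a minimal sketch of how such a decorator could work looks like this, with _Config standing in for the imported config module:

```python
from functools import wraps

class _Config:
    """Hypothetical stand-in for the contents of the config directory."""
    class Dynamodb:
        wordsTable = "words"
        endpoint = "http://dynamodb:8000"

def import_config(handler):
    """Sketch: wrap a two-argument Lambda handler so that it receives
    the configuration object as a third argument."""
    @wraps(handler)
    def wrapper(event, context):
        return handler(event, context, _Config)
    return wrapper

@import_config
def example_handler(event, context, config):
    # The handler reads table names from the injected config object.
    return 'Table: {}'.format(config.Dynamodb.wordsTable)
```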

This configuration pattern is not specific to Lambda functions run with Cloudmicro; it is an example of a general approach to environment awareness in Python-based Lambda functions that I’ve outlined on Gist.


Call for contributors

The goal of Cloudmicro for AWS is to re-create the AWS cloud on your laptop for rapid development of cloud applications. The lambda-dynamodb-local project is just the start of a much larger vision for an ecosystem of interconnected Docker-ized AWS components.

Here are some milestones:

  1. Support Lambda function invocation from other Docker-ized Lambda functions.
  2. Add a Docker-ized S3 service.
  3. Create Yeoman generators to easily scaffold Cloudmicro services.

Supporting these capabilities will require re-architecting the current lambda-dynamodb-local project into a system that provides more robust coordination between containers. I’m hoping to enlist brilliant developers like you to support the cause and build something that many people will find useful.

Fork the lambda-dynamodb-local project, or find me on GitHub and let me know how you’d like to help out.