Containerizing Lambda deployments using OCI container images

This post is contributed by Mark Sweat, Senior Software Architect with Koch Industries.

Developers looking to run their code with AWS in a serverless fashion have had to choose between two distinct runtime models, each with its own packaging and deployment pattern: running functions as a service in AWS Lambda using Lambda-specific packaging and deployment mechanisms, or running container-based workloads in AWS Fargate.

The introduction of Lambda support for OCI container images provides customers with more choices when it comes to packaging formats. Developers can now take advantage of the event-driven runtime model and cost savings of AWS Lambda, while benefiting from the predictability and control offered by a container-based development and deployment cycle.

Lambda functions built with containers have an architecture extremely similar to other Lambda functions. The key difference is that the Lambda process is managed by a running container pulled from an OCI container image in Amazon ECR.

Why use containers for Lambda?

A developer could choose to use containers over prior Lambda packaging and deployment tools for a number of reasons.

Lambda functions built with containers allow much finer-grained control over runtimes and packages. This is especially helpful when working with packages that might be difficult (or even impossible) to bundle into a Lambda layer. It also simplifies developer tasks when working with packages that cannot easily be bundled from a non-Linux developer machine.

By using an OCI container image, developers can build a rich suite of test cases against a Lambda container image that can be run as part of a build pipeline. In addition to testing the function code, these test cases can test the environment setup – something not easily accomplished with Lambda layers.
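As a minimal, hypothetical sketch of such an environment test, a pipeline step could run a script like the following inside the built image to verify that every dependency is importable. The package names below are stdlib stand-ins so the sketch is self-contained; a real image for this post's example would list requests.

```python
# Hypothetical environment check, run inside the built image as a pipeline
# step: verify that every package the function depends on can be resolved.
# Stdlib names are used here as stand-ins for real dependencies.
import importlib.util

REQUIRED_PACKAGES = ['json', 'os', 'sys']

missing = [name for name in REQUIRED_PACKAGES
           if importlib.util.find_spec(name) is None]
assert not missing, f'missing packages in image: {missing}'
print('environment check passed')
```

Because the check runs against the same image that Lambda will pull, it catches setup problems (a missing system library, a failed pip install) before deployment rather than at invocation time.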

For the development teams I work with, we prefer to use serverless technologies like Lambda and Fargate over EC2 instances due to both the cost and security benefits. Our guidance has always been to run event-triggered workloads (such as application integration APIs, on-demand data analytics jobs, or event-queue-triggered data transformations) on Lambda, and long-running tasks (such as a hosted stateful web server) on Fargate. When presented with a workload that could be hosted in either model (such as a stateless website), the guidance at Koch Industries is to use Lambda, to take advantage of the scaling and cost benefits it provides.

This has unfortunately led to a disconnect between the development processes for the two technologies. By moving to container-based Lambda, we hope to be able to transition most of our development to containers while still having the choice between Lambda and Fargate for hosting.

Building your first Lambda function container

To create a Lambda-compatible container image, AWS already provides a number of pre-configured base images as well as a runtime interface client for popular runtimes. Most production use cases of Lambda function containers should utilize these tools.

However, creating an image from scratch is actually quite simple. The container image needs to have, at minimum, the function code and a bootstrap executable that will wire into the Lambda event loop. We will work through a rudimentary example here that should give you the insight and building blocks to build your own container-based Lambda functions.

Creating the function image

For our example, we will be building up the following files:

├── /content
│   ├── app.py
│   ├── bootstrap.py
│   └── requirements.txt
└── Dockerfile

These files will be bundled into a container image, pushed to ECR, and executed inside of a sample Lambda function.

Bootstrap code

The first file we will need is bootstrap.py, which will serve as the primary application of our function. It will wire up an event loop to listen to events from Lambda and pass them on to our application code.

import os
import requests
import sys
import traceback

def run_loop():
    aws_lambda_runtime_api = os.environ['AWS_LAMBDA_RUNTIME_API']
    import app
    while True:
        request_id = None
        try:
            invocation_response = requests.get(f'http://{aws_lambda_runtime_api}/2018-06-01/runtime/invocation/next')

            request_id = invocation_response.headers['Lambda-Runtime-Aws-Request-Id']
            invoked_function_arn = invocation_response.headers['Lambda-Runtime-Invoked-Function-Arn']
            trace_id = invocation_response.headers['Lambda-Runtime-Trace-Id']
            os.environ['_X_AMZN_TRACE_ID'] = trace_id
            context = {
                'request_id': request_id,
                'invoked_function_arn': invoked_function_arn,
                'trace_id': trace_id
            }
            event = invocation_response.json()
            response_url = f'http://{aws_lambda_runtime_api}/2018-06-01/runtime/invocation/{request_id}/response'
            result = app.lambda_handler(event, context)
            requests.post(response_url, json=result)
        except Exception:
            if request_id is not None:
                exc_type, exc_value, exc_traceback = sys.exc_info()
                exception_message = {
                    'errorType': exc_type.__name__,
                    'errorMessage': str(exc_value),
                    'stackTrace': traceback.format_exception(exc_type, exc_value, exc_traceback)
                }
                error_url = f'http://{aws_lambda_runtime_api}/2018-06-01/runtime/invocation/{request_id}/error'
                requests.post(error_url, json=exception_message)

run_loop()

Looking at this code, we can see that the execution loop consists of retrieving an event from the Lambda API, passing that event to the function code, and then responding back to the Lambda API with the results.
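To make that flow easy to follow, here is a toy, offline rendition of the same loop: the GET and POST calls to the runtime API are replaced with an in-memory queue and dictionary, purely for illustration.

```python
# Toy illustration of the runtime event loop: the GET to .../invocation/next
# is replaced by a queue of pending events, and the POST to .../response is
# replaced by a dict of results, so the control flow can be run locally.
from collections import deque

pending = deque([{'id': 'req-1', 'event': {'msg': 'hello'}}])  # stands in for GET .../invocation/next
responses = {}                                                 # stands in for POST .../response

def lambda_handler(event, context):
    return {'statusCode': 200, 'event': event}

while pending:
    invocation = pending.popleft()
    context = {'request_id': invocation['id']}
    responses[invocation['id']] = lambda_handler(invocation['event'], context)

print(responses['req-1']['event'])  # -> {'msg': 'hello'}
```

The real bootstrap does exactly this, with HTTP requests to the Lambda runtime API in place of the queue and dict.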

Application code

Now that we have a bootstrap function, we need to create the actual application code that will live in the app.py file. For our example, we won’t do anything of interest, other than responding back with a simple echo of the triggering event.

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': 'Hello from Lambda Containers',
        'event': event
    }
The code here should be recognizable to anyone familiar with building Lambda functions in Python. It should be easy to imagine how additional helper functions can be added to extend this simple example into something ready for production deployment.
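For instance, a unit test for this handler could look like the following sketch. The handler is reproduced inline so the example runs on its own; in a real project the test would import app and call app.lambda_handler instead.

```python
# Sketch of a unit test for the echo handler. In a real project this would
# `import app` rather than inline the handler; it is inlined here so the
# example is self-contained.
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': 'Hello from Lambda Containers',
        'event': event
    }

def test_handler_echoes_event():
    event = {'path': '/demo'}
    result = lambda_handler(event, {})
    assert result['statusCode'] == 200
    assert result['event'] == event

test_handler_echoes_event()
print('handler test passed')
```

Tests like this can run in the build pipeline against the same code that gets baked into the image.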

Unlike layer-based Lambdas, we will not be specifying the module and function as part of our Lambda function definition. Rather, we use either the command or entry point of the container image to control which application code runs. (We will see how this works in the container definition file in a few steps.)

Dependencies File

The next piece of content for this example is our requirements.txt file. In our case, this file will be used to pip install our dependencies as part of the container build. Since the bootstrap uses the requests library, that is the only entry we need:

requests
One of the more challenging tasks in traditional Lambda deployments is dependency management. The ability to predictably manage all dependencies is one of the hallmarks of container-driven development.
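For example, pinning exact versions in requirements.txt makes the image build reproducible – every build of the image resolves the same dependency versions. The version number below is purely illustrative.

```text
# Hypothetical pinned requirements.txt; the version number is illustrative.
requests==2.25.1
```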

Creating the container image

Now that we have fully built out our example function and bootstrap, our last piece of code is our Dockerfile. In it, we bundle the files from our content folder into a container based on a Python base image and tell the container how to run our bootstrap function at startup.

FROM python:alpine

COPY ./content .

RUN pip install -r requirements.txt

CMD python3 bootstrap.py

From here, we just need to build our image and publish it to Amazon ECR.

# Create the ECR repository
aws ecr create-repository --repository-name sample-lambda

# Login to ECR (substitute your own account ID and Region)
aws ecr get-login-password | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com

# Build the image
docker build -t sample-lambda .

# Tag the image with the ECR URI
docker tag sample-lambda:latest <account-id>.dkr.ecr.<region>.amazonaws.com/sample-lambda:latest

# Push the image to ECR
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/sample-lambda:latest

Creating the Lambda function

Now that we have a Lambda function built and deployed as an OCI container image to Amazon ECR, we can create the Lambda function instance that will use the image. To do so, navigate to Lambda in the AWS Management Console and choose Create function.

You should see a new option available to you – Container image. Select this option, and then click the Browse images button to select the image you uploaded previously to ECR. Once selected, you can specify a function name and then click Create function again to finalize the creation.

At this point, you have a Lambda function that functionally behaves the same as every other Lambda function in your account. You can create a test event and test the function directly from the GUI.

It should be evident how streamlined this process is compared to Lambda functions created from S3 or using Lambda Layers. By specifying a single image in ECR, we are able to choose a precise version of both our runtime and our function code.


Conclusion

From this example, you should see how easy it is to build and deploy Lambda functions using container images. Development teams experienced with containers can now carry that knowledge over to building and running Lambda functions.

About the author

Mark Sweat is an AWS Certified Solutions Architect Associate currently working as a Senior Software Architect with Koch Industries. He has a passion for all things containers and serverless, best exemplified by a Lambda tattoo on his left wrist. When not working with AWS technologies, he enjoys spending his time either playing games with his family or enjoying craft beers.