AWS Compute Blog
Working with Lambda layers and extensions in container images
In this post, I explain how to use AWS Lambda layers and extensions with Lambda functions packaged and deployed as container images.
Previously, Lambda functions could only be packaged as .zip archives, including functions authored in the AWS Management Console. You can now also package and deploy Lambda functions as container images.
You can use familiar container tooling such as the Docker CLI with a Dockerfile to build, test, and tag images locally. Lambda functions built using container images can be up to 10 GB in size. You push images to an Amazon Elastic Container Registry (ECR) repository, a managed AWS container image registry service. You create your Lambda function, specifying the source code as the ECR image URL from the registry.
Lambda functions packaged as container images do not support adding Lambda layers to the function configuration. However, there are a number of ways to use the functionality of Lambda layers with container images. You take on the responsibility for packaging your preferred runtimes and dependencies as part of the container image during the build process.
Understanding how Lambda layers and extensions work as .zip archives
If you deploy function code using a .zip archive, you can use Lambda layers as a distribution mechanism for libraries, custom runtimes, and other function dependencies.
When you include one or more layers in a function, during initialization, the contents of each layer are extracted in order to the /opt directory in the function execution environment. Each runtime then looks for libraries in a different location under /opt, depending on the language. You can include up to five layers per function, which count towards the unzipped deployment package size limit of 250 MB. Layers are automatically set as private, but they can be shared with other AWS accounts, or shared publicly.
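For example, a layer is typically created by zipping the library files under the folder structure the runtime expects and publishing it with the AWS CLI. The following is a minimal sketch; the layer name, file layout, and runtime are illustrative.
# Package the library files under the path the runtime expects (for example, python/ for Python)
zip -r shared-lib-layer.zip python/
# Publish the archive as a new layer version
aws lambda publish-layer-version \
    --layer-name shared-lib-layer \
    --zip-file fileb://shared-lib-layer.zip \
    --compatible-runtimes python3.8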
Lambda Extensions are a way to augment your Lambda functions and are deployed as Lambda layers. You can use Lambda Extensions to integrate functions with your preferred monitoring, observability, security, and governance tools. You can choose from a broad set of tools provided by AWS, AWS Lambda Ready Partners, and AWS Partners, or create your own Lambda Extensions. For more information, see “Introducing AWS Lambda Extensions.”
Extensions can run in either of two modes, external and internal. External extensions run as independent processes in the execution environment. They can start before the runtime process, and can continue running after the function invocation is fully processed. Internal extensions run as part of the runtime process, in-process with your code.
Lambda searches the /opt/extensions directory and starts initializing any extensions found. Extensions must be executable as binaries or scripts. As the function code directory is read-only, extensions cannot modify function code.
It helps to understand that Lambda layers and extensions are just files copied into specific file paths in the execution environment during the function initialization. The files are read-only in the execution environment.
Understanding container images with Lambda
A container image is a packaged template built from a Dockerfile. The image is assembled or built from commands in the Dockerfile, starting from a parent or base image, or from scratch. Each command then creates a new layer in the image, which is stacked in order on top of the previous layer. Once built from the packaged template, a container image is immutable and read-only.
For Lambda, a container image includes the base operating system, the runtime, any Lambda extensions, your application code, and its dependencies. Lambda provides a set of open-source base images that you can use to build your container image. Lambda uses the image to construct the execution environment during function initialization. Lambda also optimizes the image and caches it close to where the function runs, so cold start times are the same as for .zip archives. You can use the AWS Serverless Application Model (AWS SAM) CLI or native container tools such as the Docker CLI to build and test container images locally.
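As a sketch of the local workflow, the AWS base images include the Lambda runtime interface emulator, so you can build a function image and invoke it locally before pushing it to ECR. The image tag and test event below are placeholders.
# Build the function image locally
docker build -t my-function:latest .
# Run the image; the runtime interface emulator in the base image listens on port 8080
docker run -p 9000:8080 my-function:latest
# In another terminal, post a test event to the local endpoint
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"test": "event"}'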
Using Lambda layers in container images
Container layers are added to a container image, similar to how Lambda layers are added to a .zip archive function.
There are a number of ways to use container image layering to add the functionality of Lambda layers to your Lambda function container images.
Use a container image version of a Lambda layer
A Lambda layer publisher may have a container image format equivalent of a Lambda layer. To maintain the same file path as Lambda layers, the published container images must have the equivalent files located in the /opt directory. An image containing an extension must include the files in the /opt/extensions directory.
An example Lambda function, packaged as a .zip archive, is created with two layers. One layer contains shared libraries, and the other layer is a Lambda extension from an AWS Partner.
aws lambda create-function --region us-east-1 --function-name my-function \
--role arn:aws:iam::123456789012:role/lambda-role \
--layers \
"arn:aws:lambda:us-east-1:123456789012:layer:shared-lib-layer:1" \
"arn:aws:lambda:us-east-1:987654321987:extensions-layer:1" \
…
The corresponding Dockerfile syntax for a function packaged as a container image includes the following lines. These pull the container image versions of the Lambda layers and copy them into the function image. The shared library image is pulled from ECR and the extension image is pulled from Docker Hub.
FROM public.ecr.aws/myrepo/shared-lib-layer:1 AS shared-lib-layer
# Layer code
WORKDIR /opt
COPY --from=shared-lib-layer /opt/ .
FROM aws-partner/extensions-layer:1 AS extensions-layer
# Extension code
WORKDIR /opt/extensions
COPY --from=extensions-layer /opt/extensions/ .
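Once these lines are combined with your function's base image and code in a complete Dockerfile, you can check that the layer and extension files land in the expected paths by listing /opt in the built image. This is a sketch only; the image tag is a placeholder and it assumes the base image includes the ls utility.
docker build -t my-function:latest .
# List the /opt directory of the built image to confirm the layer and extension files are present
docker run --rm --entrypoint ls my-function:latest -R /opt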
Copy the contents of a Lambda layer into a container image
You can use existing Lambda layers, and copy the contents of the layers into the function container image /opt directory during docker build.
To use this in an automated build process, create a Dockerfile that installs the AWS Command Line Interface (AWS CLI) and uses it to copy the layer files from Amazon S3.
It is worth considering the security aspects of this approach, as it requires passing AWS credentials into the build to copy the files from S3. Always ensure you never store AWS credentials in the final container image. This example uses a multi-stage docker build, fetching the layers in one stage using the credentials, and then continuing FROM scratch in a second stage so the previous stage, including the credentials, is discarded from the final image. This can be verified using docker history. Ensure you use, and store, build-time-specific credentials with permissions scoped to only the resources you need access to.
The Dockerfile for this automated approach, to add two layers into a single image, includes the following lines to copy the Lambda layer contents.
# Create a build stage to copy the files from S3 using credentials
FROM alpine:latest AS layer-copy
ARG AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-"us-east-1"}
ARG AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID:-""}
ARG AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY:-""}
ENV AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
ENV AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
ENV AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
RUN apk add aws-cli curl unzip
RUN mkdir -p /opt
RUN curl $(aws lambda get-layer-version-by-arn --arn arn:aws:lambda:us-east-1:123456789012:layer:shared-lib-layer:1 --query 'Content.Location' --output text) --output layer.zip
RUN unzip layer.zip -d /opt
RUN rm layer.zip
RUN curl $(aws lambda get-layer-version-by-arn --arn arn:aws:lambda:us-east-1:987654321987:layer:extensions-layer:1 --query 'Content.Location' --output text) --output layer.zip
RUN unzip layer.zip -d /opt
RUN rm layer.zip
# Start the second stage from a blank image to squash all previous history, including credentials
FROM scratch
WORKDIR /opt
COPY --from=layer-copy /opt .
To run the AWS CLI within the Dockerfile, specify your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and include the required AWS_DEFAULT_REGION as command-line arguments. Ensure you use, and store, these minimally scoped credentials securely.
docker build . -t layer-image1:latest \
--build-arg AWS_DEFAULT_REGION=us-east-1 \
--build-arg AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
--build-arg AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Confirm the credentials are not stored in the image.
docker history layer-image1:latest
This creates a container image containing the existing Lambda layer and extension files. This can be pushed to ECR and used in a function.
Build a container image from a Lambda layer
You can repackage and publish Lambda layer file content as container images. Creating separate container images for different layers allows you to add them to multiple functions, and share them in a similar way as Lambda layers.
You can create a separate container image containing the files from a single layer, or combine the files from multiple layers into a single image. If you create separate container images for layer files, you then add these images into your function image.
There are two ways to manage language code dependencies. You can pre-build the dependencies and copy the files into the container image, or build the dependencies during docker build.
In this example, an existing Python application comprising a Lambda function and an extension is migrated from a .zip archive to separate function and extension container images. The extension writes logs to S3.
You can choose how to store images in repositories: either push both images to the same ECR repository with different image tags, or push them to separate repositories. This example uses separate ECR repositories.
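If the two repositories do not already exist, one way to create them is with the AWS CLI; the repository names match those used in the rest of the example.
aws ecr create-repository --repository-name log-extension-image --region us-east-1
aws ecr create-repository --repository-name log-extension-function --region us-east-1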
To set up the example, visit the GitHub repo and follow the instructions in the README.md file.
The existing example extension uses a makefile to install boto3 using pip install with a requirements.txt file. This is migrated to the docker build process. You must add a Python runtime to be able to run pip install as part of the build process. You can use python:3.8-alpine as a minimal base image.
Create separate Dockerfiles for the function and extension. The extension Dockerfile contains the following lines.
FROM python:3.8-alpine AS installer
# Layer code
COPY extensionssrc /opt/
COPY extensionssrc/requirements.txt /opt/
RUN pip install -r /opt/requirements.txt -t /opt/extensions/lib
FROM scratch AS base
WORKDIR /opt/extensions
COPY --from=installer /opt/extensions .
Build and tag the extension container image, log in to ECR, and push the image to an existing ECR repository.
docker build -t log-extension-image:latest .
docker tag log-extension-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/log-extension-image:latest
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/log-extension-image:latest
The function Dockerfile contains the following lines, which add the files from the previously created extension image to the function image. There is no need to run pip install for the function as it does not require any additional dependencies.
FROM 123456789012.dkr.ecr.us-east-1.amazonaws.com/log-extension-image:latest AS layer
FROM public.ecr.aws/lambda/python:3.8
# Layer code
WORKDIR /opt
COPY --from=layer /opt/ .
# Function code
WORKDIR /var/task
COPY app.py .
CMD ["app.lambda_handler"]
Build, tag, and push the function container image to a separate existing ECR repository. This creates an immutable image of the Lambda function.
docker build -t log-extension-function:latest .
docker tag log-extension-function:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/log-extension-function:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/log-extension-function:latest
The function requires a unique S3 bucket to store the log files, which is created in the S3 console. Create a Lambda function from the ECR repository image, and specify the bucket name as a Lambda environment variable.
aws lambda create-function --region us-east-1 --function-name log-extension-function \
--package-type Image --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/log-extension-function:latest \
--role "arn:aws:iam:: 123456789012:role/lambda-role" \
--environment "Variables": {"S3_BUCKET_NAME": "s3-logs-extension-demo-logextensionsbucket-us-east-1"}
For subsequent extension code changes, you need to update both the extension and function images. If only the function code changes, you need to update only the function image. Push the updated function image as the :latest image to ECR. Then update the function code deployment to use the updated :latest ECR image.
aws lambda update-function-code --function-name log-extension-function --image-uri 123456789012.dkr.ecr.us-east-1.amazonaws.com/log-extension-function:latest
Using custom runtimes with container images
With .zip archive functions, custom runtimes are added using Lambda layers. With container images, you no longer need to copy in Lambda layer code for custom runtimes.
You can build your own custom runtime images starting with AWS-provided base images for custom runtimes. You can add your preferred runtime, dependencies, and code to these images. To communicate with Lambda, the image must implement the Lambda Runtime API. We provide Lambda runtime interface clients for all supported runtimes, or you can implement your own for additional runtimes.
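For example, the Python runtime interface client is published as the awslambdaric package. As a sketch, a custom image would install the package and use it to start the handler; the handler name app.lambda_handler is a placeholder, and in practice these commands are typically expressed as RUN and ENTRYPOINT/CMD instructions in the Dockerfile.
# Install the Python runtime interface client
pip install awslambdaric
# Start the runtime interface client with your handler
python -m awslambdaric app.lambda_handler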
Running extensions in container images
A Lambda extension running in a function packaged as a container image works in the same way as a .zip archive function. You build a function container image including the extension files, or adding an extension image layer. Lambda looks for any external extensions in the /opt/extensions
directory and starts initializing them. Extensions must be executable as binaries or scripts.
Internal extensions modify the Lambda runtime startup behavior using language-specific environment variables, or wrapper scripts. For language-specific environment variables, you can set the following environment variables in your function configuration to augment the runtime command line.
- JAVA_TOOL_OPTIONS (Java Corretto 8 and 11)
- NODE_OPTIONS (Node.js 10 and 12)
- DOTNET_STARTUP_HOOKS (.NET Core 3.1)
An example Lambda environment variable for JAVA_TOOL_OPTIONS:
-javaagent:"/opt/ExampleAgent-0.0.jar"
Wrapper scripts delegate the runtime start-up to a script. The script can inject and alter arguments, set environment variables, or capture metrics, errors, and other diagnostic information. The following runtimes support wrapper scripts: Node.js 10 and 12, Python 3.8, Ruby 2.7, Java 8 and 11, and .NET Core 3.1.
You specify the script by setting the AWS_LAMBDA_EXEC_WRAPPER environment variable to the file system path of an executable binary or script, for example:
/opt/wrapper_script
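A minimal wrapper script follows the pattern below, based on the example in the Lambda documentation; the extra interpreter option shown is illustrative and assumes a Python runtime.
#!/bin/bash
# The arguments are the path to the runtime interpreter followed by its original arguments
args=("$@")
# Illustrative: extra options to pass to the interpreter (here, Python import timing)
extra_args=("-X" "importtime")
# Insert the extra options before the last argument, then start the runtime with the modified arguments
args=("${args[@]:0:$#-1}" "${extra_args[@]}" "${args[@]: -1}")
exec "${args[@]}"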
Conclusion
You can now package and deploy Lambda functions as container images in addition to .zip archives. Lambda functions packaged as container images do not directly support adding Lambda layers to the function configuration as .zip archives do.
In this post, I show a number of solutions to use the functionality of Lambda layers and extensions with container images, including example Dockerfiles.
I show how you can migrate an existing Lambda function and extension from a .zip archive to separate function and extension container images. Follow the instructions in the README.md file in the GitHub repository.
For more serverless learning resources, visit https://serverlessland.com.