Reducing Docker image build time on AWS CodeBuild using an external cache
Containerized solutions have proliferated as a way to simplify creating, deploying, and running applications, and automated CI/CD pipelines continuously rebuild, test, and deploy those applications whenever new changes are committed. It’s therefore important that your CI/CD pipelines run as quickly as possible, so that you get early feedback and can release faster.
AWS CodeBuild supports local caching, which makes it possible to persist intermediate build artifacts, like a Docker layer cache, locally on the build host and reuse them in subsequent runs. However, the CodeBuild local cache is maintained on a best-effort basis, so some of your build runs may not hit the cache as often as you would like.
A typical Docker image is built from several intermediate layers that are constructed during the initial image build process on a host. These intermediate layers are reused if found valid in any subsequent image rebuild; doing so speeds up the build process considerably because the Docker engine doesn’t need to rebuild the whole image if the layers in the cache are still valid.
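For example, running the same build twice on one host makes this visible: the second run reports “Using cache” for each unchanged step and finishes almost instantly. The following is a minimal, hypothetical illustration of that behavior (the Dockerfile contents are ours, not part of this solution):

# Minimal, hypothetical Dockerfile used only to illustrate layer caching.
cat > Dockerfile <<'EOF'
FROM amazonlinux:2
RUN yum install -y git     # stored as one intermediate layer
RUN echo "build step"      # stored as another intermediate layer
EOF

docker build -t layer-cache-demo .   # first build: every layer is built
docker build -t layer-cache-demo .   # second build: every step reports "Using cache"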
This post shows how to implement a simple, effective, and durable external Docker layer cache for CodeBuild to significantly reduce image build runtime.
Solution overview
The following diagram illustrates the high-level architecture of this solution. We describe implementing each stage in more detail in the following paragraphs.
In a modern software engineering approach built around CI/CD practices, whenever specific events happen, such as an application code change is merged, you need to rebuild, test, and eventually deploy the application. Assuming the application is containerized with Docker, the build process entails rebuilding one or multiple Docker images. The environment for this rebuild is on CodeBuild, which is a fully managed build service in the cloud. CodeBuild spins up a new environment to accommodate build requests and runs a sequence of actions defined in its build specification.
Because each CodeBuild instance is an independent environment, build artifacts can’t be persisted on the host indefinitely. The native CodeBuild local caching feature allows you to persist a cache for a limited time so that immediately subsequent builds can benefit from it. Local caching is best effort and can’t be relied on when multiple builds are triggered at different times. This solution instead uses an external, persistent cache that you can reuse across builds and that is valid at any time.
After the first build of a Docker image is complete, the image is tagged and pushed to Amazon Elastic Container Registry (Amazon ECR). In each subsequent build, the image is pulled from Amazon ECR and the Docker build process is forced to use it as cache for its next build iteration of the image. Finally, the newly produced image is pushed back to Amazon ECR.
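Stripped down to its essentials, the caching cycle is the following sequence of Docker commands; the repository URL is a placeholder, and the buildspec later in this post automates these same steps on CodeBuild:

# Placeholder repository URL; substitute your own account ID and Region.
REPO=account-ID.dkr.ecr.region.amazonaws.com/amazon_linux_codebuild_image

# Pull the image built previously, ignoring the failure on the very first run.
docker pull $REPO:latest || true

# Rebuild, allowing Docker to reuse layers from the pulled image as cache.
docker build --cache-from $REPO:latest --tag $REPO:latest .

# Push the refreshed image so that the next build can use it as cache.
docker push $REPO:latest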
In the following paragraphs, we explain the solution and walk you through an example implementation. The solution rebuilds the publicly available Amazon Linux 2 Standard 3.0 image, which is an optimized image that you can use with CodeBuild.
Creating a policy and service role
The first step is to create an AWS Identity and Access Management (IAM) policy and service role for CodeBuild with the minimum set of permissions to perform the job.
- On the IAM console, choose Policies.
- Choose Create policy.
- Provide the following policy in JSON format:
CodeBuild Docker Cache Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage",
        "ecr:ListTagsForResource",
        "ecr:DescribeImageScanFindings",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    }
  ]
}
- In the Review policy section, enter a name (for example, CodeBuildDockerCachePolicy).
- Choose Create policy.
- Choose Roles on the navigation pane.
- Choose Create role.
- Keep AWS service as the type of role and choose CodeBuild from the list of services.
- Choose Next.
- Search for and add the policy you created.
- Review the role and enter a name (for example, CodeBuildDockerCacheRole).
- Choose Create role.
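If you prefer to script these steps, the following AWS CLI sketch creates an equivalent policy and role; the file names and the account ID are illustrative assumptions, not part of the console walkthrough above.

# Assumes the JSON policy above is saved locally as policy.json and that
# 123456789012 is replaced with your own account ID.
aws iam create-policy \
  --policy-name CodeBuildDockerCachePolicy \
  --policy-document file://policy.json

# Trust policy allowing CodeBuild to assume the role.
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codebuild.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name CodeBuildDockerCacheRole \
  --assume-role-policy-document file://trust.json

aws iam attach-role-policy \
  --role-name CodeBuildDockerCacheRole \
  --policy-arn arn:aws:iam::123456789012:policy/CodeBuildDockerCachePolicy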
Creating an Amazon ECR repository
In this step, we create an Amazon ECR repository to store the built Docker images.
- On the Amazon ECR console, choose Create repository.
- Enter a name (for example, amazon_linux_codebuild_image).
- Choose Create repository.
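Alternatively, assuming your AWS CLI credentials and default Region are already configured, the repository can be created with a single command:

aws ecr create-repository --repository-name amazon_linux_codebuild_image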
Configuring a CodeBuild project
You now configure the CodeBuild project that builds the Docker image and uses the external cache to speed up the process.
- On the CodeBuild console, choose Create build project.
- Enter a name (for example, SampleDockerCacheProject).
- For Source provider, choose GitHub.
- For Repository, select Public repository.
- For Repository URL, enter https://github.com/aws/aws-codebuild-docker-images.
- In the Environment section, for Environment image, select Managed image.
- For Operating system, choose Amazon Linux 2.
- For Runtime(s), choose Standard.
- For Image, enter aws/codebuild/amazonlinux2-x86_64-standard:3.0.
- For Image version, choose Always use the latest image for this runtime version.
- For Environment type, choose Linux.
- For Privileged, select Enable this flag if you want to build Docker images or want your builds to get elevated privileges.
- For Service role, select Existing service role.
- For Role ARN, enter the ARN of the service role you created (CodeBuildDockerCacheRole).
- Select Allow AWS CodeBuild to modify this service role so it can be used with this build project.
- In the Buildspec section, select Insert build commands.
- Choose Switch to editor.
- Enter the following build specification (substitute account-ID and region).
version: 0.2

env:
  variables:
    CONTAINER_REPOSITORY_URL: account-ID.dkr.ecr.region.amazonaws.com/amazon_linux_codebuild_image
    TAG_NAME: latest

phases:
  install:
    runtime-versions:
      docker: 19
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email)
      - docker pull $CONTAINER_REPOSITORY_URL:$TAG_NAME || true
  build:
    commands:
      - cd ./al2/x86_64/standard/1.0
      - docker build --cache-from $CONTAINER_REPOSITORY_URL:$TAG_NAME --tag $CONTAINER_REPOSITORY_URL:$TAG_NAME .
  post_build:
    commands:
      - docker push $CONTAINER_REPOSITORY_URL
- Choose Create build project.
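If you provision projects from the command line instead of the console, a roughly equivalent project can be created with the AWS CLI. The sketch below assumes the buildspec shown above is committed to the repository as buildspec.yml, uses a placeholder account ID, and picks a compute type for illustration; note that a GitHub source may require your account to be connected to GitHub even for public repositories.

# Sketch only: buildspec.yml, the account ID, and the compute type are assumptions.
aws codebuild create-project \
  --name SampleDockerCacheProject \
  --source type=GITHUB,location=https://github.com/aws/aws-codebuild-docker-images,buildspec=buildspec.yml \
  --artifacts type=NO_ARTIFACTS \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/amazonlinux2-x86_64-standard:3.0,computeType=BUILD_GENERAL1_LARGE,privilegedMode=true \
  --service-role arn:aws:iam::123456789012:role/CodeBuildDockerCacheRole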
The provided build specification instructs CodeBuild to do the following:
- Use the Docker 19 runtime to run the build. The following process doesn’t work reliably with Docker versions lower than 19.
- Authenticate with Amazon ECR and pull the image you want to rebuild if it exists (on the first run, this image doesn’t exist).
- Run the image rebuild, forcing Docker to use the image pulled in the previous step as cache via the --cache-from parameter.
- When the image rebuild is complete, push it to Amazon ECR.
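One caveat on the login command: $(aws ecr get-login --no-include-email) only exists in AWS CLI version 1 and was removed in version 2. If your build environment uses AWS CLI version 2, the equivalent login step would look roughly like the following (the Region and account ID are placeholders):

# AWS CLI v2 equivalent of the ECR authentication step.
aws ecr get-login-password --region region | \
  docker login --username AWS --password-stdin account-ID.dkr.ecr.region.amazonaws.com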
Testing the solution
The solution is fully configured, so we can proceed to evaluate its behavior.
For the first run, we record a runtime of approximately 39 minutes. The build doesn’t use any cache and the docker pull in the pre-build stage fails to find the image we indicate, as expected (the || true statement at the end of the command line guarantees that the CodeBuild instance doesn’t stop because the docker pull failed).
The second run pulls the previously built image before starting the rebuild and completes in approximately 6 minutes, most of which is spent downloading the image from Amazon ECR (which is almost 5 GB).
We trigger another run after simulating a change halfway through the Dockerfile (adding an echo command to the statement at line 291 of the Dockerfile). Docker still reuses the layers in the cache up to the point of the changed statement and then rebuilds the remaining layers described in the Dockerfile from scratch. The runtime was approximately 31 minutes; the overhead of downloading the whole image first partially offsets the advantage of using it as cache.
It’s worth noting that the image in this use case is considerably large; most projects deal with smaller images, which introduce less overhead. Furthermore, the previous run had CodeBuild’s built-in best-effort Docker layer caching disabled; enabling it provides further efficiency, because the docker pull in the pre_build stage doesn’t have to download the image if the copy available locally already matches the one on Amazon ECR.
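Enabling that built-in local cache is a small configuration change. As a sketch, it can be switched on for the existing project with the AWS CLI as follows (it can also be selected in the project’s cache settings in the console):

# Enable best-effort local Docker layer caching on the existing project.
aws codebuild update-project \
  --name SampleDockerCacheProject \
  --cache '{"type": "LOCAL", "modes": ["LOCAL_DOCKER_LAYER_CACHE"]}'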
Cleaning up
When you’re finished testing, remove the following resources to avoid incurring further charges and to keep your account free of unused resources:
- The amazon_linux_codebuild_image Amazon ECR repository and its images
- The SampleDockerCacheProject CodeBuild project
- The CodeBuildDockerCachePolicy policy and the CodeBuildDockerCacheRole role
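The same cleanup can be scripted with the AWS CLI; the account ID below is a placeholder:

# Delete the ECR repository together with any images it still contains.
aws ecr delete-repository --repository-name amazon_linux_codebuild_image --force

# Delete the CodeBuild project.
aws codebuild delete-project --name SampleDockerCacheProject

# Detach the policy, then delete the role and the policy.
# If CodeBuild attached additional policies to the role, detach those first as well.
aws iam detach-role-policy \
  --role-name CodeBuildDockerCacheRole \
  --policy-arn arn:aws:iam::123456789012:policy/CodeBuildDockerCachePolicy
aws iam delete-role --role-name CodeBuildDockerCacheRole
aws iam delete-policy --policy-arn arn:aws:iam::123456789012:policy/CodeBuildDockerCachePolicy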
Conclusion
In this post, we reviewed a simple and effective solution to implement a durable external cache for Docker on CodeBuild. The solution provides significant improvements in the execution time of the Docker build process on CodeBuild and is general enough to accommodate the majority of use cases, including multi-stage builds.
The approach works in synergy with CodeBuild’s built-in best-effort Docker layer caching, and we recommend using both for further improvement. Shorter build processes translate to lower compute costs and a shorter overall development lifecycle, so features are released faster and at a lower cost.
About the Authors
Camillo Anania is a Global DevOps Consultant with AWS Professional Services, London, UK.
James Jacob is a Global DevOps Consultant with AWS Professional Services, London, UK.