AWS Cloud Operations & Migrations Blog

Build EC2 Image Builder container images locally

EC2 Image Builder is a fully managed AWS service that simplifies the creation, management, and deployment of golden server and container images. Images are built through a customizable automation pipeline, enabling customers to create images that come pre-installed and pre-configured with the software and packages needed to meet specific IT requirements. The service lets customers build either Amazon Machine Images (AMIs) or Docker container images. EC2 Image Builder uses bootstrap and cleanup scripts that install the required software packages before the pipeline components are executed, so customers can focus on the software and packages they want to install without worrying about build environment setup.

The container bootstrap script prepares the Docker environment in which the components will be executed, and the cleanup script ensures that the packages installed by the bootstrap script are removed before the Docker image is created. However, modifying the Docker environment can inadvertently cause an Image Builder pipeline failure (for example, changing user permissions before the components execute). Troubleshooting a pipeline failure caused by the bootstrap and cleanup scripts can be difficult if you do not have visibility into them.

This blog will focus on understanding the different segments of the bootstrap and build scripts that EC2 Image Builder uses to create a Docker image. You can use the following steps to set up a Docker environment locally and create a Docker image in the same way as the EC2 Image Builder Docker build process. This will also help you debug errors locally and save costs when running an EC2 Image Builder pipeline.

Build container images locally

In the following sections, we walk you through the steps to create a Docker container image locally on a Linux machine. These steps are part of the EC2 Image Builder build process.

Step 1 – Prepare the environment or machine

You can either launch an Amazon EC2 instance or execute the scripts directly on your local Linux machine. In this post, we will execute the scripts on a t2.medium EC2 instance using the Amazon Linux 2 64-bit (x86) AMI. For information about launching an EC2 instance with SSH connectivity, please follow the steps for launching and connecting to the instance.

Before moving on to the remaining steps, ensure that you have AWS CLI v2 installed on your EC2 instance or local machine. You can verify the current AWS CLI version by running aws --version. If you are using AWS CLI v1, you can either uninstall it or update to v2. Note: If you need to run the scripts with elevated permissions, verify the current AWS CLI version for the root user as well (e.g., sudo aws --version). If the version command still shows an older version, run the following command to update the path for the root user: ln -sf /usr/local/bin/aws /usr/bin/aws.
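As a quick sanity check, the version string that aws --version prints can be parsed in shell. The helper below is a small illustrative sketch; aws_major_version is our own hypothetical name, not part of the AWS CLI.

```shell
# Extract the major version from an `aws --version` string,
# e.g. "aws-cli/2.15.30 Python/3.11.6 ..." yields "2".
# aws_major_version is a hypothetical helper, not an AWS CLI feature.
aws_major_version() {
  echo "$1" | sed -E 's#^aws-cli/([0-9]+)\..*#\1#'
}

# Typical usage (requires the AWS CLI to be installed; v1 prints its
# version to stderr, hence 2>&1):
#   aws_major_version "$(aws --version 2>&1)"
aws_major_version "aws-cli/2.15.30 Python/3.11.6 Linux/5.10 exe/x86_64.amzn.2"
```

If this prints 1 for the root user but 2 for your normal user, the symlink fix above applies.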

Step 2 – Execute bootstrap scripts

The first script is a bootstrap script that performs package installations and configuration steps. This will prepare the Image Builder environment for component execution. The full script can be found in the AWS Samples GitHub repository. A description of what each section of the script does is found in Table 1.

Table 1: Code explanation for the bootstrap script
Line numbers Description
20 – 22 Create a working directory if it does not exist
25 – 27 Stop Amazon ECS service if it is running (Note: this step can be removed if running on a local machine)
45 – 58 Install unzip and aws-cli if not already installed
61 – 63 Download and install the mustache library from the EC2 Image Builder S3 bucket
66 – 79 Install and start docker
82 – 114 Create script that installs which, sudo, and curl
116 – 137 Create script that cleans up the yum and apt-get packages inside the container

Within your local environment, the bootstrap script creates a working directory where packages are installed and resources are created, making it easier to clean up the environment upon completion.

# Creates a working directory if it does not exist
if [ ! -d /tmp/imagebuilder_service ]; then
   mkdir -p /tmp/imagebuilder_service;
fi

Now that we’ve described what the bootstrap script does, we can execute it within the environment.

  1. Create a new file using nano or vim in the working directory /tmp/imagebuilder_service
  2. Paste the script contents from the GitHub repository into the file you created
  3. Change the permissions of the script to make it executable: chmod +x <script name>
  4. Execute the script: ./<script name>

Note: If you receive a permissions error, you may need to use elevated permissions by entering sudo su.
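The steps above can be sketched as a shell session. The script's actual filename did not survive publishing, so bootstrap.sh below is a hypothetical stand-in, and a one-line placeholder substitutes for the real script contents from the AWS Samples repository.

```shell
# Hypothetical walk-through of the steps above. "bootstrap.sh" is a
# stand-in name; in practice, paste the real script from the AWS
# Samples GitHub repository instead of the placeholder body below.
mkdir -p /tmp/imagebuilder_service
cd /tmp/imagebuilder_service

cat > bootstrap.sh <<'EOF'
#!/bin/bash
echo "bootstrap placeholder"
EOF

chmod +x bootstrap.sh   # allow execute permissions
./bootstrap.sh          # execute the script (prepend sudo if needed)
```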

Step 3 – Run build scripts to execute components

After the bootstrap script from the previous step is executed, EC2 Image Builder uses build scripts to execute the components inside the container. The full script can be found in the AWS Samples GitHub repository. A description of each step can be found in Table 2.

Table 2: Code explanation for the build script
Line numbers Description
3 – 11 Check if the TOE_EXIT file was created in the bootstrap step
13 – 163 Create a script that performs the following actions: installs AWSTOE; creates the build_input_config.json file, which contains the list of components to execute along with the phases [BUILD, VALIDATE]; runs cleanup steps to remove packages and AWSTOE from the container; and removes the AWSTOE and Image Builder directories
165 – 169 Get the Dockerfile template from the container recipe and dynamically generate the Dockerfile used to build the required image
170 Copy the scripts from the instance or local machine to the Docker container
172 – 174 Create and store a random image name and build the Docker image using the docker build command

The build script contains important package installations, including the AWS Task Orchestrator and Executor (AWSTOE). AWSTOE is a standalone application that is used to orchestrate workflows, perform installations, test image builds, and change system configurations which are core to what Image Builder does when executing components in a pipeline.

curl -o ${TOE_DIR}/ --silent --create-dirs
chmod +x ${TOE_DIR}/
stderr=$(${TOE_DIR}/ ${TOE_DIR} 2>&1)

Another core part of the build script is the creation of the build_input_config.json file. This file defines the phases in your pipeline and the associated documents which describe what actions to take, how to handle failures, number of retries, and parameter inputs for each step. The build and validate phases are listed to execute but you can also add the test phase if your document contains steps for testing your image.

cat << 'EOS' > ${IMAGE_BUILDER_DIR}/build_input_config.json
{
   "phases": "build,validate",
   "documents": [
      {
         "path": "Image-Builder-Component-ARN"
      },
      {
         "path": "Image-Builder-Component-ARN"
      }
   ]
}
EOS
The last part of the build script to highlight is preparing the Dockerfile that creates the golden container image. We first start by retrieving the Dockerfile template data from the specified container recipe using the AWS CLI. In addition, the Dockerfile is modified to copy the scripts needed to bootstrap the container environment, cleanup resources within the built image, and create the final, golden image.

aws imagebuilder get-container-recipe --container-recipe-arn Container-Recipe-ARN --endpoint-url --region region --query 'containerRecipe.dockerfileTemplateData' --output text > /tmp/imagebuilder_service/dockerfile_template

sed -i 's/imagebuilder:parentImage/parentImage/g; s/imagebuilder:environments/environments/g; s/imagebuilder:components/components/g' /tmp/imagebuilder_service/dockerfile_template

echo "$(</tmp/imagebuilder_service/dockerfile_template)" | parentImage="amazonlinux:latest" environments="WORKDIR /tmp
COPY /tmp/" components="RUN chmod +x /tmp/ && /tmp/ && chmod +x /tmp/ && /tmp/ && chmod +x /tmp/ && /tmp/ && rm -f /tmp/ && rm -f /tmp/ && rm -f /tmp/" /tmp/imagebuilder_service/mo > /tmp/imagebuilder_service/Dockerfile
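To see what the sed step accomplishes, here is a minimal, self-contained sketch; the TEMPLATE variable is a one-line stand-in for the recipe's real dockerfileTemplateData.

```shell
# The recipe's dockerfileTemplateData uses mustache variables in the
# `imagebuilder:` namespace; the build script strips that prefix so the
# mustache renderer (mo) can resolve plain environment-variable names.
# TEMPLATE is a hypothetical one-line stand-in for the real template.
TEMPLATE='FROM {{{ imagebuilder:parentImage }}}'

RENDER_READY=$(echo "$TEMPLATE" | sed 's/imagebuilder:parentImage/parentImage/g')
echo "$RENDER_READY"   # FROM {{{ parentImage }}}
```

After this rewrite, setting parentImage="amazonlinux:latest" in the environment lets mo substitute the value into the final Dockerfile.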

Now that we’ve described what the build script does, we can execute it within the environment.

  1. Update the AWS credentials you are using so the script can access Image Builder resources
    1. If running on a local machine, you can use aws configure to set up your credentials
    2. If running on an EC2 instance, you can use the method above or configure an IAM role for the instance that has Image Builder permissions. For information on IAM roles for Amazon EC2, refer to this documentation page
  2. Create a new file using nano or vim in the working directory /tmp/imagebuilder_service
  3. Paste the script contents from the GitHub repository into the file you created
    1. Replace Image-Builder-Component-ARN with your component ARN
    2. Replace Container-Recipe-ARN with your container recipe ARN
    3. Replace region with your desired Region
  4. Change the permissions of the script to make it executable: chmod +x <script name>
  5. Execute the script: ./<script name>

Note: If you receive a permissions error, you may need to use elevated permissions by entering sudo su.

Troubleshooting failed container builds

To determine whether the container build script executed correctly, run docker image ls and look for the newly created image. If the image is not listed, the build process failed. To troubleshoot failures that occur within the Docker container, we recommend reviewing the exit code of the Docker container used in the build process by running docker ps -a. An exit code other than 0 indicates that an error occurred. For example, an exit code of 137 indicates the container was killed, commonly because it ran out of memory, while an exit code of 1 requires further investigation of the container.
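With Docker available, the exit code can also be read directly with docker inspect --format '{{.State.ExitCode}}' <CONTAINER ID>. As a sketch of the interpretation above, the helper below (explain_exit_code, our own hypothetical name) maps the common codes to a short diagnosis:

```shell
# Map a container exit code to a short diagnosis, following the
# interpretation above. 137 = 128 + 9 (SIGKILL), which commonly means
# the kernel's OOM killer stopped the container.
explain_exit_code() {
  case "$1" in
    0)   echo "success" ;;
    137) echo "killed (SIGKILL) - container likely ran out of memory" ;;
    *)   echo "error - inspect the container further" ;;
  esac
}

# With Docker running, feed it the real exit code:
#   explain_exit_code "$(docker inspect --format '{{.State.ExitCode}}' <CONTAINER ID>)"
explain_exit_code 137
```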

Another troubleshooting technique is to review the files that were modified in the Docker container. Using the output from the previous docker ps -a command, capture the CONTAINER ID and run docker diff <CONTAINER ID> to view the files that were modified. We recommend reviewing the files with prefix letters D and C as these files were Deleted and Changed respectively. Files with a prefix letter A were Added to the container.
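Because docker diff emits plain text with one-letter prefixes, it is easy to narrow the output to the interesting entries. A minimal sketch follows (changed_or_deleted is a hypothetical helper), fed here with sample lines instead of a live container:

```shell
# Keep only the Changed (C) and Deleted (D) entries from `docker diff`
# output; Added (A) files are usually less interesting when debugging.
changed_or_deleted() {
  grep -E '^[CD] '
}

# Sample lines in the `docker diff <CONTAINER ID>` format; with Docker
# running you would pipe the real command instead:
#   docker diff <CONTAINER ID> | changed_or_deleted
printf 'A /tmp/new.log\nC /etc/passwd\nD /tmp/old.sh\n' | changed_or_deleted
```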

Finally, you can manually run your components within the Docker container to better understand the failure point. Note: the files might have been modified during the build process, so running the components again may not produce the same errors. Using the same CONTAINER ID from the previous troubleshooting step, execute the following command: docker commit <CONTAINER ID> image_for_testing. This commits a Docker image (named image_for_testing) that we will reference in the next step. Next, start a container from the image with shell access to manually run your component code using the following command: docker run -it --name troubleshooting_container image_for_testing /bin/sh. This command creates a new Docker container named troubleshooting_container, and your prompt changes to show that you are now logged in to the container and using its shell.

Clean up

If you are running from an EC2 instance, follow these steps to terminate the instance. Note: Remember to save your data before you terminate the EC2 Instance.


Conclusion

In this blog, we provided an overview of the EC2 Image Builder service, along with step-by-step instructions for bootstrapping a local environment for testing and for creating and executing build scripts for your container recipes and pipeline components. You can now use these steps to troubleshoot build issues within your pipeline locally before running your pipeline with the automation Image Builder provides. For more information about EC2 Image Builder, visit the user guide.

About the authors:

Kim Wendt

Kim Wendt is a Senior Solutions Architect at Amazon Web Services (AWS), responsible for helping global Media & Entertainment companies on their journey to the cloud. Prior to AWS, she was a Software Developer and now uses her development skills to build solutions for customers. She has a passion for continuous learning and holds a Master's degree in Computer Science from Georgia Tech with a Machine Learning specialization. In her free time, she enjoys reading and exploring the PNW with her husband and dog.

Sahil Sehgal

Sahil Sehgal is a Software Development Engineer on the EC2 Image Builder service (AWS), responsible for developing features that help customers create golden, secure, up-to-date images. Prior to AWS, he worked as a Software Engineer for 2 years, and he holds a Master's degree in Computer Science from Syracuse University, NY. He is a technology enthusiast who enjoys solving problems using technology. In his free time, he enjoys cooking, going on road trips, and trying different cuisines.

Andrew Thomas

Andrew Thomas is a Cloud Support Engineer (AWS). He enjoys coding projects, playing board games, and time in the gym. He is always curious to learn new things, but often has to rein himself in from exploring rabbit holes of information.