Containers

Shift left to secure your container supply chain

Introduction

When we talk about securing container solutions, most of the focus is on securing the orchestrator or the infrastructure that the orchestrator runs on. However, at the heart of your container solutions are the containers themselves. In this post, we show you how to ensure that before you even push a container into your repository, it’s evaluated for security risks automatically and rejected if it doesn’t meet the required standards. If you have read the Security pillar of the Container Build Lens, you’ll know that linting Dockerfiles, scanning images, and making sure you don’t commit secrets into your codebase are all important practices, and this post helps you put them into practice.

There are lots of tools out there that help you secure your container images, and you can usually run these tools on your desktop or laptop. This is all well and good, but in terms of security, we want to make sure the same standards are applied by everyone. To do this, I’ll apply automatic checks using a build pipeline. Creating a build pipeline for securing container images is not a new idea, and indeed this build pipeline is an iteration on an earlier post. The tools I will use are all open source: Hadolint, for applying linting rules to a Dockerfile; Trufflehog, for finding secrets that have been committed before they make it into your codebase; and Trivy, for static scanning of images.

Tools in use

Hadolint

Hadolint is a Dockerfile linting tool. It parses the Dockerfile into an AST (abstract syntax tree) and then applies rules to that AST. It also uses ShellCheck to lint the Bash code inside RUN instructions.

Trivy

Trivy is an open-source vulnerability scanner maintained by Aqua Security. It can scan for both OS and language-specific package vulnerabilities. It also supports different templates for integration with third-party applications, one of which is AWS Security Hub (Security Hub). I’ll use this integration to create an audit trail for vulnerabilities.

Trufflehog

Trufflehog is an open-source tool that searches through Git repositories for secrets; it can search commit history and branches to find accidentally committed secrets.

Pipeline plan

I implemented all three security tools in the same stage so that they run in parallel; all three checks must complete successfully for the artifact before it moves on to the build stage.

An audit level is built into the pipeline, set one level below what the company deems to be the acceptable vulnerability threshold. If the company blocks containers with CRITICAL vulnerabilities, then any container with HIGH vulnerabilities triggers the audit threshold and is logged. Similarly, if the security threshold for blocking the pipeline is set to HIGH, then any containers with MEDIUM vulnerabilities have those logged. This means that the company has a record of what was deemed acceptable and when, so that it can remediate when time allows or validate in case of future issues.
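For example, using the variable names that appear in the Trivy buildspec later in this post, the two thresholds might be set like this (the values here are illustrative, not the demo’s defaults):

# Illustrative CodeBuild environment variables (names match the Trivy buildspec below)
FAIL_WHEN=CRITICAL   # vulnerabilities at this severity fail the pipeline
AUDIT_LEVEL=HIGH     # one level below: findings at this severity are logged to Security Hub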

I’m using Trivy in client/server mode so that the vulnerability database doesn’t have to be downloaded for every scan. This approach also lets several pipelines or repositories share the same scanner. The server downloads the vulnerability database automatically and continues to fetch the latest database in the background. The client then queries the remote address of the load balancer in front of the server instead of downloading the vulnerability database locally.
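A minimal sketch of what this looks like from the command line (the hostname is a placeholder; the demo pipeline passes the real address via $TRIVY_CLI_URL):

# On the server: listen for scan requests; the vulnerability DB is fetched
# once and refreshed in the background.
trivy server --listen 0.0.0.0:8080
# On the client (for example, in CodeBuild): send the image to the server
# instead of downloading the DB locally.
trivy image --server http://trivy-server.example.internal:8080 myrepo/myimage:tag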

Solution overview

The solution architecture is shown in the following diagram:

Service architecture diagram for implementing image security CICD pipeline.

Walkthrough

So how does the pipeline work?

  1. A developer is working on an update to a container image to be used for an application and submits code to the application repository.
  2. The developer then submits a pull request to get the changes approved and merged. This also creates the tag for the image: the pull request is monitored by Amazon EventBridge, which triggers an AWS Lambda function to set some parameters in AWS Systems Manager Parameter Store, including the commit ID that’s used for the tag (a sketch of the event pattern follows this list).
  3. This triggers the pipeline to start the security analysis stage. In this stage, the three checks run using the tools mentioned above. The tools are configured when the environment is first built, and a separate repository is created so that the team responsible for managing the security tools can update and manage them. Typically, this team is separate from the development team, with the latter building the business logic of the applications.
    • Hadolint check – In this check, we specify any trusted registries that images can be pulled from. If the Dockerfile pulls from an untrusted registry, it fails this check. We also specify some rules to ignore.
    • Trufflehog check – In this check, we look for any strings that match known patterns for specific secrets (e.g., AKIA followed by 16 alphanumeric characters, the pattern of an AWS access key ID). We scan the repository so that any such commits cause the check to fail.
    • Trivy scan – In this check, we set the level that is acceptable for security vulnerabilities (i.e., MEDIUM). Any vulnerability flagged by the Trivy scan above the MEDIUM level then causes the check to fail. Any MEDIUM vulnerabilities are converted to AWS Security Finding Format (ASFF) and sent to Security Hub as an audit trail.
  4. Only if all three of the above checks pass will the image then be built, tagged, and pushed to the Amazon Elastic Container Registry (Amazon ECR) repository.
  5. Once the image has been built, the pipeline deploys that image (to Amazon Elastic Container Service (Amazon ECS) on AWS Fargate in this case), but this could be changed to deploy on Amazon Elastic Kubernetes Service (Amazon EKS).
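As referenced in step 2, an EventBridge rule that reacts to pull request activity might use an event pattern like the following. This is a hedged sketch based on the standard CodeCommit event format, not the exact rule from the demo stack:

{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Pull Request State Change"],
  "detail": {
    "event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"]
  }
}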

The full repository is available to clone at https://github.com/aws-samples/aws-container-security-pipeline-dso-ecs.

Let’s look at the build specifications used for the different tools and what we can easily alter according to the security standards of the company.

First, we will look at Hadolint:

The following build specification is the buildspec_dockerfile.yml file in the AWS CodeCommit (CodeCommit) “container-security-demo-config” repository.

version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
      - echo Copying hadolint.yml to the application directory
      - cp hadolint.yml $CODEBUILD_SRC_DIR_AppSource/hadolint.yml
      - echo Switching to the application directory
      - cd $CODEBUILD_SRC_DIR_AppSource
      - pwd
      - which docker
      - echo Pulling the hadolint docker image
      - docker pull public.ecr.aws/zomato/hadolint/hadolint:v2.5.0-alpine
  build:
    commands:
      - echo Build started on `date`
      - echo Scanning with Hadolint...
      - result=`docker run --rm -i -v ${PWD}/hadolint.yml:/.hadolint.yaml public.ecr.aws/zomato/hadolint/hadolint:v2.5.0-alpine hadolint -f json - < Dockerfile`
  post_build:
    commands:
      - echo $result
      - aws ssm put-parameter --name "codebuild-dockerfile-results" --type "String" --value "$result" --overwrite
      - echo Build completed on `date`

The configuration of Hadolint is found in the container-security-demo-config/hadolint.yml file in the repository.

ignored: 
  - DL3000 
  - DL3025 

trustedRegistries: 
  - public.ecr.aws 

The buildspec pulls the Hadolint container image and runs it against the Dockerfile to apply the linting rules, storing the results in Parameter Store.

This means that the Hadolint rules are applied with the exception of DL3000 (the rule that requires an absolute WORKDIR) and DL3025 (the rule that requires JSON notation for CMD and ENTRYPOINT arguments), and we only allow images to be pulled from the Amazon ECR Public Gallery. You could instead list your company’s private registries here. These are simply examples of rules you may choose to ignore; you should review the available rules and confirm which are applicable to your company’s security posture.
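For instance, a company hosting approved base images in a private Amazon ECR registry might extend the configuration like this (the account ID and Region are placeholders):

ignored:
  - DL3000
  - DL3025

trustedRegistries:
  - public.ecr.aws
  - 123456789012.dkr.ecr.eu-west-1.amazonaws.com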

Let’s look at the Trivy buildspec in more detail (this is the buildspec_vuln.yml file in the CodeCommit container-security-demo-config repository).

version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.7

  pre_build:
    commands:
      - curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
      - apt-get update && apt-get install -y python-dev jq wget net-tools apt-transport-https gnupg
      - wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | apt-key add -
      - echo deb https://aquasecurity.github.io/trivy-repo/deb bionic main | tee -a /etc/apt/sources.list.d/trivy.list
      - apt-get update && apt-get install -y trivy
      - curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
      - python3 get-pip.py
      - pip3 install awscli
      - $(aws ecr get-login --no-include-email)
      - IMAGE_TAG_FOR_AUDIT=`aws ssm get-parameter --name "destinationCommit" --query "Parameter.Value" --output text`
      - curl https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/asff.tpl -o asff.tpl
      - ls asff.tpl

  build:
    commands:
      - IMAGE=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME
      - docker build $CODEBUILD_SRC_DIR_AppSource -t $IMAGE:$IMAGE_TAG_FOR_AUDIT
      - IMAGE=$IMAGE:$IMAGE_TAG_FOR_AUDIT
      - docker push $IMAGE
  post_build:
    commands:
      - trivy -f json image --server $TRIVY_CLI_URL $IMAGE -o scan_results.json
      - trivy -f template --template "@asff.tpl"  image --severity $AUDIT_LEVEL --server $TRIVY_CLI_URL $IMAGE -o audit.asff  && cat audit.asff | jq '.Findings' > report.asff
      - if cat scan_results.json |  jq -r --arg threshold $AUDIT_LEVEL '.Results[].Vulnerabilities[]?.Severity ==$threshold' | grep -q true; then aws securityhub batch-import-findings --findings file://report.asff
         &&  echo Report Sent to Security Hub on `date`; fi
      - if cat scan_results.json |  jq -r --arg threshold $FAIL_WHEN '.Results[].Vulnerabilities[]?.Severity ==$threshold' | grep -q true; then echo "Vulnerabilities Found" && exit 1; fi

The pre_build phase installs some dependencies and Trivy. It also grabs the commit reference to use as a tag for an audit trail.

A temporary image is built and pushed to a separate repository so that it can be used in the scan. Having a separate repository here means that any images are evaluated before being pushed to the main repository; an image with a security issue is never made available in the main repository. We scan the image using the remote server for any vulnerabilities that match the failure threshold. If those vulnerabilities exist, then the build job exits with a failed status. At the same time, the image is scanned for any vulnerabilities that match the audit level, and those are transformed into ASFF and imported into Security Hub to create an audit trail.
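To make the post_build logic concrete, here is a standalone sketch of the failure check; it exits non-zero if any finding in the Trivy JSON output is at the illustrative severity (the real buildspec parameterizes this via $FAIL_WHEN):

# Fail if any vulnerability in scan_results.json is CRITICAL.
# jq -e returns a non-zero exit status when the expression evaluates to false.
if jq -e '[.Results[].Vulnerabilities[]? | select(.Severity == "CRITICAL")] | length > 0' scan_results.json > /dev/null; then
  echo "Vulnerabilities found"
  exit 1
fi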

Finally, let’s look at the secret scan with Trufflehog.

Here is the buildspec for Trufflehog (this is the buildspec_secrets.yml file in the CodeCommit container-security-demo-config repository).

version: 0.2

phases:
  pre_build:
    commands:
    - echo Setting CodeCommit Credentials
    - git config --global credential.helper '!aws codecommit credential-helper $@'
    - git config --global credential.UseHttpPath true
    - echo Copying secrets_config.json to the application directory
    - cp secrets_config.json $CODEBUILD_SRC_DIR_AppSource/secrets_config.json
    - echo Switching to the application directory
    - echo Installing truffleHog
    - which pip3 && pip3 --version
    - which python3 && python3 --version
    - pip3 install 'truffleHog>=2.1.0,<3.0'
  build:
    commands:
    - echo Build started on `date`
    - echo Scanning with truffleHog...
    - trufflehog --regex --rules secrets_config.json --max_depth 1 --entropy=False "$APP_REPO_URL"
  post_build:
    commands:
    - echo Build completed on `date`

Trufflehog can search through an entire Git repository for accidentally committed secrets. For this check, we’re scanning the application’s CodeCommit repository for any strings that match a regex. Those regexes are set up in the secrets_config.json file and look for common patterns for secrets, such as AWS application programming interface (API) keys or GitHub credentials. By setting the max_depth parameter to 1, we limit the scanning to the most recent commit.
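The rules file maps a rule name to a regular expression, one entry per secret type. A minimal sketch of what secrets_config.json might contain (the patterns in the demo repository may differ):

{
  "AWS Access Key ID": "AKIA[0-9A-Z]{16}",
  "GitHub Personal Access Token": "ghp_[0-9a-zA-Z]{36}"
}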

How can you test out this pipeline?

Prerequisites

To implement the instructions, you need access to an AWS account with administrative permissions. You also need the AWS Command Line Interface (AWS CLI) installed locally or on an AWS Cloud9 instance, and Git installed.

Steps

1. Clone the repository.

git clone https://github.com/aws-samples/aws-container-security-pipeline-dso-ecs

2. The demo uses nested AWS CloudFormation templates and some configuration files that need to be uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. Create a bucket in the same Region that you’ll create the demonstration in; the bucket doesn’t need to be public facing.

aws s3 mb s3://<bucketname> --region <region name>

3. Security Hub is used in this demonstration and has an integration with Trivy. To enable this, go to Security Hub in the AWS Management Console and select Integrations from the menu on the left. Filter for Aqua Security and select Accept findings.

Screenshot of Aqua Security Integration options for AWS Security Hub

This should now look like the following image.

Screenshot showing completed Aqua Security integration

If you do not have Security Hub set up in your account, please follow the instructions here.

4. Upload files in the environment directory directly to the new bucket. This can be done via the Console or via the CLI as shown.

aws s3 sync environment s3://<bucketname> --region <region name>

5. Two zip files are required; create them as follows.

cd initial-commit
zip -rj ../devsecops/initial-commit.zip ./*
cd ../trivy-build
zip -rj ../devsecops/trivy-build.zip ./*

6. Upload the entire devsecops folder to the bucket.

aws s3 sync ../devsecops s3://<bucketname>/devsecops --region <region name>

Your bucket structure should now look like the following:

Screenshot showing S3 bucket object structure

7. Create a stack using the container-devsecops-demo.yml template, using the name of the bucket that you just created as the DevSecOpsResourcesBucket parameter.

Once all the nested stacks have finished building, you’ll have a CI/CD (continuous integration/continuous delivery) pipeline with built-in configurable security tools that deploys new images to an Amazon ECS on AWS Fargate cluster.

Now it’s time to clone the application repository to your local machine or to an AWS Cloud9 instance, if that’s what you’re running. You need the AWS CLI and Git available to update the application.

  1. Clone the CodeCommit repository container-security-demo-app to your local machine or AWS Cloud9, depending on what you want to use.
git clone https://git-codecommit.<region>.amazonaws.com/v1/repos/container-security-demo-app
  2. We also want an application to use with the pipeline. We will clone the popular game 2048 from GitHub; this application uses the MIT License.
git clone https://github.com/gabrielecirulli/2048.git
  3. Switch into the new Git repository that you cloned from CodeCommit.
cd container-security-demo-app
  4. Switch to the development branch.
git checkout development
  5. Now it’s time to fully populate the application, as only a Dockerfile and some additional required Nginx settings are committed initially.
cp -r ../2048/* .
git add .
git commit -m "nice commit message goes here"
git push origin development
  6. At this stage, you have committed your changes to the repo but have not triggered the pipeline, as it’s waiting for a pull request. Create the pull request:
aws --region <region> codecommit create-pull-request \
--title "Uploading new application" \
--description "Please Review" \
--targets repositoryName=container-security-demo-app,sourceReference=development,destinationReference=main
  7. Now if you open AWS CodePipeline (CodePipeline) in the AWS Management Console and select the container-security-demo-pipeline, you can see the change being submitted through the pipeline.

The image is tested for vulnerabilities, secrets, and linting errors. If all tests pass, then the container image is built, tagged, and added to the Amazon ECR private repository. Once there, the pipeline continues to the next stage and deploys to Amazon ECS on AWS Fargate. You can then interact with the application using the public load balancer created as part of the ECS Service stack.

If there are any vulnerabilities at the specified failure level or above, then the pipeline stops and the image won’t be added to the Amazon ECR repository.

You can test out various failure scenarios using this pipeline (a sketch follows this paragraph). Try replacing the COPY commands in the Dockerfile with ADD. If the Nginx image included in the Dockerfile has HIGH vulnerabilities (which it did at the time of testing), try replacing it with public.ecr.aws/nginx/nginx:1.24-alpine-slim. Alternatively, you can change the pipeline settings to block only on CRITICAL instead of HIGH and CRITICAL by updating the parameters in the CloudFormation template.
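For example, a hedged sketch of a Dockerfile edit that should trip the Hadolint check (rule DL3020 flags ADD where COPY suffices; the file name is illustrative):

FROM public.ecr.aws/nginx/nginx:1.24-alpine-slim
# Hadolint DL3020: use COPY instead of ADD for files and folders
ADD index.html /usr/share/nginx/html/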

  8. You can see how this looks in the following examples. The first image shows a failed scan, and the second image is a snippet from the CodeBuild logs that shows how many vulnerabilities were detected and sent to Security Hub.

  9. If there are vulnerabilities only at the audit level and below, those are logged directly with Security Hub so that there is an audit trail for the future. To see these, go to Security Hub and add a filter on findings (Product Name is Aqua Security).

If there are vulnerabilities, you will see them displayed like those in the following image.

You can change the image to a new version simply by editing the Dockerfile.

Now you have a pipeline with some open-source tooling that enables you to add security checks for your container images as they are built, and that prevents images with CRITICAL or HIGH vulnerabilities (depending on the threshold you set) from being used as part of a task.

Cleanup

To clean up the stack, complete the following actions:

  • Delete all objects from the Amazon S3 bucket that was created for CodePipeline, which is found in the AWS CloudFormation stack for DSO-Initial-Pipeline under Resources as the Pipeline Bucket (i.e., container-security-demo-<accountid>-<region>-artifacts).
  • Delete all images from the Amazon ECR private repositories container-security-demo-sample and container-security-demo-trivy.
  • Go to the AWS CloudFormation console and delete the DSO-ECS stack that you created from the AWS CloudFormation template.
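If you prefer the CLI, the following is a rough sketch of equivalent commands (the bucket name follows the convention above; account ID and Region are placeholders):

# Empty the pipeline artifact bucket.
aws s3 rm s3://container-security-demo-<accountid>-<region>-artifacts --recursive
# Delete all images from one repository; repeat for the other.
aws ecr batch-delete-image --repository-name container-security-demo-trivy \
  --image-ids "$(aws ecr list-images --repository-name container-security-demo-trivy --query 'imageIds[*]' --output json)"
# Delete the parent CloudFormation stack.
aws cloudformation delete-stack --stack-name DSO-ECS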

Conclusion

In this post, we showed you how to create a DevSecOps pipeline that performs three different types of security checks on container images, tags the images, and then deploys them onto Amazon ECS on AWS Fargate. The pipeline also creates an audit log of any images that have Common Vulnerabilities and Exposures (CVEs) listed at HIGH, MEDIUM, or below (depending on the level you choose for the audit setting). This example helps you implement the security posture that suits your company’s security profile and requirements.