
CI/CD pipeline for testing containers on AWS Fargate with scaling to zero

Development teams run manual and automated tests several times a day for their feature branches. Running tests locally is only one part of the process; to test workloads against other systems and to give QA engineers access, teams must deploy code to dedicated environments. These servers/VMs spend hours idling because new test workloads do not arrive at a constant rate. The most common approach to reducing costs is a scheduler that starts services in the morning and turns them off at the end of the working day. This is not effective when teams work in different time zones or, for example, work extra hours.

This post explains how to use a CI/CD pipeline to scale your containers in AWS Fargate in response to your access logs.

To achieve this result, you will be using the following services:

  • AWS CodePipeline
  • AWS CodeBuild
  • AWS CodeDeploy
  • Amazon ECR
  • AWS Fargate
  • AWS Lambda
  • AWS Step Functions
  • Amazon CloudWatch
  • Amazon SNS
  • Elastic Load Balancing
  • AWS CloudFormation

Solution Overview

Prerequisites

For this tutorial, you must have the following prerequisites:

  • An AWS account
  • A GitHub account and a repository you can push to

Method

Let us assume that your development team uses the mainline branch (this is the naming scheme we use internally at AWS) as its testing environment.

The first part of this process is the actual push/merge into the mainline branch of a git repository on GitHub. This action triggers AWS CodePipeline, which checks out the code from the repository and initiates a build whose output artifact is a Docker container image.

The CodeBuild BuildSpec script builds a Docker container image and pushes it to the Amazon ECR image repository. You can modify this step to push the container to other registries, such as Docker Hub.

AWS CodeBuild BuildSpec sample:

version: 0.2
phases:
  pre_build:
    commands:
      # Log in to Amazon ECR (AWS CLI v1 syntax)
      - $(aws ecr get-login --no-include-email)
      # Tag the image with the first 8 characters of the commit hash;
      # REPOSITORY_URI is provided as a CodeBuild environment variable
      - TAG="$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - IMAGE_URI="${REPOSITORY_URI}:${TAG}"
  build:
    commands:
      - docker build --tag "$IMAGE_URI" .
  post_build:
    commands:
      - docker push "$IMAGE_URI"
      # Describe the pushed image for the ECS deploy action;
      # CONTAINER_NAME is also a CodeBuild environment variable
      - printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$IMAGE_URI" > images.json
artifacts:
  files:
    - images.json

Once CodeBuild creates a Docker image as an output artifact, CodeDeploy starts deploying it on AWS Fargate.

The next step consists of CodePipeline triggering a Lambda function that increases the number of tasks inside your cluster to the desired amount. This value is 1 by default, but it can be changed in the CloudFormation template module-scaling-lambda.yaml by replacing “desired_count = 1” with “desired_count = N”.

The above-mentioned AWS Lambda function also invokes an AWS Step Functions state machine, which is responsible for waiting a predefined number of seconds (the value of the scaleToZeroTime parameter in the CloudFormation template). The wait period you set depends on how long your tasks need to run automated and/or manual tests. Once the time expires, AWS Step Functions re-triggers the same AWS Lambda function, which scales the tasks back to zero, avoiding unnecessary costs.

If you need to access this deployment after it has been scaled down, just open the load balancer’s URL and wait until the task becomes available again. CloudWatch monitors the load balancer’s 503 responses, which indicate that someone is requesting access while no tasks are running, and raises an alarm. This alarm triggers the scale-up Lambda function again, and the same scale-down policy applies after the predefined time expires.
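
Under the hood this is a standard CloudWatch alarm on the load balancer’s 5XX metric with an SNS action. The tutorial’s CloudFormation template wires this up for you; the following boto3 sketch only illustrates the shape of that alarm, and every name in it is a placeholder:

import boto3

cloudwatch = boto3.client('cloudwatch')

# Placeholders: the real values come from the CloudFormation stack
ALB_DIMENSION = 'app/my-test-alb/0123456789abcdef'  # 'LoadBalancer' dimension of the ALB
SNS_TOPIC_ARN = 'arn:aws:sns:us-east-1:111122223333:scale-up-topic'

# Raise an alarm whenever the load balancer itself returns any 5XX
# (e.g. a 503 while zero tasks are running); the SNS topic then
# triggers the scale-up Lambda function
cloudwatch.put_metric_alarm(
    AlarmName='fargate-scale-up-on-503',
    Namespace='AWS/ApplicationELB',
    MetricName='HTTPCode_ELB_5XX_Count',
    Dimensions=[{'Name': 'LoadBalancer', 'Value': ALB_DIMENSION}],
    Statistic='Sum',
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator='GreaterThanThreshold',
    TreatMissingData='notBreaching',
    AlarmActions=[SNS_TOPIC_ARN],
)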

AWS Lambda code

lambda_handler is responsible for scaling tasks to the amount defined in the desired_count variable. After the tasks are scaled up via the ECS client’s update_service function, this variable is set to 0 and passed to the AWS Step Functions state machine, so that the next invocation scales the tasks back down.

import json
import os

import boto3

client = boto3.client('ecs')
code_pipeline = boto3.client('codepipeline')


def lambda_handler(event, context):
    service_arn = os.environ['service_arn']
    cluster_arn = os.environ['cluster_arn']
    state_arn = os.environ['state_arn']

    # The state machine passes desired_count explicitly; other callers
    # (such as CodePipeline) do not, so fall back to scaling up
    try:
        desired_count = event['desired_count']
    except KeyError:
        desired_count = 1  # This is the value where the amount of tasks is defined

    try:
        client.update_service(
            cluster=cluster_arn,
            service=service_arn,
            desiredCount=int(desired_count)
        )
    except Exception as e:
        raise Exception('Unable to scale the cluster: ' + str(e))

    try:
        # If we are starting tasks on the cluster, we need to trigger the state
        # machine that will scale tasks back to zero after a predefined amount of time
        if int(desired_count) > 0:
            clientSF = boto3.client('stepfunctions')

            # List all running executions (if any) and stop them,
            # because this is a new deployment in the pipeline
            executions = clientSF.list_executions(
                stateMachineArn=state_arn,
                statusFilter='RUNNING'
            )
            for execution in executions['executions']:
                clientSF.stop_execution(
                    executionArn=execution['executionArn']
                )

            # Start a new timed execution; we send desired_count as 0 so the
            # next invocation of this Lambda scales the tasks back down
            clientSF.start_execution(
                stateMachineArn=state_arn,
                input=json.dumps({'desired_count': 0})
            )
    except Exception as e:
        raise Exception('Unable to trigger state machine: ' + str(e))

    # When invoked as a CodePipeline action, report success back to the
    # pipeline so the ScaleUp stage does not hang waiting for a result
    if 'CodePipeline.job' in event:
        code_pipeline.put_job_success_result(jobId=event['CodePipeline.job']['id'])
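
For manual testing, you can invoke the same function outside of the pipeline. A minimal sketch, assuming an illustrative function name (use the name created by your stack):

import json
import boto3

lambda_client = boto3.client('lambda')

# 'scaling-lambda' is a placeholder; use the function name created by your stack
lambda_client.invoke(
    FunctionName='scaling-lambda',
    InvocationType='Event',  # asynchronous invocation
    Payload=json.dumps({'desired_count': 0}).encode()  # 0 scales tasks down
)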

AWS Step Functions code

{
  "Comment": "Wait to scale Fargate back to zero",
  "StartAt": "WaitState",
  "States": {
    "WaitState": {
      "Type": "Wait",
      "Seconds": 1800,
      "Next": "StopLambda"
    },
    "StopLambda": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:AWS_REGION:AWS_ACCOUNT:function:LAMBDA_FUNCTION",
      "End": true
    }
  }
}
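
The Seconds value (1800 in this sample, that is, 30 minutes) is the value that the scaleToZeroTime CloudFormation parameter controls.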

Tutorial

You can download a repository with a sample Dockerfile and CloudFormation templates for this tutorial here.

The first step is to extract the downloaded zip file and create a new GitHub repository, to which you must commit and push the Dockerfile from this archive. Please bear in mind that we are working with the mainline branch in this example.

The second step is to create an S3 bucket and upload the contents of the “cloudformation” folder there; a scripted version of this step is sketched below. Once uploaded, you are ready to launch your stack.
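
A minimal boto3 sketch of this step; the bucket name is a placeholder, and bucket names must be globally unique:

import os
import boto3

s3 = boto3.client('s3')
bucket = 'YOUR_BUCKET'  # placeholder; choose a globally unique name

# In regions other than us-east-1, also pass
# CreateBucketConfiguration={'LocationConstraint': '<region>'}
s3.create_bucket(Bucket=bucket)

# Upload every template from the extracted "cloudformation" folder
for name in os.listdir('cloudformation'):
    s3.upload_file(os.path.join('cloudformation', name), bucket, name)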

Please modify the bucket name in the URL so CloudFormation can fetch the template from the right location.

The next step is to fill in the following fields with your data (a scripted equivalent is sketched after the list):

  • Stack name (any name works)
  • GitHub repository name (just the name, no username/URL needed)
  • Branch (for example, mainline or develop)
  • GitHub username
  • GitHub personal access token (for more information click here)
  • Name of the bucket where your CloudFormation templates were uploaded
  • Time (in seconds) until the task scales to zero
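
If you prefer launching the stack from code instead of the console, here is a minimal boto3 sketch; the parameter keys and values below are illustrative, so use the exact keys defined in the template you uploaded:

import boto3

cloudformation = boto3.client('cloudformation')

cloudformation.create_stack(
    StackName='fargate-scale-to-zero',
    TemplateURL='https://YOUR_BUCKET.s3.amazonaws.com/main.yaml',  # placeholder URL
    Parameters=[
        {'ParameterKey': 'GitHubRepo', 'ParameterValue': 'my-repo'},
        {'ParameterKey': 'GitHubBranch', 'ParameterValue': 'mainline'},
        {'ParameterKey': 'GitHubUser', 'ParameterValue': 'my-github-user'},
        {'ParameterKey': 'GitHubToken', 'ParameterValue': '<personal access token>'},
        {'ParameterKey': 'TemplateBucket', 'ParameterValue': 'YOUR_BUCKET'},
        {'ParameterKey': 'ScaleToZeroTime', 'ParameterValue': '1800'},
    ],
    # Equivalent to the acknowledgement checkboxes in the console
    Capabilities=['CAPABILITY_IAM'],
)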

Click Next and, on the last step page, confirm the acknowledgement checkboxes (such as allowing AWS CloudFormation to create IAM resources).

Click “Create stack.”

Once the stack creation process finishes, open the Outputs tab to find the load balancer’s ServiceUrl, which you use to access your testing environment.

PipelineUrl takes you to the detail page for this project’s AWS CodePipeline.

Process overview

Start by committing and pushing a simple README.md file to the repository and observe what happens.

If you check your AWS CodePipeline, you will see that it was triggered and started building a new artifact.

Once the ScaleUp stage starts, it triggers the Lambda function that scales up the Fargate cluster’s tasks.

This also triggers the AWS Step Functions state machine, which waits for the scale-down time you defined during CloudFormation stack creation.

When the predefined time expires, AWS Step Functions invokes the AWS Lambda function again. This scales the tasks to zero and avoids any further costs of running this workload.

The process will repeat itself on the next commit.

Now test the second scenario, in which tasks are already scaled to zero but you need to access your environment. Open Amazon CloudWatch Alarms while you access the load balancer’s URL (available in the Outputs of the CloudFormation template).

Since no tasks are running, the load balancer returns a 503 HTTP code. An alarm that listens for 5XX errors sends a notification to Amazon SNS, which in turn triggers the same Lambda function that AWS CodePipeline’s workflow uses. The number of tasks increases and the environment is accessible again.
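
If you prefer to script this check, here is a minimal sketch that polls the ServiceUrl until a task is serving traffic again; the URL below is a placeholder for your stack’s ServiceUrl output:

import time
import urllib.error
import urllib.request

SERVICE_URL = 'http://my-test-alb-123456789.us-east-1.elb.amazonaws.com/'  # placeholder

# The first request returns a 503 and fires the CloudWatch alarm;
# keep polling until a task is running behind the load balancer
while True:
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=10) as response:
            if response.status == 200:
                print('Environment is up')
                break
    except urllib.error.HTTPError as e:
        print(f'Still scaled down (HTTP {e.code}), retrying...')
    except urllib.error.URLError:
        print('Connection failed, retrying...')
    time.sleep(15)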

Cleaning up

To avoid incurring future charges, delete the CloudFormation stack.
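
A minimal sketch for doing the same from code (the stack name is the one you chose during creation):

import boto3

# Deletes the stack and everything it created (pipeline, cluster, alarms, and so on)
boto3.client('cloudformation').delete_stack(StackName='fargate-scale-to-zero')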

Conclusion

By using AWS Step Functions to connect AWS Lambda with AWS CodePipeline and AWS Fargate, it becomes easy to scale tasks up and down based on repository activity and on the load balancer’s access patterns. Adopting such an approach helps you avoid incurring unnecessary costs for your testing environments.