Containers

Developing Twelve-Factor Apps using Amazon ECS and AWS Fargate

Sushanth Mangalore and Chance Lee, AWS Solutions Architects, SMB

Introduction

The twelve-factor methodology helps you build modern, scalable, and maintainable software-as-a-service apps. The methodology is technology agnostic and has become a widely adopted approach to developing cloud-native applications.

There are a few different ways to develop twelve-factor applications on AWS. Container-based solutions are a natural fit for the twelve-factor methodology. This post describes the factors of the methodology through a reference solution developed using Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type. If you are starting to build container solutions on AWS, you can use the information in this post as guidance for planning your twelve-factor container application architecture. The concepts outlined here are just as applicable to ECS-based applications that use the EC2 launch type or to Amazon Elastic Kubernetes Service (Amazon EKS)-based applications.

Solution overview

The reference solution uses a container-based Python application with an Amazon DynamoDB table as the data store. Along with Amazon ECS, the solution uses a combination of AWS services for continuous integration and continuous delivery (CI/CD), Amazon Elastic Container Registry (Amazon ECR) as the container registry, AWS AppConfig to host the application configuration, and Amazon CloudWatch Logs to manage logs. Amazon Virtual Private Cloud (Amazon VPC) provides the network setup, and an Application Load Balancer (ALB) routes external traffic to the Amazon ECS service. The factors described in the next section provide details on how the different parts of the reference solution achieve the best practices recommended by the methodology. The high-level architecture of the reference solution is shown below.


The twelve factors of the reference solution

Codebase – You must manage all your code in a source control system and use one codebase per application. This means that if you have two container applications in your solution, they should live in two separate code repositories in a source control system like Git. Our solution uses AWS CodeCommit as the source code repository for the main container application.

Container applications add another versioned artifact: the container image. Each container app should have a container image that lives in its own repository in a container image registry. Our solution uses a private Amazon ECR repository as the container image registry.

Dependencies – The dependencies of the application should be declared explicitly, and no dependency should be assumed to come from the execution environment. In a container-based application, you can handle this concern in the Dockerfile. A Dockerfile is a text-based list of instructions to create a container image. You can declare your application dependencies in the manifest file of the programming language’s packaging system and install them using instructions in the Dockerfile. Pip, the package installer for Python, is used as part of the solution here to install the two key dependencies, boto3 and Flask.
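
A minimal Dockerfile for an application like this might look as follows. This is a sketch, not the exact file from the reference solution; the file names (requirements.txt, app.py) are illustrative assumptions:

```dockerfile
# Pin the base image to a specific Python version
FROM python:3.9-slim

WORKDIR /app

# Declare dependencies explicitly in the packaging manifest
# (requirements.txt listing boto3 and Flask) and install them
# with pip, rather than relying on the execution environment.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code last to maximize layer caching
COPY app.py .

EXPOSE 80
CMD ["python", "app.py"]
```

Copying and installing the manifest before the application code means dependency layers are rebuilt only when the manifest changes.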

Config – The twelve-factor methodology prescribes a strict separation of configuration from code. Configuration that varies among deployment environments should not be stored together with your application code. The code for your application must be the same across your deployment environments. The specific configuration for a deployment is provided by the deployment environment using environment variables. Each stage of your deployment pipeline can provide environment-specific configuration. You can specify environment variables in the container definition of your ECS application.
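
In application code, the environment-provided configuration can be read at startup. A minimal sketch; the variable names (TABLE_NAME, APP_ENV) and default values are illustrative assumptions, not the reference solution's exact names:

```python
import os

def load_config() -> dict:
    """Read deployment-specific settings from environment variables
    injected by the ECS container definition."""
    return {
        # Resource identifier of the backing DynamoDB table
        "table_name": os.environ.get("TABLE_NAME", "dev-table"),
        # Name of the deployment environment (dev, staging, prod)
        "environment": os.environ.get("APP_ENV", "dev"),
    }
```

Because the values come from the environment and not from code, the same container image can run unmodified in every deployment stage.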

Backing services – Your application must treat any service that it uses to perform its business function as an attached resource. This may be an Amazon RDS database, a DynamoDB table, an S3 bucket, or even a non-AWS service. Attached-resource semantics prescribe that these resources are made available to the application as configuration. Your application is aware of the attached resource, but not tightly coupled to it. For a DynamoDB table, it is sufficient to know the table name, which is the resource identifier. In the reference solution, the name of the DynamoDB table comes from AppConfig, allowing you to swap one table for another without any application downtime. This can be useful in scenarios where you are experimenting with different data sets, such as for A/B testing.
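
The attached-resource pattern can be sketched as follows: the application depends only on a resource identifier from configuration, and the concrete client is built from that identifier. The class and parameter names are hypothetical, and the boto3 call is shown as a comment to keep the sketch self-contained:

```python
class HelloRepository:
    """Wraps the backing DynamoDB table as an attached resource.

    The table name comes from configuration (AppConfig in the
    reference solution), so swapping tables requires no code change.
    """

    def __init__(self, table_name: str, client_factory=None):
        self.table_name = table_name
        # In the real application this would be something like:
        #   boto3.resource("dynamodb").Table(table_name)
        self._table = client_factory(table_name) if client_factory else None

    def rebind(self, new_table_name: str):
        # Called when /refresh-config pulls a new table name from AppConfig
        self.table_name = new_table_name
        self._table = None  # rebuilt lazily against the new table
```

The application talks to `HelloRepository`, never to a hard-coded table, so the restored table in the A/B scenario attaches with a configuration change alone.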

Build, release, and run – AWS CodePipeline provides mechanisms to separate the build, release, and run stages for a deployment, as recommended for a twelve-factor application. When you release new code to the target execution environment, you first package the application code as a deployment artifact. For a container-based application, this is a Docker image. This responsibility is part of the build stage, and the solution uses AWS CodeBuild for the build stage to push the container image to Amazon ECR. Next, the pipeline combines your deployment artifact with any environment-specific configuration to form a release. In the reference solution, the environment-specific variables are defined in the container definition. Dynamic configuration that can change at runtime is sourced from AppConfig. In the final step, the pipeline deploys the application using ECS with Fargate.
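
The build stage can be expressed in a CodeBuild buildspec. A simplified sketch; the ECR_REPO_URI environment variable is an assumption you would supply in the build project, while CODEBUILD_RESOLVED_SOURCE_VERSION is provided by CodeBuild:

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate the Docker client against the ECR registry
      - aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_REPO_URI
  build:
    commands:
      # Package the application as an immutable image, tagged by commit
      - docker build -t $ECR_REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - docker push $ECR_REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
```

Tagging the image with the commit ID keeps the build artifact traceable to the exact codebase revision it was built from.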

Processes – Your twelve-factor applications should be built as one or more stateless processes. Containers are designed to be stateless and immutable. It is recommended to run one application process per container for most use cases. Any state must be persisted in a backing store like a database. The memory or the local disk of the execution runtime should not maintain a persistent state between requests. In the reference solution, a single stateless Python application process runs in the container and exposes API endpoints.

Port binding – Twelve-factor applications should make themselves available to other applications by binding to a port in the execution environment. Container-based applications expose their port through the EXPOSE instruction in the Dockerfile. The container definition in an ECS task definition allows you to specify the host port to container port mapping. Applications that run on Fargate use the awsvpc network mode, where each task gets its own network interface. This means you only need to specify the container port and do not need any host port bindings. Our solution uses the built-in Flask server in the main Python application process to bind to port 80 in the container execution environment.
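
In awsvpc mode, the port mapping in the container definition reduces to the container port alone. A hedged task definition fragment (the container name is illustrative):

```json
{
  "name": "twelve-factor-app",
  "portMappings": [
    {
      "containerPort": 80,
      "protocol": "tcp"
    }
  ]
}
```

The load balancer's target group then registers each task's network interface on port 80 directly, with no host-port indirection.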

Concurrency – Container-based applications scale out by running more instances of the container process. This is the model of scaling prescribed by the twelve-factor methodology as well. The unit of scaling in ECS is a task. An ECS service defines the desired number of tasks that should be running to ensure that the application can handle the load it receives. For fault tolerance, you should set the desired count to more than one task in your service. ECS ensures that any failed tasks are automatically restarted. The number of tasks in a service definition can be increased or decreased to match the scaling needs of the application. The solution runs two copies of the main application task within the ECS service.
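
The relevant ECS service settings can be sketched as follows; the service name and deployment percentages are illustrative values, not the reference solution's exact configuration:

```json
{
  "serviceName": "twelve-factor-service",
  "launchType": "FARGATE",
  "desiredCount": 2,
  "deploymentConfiguration": {
    "minimumHealthyPercent": 100,
    "maximumPercent": 200
  }
}
```

With a desired count of 2 and a minimum healthy percent of 100, rolling deployments start replacement tasks before stopping old ones, so capacity never drops below two tasks.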

Disposability – Fast startup and graceful shutdown are advantages of container-based applications. You can keep the startup time short by following best practices when building your container images with the Dockerfile. On shutdown, a container application is expected to handle both graceful and abrupt terminations. When an ECS task is stopped, each container receives a SIGTERM signal, followed by a default 30-second timeout, after which a SIGKILL signal is sent and the containers are forcibly stopped. Based on the needs of your application, you can configure the time to wait before SIGKILL is sent by using the stopTimeout parameter, which can be up to 120 seconds with Fargate. As part of the shutdown, the application should finish processing all outstanding requests and stop accepting new requests with a proper SIGTERM handler. This blog post dives deeper into graceful shutdowns with ECS.

Dev/prod parity – The production and non-production environments for your twelve-factor app should be as similar as possible. This ensures that the application code has been tested in conditions that are realistic. This extends to the use of any backing services as well so that the integration of your application and the backing services are also tested uniformly between the different environments. Docker promotes this idea through the concept of build once, deploy many times. The same version of the Docker image can be used in both the non-production and production environments.

The CloudFormation stack for this solution includes an environment parameter, which can be customized. When the value of the environment parameter is prod, the deployment pipeline is created with an additional stage, where the release is deployed to a staging cluster. You can perform tests in the staging cluster before releasing to the production environment. The staging deployment is followed by a manual approval step. The pipeline deploys to the production cluster after you approve the staging release.

Logs – Treating logs as a continuous stream of events instead of static files allows you to react to the continuous nature of log generation. You can capture, store, and analyze real-time log data to get meaningful insights into the application’s performance, network, and other characteristics. An application must not be required to manage its own log files. You can specify the awslogs log driver for containers in your task definition under the logConfiguration object to ship the stdout and stderr I/O streams to a designated log group in Amazon CloudWatch Logs for viewing and archival. Additionally, FireLens for Amazon ECS enables you to use task definition parameters with the awsfirelens log driver to route logs to other AWS services or third-party log aggregation tools for log storage and analytics. AWS provides the AWS for Fluent Bit Docker image, or you can use your own Fluentd or Fluent Bit image.
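
A hedged container definition fragment for the awslogs driver; the log group name and region are illustrative values:

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/twelve-factor-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "ecs"
    }
  }
}
```

With this in place, the application simply writes to stdout and stderr; the log router, not the application, is responsible for delivery and retention.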

Admin processes – Any admin or maintenance tasks for the application should run in an identical environment to that of the application. These maintenance tasks are usually short running and only intended to support the normal business function of the main application. Sidecar containers can be one way of achieving admin capabilities by running them alongside the main application, but these become long-running applications as well. Amazon ECS scheduled tasks are a better fit for admin functions. With these, you can use a cron-like schedule to run tasks at set intervals in your cluster. In the reference solution, you can imagine a scheduled ECS admin task that creates a backup of the DynamoDB table once a day. This admin ECS task can have one container, which runs a single process. The task performs its function and exits immediately.
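
Such a nightly backup task could be scheduled with an EventBridge rule in CloudFormation. This is a sketch under the assumption that resources like EcsCluster, BackupTaskDefinition, EventsInvokeRole, and PrivateSubnet are defined elsewhere in the template:

```yaml
NightlyBackupRule:
  Type: AWS::Events::Rule
  Properties:
    # Run once a day at 03:00 UTC
    ScheduleExpression: cron(0 3 * * ? *)
    State: ENABLED
    Targets:
      - Id: dynamodb-backup-task
        Arn: !GetAtt EcsCluster.Arn
        RoleArn: !GetAtt EventsInvokeRole.Arn
        EcsParameters:
          TaskDefinitionArn: !Ref BackupTaskDefinition
          TaskCount: 1
          LaunchType: FARGATE
          NetworkConfiguration:
            AwsVpcConfiguration:
              Subnets:
                - !Ref PrivateSubnet
```

The backup task uses the same image-building and configuration conventions as the main application, satisfying the "identical environment" requirement of this factor.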

Launching the reference solution

The reference solution can be launched in your AWS account using the CloudFormation template in this GitHub repository. The template creates all the resources for the reference solution. The source in the GitHub repository will initialize a CodeCommit repository in your account and become the starting point for your deployment pipeline.

After the stack has launched successfully, you should see that the deployment pipeline created by the stack in AWS CodePipeline has run successfully for the first time. If you have used the default EnvironmentName parameter value of prod, the pipeline execution will halt at the Deployment_Approval stage. You can provide the approval for this stage to complete the deployment to the ECS cluster which acts as your production environment.

The pipeline pulls the latest code from the CodeCommit repository created by the stack, builds the Docker image in the CodeBuild stage, and pushes it to the ECR repository. The application deployed to the ECS cluster is made accessible through an internet-facing Application Load Balancer, which directs traffic to the ECS service.

You will find the URL of the application, including the /hello endpoint, in the stack’s outputs section.

This URL can be accessed in a browser to see a hello message output using the information in the DynamoDB table.

The other available application endpoints are:

  • / – This is the health check endpoint used by the load balancer.
  • /refresh-config – This endpoint can be used to refresh the application configuration that comes from AWS AppConfig.
  • /table-name – This endpoint can be used to view the name of the DynamoDB table being used by the application. A scheduled admin task can use this value, configured in AWS AppConfig, to make a nightly backup of the DynamoDB table.

Follow the instructions below to see some of the factors in action:

  1. Create a backup of the DynamoDB table and restore the backup to a new DynamoDB table.
  2. Change any display attribute of the application in the DynamoDB item using the AWS Management Console. For example, you can change the background color attribute of the application.
  3. Update the hosted AppConfig configuration to use the name of the restored DynamoDB table and deploy a new version of the configuration.
  4. Access the /refresh-config endpoint of the application. You may need to access the URL more than once to ensure that the configuration is updated for each ECS task behind the ECS service. You will see a message indicating that the configuration has been refreshed.
  5. Lastly, access the /hello endpoint of the application to see the change displayed in the output. This is an example of how a dependency of the application was swapped out at runtime without stopping the application.

You can push a new commit to the CodeCommit repository to see the stages of the deployment pipeline in action again. The pipeline deploys a new version of the ECS task definition using the updated container image, updating the ECS tasks behind the ECS service in a rolling fashion. The new version becomes accessible through the same /hello application endpoint.

Conclusion

This post outlined one possible combination of AWS services that you can use to implement your twelve-factor apps. AWS container services like Amazon ECS have strong native integrations with the larger AWS ecosystem for building secure, scalable, and operationally efficient solutions. AWS is always looking to make adoption easy for industry-recommended methodologies like the twelve-factor app. We encourage you to continue learning about AWS container services using AWS documentation, blogs, and whitepapers.

Further Reading

Sushanth Mangalore

Sushanth Mangalore is a Sr. Solutions Architect at Amazon Web Services, based in Chicago, IL. He is a technologist helping businesses build solutions on AWS to achieve their strategic objectives. He specializes in container services like Amazon ECS and Amazon EKS. Prior to AWS, Sushanth built IT solutions for multiple industries, using a wide range of technologies.

Chance Lee

Chance Lee is a Sr. Container Specialist Solutions Architect at AWS based in the Bay Area. He helps customers architect highly scalable and secure container workloads with AWS container services and various ecosystem solutions. Prior to joining AWS, Chance was an IBM Lab Services consultant.