AWS Compute Blog

Run Containerized Microservices with Amazon ECS and Application Load Balancer

This is a guest post from Sathiya Shunmugasundaram, Gnani Dathathreya, and Jeff Storey from the Capital One Technology team.

—————–

At Capital One, we are rapidly embracing cloud-native microservices architectures and applying them to both existing and new workloads. To advance microservices adoption, increase the efficiency of our cloud resources, and decouple the application layer from the underlying infrastructure, we are starting to use Docker to containerize workloads and Amazon Elastic Container Service (Amazon ECS) to manage them.

Docker enables environment consistency and allows us to spin up new containers in the environment of our choice (Dev / QA / Performance / Prod) in seconds, versus the minutes, hours, or days it used to take us in the past.

Amazon ECS gives us a platform to manage Docker containers with minimal hassle. We chose ECS because of its simplicity in deploying and managing containers. Our API platform was an early adopter, and ECS-based deployments quickly became the norm for managing the lifecycle of our stateless workloads.

Container orchestration has traditionally been challenging. With the Elastic Load Balancing Classic Load Balancer, we were limited in routing to multiple ports on the same server and were also unable to route requests based on context, which meant that we needed one load balancer per service. By using open source software like Consul, NGINX, and Registrator, it was possible to achieve dynamic service discovery and context-based routing, but at the cost of the added complexity and expense of running these additional components.

This post shows how the arrival of the Application Load Balancer has significantly simplified Docker-based deployments on ECS, enabling us to deliver microservices in the cloud with enterprise-class capabilities like service discovery, health checks, and load balancing.

Overview of Application Load Balancer

With the announcement of the new Application Load Balancer, we can take advantage of several out-of-the-box features that are readily integrated with ECS. With the dynamic port mapping option for containers, we can simply register a service with a load balancer, and ECS transparently manages the registration and deregistration of Docker containers. We no longer need to know the host port ahead of time, as the load balancer automatically detects it and dynamically reconfigures itself. In addition to the port mapping, we also get all the features of traditional load balancers, such as health checks, connection draining, and access logs, to name a few.
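
To make the dynamic port mapping concrete, here is a rough sketch (using boto3, with hypothetical family and image names) of a task definition that sets the host port to 0 so that ECS assigns an ephemeral port on the container instance:

    import boto3

    ecs = boto3.client('ecs')

    # Host port 0 tells ECS to pick an ephemeral port on the container
    # instance; the Application Load Balancer discovers it automatically.
    ecs.register_task_definition(
        family='service1',                       # hypothetical task family
        containerDefinitions=[{
            'name': 'service1',
            'image': 'example/service1:latest',  # hypothetical image
            'memory': 256,
            'portMappings': [{
                'containerPort': 8080,
                'hostPort': 0                    # dynamic host port
            }]
        }]
    )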

Similar to EC2 Auto Scaling, ECS also has the ability to auto scale services based on CloudWatch alarms. This functionality is critical, as it allows us to scale in or scale out based on demand. This feature, coupled with the new Application Load Balancer, gives us fully featured container orchestration. The pace at which we can now spin up new applications on ECS has greatly improved, and we spend much less time managing orchestration tools.
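
As a rough illustration of how this is wired up, the sketch below (boto3; the cluster, service, policy, and alarm names are hypothetical) registers an ECS service with Application Auto Scaling and attaches a step scaling policy triggered by a CloudWatch CPU alarm:

    import boto3

    autoscaling = boto3.client('application-autoscaling')
    cloudwatch = boto3.client('cloudwatch')

    resource_id = 'service/my-cluster/service1'   # hypothetical cluster/service

    # Allow the service's desired count to scale between 2 and 10 tasks
    autoscaling.register_scalable_target(
        ServiceNamespace='ecs',
        ResourceId=resource_id,
        ScalableDimension='ecs:service:DesiredCount',
        MinCapacity=2,
        MaxCapacity=10
    )

    # Add one task whenever the scale-out alarm fires
    policy = autoscaling.put_scaling_policy(
        PolicyName='service1-scale-out',
        ServiceNamespace='ecs',
        ResourceId=resource_id,
        ScalableDimension='ecs:service:DesiredCount',
        PolicyType='StepScaling',
        StepScalingPolicyConfiguration={
            'AdjustmentType': 'ChangeInCapacity',
            'StepAdjustments': [{'MetricIntervalLowerBound': 0,
                                 'ScalingAdjustment': 1}],
            'Cooldown': 60
        }
    )

    # Trigger the policy when average service CPU stays above 75%
    cloudwatch.put_metric_alarm(
        AlarmName='service1-cpu-high',
        Namespace='AWS/ECS',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'ClusterName', 'Value': 'my-cluster'},
                    {'Name': 'ServiceName', 'Value': 'service1'}],
        Statistic='Average',
        Period=60,
        EvaluationPeriods=2,
        Threshold=75,
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=[policy['PolicyARN']]
    )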

The Application Load Balancer also introduces path-based routing. This feature allows us to easily map different URL paths to different services. In a traditional monolithic application, URL paths are often used to denote different parts of the application, for example, http://<apphost>.com/service1 and http://<apphost>.com/service2.

Traditionally, this was done using a context root in an application server or by using a separate load balancer for each service. With the new path-based routing, these parts of the application can be split into individual ECS-backed services without changing the existing URL patterns. This makes migrating applications seamless, as clients can continue to call the same URLs. A large monolithic application with many subcontexts, such as www.example.com/orders and www.example.com/inventory, can be refactored into smaller microservices, with each path directed to a different target group of servers, such as an ECS service running Docker containers.
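
Here is a minimal sketch of what such path-based rules look like through the API (boto3; the listener and target group ARNs are placeholders):

    import boto3

    elbv2 = boto3.client('elbv2')

    listener_arn = 'arn:aws:elasticloadbalancing:...:listener/...'                # placeholder
    orders_tg = 'arn:aws:elasticloadbalancing:...:targetgroup/orders/...'         # placeholder
    inventory_tg = 'arn:aws:elasticloadbalancing:...:targetgroup/inventory/...'   # placeholder

    # Route /orders* to the orders service's target group
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=10,
        Conditions=[{'Field': 'path-pattern', 'Values': ['/orders*']}],
        Actions=[{'Type': 'forward', 'TargetGroupArn': orders_tg}]
    )

    # Route /inventory* to the inventory service's target group
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=20,
        Conditions=[{'Field': 'path-pattern', 'Values': ['/inventory*']}],
        Actions=[{'Type': 'forward', 'TargetGroupArn': inventory_tg}]
    )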

Key features of Application Load Balancers include:

  • Path-based routing – URL-based routing policies enable using the same ELB URL to route to different microservices
  • Multiple ports – Routes to multiple ports on the same server (container instance)
  • AWS integration – Integrated with many AWS services, such as ECS, IAM, Auto Scaling, and CloudFormation
  • Application monitoring – Improved metrics and health checks for the application

Core components of Application Load Balancers include:

  • Load balancer – The entry point for clients
  • Listener – Listens for requests from clients on a specific protocol and port and forwards them to one or more target groups based on rules
  • Rule – Determines how to route a request; based on a path-pattern condition and priority, it matches the request to a target group
  • Target – The entity that receives the traffic; currently EC2 instances are the available target type, and the same EC2 instance can be registered multiple times with different ports
  • Target group – Identifies a set of backend targets to which requests can be routed based on a rule. Health checks can be defined per target group, and the same load balancer can have many target groups

ALB Components

Sample application architecture

In the following example, we show two services with three tasks deployed on two ECS container instances. This shows the ability to route to multiple ports on the same host. Also, there is only one Application Load Balancer that provides path-based routing for both ECS services, simplifying the architecture and reducing costs.

Sample App

Configuring Application Load Balancers

The following steps create and configure an Application Load Balancer using the AWS Management Console. These steps can also be done using the AWS CLI.

      1. Create an Application Load Balancer using the AWS Console
        • Log in to the EC2 console (https://console.aws.amazon.com/ec2)
        • Select Load Balancers
        • Select Create Load Balancer

ALB Console

        • Choose Application Load Balancer
      2. Configure Load Balancer

Configure Load Balancer

        • For Name, type a name for your load balancer.
        • For Scheme, choose whether the load balancer is Internet-facing (routes requests from clients over the Internet to targets) or internal (routes requests to targets using private IP addresses).
        • For Listeners, the default is a listener that accepts HTTP traffic on port 80. You can keep the default listener settings, modify the protocol or port of the listener, or choose Add to add another listener.
        • For VPC, select the same VPC that you used for the container instances on which you intend to run your service.
        • For Available subnets, select at least two subnets from different Availability Zones, and choose the icon in the Actions column.
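
If you prefer to script this step, a minimal boto3 sketch of the same configuration looks like the following (the load balancer name, subnet IDs, and security group ID are placeholders):

    import boto3

    elbv2 = boto3.client('elbv2')

    # Internet-facing Application Load Balancer across two Availability Zones
    response = elbv2.create_load_balancer(
        Name='microservices-alb',                          # hypothetical name
        Scheme='internet-facing',
        Subnets=['subnet-11111111', 'subnet-22222222'],    # placeholder subnets
        SecurityGroups=['sg-12345678']                     # placeholder security group
    )
    alb_arn = response['LoadBalancers'][0]['LoadBalancerArn']
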
      3. Configure Security Groups

You must assign a security group to your load balancer that allows inbound traffic to the ports that you specified for your listeners.
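
For example, assuming an HTTP listener on port 80, a rule like the following sketch (boto3; the security group ID is a placeholder) opens that port to clients:

    import boto3

    ec2 = boto3.client('ec2')

    # Allow inbound HTTP traffic on the listener port (80 in this example)
    ec2.authorize_security_group_ingress(
        GroupId='sg-12345678',                    # placeholder security group
        IpPermissions=[{
            'IpProtocol': 'tcp',
            'FromPort': 80,
            'ToPort': 80,
            'IpRanges': [{'CidrIp': '0.0.0.0/0'}]
        }]
    )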

      4. Configure Routing
        • For Target group, keep the default, New target group.
        • For Name, type a name for the new target group.
        • Set Protocol and Port as needed.
        • For Health checks, keep the default health check settings.

ALB Routing
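
The equivalent API call creates the target group in the same VPC as your container instances; a sketch with boto3 (the target group name and VPC ID are placeholders):

    import boto3

    elbv2 = boto3.client('elbv2')

    # Target group with default HTTP health checks; ECS adds the targets later
    tg = elbv2.create_target_group(
        Name='service1-tg',              # hypothetical target group name
        Protocol='HTTP',
        Port=80,
        VpcId='vpc-12345678',            # placeholder VPC ID
        HealthCheckProtocol='HTTP',
        HealthCheckPath='/'
    )
    tg_arn = tg['TargetGroups'][0]['TargetGroupArn']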

      5. Register Targets

Your load balancer distributes traffic between the targets that are registered to its target groups. When you associate a target group to an Amazon ECS service, Amazon ECS automatically registers and deregisters containers with your target group. Because Amazon ECS handles target registration, you do not add targets to your target group at this time.

ALB Targets

        • Click Next:Review
        • Click Create
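
If you are scripting the setup instead, this is also the point where the listener is wired to the target group; no targets are registered manually because ECS handles that. A sketch with boto3 (the ARNs are placeholders):

    import boto3

    elbv2 = boto3.client('elbv2')

    # Forward all traffic on port 80 to the target group by default;
    # ECS registers and deregisters the actual container targets.
    elbv2.create_listener(
        LoadBalancerArn='arn:aws:elasticloadbalancing:...:loadbalancer/app/...',    # placeholder
        Protocol='HTTP',
        Port=80,
        DefaultActions=[{
            'Type': 'forward',
            'TargetGroupArn': 'arn:aws:elasticloadbalancing:...:targetgroup/...'    # placeholder
        }]
    )
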
      6. Register Docker containers as targets

Create Service

        • Provide the task definition for the service
        • Provide a Service Name and number of tasks
        • Click the Configure ELB button
        • Choose Application Load Balancer

ECS Service ALB

        • Choose the ecsServiceRole as the IAM role
        • Choose the Application Load Balancer created above
        • Select a container that you want the load balancer to use and click Add to ELB

ECS Service ALB

        • Select the target group name that was created above
        • Click Save and then Create Service
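
The same service can also be created programmatically; here is a minimal boto3 sketch (the cluster, task definition, container name, and target group ARN are placeholders):

    import boto3

    ecs = boto3.client('ecs')

    # Create an ECS service behind the Application Load Balancer's target group
    ecs.create_service(
        cluster='my-cluster',                     # hypothetical cluster
        serviceName='service1',
        taskDefinition='service1:1',              # hypothetical task definition
        desiredCount=3,
        role='ecsServiceRole',                    # IAM role that lets ECS register targets
        loadBalancers=[{
            'targetGroupArn': 'arn:aws:elasticloadbalancing:...:targetgroup/service1-tg/...',  # placeholder
            'containerName': 'service1',
            'containerPort': 8080
        }]
    )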

Cleaning up

  • Using the ECS Console, go to your Cluster and Service, update the number of tasks to 0, and then delete the service.
  • Using the EC2 Console, select the load balancer created above and delete it.
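
The cleanup can likewise be scripted; a sketch with boto3 (the names and ARNs are placeholders):

    import boto3

    ecs = boto3.client('ecs')
    elbv2 = boto3.client('elbv2')

    # Scale the service to zero tasks, then delete it
    ecs.update_service(cluster='my-cluster', service='service1', desiredCount=0)
    ecs.delete_service(cluster='my-cluster', service='service1')

    # Delete the load balancer and its target group
    elbv2.delete_load_balancer(
        LoadBalancerArn='arn:aws:elasticloadbalancing:...:loadbalancer/app/...')   # placeholder
    elbv2.delete_target_group(
        TargetGroupArn='arn:aws:elasticloadbalancing:...:targetgroup/...')         # placeholder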

Conclusion

Docker has given our developers a simplified application automation mechanism. Amazon ECS and Application Load Balancers make it easy to deliver these applications without having to build our own dynamic service discovery, load balancing, and container orchestration. Using ECS and Application Load Balancers, new services can be deployed in less than 30 minutes, and existing services can be updated with a newer version in less than a minute. ECS automatically takes care of rolling updates, and the Application Load Balancer takes care of registering new versions quickly and deregistering existing containers gracefully. This not only improves the agility of our teams, but also reduces the overall time to market.