Containers

Implement custom service discovery for Amazon ECS Anywhere tasks

Introduction

Amazon Elastic Container Service (Amazon ECS) is a managed container orchestration service offered by AWS. It simplifies deploying, managing, and scaling containerized applications using Amazon ECS task definitions through the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS Software Development Kits (AWS SDKs).

Customers who need to run containerized workloads both on AWS and on-premises often encounter challenges due to inconsistent tooling and deployment experiences across environments. Factors such as data gravity, compliance, and latency requirements contribute to these challenges.

To address these issues, Amazon ECS Anywhere extends the capabilities of Amazon ECS, enabling the deployment and management of containerized applications on-premises or in edge locations using a unified container orchestration platform. Amazon ECS Anywhere allows you to use the same Amazon ECS Application Programming Interfaces (APIs) and tools to deploy and manage container workloads on physical servers, virtual machines, or even Raspberry Pis, which ensures a consistent deployment experience across all environments.

Technical challenges of Amazon ECS Anywhere

One limitation of Amazon ECS Anywhere is the lack of native support for load balancing and service discovery. This limitation restricts use cases such as deploying customer-facing applications with Amazon ECS Anywhere and dynamically scaling the workload in the way that is possible on AWS.

While there are some solutions for load balancing with third-party tools (e.g., F5 and Inlets), it's also possible to simplify your architecture by using AWS services. This approach minimizes external dependencies, such as vendor licenses and product-specific configurations. In this post, we'll use an Application Load Balancer (ALB) for our solution.

The main challenge in implementing the solution lies in keeping the Amazon ECS service status synchronized with the instance targets behind the load balancer. This means that whenever changes occur in Amazon ECS services, the Amazon ECS tasks should automatically be registered with or deregistered from the target group behind the ALB in response to those scaling events. The good news is that we can capture these events from the Amazon EventBridge event bus and use an AWS Lambda function with the AWS SDK to adjust the ALB targets accordingly.
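
The CloudFormation template in Step 3 provisions this event routing for you. Purely for illustration, the equivalent wiring with the AWS SDK for JavaScript v3 could look like the following sketch; the rule name, queue name, and ARNs are placeholders, not resources from the templates:

import { EventBridgeClient, PutRuleCommand, PutTargetsCommand } from "@aws-sdk/client-eventbridge";

const events = new EventBridgeClient({});

// Rule on the default event bus that matches ECS Task State Change events for our cluster.
await events.send(new PutRuleCommand({
  Name: "ecsa-task-state-change",
  EventPattern: JSON.stringify({
    source: ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    detail: { clusterArn: ["arn:aws:ecs:<region>:<account-id>:cluster/ECSA-Demo-Cluster"] },
  }),
}));

// Deliver matched events to the SQS queue consumed by the Lambda function.
// (The queue also needs a resource policy that allows events.amazonaws.com to send messages.)
await events.send(new PutTargetsCommand({
  Rule: "ecsa-task-state-change",
  Targets: [{ Id: "ecsa-task-events-queue", Arn: "arn:aws:sqs:<region>:<account-id>:ecsa-task-events" }],
}));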

Solution overview

There are two Amazon Virtual Private Clouds (VPCs) in this demonstration:

  • The first VPC, OnPremVPC, simulates the on-premises environment. Three Linux EC2 instances running Ubuntu are provisioned with the Amazon ECS Anywhere agent installed; these simulate the on-premises virtual machines. In the same VPC, another three Linux EC2 instances (also running Ubuntu) run an open-source HTTP proxy (Squid) to provide the outbound HTTPS access required by the Amazon ECS Anywhere agents. An internet-facing ALB is also provisioned to simulate the on-premises Load Balancer.
  • The second VPC, LambdaVPC, mainly hosts the AWS Lambda function, which consumes the Amazon ECS Task State Change events fired by Amazon EventBridge and stored in an Amazon SQS queue. The AWS Lambda function registers or deregisters the IP targets for the ALB target group whenever the Amazon ECS tasks change their Host IP and Port. The VPC Peering between OnPremVPC and LambdaVPC is not strictly required, because the AWS Lambda function can register or deregister the IP and Port for the ALB target group through the VPC endpoint in the same VPC. The VPC Peering is provisioned in this demonstration to emphasize the typical dependency between LambdaVPC and the on-premises network: if the AWS Lambda function needs to update an on-premises Load Balancer (e.g., BIG-IP), then connectivity to the on-premises network (e.g., AWS Site-to-Site VPN or AWS Direct Connect) is required.

The following diagram describes the solution architecture of this post:


   Diagram 1 – Architecture Diagram for Custom Service Discovery for ECS Anywhere Tasks

  1. Squid HTTP proxy in public subnets – There are both private and public subnets in OnPremVPC. The Linux EC2 instances running the Amazon ECS Anywhere agent are placed in private subnets without direct internet access, which simulates a typical on-premises environment where internet access is locked down. Outbound HTTPS is proxied by an open-source proxy (Squid) running on separate Linux EC2 instances hosted in public subnets. An internal Network Load Balancer (NLB) load-balances the HTTP proxy requests across the Linux EC2 instances running the HTTP proxy (Squid).
  2. SSM parameters for Amazon ECS Anywhere Activation ID and Activation Code – Registration of Amazon ECS Anywhere agents requires an Activation ID and Activation Code. For details of the registration, see Registering an external instance to a cluster. In this solution, both the Activation ID and Activation Code are retrieved from the Amazon ECS Control Plane and persisted in Parameter Store, a capability of AWS Systems Manager (SSM). This facilitates the automation in AWS CloudFormation for registering the Amazon ECS Anywhere agents on the Linux EC2 instances.
  3. Amazon ECS Anywhere agent in private subnets managed by the Amazon ECS Control Plane – The Amazon ECS Anywhere agent is placed in private subnets, with its HTTP proxy configuration pointing to the Domain Name System (DNS) name of the internal NLB in front of the HTTP proxy. Sample Amazon ECS tasks and services are deployed, and their containers are managed and run on the Linux EC2 instances. The Amazon ECS Control Plane is responsible for distributing the containers of the Amazon ECS tasks evenly among the three Linux EC2 instances. NetworkMode is set to bridge with HostPort set to 0 in the Amazon ECS task definition, which means that containers of Amazon ECS tasks are assigned a host port from the range 32768 – 61000 on demand.
  4. Amazon ECS Task State Change events fired by the Amazon ECS Control Plane – Since the Amazon ECS Control Plane is responsible for task orchestration, whenever containers of Amazon ECS tasks are launched, destroyed, or relocated on the Linux EC2 instances, it fires Amazon ECS Task State Change events. Those events are delivered through the Amazon EventBridge event bus.
  5. AWS Lambda function processes the events in batch mode – An Amazon SQS queue is configured as the consumer of the Amazon ECS Task State Change events on the Amazon EventBridge event bus, and the AWS Lambda function consumes those events from the queue. BatchSize and MaximumBatchingWindowInSeconds are configured in the AWS Lambda Event Source Mapping against the Amazon SQS queue. This enables batch processing of the events, which avoids chatty invocations of the AWS Lambda function and thus frequent Host IP and Port updates against the Load Balancer.
  6. AWS Lambda function retrieves Amazon ECS task information through the VPC endpoints – Once a batch of events arrives, the AWS Lambda function retrieves the Host Instance ID and Port information of every relevant Amazon ECS task through the Amazon ECS VPC endpoint. To resolve the Host IP (which is a private IP), the AWS Lambda function queries the AWS SSM VPC endpoint with the Host Instance ID. Remember that the Amazon ECS Anywhere agent depends on the SSM agent; for details of this dependency, see External instances (Amazon ECS Anywhere).
  7. AWS Lambda function updates ALB target groups through the VPC endpoints – After the Host IP and Port information is gathered for the relevant Amazon ECS tasks, the AWS Lambda function registers or deregisters the IP targets against the ALB target group. Those IP targets represent the Host IP and Port of the corresponding containers of the Amazon ECS tasks running on the Linux EC2 instances. The AWS Lambda function uses the Elastic Load Balancing (ELB) VPC endpoint for those updates (see the sketch after this list).
  8. ALB dispatches HTTP requests based on the up-to-date target group information – Finally, with the up-to-date IP targets in the ALB target group, HTTP requests arriving at the internet-facing ALB Listener are dispatched to the corresponding containers of the Amazon ECS tasks.
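
To make steps 5 to 7 concrete, the following is a simplified sketch, in the same Node.js/AWS SDK for JavaScript v3 style as the repository's .mjs code, of the per-batch work: describe the tasks, resolve each host's private IP through SSM, and register the dynamic host ports as IP targets. Function and variable names are illustrative and not the actual repository code; deregistration when tasks stop works the same way with DeregisterTargetsCommand:

import { ECSClient, DescribeTasksCommand, DescribeContainerInstancesCommand } from "@aws-sdk/client-ecs";
import { SSMClient, DescribeInstanceInformationCommand } from "@aws-sdk/client-ssm";
import { ElasticLoadBalancingV2Client, RegisterTargetsCommand } from "@aws-sdk/client-elastic-load-balancing-v2";

const ecs = new ECSClient({});
const ssm = new SSMClient({});
const elbv2 = new ElasticLoadBalancingV2Client({});

// Register the dynamic host ports of the given tasks as IP targets on a target group.
export async function registerTaskTargets(cluster, taskArns, targetGroupArn) {
  // 1. Get each task's container instance and its host port bindings (bridge mode).
  const { tasks } = await ecs.send(new DescribeTasksCommand({ cluster, tasks: taskArns }));

  for (const task of tasks) {
    const { containerInstances } = await ecs.send(new DescribeContainerInstancesCommand({
      cluster,
      containerInstances: [task.containerInstanceArn],
    }));
    // For ECS Anywhere external instances, ec2InstanceId holds the SSM managed instance ID (mi-...).
    const managedInstanceId = containerInstances[0].ec2InstanceId;

    // 2. Resolve the host's private IP through SSM.
    const { InstanceInformationList } = await ssm.send(new DescribeInstanceInformationCommand({
      Filters: [{ Key: "InstanceIds", Values: [managedInstanceId] }],
    }));
    const hostIp = InstanceInformationList[0].IPAddress;

    // 3. Register the Host IP and dynamic host ports as IP targets on the ALB target group.
    const targets = task.containers
      .flatMap((c) => c.networkBindings ?? [])
      .map((b) => ({ Id: hostIp, Port: b.hostPort }));
    if (targets.length > 0) {
      await elbv2.send(new RegisterTargetsCommand({ TargetGroupArn: targetGroupArn, Targets: targets }));
    }
  }
}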

Walkthrough

There are three AWS CloudFormation templates in total, which you deploy in Steps 1 to 3 below. Those steps provision the required AWS components for the solution in this post. The last step, Step 4, includes commands to update the Amazon ECS service desiredCount manually, which helps us observe how the targets in the ALB target groups are registered automatically with the latest Host IP and Port information.

All the provisioning, verification, and post-configuration commands are also collected in a markdown file, all-commands.md, in the source code repository for easier reference. Some sample command outputs are trimmed with … (not the full version) to make this post easier to read. For the full version of the command outputs, refer to the markdown file all-outputs.md in the source code repository.

Prerequisites

To provision this solution, you need to have the following prerequisites:

With the prerequisites ready, clone the source code repository of this post to a local directory:

git clone https://github.com/aws-samples/containers-blog-maelstrom.git
cd containers-blog-maelstrom/ecsa-svc-disc

Step 1 – Provision the Amazon ECS cluster, VPCs/Subnets, Amazon EC2 Launch Template, and ALB

Execute the following AWS CLI command to deploy the first AWS CloudFormation template, ecsa-svc-disc-1-ecs-vpc-ec2-alb.yml:

aws cloudformation create-stack --stack-name ecsa-svc-disc-1-ecs-vpc-ec2-alb \
  --template-body file://./cf/ecsa-svc-disc-1-ecs-vpc-ec2-alb.yml \
  --capabilities CAPABILITY_NAMED_IAM --timeout-in-minutes 20 \
  --parameters ParameterKey=SecurityGroupIngressAllowedCidrParameter,ParameterValue=<Your Public IP Range>
  
aws cloudformation wait stack-create-complete --stack-name ecsa-svc-disc-1-ecs-vpc-ec2-alb

The AWS CloudFormation parameter, SecurityGroupIngressAllowedCidrParameter, controls the IP range that can access the SSH port (22) of the HTTP proxy, as well as the HTTP ports (8080-8082) of the ALB. Instead of specifying 0.0.0.0/0, it's recommended to use a more specific public IP range that covers your testing clients.

The AWS CloudFormation template will:

  • Provision the Amazon ECS cluster, ECSA-Demo-Cluster, for this demonstration. The Activation ID and Activation Code are retrieved and persisted in Parameter Store by using the AWS CloudFormation custom resource (LambdaSSMActivationInvoke); a sketch of this bootstrap logic follows this list.
  • Provision the VPC, Subnets, Security Groups and VPC Peering for OnPremVPC and LambdaVPC.
  • Provision the Auto Scaling groups (ASGs) and the Amazon EC2 launch templates for the Linux EC2 instances (Ubuntu) running the Amazon ECS Anywhere agent and the HTTP proxy.
    • For both ASGs, the LaunchTemplateData section contains the UserData property for the required initialization of the Linux EC2 instances. For the HTTP proxy, that's the installation and required setup of Squid. For the Amazon ECS Anywhere agent, that's the installation of the agent, as well as the registration using the generated Activation ID and Activation Code. The initialization of the Amazon ECS Anywhere agent also includes the required HTTP proxy configuration for outbound internet access.
    • The DesiredCapacity of both ASGs is set to 3. The Linux EC2 instances for the HTTP proxy complete their initialization before those for the Amazon ECS Anywhere agents, which is implemented by using WaitConditionHandle and WaitCondition in the AWS CloudFormation template.
  • Provision the Amazon EC2 Key Pair for the Linux EC2 instances of both the HTTP proxy and the Amazon ECS Anywhere agent. The private key of the Amazon EC2 Key Pair is saved in the SSM Parameter /ec2/keypair/<Key Pair ID>.
  • Provision the ALB Listener and ALB target groups. The target type of the ALB target groups is set to IP, and initially there are no registered targets. The AWS Lambda function (to be provisioned later) registers or deregisters targets against those ALB target groups in Step 4.
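
For illustration, the activation bootstrap performed by the custom resource could look roughly like the following AWS SDK for JavaScript v3 sketch. The IAM role name and parameter names are assumptions; the actual LambdaSSMActivationInvoke implementation in the repository may differ:

import { SSMClient, CreateActivationCommand, PutParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

export async function createAndStoreActivation() {
  // Create an SSM hybrid activation that the ECS Anywhere agents use to register.
  const { ActivationId, ActivationCode } = await ssm.send(new CreateActivationCommand({
    IamRole: "ECSAnywhereInstanceRole",   // assumed instance role name
    RegistrationLimit: 3,                 // three simulated on-premises instances
  }));

  // Persist both values so the EC2 UserData can read them during bootstrap.
  await ssm.send(new PutParameterCommand({
    Name: "/ecsa/activation-id", Value: ActivationId, Type: "String", Overwrite: true,
  }));
  await ssm.send(new PutParameterCommand({
    Name: "/ecsa/activation-code", Value: ActivationCode, Type: "SecureString", Overwrite: true,
  }));
}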

Step 2 – Provision the Amazon ECS task definitions and services

Execute the following AWS CLI command to deploy the second AWS CloudFormation template, ecsa-svc-disc-2-ecs-service-task.yml:

aws cloudformation create-stack --stack-name ecsa-svc-disc-2-ecs-service-task \
  --template-body file://./cf/ecsa-svc-disc-2-ecs-service-task.yml \
  --capabilities CAPABILITY_NAMED_IAM --timeout-in-minutes 10
  
aws cloudformation wait stack-create-complete --stack-name ecsa-svc-disc-2-ecs-service-task

The AWS CloudFormation template will:

  • Provision the following Amazon ECS task definitions and services, with different initial Desired Counts:

Amazon ECS service | Amazon ECS task definition | Initial Desired Count | Containers
Service-DemoApp1   | DemoApp1                   | 1                     | container0
Service-DemoApp2   | DemoApp2                   | 3                     | container1, container2
  • The Amazon ECS task definition DemoApp2 has two containers, container1 and container2, which are used later to demonstrate how the ALB load-balances to those containers by using two different frontend ports.
  • All three containers in the above two task definitions use the same container image, public.ecr.aws/aws-containers/ecsdemo-nodejs:latest. This is a sample Node.js application that shows a hello-world page printing the Host IP and Port information. A sketch of the relevant task definition settings follows this list.
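
For reference, a minimal sketch of the task definition settings that matter for this solution (bridge networking, dynamic host port, and the EXTERNAL launch type), expressed with the AWS SDK for JavaScript v3, is shown below. The container port and memory values are assumptions rather than values copied from the CloudFormation template:

import { ECSClient, RegisterTaskDefinitionCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({});

await ecs.send(new RegisterTaskDefinitionCommand({
  family: "DemoApp1",
  requiresCompatibilities: ["EXTERNAL"],        // run on ECS Anywhere (external) instances
  networkMode: "bridge",
  containerDefinitions: [{
    name: "container0",
    image: "public.ecr.aws/aws-containers/ecsdemo-nodejs:latest",
    memory: 256,                                           // assumed value
    portMappings: [{ containerPort: 3000, hostPort: 0 }],  // hostPort 0 = dynamic host port; container port assumed
  }],
}));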

Step 3 – Provision the Amazon EventBridge event bus, Amazon SQS queue, and AWS Lambda function

Execute the following AWS CLI command to deploy the third AWS CloudFormation template, ecsa-svc-disc-3-sqs-lambda.yml:

aws cloudformation create-stack --stack-name ecsa-svc-disc-3-sqs-lambda \
  --template-body file://./cf/ecsa-svc-disc-3-sqs-lambda.yml \
  --capabilities CAPABILITY_NAMED_IAM --timeout-in-minutes 10
  
aws cloudformation wait stack-create-complete --stack-name ecsa-svc-disc-3-sqs-lambda

The AWS CloudFormation template will:

  • Deploy the Amazon SQS queue and Amazon EventBridge event bus
  • Deploy the AWS Lambda function, ECSA-Demo-Cluster-Lambda-ProcessEvent, for processing the Amazon ECS Task State Change events
  • Deploy the required VPC endpoints for the AWS Lambda function

The AWS CloudFormation template only deploys the configuration of the AWS Lambda function, without its main code. To deploy the main code, execute the following commands after the AWS CloudFormation deployment completes.

cd lambda
zip lambda.zip *.mjs
aws lambda update-function-code --function-name ECSA-Demo-Cluster-Lambda-ProcessEvent --zip-file fileb://./lambda.zip | jq '{FunctionArn:.FunctionArn,CodeSize:.CodeSize}'
cd ..

A mapping is required to link the Amazon ECS service with the ALB target group, so that the AWS Lambda function knows which targets to update when the Host IP and Port of the Amazon ECS tasks change. We achieve this by using a tag, ecs-a.lbName, associated with the Amazon ECS service.

Execute the following command to set the tag, ecs-a.lbName, for the two Amazon ECS services:

chmod 755 script/ecsa-svc-disc-set-tg-tags.sh
./script/ecsa-svc-disc-set-tg-tags.sh

Sample output:

# AWS Account ID are masked as ************
Setting Target Group Tags
------------------------
DONE

Listing Current Target Group Tags
------------------------
arn:aws:ecs:ap-east-1:************:service/ECSA-Demo-Cluster/Service-DemoApp1
{
    "tags": [
        {
            "key": "ecs-a.lbName",
            "value": "arn:aws:elasticloadbalancing:ap-east-1:************:targetgroup/ECSA-Demo-Cluster-TargetGroup-0/fdfacc0652446c11"
        }
    ]
}

arn:aws:ecs:ap-east-1:************:service/ECSA-Demo-Cluster/Service-DemoApp2
{
    "tags": [
        {
            "key": "ecs-a.lbName",
            "value": "arn:aws:elasticloadbalancing:ap-east-1:************:targetgroup/ECSA-Demo-Cluster-TargetGroup-1/e6162b3123cbaa66 arn:aws:elasticloadbalancing:ap-east-1:************:targetgroup/ECSA-Demo-Cluster-TargetGroup-2/ae7a33533a90d745"
        }
    ]
}

The tag, ecs-a.lbName, of Service-DemoApp1 is set to the Amazon Resource Name (ARN) of a single ALB target group because there is only one container in its task definition. For Service-DemoApp2, it is set to the ARNs of two ALB target groups (space-separated) because there are two containers in its task definition.
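
Under the hood, the AWS Lambda function can look up this tag to find which target group(s) to update. A minimal sketch of such a lookup with the AWS SDK for JavaScript v3 is shown below; the function name is illustrative and not the actual code in the repository:

import { ECSClient, ListTagsForResourceCommand } from "@aws-sdk/client-ecs";

const ecs = new ECSClient({});

// Return the ALB target group ARN(s) mapped to an ECS service via the ecs-a.lbName tag.
export async function targetGroupArnsForService(serviceArn) {
  const { tags } = await ecs.send(new ListTagsForResourceCommand({ resourceArn: serviceArn }));
  const lbTag = (tags ?? []).find((t) => t.key === "ecs-a.lbName");
  // Multiple target group ARNs are stored space-separated in a single tag value.
  return lbTag ? lbTag.value.split(/\s+/) : [];
}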

The resulting mapping, including the Desired Count updates applied in Step 4, is summarized below:

Amazon ECS service | Amazon ECS task definition | Updated Desired Count | Containers             | ALB target group
Service-DemoApp1   | DemoApp1                   | 1 → 2                 | container0             | ECSA-Demo-Cluster-TargetGroup-0
Service-DemoApp2   | DemoApp2                   | 3 → 6                 | container1, container2 | ECSA-Demo-Cluster-TargetGroup-1, ECSA-Demo-Cluster-TargetGroup-2

Verification and post-configuration

Execute the following to see the registered targets (Host IP and Port) of the ALB target group:

chmod 755 script/ecsa-svc-disc-show-tg-health.sh
./script/ecsa-svc-disc-show-tg-health.sh

Sample output:

# AWS Account ID are masked as ************
Target Group Health
------------------------
arn:aws:elasticloadbalancing:ap-east-1:************:targetgroup/ECSA-Demo-Cluster-TargetGroup-0/fdfacc0652446c11

arn:aws:elasticloadbalancing:ap-east-1:************:targetgroup/ECSA-Demo-Cluster-TargetGroup-1/e6162b3123cbaa66

arn:aws:elasticloadbalancing:ap-east-1:************:targetgroup/ECSA-Demo-Cluster-TargetGroup-2/ae7a33533a90d745

URL
------------------------
http://ECSA-SvcDisc-ALB-OnPremLB-678673162.ap-east-1.elb.amazonaws.com:8080
http://ECSA-SvcDisc-ALB-OnPremLB-678673162.ap-east-1.elb.amazonaws.com:8081
http://ECSA-SvcDisc-ALB-OnPremLB-678673162.ap-east-1.elb.amazonaws.com:8082

In the previous sample output, note that there are no targets shown under the Target Group Health section. This is expected, because no Amazon ECS Task State Change event has been fired since the AWS Lambda function was provisioned. Thus, the ALB target groups have not been updated and remain in their initial state with no targets.

Step 4 – Update Amazon ECS service desiredCount and observe the registered targets in ALB target groups

Execute the following AWS CLI command to:

  • Update the Desired Count of Service-DemoApp1 from 1 to 2
  • Update the Desired Count of Service-DemoApp2 from 3 to 6

For the last command, aws ecs describe-services, below, execute it a few times with a few seconds of delay between runs, until the value of runningCount reaches the value of desiredCount.

aws ecs update-service --cluster ECSA-Demo-Cluster --service Service-DemoApp1 --desired-count 2 | jq '.service | {serviceArn:.serviceArn, status:.status, desiredCount:.desiredCount, runningCount:.runningCount}'
aws ecs update-service --cluster ECSA-Demo-Cluster --service Service-DemoApp2 --desired-count 6 | jq '.service | {serviceArn:.serviceArn, status:.status, desiredCount:.desiredCount, runningCount:.runningCount}'

aws ecs describe-services --cluster ECSA-Demo-Cluster --service Service-DemoApp1 Service-DemoApp2 | jq '.services[] | {serviceArn:.serviceArn, deployments:.deployments[]}'

Sample output (trimmed):

# AWS Account ID are masked as ************
{
  "serviceArn": "arn:aws:ecs:ap-east-1:************:service/ECSA-Demo-Cluster/Service-DemoApp1",
  "deployments": {
    "id": "ecs-svc/2474979950726421586",
    "status": "PRIMARY",
    "taskDefinition": "arn:aws:ecs:ap-east-1:************:task-definition/DemoApp1:21",
    "desiredCount": 2,
    "pendingCount": 0,
    "runningCount": 2,
    "failedTasks": 0,
    "createdAt": "2023-05-23T00:52:52.802000+08:00",
    "updatedAt": "2023-05-23T01:44:01.662000+08:00",
    "launchType": "EXTERNAL",
    "rolloutState": "COMPLETED",
    "rolloutStateReason": "ECS deployment ecs-svc/2474979950726421586 completed."
  }
}
...

Verification and post-configuration

Wait a minute for the Batch Window to expire, so that the AWS Lambda function starts processing the events. Execute ecsa-svc-disc-show-tg-health.sh again, and verify that the targets (Host IP and Port) are registered successfully on the three ALB target groups:

./script/ecsa-svc-disc-show-tg-health.sh

Sample outputs (trimmed):

# AWS Account ID are masked as ************
Target Group Health
------------------------
arn:aws:elasticloadbalancing:ap-east-1:************:targetgroup/ECSA-Demo-Cluster-TargetGroup-0/fdfacc0652446c11
{
  "target": "10.0.3.224:32768",
  "targetHealth": {
    "state": "healthy",
    "reason": null
  }
}
{
  "target": "10.0.2.73:32770",
  "targetHealth": {
    "state": "healthy",
    "reason": null
  }
}
...

URL
------------------------
http://ECSA-SvcDisc-ALB-OnPremLB-678673162.ap-east-1.elb.amazonaws.com:8080
http://ECSA-SvcDisc-ALB-OnPremLB-678673162.ap-east-1.elb.amazonaws.com:8081
http://ECSA-SvcDisc-ALB-OnPremLB-678673162.ap-east-1.elb.amazonaws.com:8082

The URL section of the previous command's output shows the DNS name of the ALB for the three containers running in these two Amazon ECS services. Make sure the SecurityGroupIngressAllowedCidrParameter, which you provided as the parameter for the AWS CloudFormation template in Step 1, covers the public IP range of your testing clients before you execute the curl commands below.

Execute the curl commands to see whether the ALB Listeners can dispatch the requests to the underlying Amazon ECS tasks. It may take a minute for the new targets to become effective in the ALB, so run the curl commands multiple times (with a few seconds of delay between runs) if you receive errors initially.

curl http://ECSA-SvcDisc-ALB-OnPremLB-<suffix>.<aws region>.elb.amazonaws.com:8080
curl http://ECSA-SvcDisc-ALB-OnPremLB-<suffix>.<aws region>.elb.amazonaws.com:8081
curl http://ECSA-SvcDisc-ALB-OnPremLB-<suffix>.<aws region>.elb.amazonaws.com:8082

Sample output:

Node.js backend: Hello! from 
Service-DemoApp1|container0|10.0.2.73:32770
arn:aws:ecs:ap-east-1:************:container/ECSA-Demo-Cluster/a3578bfe19c34de5aae679caab599fdc/8cd31569-491c-4d70-b13c-1d1842766c31
 commit c3e96da
Node.js backend: Hello! from 
Service-DemoApp1|container0|10.0.3.224:32768
arn:aws:ecs:ap-east-1:************:container/ECSA-Demo-Cluster/f049c03808114a3f8fddd35d67873cb0/36013772-7b44-48f3-a2b4-81b6557a26ef
 commit c3e96da

The second line of the HTTP content:

Service-DemoApp1|container0|10.0.2.73:32770

shows the ECS Service Name | Container Name | Host IP and Port.

Execute the curl commands multiple times again. The Host IP and Port in the second line of the HTTP content may change between requests. When it changes, this shows the ALB load balancing at work, because the HTTP requests have been dispatched to different containers (and thus show different Host IP and Port values).

Highlight of required modification for on-premises Load Balancer

For demonstration purposes, this post uses the ALB, which simulates an on-premises Load Balancer, for the custom service discovery solution for Amazon ECS Anywhere. The solution is flexible, so you can adapt the sample code slightly for your own on-premises load balancer (as long as it provides an API to change its member IPs and ports on demand).

The following provides some high-level directions for the required modifications.

  1. For each Amazon ECS service running Amazon ECS Anywhere tasks, add a tag, ecs-a.lbName, whose value is the identifier of the on-premises Load Balancer.
  2. For the AWS Lambda function, ECSA-Demo-Cluster-Lambda-ProcessEvent, update index.mjs to align with the following code:
//import * as lb from './lb-alb.mjs';
import * as lb from './lb-your-onprem-lb.mjs';
  3. Create a new file, lb-*.mjs (e.g., lb-your-onprem-lb.mjs), for the AWS Lambda function and provide your implementation. You can refer to lb-alb.mjs for reference. The functions to override are summarized below, followed by a skeleton of such a file.

  • getCurrentLoadBalancingInfo
    • Get the identifier of the on-premises Load Balancer from the input parameter lbInfo.lbName.
    • Call the on-premises Load Balancer API to get the current member IPs and ports, and return that information.
  • compareLoadBalancingInfo
    • From the input parameter, targetLbInfo, get the target Host IP and Port, which is the up-to-date information retrieved from the Amazon ECS Control Plane.
    • Call getCurrentLoadBalancingInfo to get the current member IPs and ports from the on-premises Load Balancer API.
    • Compare the two and return changeLbInfo – the list of member IPs and ports to add or remove.
  • applyLoadBalancingInfo
    • From the input parameter, changeLbInfo, get the list of member IPs and ports to add or remove.
    • Add or remove the member IPs and ports by calling the on-premises Load Balancer API.
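
To make the summary above concrete, here is a skeleton of what lb-your-onprem-lb.mjs could look like. The exact signatures and the shapes of lbInfo, targetLbInfo, and changeLbInfo are assumptions based on the descriptions above; use lb-alb.mjs in the repository as the authoritative reference:

// Return the members currently configured on the on-premises Load Balancer.
export async function getCurrentLoadBalancingInfo(lbInfo) {
  const lbIdentifier = lbInfo.lbName;
  // TODO: call your load balancer's API for lbIdentifier and return its current members,
  // e.g. [{ ip: "10.0.2.73", port: 32770 }, ...]
  return [];
}

// Compare the desired state (from the Amazon ECS Control Plane) with the current state.
export async function compareLoadBalancingInfo(targetLbInfo) {
  const current = await getCurrentLoadBalancingInfo(targetLbInfo);
  const desired = targetLbInfo.targets ?? [];   // assumed field holding Host IP and Port pairs

  const key = (t) => `${t.ip}:${t.port}`;
  const currentKeys = new Set(current.map(key));
  const desiredKeys = new Set(desired.map(key));

  return {
    lbName: targetLbInfo.lbName,
    toAdd: desired.filter((t) => !currentKeys.has(key(t))),
    toRemove: current.filter((t) => !desiredKeys.has(key(t))),
  };
}

// Apply the computed changes against the on-premises Load Balancer API.
export async function applyLoadBalancingInfo(changeLbInfo) {
  for (const member of changeLbInfo.toAdd) {
    // TODO: add member (member.ip, member.port) via your load balancer's API
  }
  for (const member of changeLbInfo.toRemove) {
    // TODO: remove member (member.ip, member.port) via your load balancer's API
  }
}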

Cleaning up

Before you delete the Amazon ECS cluster, you need to deregister the container instances from Amazon ECS Anywhere. The first AWS CloudFormation template, ecsa-svc-disc-1-ecs-vpc-ec2-alb.yml, contains an AWS CloudFormation custom resource (LambdaECSACleanupInvoke), which uses an AWS Lambda function to perform the deregistration for you automatically when the AWS CloudFormation stack is deleted.

Thus, to avoid incurring future charges, delete the AWS CloudFormation stacks by executing the following commands:

aws cloudformation delete-stack --stack-name ecsa-svc-disc-3-sqs-lambda
aws cloudformation wait stack-delete-complete --stack-name ecsa-svc-disc-3-sqs-lambda

aws cloudformation delete-stack --stack-name ecsa-svc-disc-2-ecs-service-task
aws cloudformation wait stack-delete-complete --stack-name ecsa-svc-disc-2-ecs-service-task

aws cloudformation delete-stack --stack-name ecsa-svc-disc-1-ecs-vpc-ec2-alb
aws cloudformation wait stack-delete-complete --stack-name ecsa-svc-disc-1-ecs-vpc-ec2-alb

Conclusion

In this post, we showed you how to use the ALB for Amazon ECS Anywhere service discovery, which is not natively supported by Amazon ECS Anywhere at the time of publishing this post. The solution captures Amazon ECS Task State Change events from the Amazon EventBridge event bus and stores them in an Amazon SQS queue. An AWS Lambda function then asynchronously updates the ALB targets by comparing the current state of the ALB with the Amazon ECS service targets.

We have also addressed the possibility of implementing a similar solution with on-premises load balancing solutions. In such cases, you can make minimal changes to the AWS Lambda function code using your on-premises load balancer's API/SDK. The overall architecture remains almost the same.

We hope that this post provides our customers with a reference architecture pattern for implementing service discovery for workloads on Amazon ECS Anywhere. To quickly refer to all the commands shown in this post, see the markdown files in this GitHub repository. They also contain additional verification commands that help you understand more about the setup of Amazon ECS Anywhere. To learn more about Amazon ECS Anywhere, you can also try this workshop.

George Liu

George Liu is a Senior Solutions Architect at AWS, serving enterprise customers in Hong Kong. He has hands-on experience in enterprise application development and modernization across different technology stacks, including Java, .NET, Oracle, and SQL Server. He has also played an architect role in several large-scale, mission-critical enterprise system modernization projects in Hong Kong over his 20+ years of experience in the IT industry.

Shawn Zhang

Shawn is a Specialist Solutions Architect at AWS Hong Kong, focusing on cloud native technologies including containers and serverless. He is passionate about Kubernetes and helping customers build container solutions on AWS. Before joining AWS, he worked on DevOps and cloud platform engineering for unicorn companies in Hong Kong.