Containers
Amazon ECS on AWS Outposts
AWS Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any data center, co-location space, or on-premises facility, in the form of a physical rack connected to the AWS global network. AWS compute, storage, database, and other services run locally on Outposts, and you can access the full range of AWS services available in the Region to build, manage, and scale your on-premises applications using familiar AWS services and tools.
AWS Outposts is designed to meet the needs of customers who have workloads with low latency, data residency, or local data processing requirements. Data-intensive workloads can collect and process hundreds of TBs of data a day, and real-time or near real-time delivery of that volume to the cloud can be cost-prohibitive. With AWS Outposts, you can run data transformation processes such as transcoding, filtering, caching, normalization, and data reduction at the edge before that data is moved to an AWS Region. You can also use services such as Amazon Relational Database Service (Amazon RDS) to keep data within your data center, Amazon EMR to run big data frameworks like Apache Hadoop and Apache Spark, and Amazon ElastiCache for real-time, ultra-low latency applications that require sub-millisecond responses.
Since 2014, Amazon Elastic Container Service (Amazon ECS) has eliminated the need for you to install, operate, and scale your own cluster management infrastructure, helping you orchestrate containerized application deployments. Large applications are still difficult to move from on premises to the cloud because of latency-sensitive interdependencies between their components, and segmenting these migrations into smaller pieces requires low-latency connectivity between the various parts of the application. With the release of Amazon ECS on AWS Outposts, customers can deploy containerized applications that need to remain on premises, and they can use this architecture as an intermediate step before a full migration to an AWS Region. Amazon ECS also offers Amazon ECS Anywhere to run container workloads on customer-managed infrastructure.
This post provides an overview of how to set up an application deployed into Amazon ECS containers that are running on AWS Outposts and the main differences with Amazon ECS in an AWS Region. The sample code for this post is available in a GitHub repository, which also includes a Terraform script to get you started.
This blog assumes you are familiar with Outposts, including local gateway (LGW) functionality and the customer-owned IP address pool (CoIP pool). For more information about Outposts, see What is AWS Outposts in the user guide.
General considerations for running microservices on AWS Outposts
AWS Outposts
The AWS Outposts service extends AWS infrastructure and services from the AWS Global Cloud Infrastructure to on-premises locations. An Outpost is anchored to an Availability Zone in the Region and is an extension of that Availability Zone. AWS operates, monitors, and manages AWS Outposts infrastructure as part of its parent Region. You can extend any VPC from an AWS Region to an Outpost in your data center by creating an Outpost subnet. The VPC components that are accessible in the Region are also available in your Outpost. The route tables for Outpost subnets work as they do for Availability Zone subnets: you specify IP address ranges as destinations and can use internet gateways, local gateways, virtual private gateways, and peering connections as targets.
Each Outpost supports a single local gateway (LGW) that enables connectivity from your Outpost subnets to all AWS services available in the parent Region and to your local network. The local gateway provides a target in your VPC route tables for on-premises-destined traffic and internet-bound traffic. During the Outpost installation process, the customer provides an address pool range, known as the CoIP pool, that is assigned to the LGW and advertised back to the customer network via Border Gateway Protocol (BGP). The LGW performs network address translation (NAT) for instances that have been assigned addresses from the CoIP pool.
During AWS Outposts provisioning, a service link connection is created that connects your Outpost back to your chosen AWS Region. The service link is an encrypted set of VPN connections used whenever the Outpost communicates with your chosen home Region. It carries management traffic and traffic between the Outpost and any associated VPCs. You can check the networking requirements in the AWS Outposts documentation.
What’s different from running Amazon ECS on an AWS Region?
One of the main differences between running Amazon ECS on Outposts and running it in an AWS Region is that AWS Fargate is not available. In an AWS Region, Amazon ECS tasks using the Fargate launch type are deployed onto infrastructure managed by AWS, and the customer does not have to manage servers or clusters of Amazon EC2 instances. Because this option is not available on AWS Outposts, customers have to deploy their containers using the EC2 launch type, so an Amazon ECS container instance is required to deploy an Amazon ECS cluster on AWS Outposts. For AWS Outposts, the EC2 instance type must be one of m5/m5d, c5/c5d, r5/r5d, g4dn, or i3en. An Amazon ECS container instance is an Amazon EC2 instance that is running the Amazon ECS container agent and that has been registered into an Amazon ECS cluster. For more information, see Amazon ECS container instances and Amazon ECS container agent.
Customers can use Auto Scaling group capacity providers to manage the Amazon EC2 instances registered to their Amazon ECS clusters. When you create an Auto Scaling group capacity provider with managed scaling enabled, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group used to create the capacity provider. For more information, see the documentation Auto Scaling group capacity providers and the AWS posts Managing compute for Amazon ECS cluster with capacity providers and Deep Dive on Amazon ECS Cluster Auto Scaling.
Main limitations of Amazon ECS on AWS Outposts
Customers need to take into account the latency between AWS services running in an AWS Region and AWS Outposts in their data center. Specifically, Amazon Elastic Container Registry (Amazon ECR), AWS Identity and Access Management (IAM), Network Load Balancer, Classic Load Balancer, and Amazon Route 53 run in an AWS Region.
Application Load Balancer on AWS Outposts
The ALB service consumes EC2 instance resources on the AWS Outpost. It can be deployed on m5/m5d, c5/c5d, or r5/r5d instances, and it chooses the instance type in that order (transparently to the customer). Initially, the EC2 instance size starts at large. The load balancer scales as needed, from large to xlarge, xlarge to 2xlarge, and 2xlarge to 4xlarge. At first it consumes two IP addresses from the Outpost subnet within the VPC and requires two Elastic IP addresses from the CoIP pool. To scale, the ALB requires two additional instances of the next size up and two additional Elastic IP addresses from the CoIP pool.
ALB can forward traffic to targets on the AWS Outpost (where it is deployed) and to IP addresses of on-premises targets. However, it does not forward traffic to targets in the AWS Region, even when using IP addresses. For more information, read the AWS blog Configuring an Application Load Balancer on AWS Outposts.
Solution overview
The solution consists of a producer application that writes data to an Amazon Kinesis data stream deployed in an AWS Region. Amazon Kinesis Producer Library (KPL) is the library used to build the producer. KPL simplifies producer application development, allowing developers to achieve high write throughput to a Kinesis data stream. More details about how to create a producer application can be found in the AWS post Building a scalable streaming data processor with Amazon Kinesis Data Streams.
This producer application runs in Amazon ECS containers on AWS Outposts. Amazon ECS pulls the container image from an Amazon ECR repository located in an AWS Region. Amazon ECS cluster auto scaling is used to manage the scale-in and scale-out actions for Amazon EC2 instances within the Amazon ECS cluster. The application receives data coming from an on-premises data center via the LGW, and an ALB running on AWS Outposts distributes this incoming traffic between the Amazon ECS tasks. The following diagram shows the architecture:
Walkthrough
In this post, we will walk you through the following steps:
- Creation of Amazon ECR repository, AWS CodeBuild project, and container image
- Creation of VPC and subnet in AWS Outposts
- Deployment of ALB and Amazon ECS resources
- End-to-end test and collected metrics
Prerequisites
For this walkthrough, you should have the following prerequisites:
- An AWS account
- An AWS Outpost installed and configured in your on-premises data center
- A reliable network connection between your AWS Outposts and its AWS Region
- Sufficient capacity of supported instance types available on your AWS Outposts
- Amazon ECS container agent version 1.33.0 or later on all Amazon ECS container instances
- Terraform version 0.13 or later installed
Sample code
This blog provides you with a script to deploy an Amazon ECS cluster on AWS Outposts. You can find the whole solution in GitHub. All the cloud resources are managed and implemented using Infrastructure as Code (IaC), with Terraform as the IaC provider. Terraform modules are containers for multiple resources that are used together and can create lightweight abstractions. For this solution, three modules have been created: a VPC module called vpc, an Amazon ECR plus AWS CodeBuild module called docker-ecr-codebuild, and an ALB plus Amazon ECS module called alb-ecs. The proposed architecture is not meant to be deployed in a production environment but to serve as a template: you need to customize the input variable values specified in the tfvars file as well as other configuration options, such as the ALB listener port and protocol, the ALB target group port and protocol, and the use of custom domains.
Before deploying this solution, you will need to:
- Collect your AWS Outposts Amazon Resource Name (ARN) and AWS Outposts Local Gateway ID
- Create or modify the tfvars file with the configuration of your environment, as in the example below
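The following is a minimal sketch of what such a tfvars file could look like. The variable names match the module inputs used later in this post, and every value is a placeholder that you must replace with the details of your own environment:
# Example vars/test.tfvars - all values are placeholders
vpc_name                          = "outposts-ecs"
vpc_suffix                        = "-test"
vpc_main_cidr                     = "10.0.0.0/16"
number_AZ                         = 2
private_subnets_cidr_list         = ["10.0.1.0/24", "10.0.2.0/24"]
public_subnets_cidr_list          = ["10.0.3.0/24", "10.0.4.0/24"]
outposts_subnets_cidr_list        = ["10.0.10.0/24"]
outposts_arn                      = "arn:aws:outposts:eu-west-1:111122223333:outpost/op-0123456789abcdef0"
outposts_local_gateway_id         = "lgw-0123456789abcdef0"
outposts_route_to_LGW_destination = "192.168.0.0/16" # CIDR of your on-premises network
alb_access_logs_prefix            = "alb-logs"
kinesis_stream_name               = "producer-stream"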
Follow these commands for the deployment:
- terraform init
- terraform plan -var-file=vars/test.tfvars -target=module.ecs-docker-codebuild
- terraform apply -var-file=vars/test.tfvars -target=module.ecs-docker-codebuild
- terraform plan -var-file=vars/test.tfvars
- terraform apply -var-file=vars/test.tfvars
You can also find a table with the available input variables for the modules in the README file of the GitHub repository.
Step 1 – Deployment of Amazon ECR, AWS CodeBuild project, and Docker image
Before creating the ALB and the containers on AWS Outposts, you need to prepare and upload the container image into the Amazon ECR repository. For that reason, I have created a Terraform module that will deploy an Amazon ECR repository and an AWS CodeBuild project that will build and push the container image to the repository. The container image used is stored in this GitHub repository.
The docker-ecr-codebuild module takes the VPC name and the ECR repository name as inputs. The following example shows how to use this module:
module "ecs-docker-codebuild" {
source = "./modules/docker-ecr-codebuild"
tags = local.common_tags
prefix_name = "${var.vpc_name}${var.vpc_suffix}"
}
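Inside the module, the Amazon ECR repository itself can be created with a resource like the one below. This is only a minimal sketch to illustrate the idea; the resource name and argument values in the actual module code may differ:
resource "aws_ecr_repository" "this" {
  name = "kinesis/producer" # matches the ecr_name input used later by the alb-ecs module

  image_scanning_configuration {
    scan_on_push = true
  }
}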
Step 2 – Deployment of VPC and subnet in AWS Outposts
After your Outpost is installed and the compute and storage capacity is available for use, you can deploy your containerized applications on premises by extending a VPC in the AWS Region to your AWS Outpost and using an Outpost subnet. To add an Outpost subnet to an existing VPC, specify the ARN of your AWS Outpost during subnet creation. Once your VPC is extended to your AWS Outpost, you must explicitly associate the VPC with the local gateway route table to provide connectivity between the VPC and your local network. When you create a route, you specify an IP address range as the destination and can use internet gateways, local gateways, virtual private gateways, or peering connections as targets. For the instances in your Outpost subnets to communicate with the local network, you must add a route with the local gateway as the next-hop target to your Outpost subnet's route table.
The following is an example of the Terraform code that I used inside the VPC module to create the Outpost subnet and to associate the VPC with the LGW:
data "aws_outposts_outpost" "target_outpost" {
count = var.outposts_arn != "" ? 1 : 0
arn = var.outposts_arn
}
data "aws_ec2_local_gateway_route_table" "outposts_lgw_route_table" {
count = var.outposts_arn != "" ? 1 : 0
outpost_arn = var.outposts_arn
}
data "aws_ec2_local_gateway" "outposts_lgw" {
count = var.outposts_arn != "" ? 1 : 0
filter {
name = "outpost-arn"
values = [var.outposts_arn]
}
resource "aws_subnet" "subnet_outposts" {
count = var.outposts_arn != "" ? length(var.outposts_subnets_cidr_list) : 0
vpc_id = aws_vpc.vpc.id
cidr_block = var.outposts_subnets_cidr_list[count.index]
availability_zone_id = data.aws_outposts_outpost.target_outpost[0].availability_zone_id
map_customer_owned_ip_on_launch = var.coip_auto_assign ? true : null
customer_owned_ipv4_pool = var.coip_auto_assign ? data.aws_ec2_coip_pool.coip[0].id : null
outpost_arn = var.outposts_arn
}
resource "aws_ec2_local_gateway_route_table_vpc_association" "outposts_lgw_route_table_assoc" {
count = var.outposts_arn != "" ? 1 : 0
local_gateway_route_table_id = data.aws_ec2_local_gateway_route_table.outposts_lgw_route_table[0].id
vpc_id = aws_vpc.vpc.id
}
#route to local gw
resource "aws_route" "outposts_route_to_LGW" {
depends_on = [
aws_ec2_local_gateway_route_table_vpc_association.outposts_lgw_route_table_assoc
]
count = var.outposts_arn != "" && var.outposts_route_to_LGW_destination != "" ? length(var.outposts_subnets_cidr_list) : 0
route_table_id = aws_route_table.subnet_outposts_route_table[count.index].id
destination_cidr_block = var.outposts_route_to_LGW_destination
local_gateway_id = data.aws_ec2_local_gateway.outposts_lgw[0].id
}
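The aws_route resource above references a route table for the Outpost subnets that is not shown in this excerpt. A minimal sketch of how it might be created and associated with the subnets follows; the resource names are illustrative and may differ from the module code in the repository:
resource "aws_route_table" "subnet_outposts_route_table" {
  count  = var.outposts_arn != "" ? length(var.outposts_subnets_cidr_list) : 0
  vpc_id = aws_vpc.vpc.id
}

resource "aws_route_table_association" "subnet_outposts_route_table_assoc" {
  count          = var.outposts_arn != "" ? length(var.outposts_subnets_cidr_list) : 0
  subnet_id      = aws_subnet.subnet_outposts[count.index].id
  route_table_id = aws_route_table.subnet_outposts_route_table[count.index].id
}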
The vpc module creates a VPC with the main CIDR block and name that you specify. You can define whether the VPC has an internet gateway, a NAT gateway, public subnets, private subnets, Outposts subnets, and so on. You can also create SSM endpoints for the private subnets and enable VPC Flow Logs. For AWS Outposts, you need to provide your AWS Outposts ARN and LGW ID. The following example shows how to use the vpc module:
module "test-vpc" {
source = "./modules/vpc"
tags = local.common_tags
vpc_name = var.vpc_name
vpc_suffix = var.vpc_suffix
vpc_main_cidr = var.vpc_main_cidr
enable_dns_hostnames = true
enable_dns_support = true
enable_internet_gateway = true
enable_nat_gateway = true
enable_ssm = true
create_iam_role_ssm = true
number_AZ = var.number_AZ
endpoints_ha = true
private_subnets_cidr_list = var.private_subnets_cidr_list
outposts_subnets_cidr_list = var.outposts_subnets_cidr_list
public_subnets_cidr_list = var.public_subnets_cidr_list
outposts_subnets_internet_access_nat_gw = true
outposts_arn = var.outposts_arn
outposts_route_to_LGW_destination = var.outposts_route_to_LGW_destination
outposts_local_gateway_id = var.outposts_local_gateway_id
enable_flow_log = true
create_flow_log_cloudwatch_log_group = true
create_flow_log_cloudwatch_iam_role = true
}
Step 3 – Deployment of ALB and Amazon ECS resources
The alb-ecs module deploys the following resources:
- An external ALB with a CoIP association. The ALB is deployed in an Outpost subnet (a single Availability Zone, so it is not highly available) and has as many listeners, ports, and protocols as you specify when creating the module.
- A security group for the ALB that allows inbound traffic on the ports and protocols that the ALB listens on. It also allows inbound traffic from the CIDR block of your data center (which you specify in the input variables). This security group allows outbound traffic to the ECS security group on the ports and protocols specified in the input variables.
- One or more target groups with IP as the target type, where the port, protocol, and health check are given as inputs.
- An Auto Scaling group for the ECS nodes, associated with a launch template, with variables for the maximum size, minimum size, desired capacity, health check type, capacity rebalance, enabled metrics, override configuration, and so on. The launch template has associated user data with the container agent configuration (cluster name, image pull behavior, and so on), and its configuration is also customized with variables such as block device mapping, EC2 instance profile, AMI ID, security groups, and instance type.
- A capacity provider associated with the previous Auto Scaling group and configured with the managed scaling options (target capacity, maximum scaling step size, minimum scaling step size).
- An ECS cluster associated with the previous capacity provider.
- A task definition with variables such as CPU, memory, and ports, and with awsvpc as the network mode. It uses execution and task IAM roles created in the module.
- An ECS service that uses the above task definition, with the EC2 launch type, associated with the Application Load Balancer, and deployed on Outposts. It uses a security group that allows inbound traffic from the ALB and all egress traffic.
This is an example of how to use this module:
module "producer-outposts-ecs-alb" {
depends_on = [module.ecs-docker-codebuild, aws_s3_bucket.s3_bucket_outposts_logging]
source = "./modules/alb-ecs"
tags = local.common_tags
vpc_name = "${var.vpc_name}${var.vpc_suffix}"
alb_name = "ALB-ECS"
subnets = [module.test-vpc.subnet_outposts_ids[0]]
enable_alb_access_logging = true
logging_bucket_name = aws_s3_bucket.s3_bucket_outposts_logging.id
alb_access_logs_prefix = var.alb_access_logs_prefix
port_listeners = {
"port-1" = {
"port" = 80,
"protocol" = "HTTP",
"inbound_cidr_range" = [var.vpc_main_cidr]
"target_group" = {
"port" = 8080,
"protocol" = "HTTP",
"health_check_path" = "/healthcheck"
}
}
}
outposts_arn = var.outposts_arn
ecs_cluster_name = "producer-cluster"
log_group_name = "producer-log"
log_retention_days = 30
ecr_name = "kinesis/producer"
ec2_iam_instance_profile = module.test-vpc.iam_instance_profile_ec2_ssm_id
Application Load Balancer
The ALB type must be internet-facing so that it is reachable from on premises, even though it doesn't actually have any external public connection. You also have to provide a CoIP pool as part of the creation process. Each ALB elastic network interface (ENI) has an Elastic IP address from the CoIP pool associated with it.
This is an example of the Terraform code that I used inside the module to create the ALB:
resource "aws_lb" "alb" {
name = var.alb_name
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.alb_sg.id]
subnets = var.subnets
enable_deletion_protection = var.alb_enable_deletion_protection
customer_owned_ipv4_pool = var.outposts_arn != "" ? data.aws_ec2_coip_pool.coip[0].id : null
ip_address_type = var.alb_ip_address_type
access_logs {
bucket = var.logging_bucket_name
prefix = var.enable_alb_access_logging ? var.alb_access_logs_prefix : null
enabled = var.enable_alb_access_logging
}
}
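The customer_owned_ipv4_pool argument above references a CoIP pool data source that is not shown in this excerpt. A minimal sketch of how the pool could be looked up from the local gateway route table of the Outpost follows; the data source names are illustrative:
data "aws_ec2_local_gateway_route_table" "outposts_lgw_route_table" {
  count       = var.outposts_arn != "" ? 1 : 0
  outpost_arn = var.outposts_arn
}

data "aws_ec2_coip_pool" "coip" {
  count                        = var.outposts_arn != "" ? 1 : 0
  local_gateway_route_table_id = data.aws_ec2_local_gateway_route_table.outposts_lgw_route_table[0].id
}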
For ALB access logs, you need an S3 bucket with SSE-S3 encryption in the same AWS Region as your AWS Outposts. You can find more information about the S3 bucket policy for ALB access logs in the documentation.
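As an illustration, such a bucket and its policy could be defined as in the following sketch, which assumes AWS provider v3.x syntax and a hypothetical bucket name; adjust it to your own naming and provider version:
data "aws_elb_service_account" "elb" {}

resource "aws_s3_bucket" "alb_logs" {
  bucket = "my-outposts-alb-access-logs" # hypothetical name

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256" # SSE-S3, as required for ALB access logs
      }
    }
  }
}

resource "aws_s3_bucket_policy" "alb_logs" {
  bucket = aws_s3_bucket.alb_logs.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = data.aws_elb_service_account.elb.arn }
      Action    = "s3:PutObject"
      Resource  = "${aws_s3_bucket.alb_logs.arn}/*"
    }]
  })
}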
Amazon EC2 Auto Scaling group and Amazon ECS container agent
The launch template used in the Amazon EC2 Auto Scaling group associated with the Amazon ECS capacity provider defines the configuration of the Amazon ECS container instances. The Amazon ECS agent configuration is defined as part of the user data in the launch template. To deploy Amazon ECS containers on AWS Outposts, you need to provide the ECS cluster name, add the Outpost tag in the user data, and enable caching of the container image. This is the Amazon ECS agent configuration that I used inside the module:
#!/bin/bash
echo ECS_CLUSTER=${clustername} >> /etc/ecs/ecs.config
echo ECS_ENABLE_CONTAINER_METADATA=true >> /etc/ecs/ecs.config
echo ECS_IMAGE_PULL_BEHAVIOR=prefer-cached >> /etc/ecs/ecs.config
echo ECS_CONTAINER_INSTANCE_TAGS={\"environment\": \"Outpost\"} >> /etc/ecs/ecs.config
This is the Terraform code that I used inside the module to create the Auto Scaling group for Amazon ECS, where I specified the "AmazonECSManaged" tag and referenced the launch template that carries the ECS agent configuration in its user data:
resource "aws_autoscaling_group" "ecs_nodes" {
name_prefix = var.ecs_asg_name_prefix
max_size = var.ecs_asg_max_size
min_size = var.ecs_asg_min_size
desired_capacity = var.ecs_asg_desired_capacity
vpc_zone_identifier = var.subnets
health_check_type = var.ecs_asg_health_check_type
capacity_rebalance = var.ecs_asg_capacity_rebalance
default_cooldown = var.ecs_asg_default_cooldown
health_check_grace_period = var.ecs_asg_health_check_grace_period
termination_policies = var.ecs_asg_termination_policies
suspended_processes = var.ecs_asg_suspended_processes
enabled_metrics = var.ecs_asg_enabled_metrics
protect_from_scale_in = var.ecs_asg_protect_from_scale_in
mixed_instances_policy {
launch_template {
launch_template_specification {
launch_template_id = var.import_aws_launch_template ? var.launch_template_id : aws_launch_template.node[0].id
version = "$Latest"
}
dynamic "override" {
for_each = var.ecs_asg_override_config
content {
instance_type = lookup(override.value, "instance_type", null)
weighted_capacity = lookup(override.value, "weighted_capacity", null)
}
}
}
}
tag {
key = "AmazonECSManaged"
value = ""
propagate_at_launch = true
}
}
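The launch template itself is only referenced in the Auto Scaling group above. As an illustration, the agent configuration from the previous section could be injected as user data along the lines of the following sketch; the AMI and instance type variable names and the template path are assumptions, not the exact code of the module:
resource "aws_launch_template" "node" {
  count         = var.import_aws_launch_template ? 0 : 1
  name_prefix   = var.ecs_asg_name_prefix
  image_id      = var.ecs_ami_id        # ECS-optimized AMI (assumed variable name)
  instance_type = var.ecs_instance_type # must be available on your Outpost (m5/c5/r5 families, and so on)

  iam_instance_profile {
    name = var.ec2_iam_instance_profile
  }

  vpc_security_group_ids = [aws_security_group.ecs_sg.id] # assumed security group name

  # Renders the ECS agent configuration shown earlier, passing in the cluster name
  user_data = base64encode(templatefile("${path.module}/templates/user_data.sh.tpl", {
    clustername = var.ecs_cluster_name
  }))
}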
Amazon ECS capacity provider
In the input variables of the module, you need to provide the details for the managed scaling options, such as the target capacity, maximum scaling step size, and minimum scaling step size. This is an example of the Terraform code that I used to deploy the Amazon ECS capacity provider inside the module:
resource "aws_ecs_capacity_provider" "asg" {
name = aws_autoscaling_group.ecs_nodes.name
auto_scaling_group_provider {
auto_scaling_group_arn = aws_autoscaling_group.ecs_nodes.arn
managed_termination_protection = var.capacity_provider_managed_termination_protection
managed_scaling {
maximum_scaling_step_size = var.maximum_scaling_step_size
minimum_scaling_step_size = var.minimum_scaling_step_size
status = "ENABLED"
target_capacity = var.capacity_provider_target_capacity
}
}
}
An ECS capacity provider has an immutable dependency on a specific Auto Scaling group. This relationship is enforced as one to one, so the name of the capacity provider reflects the name of the Auto Scaling group.
Amazon ECS cluster, Amazon ECS service, and Amazon ECS task definition
In the input variables of the module, you need to provide the details for the Amazon ECS task definition, such as CPU, memory, image, and ports. The ECS task definition uses execution and task IAM roles with Amazon Kinesis permissions to put records and list streams, and Amazon CloudWatch Logs permissions to create log groups and put log events. One of the Amazon ECS task environment variables, called "STREAM_NAME", is the Amazon Kinesis data stream name that you specified in the input variables.
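As an illustration, the task role permissions described above could be expressed with an inline policy like the following sketch. The policy name is illustrative, logs:CreateLogStream is typically also required in practice, and you should scope the resources down to your own stream and log group:
resource "aws_iam_role_policy" "ecs_task_kinesis" {
  name = "producer-task-policy" # illustrative name
  role = aws_iam_role.ecs_task_role[0].id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["kinesis:PutRecord", "kinesis:PutRecords", "kinesis:ListStreams"]
        Resource = "*" # scope down to your stream ARN in practice
      },
      {
        Effect   = "Allow"
        Action   = ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"]
        Resource = "*" # scope down to your log group ARN in practice
      }
    ]
  })
}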
This is the Terraform code that I used in the module to create the Amazon ECS cluster and the Amazon ECS task definition:
resource "aws_ecs_cluster" "ecs-cluster" {
name = var.ecs_cluster_name
capacity_providers = [aws_ecs_capacity_provider.asg.name]
}
resource "aws_ecs_task_definition" "task_definition" {
count = var.import_aws_ecs_task_definition ? 0 : 1
depends_on = [aws_cloudwatch_log_group.ecs_log_group]
family = var.ecs_service["task_definition"]["family"]
container_definitions = jsonencode([
{
name = var.ecs_service["container_name"]
image = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${data.aws_region.region.name}.amazonaws.com/${var.ecr_name}"
essential = true,
cpu = var.ecs_service["task_definition"]["cpu"]
memory = var.ecs_service["task_definition"]["memory"]
portMappings = [
{
containerPort = var.ecs_service["task_definition"]["port"]
}
]
logConfiguration = {
logDriver = "awslogs",
secretOptions = [],
options = {
awslogs-group = aws_cloudwatch_log_group.ecs_log_group[0].name,
awslogs-region = data.aws_region.region.name,
awslogs-stream-prefix = var.ecs_service["container_name"]
}
}
Environment = [
{ Name = "REGION"
Value = data.aws_region.region.name
},
{
Name = "STREAM_NAME"
Value = var.kinesis_stream_name
}
]
}
])
task_role_arn = aws_iam_role.ecs_task_role[0].arn
execution_role_arn = aws_iam_role.ecs_execution_role[0].arn
cpu = var.ecs_service["task_definition"]["cpu"]
memory = var.ecs_service["task_definition"]["memory"]
requires_compatibilities = [var.ecs_service["launch_type"]]
network_mode = "awsvpc"
tags = var.tags
}
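The Amazon ECS service that ties the task definition, the cluster, and the ALB target group together is not shown above. A minimal sketch of what it could look like follows; the service name, desired count, target group key, and security group reference are assumptions and may differ from the module code:
resource "aws_ecs_service" "producer" {
  name            = "producer-service" # illustrative name
  cluster         = aws_ecs_cluster.ecs-cluster.id
  task_definition = aws_ecs_task_definition.task_definition[0].arn
  desired_count   = 1 # or drive this from a module variable
  launch_type     = "EC2"

  network_configuration {
    subnets         = var.subnets                    # the Outpost subnet(s)
    security_groups = [aws_security_group.ecs_sg.id] # allows inbound traffic from the ALB only
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.target_group["port-1"].arn # key from the port_listeners example above (assumed)
    container_name   = var.ecs_service["container_name"]
    container_port   = var.ecs_service["task_definition"]["port"]
  }
}
If you want ECS cluster auto scaling to react to the tasks of this service, you can replace launch_type with a capacity_provider_strategy block that references the capacity provider created earlier.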
Step 4 – End-to-end test
Once you have deployed the code provided in the GitHub repository, you should have an Amazon ECS container instance up and running in your Outpost subnet, as shown in the following screenshot taken from the ECS console:
You should also have an Amazon ECS task up and running:
You can check the status of your ECS container instance by logging in to the underlying EC2 instance and running "sudo service docker status" and "sudo service ecs status". If you see any errors, check the /etc/ecs/ecs.config file.
For the ALB, you can check the CoIP association in the network interfaces console:
For the end-to-end testing, the following diagram shows two different testing scenarios:
- The first scenario, with the client in the customer's on-premises network
- The second scenario, with the client in the AWS Outpost subnet
To make a request via curl, run the following command and replace ALB_ENDPOINT with the DNS name of your Application Load Balancer, protocol with http or https, and port with 80 or 443.
curl --header "Content-Type: application/json" --data-raw '{"data":"This is a testing record"}' ${protocol}://${ALB_ENDPOINT}:${port}/
You can check the Amazon ECS task logs in Amazon CloudWatch. The following is an example after the testing:
CloudWatch Metrics
These are some metrics collected after the testing phase for scenario 2 that are available in Amazon CloudWatch:
- For the Application Load Balancer, the figure below shows the RequestCount metric under the AWS/ApplicationELB namespace. It represents the number of requests processed over IPv4 and IPv6 for which the load balancer node was able to choose a target.
- The AWS/ApplicationELB namespace also includes metrics for targets. The figure below shows the TargetResponseTime metric. It represents the time elapsed, in seconds, after the request leaves the load balancer until a response from the target is received.
- For Amazon Kinesis Data Streams, the picture below shows the IncomingBytes metric under the AWS/Kinesis namespace. It represents the number of bytes successfully put to the Kinesis stream over the specified time period.
- The picture below shows the metric PutRecord.Latency in AWS/Kinesis namespace. It represents the time taken for the PutRecord operation, measured over the specified time period, in milliseconds.
Cleaning up
To avoid additional costs, ensure that the provisioned resources are decommissioned by running the "terraform destroy" command.
Conclusion
AWS Outposts enables low latency, data residency, local data processing, and the migration of applications with local system interdependencies. Amazon ECS on AWS Outposts follows the same pattern and functions as Amazon ECS in an AWS Region. Amazon ECS makes it easy to use containers as a building block for your applications by eliminating the need for you to install, operate, and scale your own cluster management infrastructure. Running Amazon ECS on AWS Outposts allows you to schedule long-running applications using Docker containers and to scale your containers up or down to meet your application's capacity requirements.
This post explained the main differences between running Amazon ECS on AWS Outposts in comparison with running Amazon ECS in an AWS Region. It also showed a solution overview for an application with data processing requirements and the implementation steps to follow. You can build your own solution by deploying the sample code from the GitHub repository.