AWS Compute Blog
Amazon ECS launches new deployment capabilities; CloudWatch metrics; Singapore and Frankfurt regions
Today, we launched two improvements that make it easier to run Docker-enabled applications on Amazon EC2 Container Service (ECS). Amazon ECS is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances.
The first improvement allows more flexible deployments. The ECS service scheduler is used for long-running stateless services and applications. The service scheduler ensures that the specified number of tasks is constantly running and can optionally register tasks with an Elastic Load Balancing load balancer. Previously, during a deployment the service scheduler created a task with the new task definition; after the new task reached the RUNNING state, a task that was using the old task definition was drained and stopped. This process continued until all of the desired tasks in the service were using the new task definition. This approach maintains the service's capacity throughout the deployment, but it requires enough spare capacity in the cluster to start one additional task, and you may not want to reserve extra cluster capacity just to perform deployments.
Now, a service's minimumHealthyPercent lets you specify a lower limit on the number of running tasks during a deployment. A minimumHealthyPercent of 100% ensures that you always have the desiredCount of tasks running, and values below 100% allow the scheduler to violate desiredCount temporarily during a deployment. For example, if you have 4 Amazon EC2 instances in your cluster, and 4 tasks, each running on a separate instance, changing minimumHealthyPercent from 100% to 50% would allow the scheduler to stop 2 tasks before deploying 2 new tasks.
A service's maximumPercent represents an upper limit on the number of running tasks during a deployment, enabling you to define the deployment batch size. For example, if you have 8 instances in your cluster, and 4 tasks, each running on a separate instance, a maximumPercent of 200% starts 4 new tasks before stopping the 4 old tasks. For more information on these new deployment options, see the documentation.
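The two parameters translate into concrete task-count limits. The sketch below shows the arithmetic as I understand the documented behavior (the lower limit is rounded up to the nearest integer, the upper limit rounded down); the function name is my own, not an ECS API.

```python
import math

def deployment_task_bounds(desired_count, minimum_healthy_percent, maximum_percent):
    """Return (min_running, max_running) task counts during a deployment.

    Assumes the documented rounding: the lower limit rounds up and the
    upper limit rounds down to the nearest integer.
    """
    min_running = math.ceil(desired_count * minimum_healthy_percent / 100)
    max_running = math.floor(desired_count * maximum_percent / 100)
    return min_running, max_running

# From the post: 4 tasks, minimumHealthyPercent 50%, maximumPercent 100%
print(deployment_task_bounds(4, 50, 100))   # (2, 4): 2 tasks may stop first
# 4 tasks, minimumHealthyPercent 100%, maximumPercent 200%
print(deployment_task_bounds(4, 100, 200))  # (4, 8): 4 new tasks start first
```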
To illustrate these options, consider a scenario where you want to deploy using the least spare capacity. You could set minimumHealthyPercent to 50% and maximumPercent to 100%. In the four-task example above, the scheduler stops two of the old tasks, starts two tasks with the new task definition in the freed capacity, and then repeats for the remaining two.
Another scenario is to deploy quickly without reducing your service's capacity. You could set minimumHealthyPercent to 100% and maximumPercent to 200%. Here the scheduler starts four new tasks alongside the four old tasks, then stops the old tasks once the new ones reach the RUNNING state.
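The two scenarios can be sketched with a toy model of a rolling deployment. This is my own simplified simulation, not the actual ECS scheduler: it alternately starts new tasks up to the upper limit and stops old tasks down to the lower limit, recording the running counts after each step.

```python
import math

def rolling_deployment(desired, min_healthy_pct, max_pct):
    """Toy model of a rolling deployment (not the actual ECS scheduler).

    Returns the sequence of (old_tasks, new_tasks) counts after each
    scheduler action, honoring the lower and upper task-count limits.
    """
    lower = math.ceil(desired * min_healthy_pct / 100)
    upper = math.floor(desired * max_pct / 100)
    old, new = desired, 0
    steps = [(old, new)]
    while old > 0 or new < desired:
        # Start as many new tasks as the upper limit allows.
        start = min(desired - new, upper - (old + new))
        if start > 0:
            new += start
            steps.append((old, new))
        # Stop old tasks, but keep total running tasks at or above
        # the lower limit (new RUNNING tasks count toward it).
        stop = min(old, old + new - lower)
        if stop > 0:
            old -= stop
            steps.append((old, new))
        if start <= 0 and stop <= 0:
            break  # no further progress possible
    return steps

# Least spare capacity: 50% / 100%
print(rolling_deployment(4, 50, 100))   # [(4, 0), (2, 0), (2, 2), (0, 2), (0, 4)]
# Fastest, at full capacity: 100% / 200%
print(rolling_deployment(4, 100, 200))  # [(4, 0), (4, 4), (0, 4)]
```

In the first run total capacity dips to 2 tasks but never needs more than 4 instances; in the second it never drops below 4 tasks but briefly requires capacity for 8.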
The next improvement involves scaling the EC2 instances in your ECS cluster automatically. When ECS schedules a task, it requires an EC2 instance that meets the constraints in the task definition. For example, if a task definition requires 1 GB RAM, ECS finds an EC2 instance that has at least that much memory so that the container can start. If the scheduler cannot find an EC2 instance that meets the constraints required to place a task, it fails to place the task.
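The placement check can be pictured with a small sketch. The data layout and function below are hypothetical, a simplified stand-in for what the scheduler tracks internally, not an ECS API:

```python
def find_placement(instances, required_memory_mib):
    """Pick the first container instance with enough free memory.

    instances: list of dicts with 'id', 'registered_memory', and
    'reserved_memory' in MiB (a simplified model, not the ECS API).
    Returns an instance id, or None if the task cannot be placed.
    """
    for inst in instances:
        free = inst["registered_memory"] - inst["reserved_memory"]
        if free >= required_memory_mib:
            return inst["id"]
    return None  # the scheduler would fail to place the task

cluster = [
    {"id": "i-aaa", "registered_memory": 2048, "reserved_memory": 1536},
    {"id": "i-bbb", "registered_memory": 4096, "reserved_memory": 2048},
]
print(find_placement(cluster, 1024))  # i-bbb (2048 MiB free)
print(find_placement(cluster, 4096))  # None: no instance can fit the task
```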
Managing the cluster capacity is thus essential to successful task scheduling. Auto Scaling can enable clusters of EC2 instances to scale dynamically in response to CloudWatch alarms. ECS now publishes CloudWatch metrics for the reserved amount of CPU and memory used by running tasks in the cluster. You can create a CloudWatch alarm using these metrics that adds more EC2 instances to the Auto Scaling group when the cluster’s available capacity drops below a threshold that you define. For more information, see Tutorial: Scaling Container Instances with CloudWatch Alarms.
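The reservation metrics are a ratio of reserved to registered capacity. The sketch below shows the calculation as I understand it (reserved by running tasks divided by total registered in the cluster, as a percentage); the example numbers and the 70% threshold are hypothetical:

```python
def memory_reservation_percent(task_memory_mib, registered_memory_mib):
    """Cluster memory reservation as a percentage: memory reserved by
    running tasks / total memory registered by instances * 100.
    (My reading of the metric; see the ECS CloudWatch docs for details.)
    """
    return 100 * sum(task_memory_mib) / sum(registered_memory_mib)

# Hypothetical cluster: four 4 GiB instances, tasks reserving 12 GiB total.
reservation = memory_reservation_percent([4096, 4096, 2048, 2048], [4096] * 4)
print(reservation)  # 75.0; above a 70% alarm threshold, so an alarm
                    # could trigger the Auto Scaling group to add an instance
```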
Finally, Amazon ECS is now available in the Asia Pacific (Singapore) and EU (Frankfurt) regions, bringing ECS to eight regions.