
Cost Optimization Checklist for Amazon ECS and AWS Fargate

This post was contributed by Charu Khurana, Senior Solutions Architect, and John Formento, Solutions Architect.

Amazon Elastic Container Service (Amazon ECS) with the AWS Fargate launch type is a powerful, cloud-native container service that allows customers to create container-based workloads in a matter of minutes without managing the underlying infrastructure. Even with the serverless offering in Fargate, there are still areas you can review to improve cost optimization for running workloads. In this post, we will cover seven topics to consider when optimizing a Fargate workload for cost on Amazon ECS.

Cost Explorer: to track running hours and usage

View your Fargate cost and usage in Cost Explorer. Cost Explorer is highly effective for drilling into Fargate hours in addition to cost.

The Usage Type filter can be used to display usage hours for ECS and Fargate, categorized by AWS Region. For example, to see Fargate hours in us-east-1, type ‘Fargate’ in the search box and include USE1-Fargate-vCPU-Hours:perCPU(Hrs) and USE1-Fargate-GB-Hours(Hrs). Similarly, Fargate vCPU hours covered by Spot in us-west-2 can be filtered with USW2-SpotUsage-Fargate-vCPU-Hours:perCPU(Hrs).
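
If you prefer to pull these numbers programmatically, the Cost Explorer API exposes the same data. The following is a minimal sketch using boto3; the date range and the exact usage type strings are assumptions to adjust for your account (the console appends the “(Hrs)” unit label, while the API expects the raw usage type value).

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API

# Assumed billing period and usage type values for us-east-1 Fargate hours.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-05-01", "End": "2020-06-01"},
    Granularity="MONTHLY",
    Metrics=["UsageQuantity", "UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "USAGE_TYPE",
            "Values": [
                "USE1-Fargate-vCPU-Hours:perCPU",
                "USE1-Fargate-GB-Hours",
            ],
        }
    },
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

# Print hours and cost per usage type for the period.
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UsageQuantity"]["Amount"])
```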

Cost Explorer: to view your ECS and Fargate untagged resources

Tagging

Tag your Amazon ECS and AWS Fargate resources such as services, task definitions, tasks, clusters, and container instances. This will enable you to better allocate costs, improve visibility into your workloads, and easily search for and identify your containerized applications. Tags are also useful for implementing programmatic infrastructure management actions (like scale in) and defining fine-grained, resource-level permissions. Without tagging, it is difficult to systematically manage infrastructure and costs that are spread across different teams, products, business units, and environments.

To use the tagging feature, you must opt in to the new Amazon Resource Name (ARN) and resource identifier (ID) formats. For more information, see Amazon Resource Names (ARNs) and IDs.
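As a rough sketch of what that looks like with boto3, you can enable the new ARN formats as account defaults and then tag an existing service. The ARN, tag keys, and tag values below are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Opt in to the long ARN formats (account default) so services and tasks can carry tags.
for setting in ("serviceLongArnFormat", "taskLongArnFormat", "containerInstanceLongArnFormat"):
    ecs.put_account_setting_default(name=setting, value="enabled")

# Tag an existing service; the ARN and tag values are placeholders.
ecs.tag_resource(
    resourceArn="arn:aws:ecs:us-east-1:123456789012:service/my-cluster/my-service",
    tags=[
        {"key": "team", "value": "payments"},
        {"key": "environment", "value": "dev"},
    ],
)
```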

Analyze your untagged resources, such as task definitions, services, tasks, and clusters, spread across different teams, products, business units, and environments to save cost. This can uncover running resources you otherwise might not have known about. Orphaned resources contribute to cloud waste and result in unnecessary charges. You can use the ‘Group By’ tags option in Cost Explorer for a holistic view of untagged applications, as sketched below.
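A minimal sketch of that grouping via the Cost Explorer API, assuming a hypothetical “team” cost allocation tag has been activated in the billing console; the group returned with an empty tag value represents untagged usage.

```python
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-05-01", "End": "2020-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    # The service filter value and the "team" tag key are assumptions; adjust for your account.
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Elastic Container Service"]}},
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    # A key of "team$" (nothing after the $) indicates untagged resources.
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```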

Savings Plan

Use a Compute Savings Plan to cover your ECS and Fargate cost. Compute Savings Plans are a flexible discount model that provides you with the same discounts as Convertible Reserved Instances. For more information, check out the Savings Plans launch blog. AWS Fargate pricing is based on the vCPU and memory resources used from the time you start to download your container image until the Amazon ECS task terminates, rounded up to the nearest second. Savings Plans offer savings of up to 50% on your AWS Fargate usage in exchange for a commitment to use a specific amount of compute (measured in dollars per hour) for a one- or three-year term. AWS Cost Explorer will help you choose a Savings Plan and will guide you through the purchase process.

Although Savings Plans are a great way to save, keep in mind that Savings Plan usage is applied to the eligible usage that gives you the largest savings percentage first. If you have other Savings Plans-eligible resources running in your AWS account, such as Amazon EC2 instances or AWS Lambda functions, it is not necessarily your Fargate usage that will receive the benefit first. Check out how Savings Plans apply to your AWS usage here.

The AWS Cost Management console is a great place to check your Savings Plans coverage across Fargate and the other covered services.
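The same coverage data is available programmatically. Here is a sketch with boto3, assuming you simply want the coverage percentage broken out by service; the date range is a placeholder.

```python
import boto3

ce = boto3.client("ce")

response = ce.get_savings_plans_coverage(
    TimePeriod={"Start": "2020-05-01", "End": "2020-06-01"},
    Granularity="MONTHLY",
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for item in response["SavingsPlansCoverages"]:
    # Attributes holds the grouping dimension (the service); Coverage holds the numbers.
    print(item["Attributes"], item["Coverage"]["CoveragePercentage"])
```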

Note that Compute Savings Plans support EC2, Fargate, and Lambda.

Spot

By utilizing Fargate Spot, customers can save up to 70% off the on-demand price when running fault-tolerant Fargate tasks. It’s not only a great option for parallelizable tasks, but also for website and API tasks that require high availability. When configuring your service Auto Scaling policy, you can specify the minimum number of regular tasks that should run at all times and then add tasks running on Fargate Spot to improve service performance in a cost-efficient way. When Fargate Spot capacity is available, the scheduler launches tasks to meet your request. If Fargate Spot capacity stops being available, Fargate Spot scales down while maintaining the minimum number of regular tasks to ensure the application’s availability. Use the FARGATE_SPOT capacity provider to run these kinds of workloads, as in the sketch below. For more information, take a look at the announcement and more details here.
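Here is a minimal sketch of such a capacity provider strategy with boto3, assuming the cluster already has the FARGATE and FARGATE_SPOT capacity providers associated with it; the cluster, service, task definition, subnet, and security group identifiers are placeholders. The base keeps two tasks on regular Fargate at all times, and additional tasks are placed three-to-one on Fargate Spot versus regular Fargate.

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="my-web-service",
    taskDefinition="my-web-service:1",
    desiredCount=6,
    capacityProviderStrategy=[
        # Always keep 2 tasks on regular Fargate.
        {"capacityProvider": "FARGATE", "base": 2, "weight": 1},
        # Place additional tasks on Fargate Spot at a 3:1 ratio.
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```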

There is no price bidding for Fargate Spot; it’s solely subject to available capacity. Keep in mind that you should only run tasks designed to handle unexpected interruption with a two-minute warning.
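When Fargate Spot reclaims capacity, the running task receives a SIGTERM along with the two-minute warning, so the container should exit cleanly when that signal arrives. A minimal, hypothetical worker loop illustrating the idea:

```python
import signal
import sys
import time

def handle_sigterm(signum, frame):
    # Finish or checkpoint in-flight work here, then exit before the task is killed.
    print("SIGTERM received, shutting down gracefully")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:
    # ... process one unit of work ...
    time.sleep(1)
```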

Right sizing tasks

Right sizing Fargate tasks is an important step in cost optimization. Too often, an application is built, an arbitrary configuration is chosen for its Fargate task, and that configuration is never revisited. This can result in overprovisioned Fargate tasks and unnecessary spending. For applications running on Fargate, load testing should be done to understand how a specific task configuration performs under a given load scenario. Then, fine-tune the vCPU and memory allocation for the task, along with the auto scaling policies, to strike the right balance between performance and cost. There is a simple CloudWatch dashboard template that can be used in conjunction with Container Insights to aid in right sizing; it’s hosted on GitHub here.
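If Container Insights is enabled on the cluster, the utilization data behind that dashboard can also be pulled straight from CloudWatch. The sketch below reads the CpuUtilized metric (reported in CPU units, to be compared against CpuReserved) for a hypothetical service over the last week; the cluster and service names are placeholders.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_statistics(
    Namespace="ECS/ContainerInsights",
    MetricName="CpuUtilized",  # CPU units consumed; compare against the CpuReserved metric
    Dimensions=[
        {"Name": "ClusterName", "Value": "my-cluster"},
        {"Name": "ServiceName", "Value": "my-web-service"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average", "Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))
```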

One approach is to use a load testing tool such as the one described here (Distributed Load Testing on AWS) and determine a baseline you are comfortable with for vCPU and memory utilization. Then, while running the load test to simulate typical load on the application, fine-tune the task’s vCPU and memory configuration until the baseline utilization is reached.
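Once the load test shows headroom, resizing is just a new task definition revision with smaller cpu and memory values (they must be a valid Fargate combination). A rough sketch, with all names, ARNs, and the image URI as placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Register a new revision with a smaller Fargate size (0.5 vCPU / 1 GB).
ecs.register_task_definition(
    family="my-web-service",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",      # 0.5 vCPU
    memory="1024",  # 1 GB; must be a valid combination with the cpu value
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-service:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)
```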

Auto Scaling

Correctly configuring auto scaling policies for ECS and Fargate is another tool for cost optimization. Ensuring that a service is not scaling unnecessarily will lead to cost efficiency. To fine-tune auto scaling in your ECS and Fargate service configuration, it’s important to learn the baseline performance of the application and then its performance under load. Once the application’s performance profile is established, you can tweak the auto scaling configuration. Fargate allows for the following types of scaling:

  • Target tracking: Increase or decrease the number of tasks that your service runs based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home: you select the temperature and the thermostat does the rest.
  • Step scaling: Increase or decrease the number of tasks that your service runs based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.
  • Scheduled scaling: Increase or decrease the number of tasks that your service runs based on the date and time.

The goal is efficient scaling, so that the service never runs more or fewer tasks than the current load requires. As an example, for a CPU-intensive application you might target 75% CPU utilization and use a target tracking configuration to maintain that metric, as sketched below. Scaling should be viewed on a case-by-case basis, as what works well for one application might not work for another. One thing to keep in mind is to gather metrics on how long a task takes to come online as healthy and factor that into the scaling configuration.
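A sketch of that 75% CPU target tracking example using the Application Auto Scaling API with boto3; the cluster and service names and the capacity limits are assumptions.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/my-cluster/my-web-service"  # hypothetical cluster/service

# Register the service's desired count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Target tracking: keep average service CPU utilization around 75%.
autoscaling.put_scaling_policy(
    PolicyName="cpu-75-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 75.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```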

Application Load Balancer

The new Multiple Load Balancer Target Group support for Amazon ECS allows you to attach a single Amazon ECS service, running on either EC2 or AWS Fargate, to multiple target groups. Target groups are used to route requests to one or more registered targets when using a load balancer. Attaching multiple target groups to your service allows you to simplify infrastructure code, reduce costs, and increase the manageability of your ECS services. For example, you can maintain a single ECS service that serves traffic from both internal and external load balancers, instead of running two copies of the service. This article describes the approach in detail. It is important to keep in mind that putting a service behind multiple load balancers can increase your blast radius and administrative complexity.
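A single service attached to two target groups looks roughly like the following with boto3; every ARN and name here is a placeholder, and the two target groups are assumed to belong to an internal and an internet-facing load balancer respectively.

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="my-web-service",
    taskDefinition="my-web-service:2",
    desiredCount=4,
    launchType="FARGATE",
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/internal-tg/abc123",
            "containerName": "web",
            "containerPort": 80,
        },
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/external-tg/def456",
            "containerName": "web",
            "containerPort": 80,
        },
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
        }
    },
)
```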

Instance scheduling

For lower-level environments, such as test or engineering (or whatever your nomenclature is), it might not make sense to have tasks running all the time. Setting up a schedule for when tasks should be running or stopped can be an easy way to optimize for cost. For example, if no teams are working on an application during off-business hours, having an automated schedule in place to ensure tasks are stopped would be extremely effective. An easy way to get started is to leverage the scheduled scaling configuration for the ECS and Fargate service. This allows you to scale the service down during off hours and scale it back up when developers are working and interacting with it, as in the sketch below.
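Here is a sketch of such a schedule using Application Auto Scaling scheduled actions with boto3, assuming the service has already been registered as a scalable target (as in the earlier auto scaling example) and assuming UTC business hours; the resource names and capacities are placeholders.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/my-dev-cluster/my-web-service"  # hypothetical dev service

# Scale the dev service down to zero tasks at 8 PM UTC on weekdays...
autoscaling.put_scheduled_action(
    ServiceNamespace="ecs",
    ScheduledActionName="scale-in-after-hours",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    Schedule="cron(0 20 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 0},
)

# ...and back up to working capacity at 8 AM UTC.
autoscaling.put_scheduled_action(
    ServiceNamespace="ecs",
    ScheduledActionName="scale-out-business-hours",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    Schedule="cron(0 8 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 4},
)
```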

If you have ECS and Fargate tasks that operate in a batch architecture pattern, you can leverage task scheduling. ECS supports running tasks on a cron-like schedule as well as with various custom schedulers. With this approach, tasks start based on CloudWatch Events rules and run only when needed, instead of sitting idle waiting for work to arrive.
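A sketch of a scheduled Fargate task using a CloudWatch Events (Amazon EventBridge) cron rule; the cluster, task definition, role, subnet, and security group identifiers are placeholders, and the IAM role is assumed to allow CloudWatch Events to run ECS tasks.

```python
import boto3

events = boto3.client("events")

# A hypothetical nightly batch task launched on a cron schedule instead of
# running (and costing) all day.
events.put_rule(
    Name="nightly-batch-job",
    ScheduleExpression="cron(0 2 * * ? *)",  # 2 AM UTC every day
    State="ENABLED",
)

events.put_targets(
    Rule="nightly-batch-job",
    Targets=[
        {
            "Id": "run-batch-task",
            "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
            "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/batch-job:1",
                "TaskCount": 1,
                "LaunchType": "FARGATE",
                "PlatformVersion": "LATEST",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "SecurityGroups": ["sg-0123456789abcdef0"],
                        "AssignPublicIp": "DISABLED",
                    }
                },
            },
        }
    ],
)
```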

Summary

In this article, we shared seven different approaches to consider when working toward cost optimization for your Amazon ECS on AWS Fargate clusters. Reviewing each topic and determining whether it makes sense for your specific workload is crucial. Cost optimization using these approaches isn’t one size fits all, and you may find other methods in addition to the ones covered in this post. As an example, if your workload cannot tolerate interruption, leveraging Spot on AWS Fargate would not be a wise choice. Reviewing each method individually for the specific workload is key to cost optimization.

Charu Khurana

As a Senior Solutions Architect at Amazon Web Services, Charu helps customers architect scalable, efficient, and cost effective systems. She is an experienced Software Engineer and holds multiple AWS certifications. Charu is passionate about exploring the world and experiencing different cultures and cuisines.

John Formento

John is a Solutions Architect at AWS. He helps large enterprises achieve their goals by architecting secure and scalable solutions on the AWS Cloud. John holds 7 AWS certifications including AWS Certified Solutions Architect – Professional and DevOps Engineer – Professional.