Proactive autoscaling of Kubernetes workloads with KEDA and Amazon CloudWatch
Container orchestration platforms, such as Amazon Elastic Kubernetes Service (Amazon EKS), have simplified the process of building, securing, operating, and maintaining container-based applications, which lets organizations focus on building their applications. Customers have started adopting event-driven autoscaling, which allows Kubernetes deployments to scale automatically in response to metrics from a variety of sources.
By implementing event-driven deployment and autoscaling, customers can save costs by provisioning compute on demand and scaling efficiently based on their specific needs. KEDA (Kubernetes-based Event Driven Autoscaler) lets you drive the autoscaling of Kubernetes workloads based on events, such as a scraped custom metric breaching a specified threshold, or a message arriving in an Amazon Managed Streaming for Apache Kafka (Amazon MSK) topic.
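To make this concrete, the following is a minimal sketch of how a KEDA ScaledObject can pair a deployment with a CloudWatch metric. The deployment name, namespace, metric, dimensions, and thresholds shown here are placeholders rather than values used later in this post; the trigger fields correspond to KEDA's CloudWatch scaler.

```bash
# Sketch only: "sample-app", the queue, and the thresholds are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: sample-app-scaler
  namespace: default
spec:
  scaleTargetRef:
    name: sample-app            # Deployment to scale (placeholder)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: aws-cloudwatch
      metadata:
        namespace: AWS/SQS                               # CloudWatch metric namespace
        metricName: ApproximateNumberOfMessagesVisible   # Metric to evaluate
        dimensionName: QueueName
        dimensionValue: sample-queue
        targetMetricValue: "5"                           # Desired value per replica
        minMetricValue: "0"
        awsRegion: us-east-1
EOF
```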
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events. You get a unified view of operational health, and you gain complete visibility of your AWS resources, applications, and services running on AWS and on-premises.
This post will show you how to use KEDA to autoscale Amazon EKS pods by querying the metrics stored in CloudWatch.
The following diagram shows the complete setup that we will walk through in this post.
You will need the following to complete the steps in this post:
- AWS Command Line Interface (AWS CLI) version 2 is a unified command-line tool for managing AWS services
- eksctl is a command-line utility for creating and managing Amazon EKS clusters
- kubectl is a command-line utility for interacting with Kubernetes clusters
- Helm is a package manager for deploying applications to Kubernetes
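If you want to confirm that the tools are installed before you begin, a quick check such as the following works (the reported versions will vary with your installation):

```bash
# Verify each prerequisite is installed and on the PATH
aws --version
eksctl version
kubectl version --client
helm version --short
```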
Create an Amazon EKS Cluster
You start by setting a few environment variables:
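The following is a minimal sketch of the kind of variables to export; the cluster name and Region below are placeholders, so substitute values that match your environment:

```bash
# Placeholder values: change the cluster name and Region to suit your environment
export AWS_REGION="us-east-1"
export CLUSTER_NAME="keda-cloudwatch-demo"
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
```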