Deploying and scaling Apache Kafka on Amazon EKS
Introduction
Apache Kafka, a distributed streaming platform, has become a popular choice for building real-time data pipelines, streaming applications, and event-driven architectures. It is horizontally scalable, fault-tolerant, and performant. However, managing and scaling Kafka clusters can be challenging and often time-consuming. This is where Kubernetes, an open-source platform for automating deployment, scaling, and management of containerized applications, comes into play.
Kubernetes provides a robust platform for running Kafka. It offers features like rolling updates, service discovery, and load balancing which are beneficial for running stateful applications like Kafka. Moreover, Kubernetes’ ability to automatically heal applications ensures that Kafka brokers remain operational and highly available.
Scaling Kafka on Kubernetes involves increasing or decreasing the number of Kafka brokers or changing the resources (CPU, memory, and disk) available to them. Kubernetes provides mechanisms for auto-scaling applications based on resource utilization and workload, which can be applied to keep resource allocation optimal at all times. This lets you respond to variable workloads while also optimizing cost by elastically scaling up and down as needed to meet performance demands.
You can install Kafka on a Kubernetes cluster using Helm or a Kubernetes Operator. Helm is a package manager for Kubernetes applications. It allows you to define, install, and manage complex Kubernetes applications using pre-configured packages called charts. A Kubernetes Operator is a custom controller that extends the Kubernetes API to manage and automate the deployment and management of complex, stateful applications. Operators are typically used for applications that require domain-specific knowledge or complex lifecycle management beyond what is provided by native Kubernetes resources.
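For illustration, installing the Strimzi operator (used later in this post) with Helm looks like the following sketch. Note that the DoEKS blueprint used in this post installs the operator for you through Terraform, so this is shown only to contrast the two approaches; the chart repository URL and release name are the Strimzi project defaults:

```bash
# Add the Strimzi chart repository and install the operator into its own namespace.
helm repo add strimzi https://strimzi.io/charts/
helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator \
  --namespace strimzi-kafka-operator --create-namespace
```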
In this post, we demonstrate how to build and deploy a Kafka cluster with Amazon Elastic Kubernetes Service (Amazon EKS) using Data on EKS (DoEKS). DoEKS is an open-source project that empowers customers to expedite their data platform development on Kubernetes. It provides templates that incorporate best practices around security, performance, and cost optimization, among others, and that can be used as-is or fine-tuned to meet more unique requirements.
DoEKS makes it easy to start using these templates through the DoEKS blueprints project. The project provides blueprints and practical examples tailored to various data frameworks, written as infrastructure as code in Terraform or the AWS Cloud Development Kit (AWS CDK), so that you can set up infrastructure designed around best practices and performance benchmarks in minutes, without writing any infrastructure as code yourself.
For deploying the Kafka cluster, we use the Strimzi Operator. Strimzi simplifies running Apache Kafka in a Kubernetes cluster by providing container images and operators for Kafka on Kubernetes. The operators simplify deploying and running Kafka clusters, configuration and secure access, upgrading and managing Kafka, and even managing topics and users within Kafka itself.
Solution overview
The following detailed diagram shows the design we will implement in this post.
In this post, you will learn how to provision the resources depicted in the diagram using Terraform modules that leverage DoEKS blueprints. These resources are:
- A sample Virtual Private Cloud (VPC) with three Private Subnets and three Public Subnets.
- An internet gateway for Public Subnets and Network Address Translation (NAT) Gateway for Private Subnets, VPC endpoints for Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Compute Cloud (Amazon EC2), Security Token Service (STS), etc.
- An Amazon Elastic Kubernetes Service (Amazon EKS) cluster control plane with a public endpoint (for demonstration purposes only) and two managed node groups.
- Amazon EKS Managed Add-ons: VPC_CNI, CoreDNS, Kube_Proxy, EBS_CSI_Driver.
- A metrics server, Cluster Autoscaler, Cluster Proportional Autoscaler for CoreDNS, and Kube-Prometheus-Stack with a local Prometheus instance and, optionally, an Amazon Managed Service for Prometheus instance.
- Strimzi Operator for Apache Kafka deployed to the strimzi-kafka-operator namespace. By default, the operator watches and handles Kafka custom resources in all namespaces.
Walkthrough
Prerequisites
Ensure that you have an AWS account, a profile with administrator permissions configured, and the following tools installed locally:
- AWS Command Line Interface (AWS CLI)
- kubectl
- terraform >=1.0.0
Deploy infrastructure
Clone the code repository to your local machine.
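A minimal sketch, assuming the Kafka blueprint lives under a streaming/kafka directory in the Data on EKS repository (verify the exact path against the repository layout):

```bash
git clone https://github.com/awslabs/data-on-eks.git
cd data-on-eks/streaming/kafka   # assumed blueprint directory
```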
You need to set up your AWS credentials profile locally before running Terraform or AWS CLI commands. Run the following commands to deploy the blueprint. The deployment takes approximately 20-30 minutes.
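A sketch assuming the blueprint is a plain Terraform module (some DoEKS blueprints also ship an install script that wraps these steps):

```bash
export AWS_REGION=us-west-2   # assumed Region; adjust to your own
terraform init
terraform plan                # review the planned resources first
terraform apply -auto-approve
```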
Verify the deployment
List Amazon EKS nodes
The following command will update the kubeconfig on your local machine and allow you to interact with your Amazon EKS Cluster using kubectl to validate the deployment.
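For example (the cluster name and Region are placeholders; use the values from your Terraform outputs):

```bash
aws eks update-kubeconfig --region <your-region> --name <your-cluster-name>
```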
The Kafka cluster is deployed with multi-availability zone (AZ) configurations for high availability. This offers stronger fault tolerance by distributing brokers across multiple availability zones within an AWS Region. This ensures that a failed availability zone does not cause Kafka downtime. To achieve high availability, a minimum cluster of three brokers is required.
Check that the deployment has created six nodes: three for the core node group and three for the Kafka brokers, across three AZs.
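For example:

```bash
kubectl get nodes
# Expect six Ready nodes: three from the core node group and three from the Kafka node group.
```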
Verify Kafka Brokers and Zookeeper
Verify the Kafka broker and Zookeeper pods created by the Strimzi Operator and the desired number of Kafka replicas. Because Strimzi uses custom resources to extend the Kubernetes API, you can also use kubectl with Strimzi to work with Kafka resources.
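For example, assuming the cluster runs in the kafka namespace and the Kafka custom resource is named cluster (which matches the cluster-kafka-* pod names used later in this post):

```bash
kubectl get pods -n kafka
# Expect pods such as cluster-kafka-0..2 and cluster-zookeeper-0..2.
kubectl get kafka -n kafka
# Shows the Kafka custom resource with the desired broker and Zookeeper replica counts.
```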
Create Kafka topics and run sample test
In Apache Kafka, a topic is a category used to organize messages. Each topic has a unique name across the entire Kafka cluster. Messages are sent to and read from specific topics. Topics are partitioned and replicated across brokers in our implementation.
The producers and consumers play a crucial role in the handling of data. Producers are source systems that write data to Kafka topics; they send streams of records to the Kafka brokers. Consumers are target systems that read data from Kafka brokers; they can subscribe to one or more topics in the Kafka cluster and process the data.
In Kafka, producers and consumers are fully decoupled and agnostic of each other, which is a key design element to achieve high scalability.
You will create a Kafka topic my-topic and run a Kafka producer to publish messages to it. A Kafka Streams application will consume messages from the topic my-topic and send them to another topic, my-topic-reversed. A Kafka consumer will then read messages from the my-topic-reversed topic.
To make it easy to interact with the Kafka cluster, you create a pod with the Kafka CLI tools.
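A minimal sketch; the Strimzi Kafka image ships the CLI scripts under /opt/kafka/bin, and the image tag here is an assumption that should match your Strimzi and Kafka versions:

```bash
# Keep the pod alive so it can be used interactively.
kubectl -n kafka run kafka-cli --image=quay.io/strimzi/kafka:0.38.0-kafka-3.5.1 \
  --restart=Never -- sleep infinity
kubectl -n kafka exec -it kafka-cli -- bash
```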
Create the Kafka topics
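A sketch of the two topics, using the partition count, replication factor, and minimum ISR described below; the strimzi.io/cluster label must match your Kafka resource name, assumed here to be cluster:

```bash
cat <<'EOF' | kubectl apply -n kafka -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: cluster
spec:
  partitions: 12
  replicas: 3
  config:
    min.insync.replicas: 2
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic-reversed
  labels:
    strimzi.io/cluster: cluster
spec:
  partitions: 12
  replicas: 3
  config:
    min.insync.replicas: 2
EOF
```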
Verify the status of the Kafka topics:
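For example:

```bash
kubectl get kafkatopics -n kafka
# Both topics should report Ready.
```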
The cluster has a minimum of three brokers, each in a different AZ, with topics having a replication factor of three and a minimum In-Sync Replica (ISR) of two. This will allow a single broker to be down without affecting producers with acks=all. Fewer brokers will compromise either availability or durability.
Verify that the topic partitions are replicated across all three brokers.
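From inside the kafka-cli pod created earlier; the bootstrap address assumes a Kafka resource named cluster, which exposes a cluster-kafka-bootstrap service:

```bash
bin/kafka-topics.sh --bootstrap-server cluster-kafka-bootstrap:9092 \
  --describe --topic my-topic
bin/kafka-topics.sh --bootstrap-server cluster-kafka-bootstrap:9092 \
  --describe --topic my-topic-reversed
```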
The Kafka topic my-topic has 12 partitions, each with a different leader and set of replicas. All replicas of a partition exist on separate brokers for better scalability and fault tolerance. The replication factor of three ensures that each partition has three replicas, so data remains available even if one or two brokers fail. The same applies to the Kafka topic my-topic-reversed.
Reading and writing messages to the cluster
Deploy a simple Kafka producer, a Kafka Streams application, and a Kafka consumer.
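The manifest names below are hypothetical; the post's accompanying repository provides the actual producer, streams, and consumer deployments:

```bash
kubectl apply -n kafka -f kafka-producer.yaml -f kafka-streams.yaml -f kafka-consumer.yaml
```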
Verify the messages moving through your Kafka brokers. Split your terminal in three parts and run the commands below in each terminal.
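One way to watch the flow end to end, assuming the three applications were created as Deployments with these hypothetical names:

```bash
# Terminal 1: messages being produced to my-topic
kubectl -n kafka logs -f deployment/kafka-producer
# Terminal 2: the streams application reversing messages
kubectl -n kafka logs -f deployment/kafka-streams
# Terminal 3: messages arriving on my-topic-reversed
kubectl -n kafka logs -f deployment/kafka-consumer
```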
Scaling your Kafka cluster
Scaling Kafka brokers is essential for a few key reasons. Firstly, scaling up the cluster by adding more brokers allows for the distribution of data and replication across multiple nodes. This reduces the impact of failures and ensures the system’s resilience. Secondly, with more broker nodes in the cluster, Kafka can distribute the workload and balance the data processing across multiple nodes, which leads to reduced latency. This means that data can be processed faster, improving the overall performance of the system.
Furthermore, Kafka is designed to be scalable. As your needs change, you can scale out by adding brokers to a cluster or scale in by removing brokers. In either case, data should be redistributed across the brokers to maintain balance. This flexibility allows you to adapt your system to meet changing demands and ensure optimal performance.
To scale the Kafka cluster, you need to modify the Kafka.spec.kafka.replicas configuration to add or reduce the number of brokers.
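For example, to grow from three to four brokers, assuming the Kafka resource is named cluster in the kafka namespace:

```bash
kubectl -n kafka patch kafka cluster --type merge \
  -p '{"spec":{"kafka":{"replicas":4}}}'
```

Alternatively, run kubectl edit kafka cluster -n kafka and change spec.kafka.replicas directly.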
The number of replicas for Zookeeper can stay the same. One of the main purposes of Zookeeper in distributed systems is leader election in case of a broker failure, and it needs a strict majority when the vote happens. It is advisable to use at least three Zookeeper servers in a quorum for small Kafka clusters in production environments, and five or seven Zookeeper servers for medium to very large Kafka clusters.
You now have four Kafka brokers (kubectl get pods -n kafka shows a new cluster-kafka-3 pod).
However, the existing partitions of the Kafka topics are still concentrated on the initial three brokers.
You need to redistribute these partitions to balance the cluster.
Cluster balancing
To balance the partitions across your brokers, you can use Kafka Cruise Control. Cruise Control is a tool designed to fully automate the dynamic workload rebalance and self-healing of a Kafka cluster. It provides great value to Kafka users by simplifying the operation of Kafka clusters.
Cruise Control’s optimization approach allows it to perform several different operations linked to balancing cluster load. As well as rebalancing a whole cluster, Cruise Control also has the ability to do this balancing automatically when adding and removing brokers or changing topic replica values.
Kafka Cruise Control is already running in the kafka namespace and should appear as follows:
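For example:

```bash
kubectl get pods -n kafka | grep cruise-control
# Expect a running pod such as cluster-cruise-control-<hash>.
```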
Strimzi provides a way of interacting with the Cruise Control API using the Kubernetes CLI, through KafkaRebalance resources.
You can create a KafkaRebalance resource like this:
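A minimal KafkaRebalance sketch; an empty spec asks Cruise Control to use its default rebalance goals, and the strimzi.io/cluster label must match your Kafka resource name:

```bash
cat <<'EOF' | kubectl apply -n kafka -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: cluster
spec: {}
EOF
```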
The purpose of creating a KafkaRebalance resource is to generate an optimization proposal and execute a partition rebalance based on that proposal.
The Cluster Operator will subsequently update the KafkaRebalance resource with the details of the optimization proposal for review.
When the proposal is ready and looks good, you can execute the rebalance based on that proposal, by annotating the KafkaRebalance resource like this:
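Using the strimzi.io/rebalance annotation (my-rebalance is the resource name assumed above):

```bash
kubectl -n kafka annotate kafkarebalance my-rebalance strimzi.io/rebalance=approve
```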
Wait for Kafka Rebalance to be Ready:
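For example:

```bash
kubectl -n kafka wait kafkarebalance/my-rebalance --for=condition=Ready --timeout=10m
```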
Check that the partitions are spread across all four Kafka brokers:
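From inside the kafka-cli pod:

```bash
bin/kafka-topics.sh --bootstrap-server cluster-kafka-bootstrap:9092 \
  --describe --topic my-topic
# Leaders and replicas should now include broker 3 as well.
```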
Benchmark your Kafka cluster
Kafka includes a set of tools, namely kafka-producer-perf-test.sh and kafka-consumer-perf-test.sh, to test performance and simulate the expected load on the cluster.
Create a topic for the performance test
The first time, you created topics using Kubernetes constructs and Strimzi Custom Resource Definitions (CRDs). Now you create the topic using the kafka-topics.sh script. Both options can be used.
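From inside the kafka-cli pod; the partition count and replication factor here are assumptions chosen to match the rest of the post:

```bash
bin/kafka-topics.sh --bootstrap-server cluster-kafka-bootstrap:9092 \
  --create --topic test-topic-perf --partitions 12 --replication-factor 3
```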
Test the producer performance
Then run the producer performance test script with different configuration settings. The following example uses the topic created above to store 10 million messages with a size of 100 bytes each. The -1 value for --throughput means that messages are produced as quickly as possible, with no throttling limit. Kafka producer configuration properties like acks and bootstrap.servers are passed as part of the --producer-props argument.
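From inside the kafka-cli pod; acks=all matches the durability discussion earlier, and the bootstrap address is the assumed Strimzi service name:

```bash
bin/kafka-producer-perf-test.sh \
  --topic test-topic-perf \
  --num-records 10000000 \
  --record-size 100 \
  --throughput -1 \
  --producer-props acks=all bootstrap.servers=cluster-kafka-bootstrap:9092
```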
The output reports the achieved throughput along with average and percentile latencies. In this example, approximately 364,000 messages were produced per second with a maximum latency of 1 second.
Next, you test the consumer performance.
Test the consumer performance
The consumer performance test script will read 10 million messages from the test-topic-perf.
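From inside the kafka-cli pod:

```bash
bin/kafka-consumer-perf-test.sh \
  --topic test-topic-perf \
  --messages 10000000 \
  --bootstrap-server cluster-kafka-bootstrap:9092
```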
The output will show a total of 9536.74 MB of data consumed across 10000000 messages, with a throughput of 265.36 MB.sec and 1951828.8636 nMsg.sec (the tool's columns for megabytes and messages consumed per second).
View Kafka cluster in Grafana
The blueprint installs a local instance of Prometheus and Grafana with predefined dashboards, and has the option to create an Amazon Managed Service for Prometheus instance. The login secret for the Grafana dashboard is stored in AWS Secrets Manager.
Log in to the Grafana dashboard by running the following command.
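A sketch using port forwarding; the namespace and service name are assumptions based on a default kube-prometheus-stack installation:

```bash
kubectl port-forward -n kube-prometheus-stack svc/kube-prometheus-stack-grafana 8080:80
```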
Open a browser to the local Grafana web UI (http://localhost:8080 if you used the port-forward above).
Enter admin as the username; the password can be extracted from AWS Secrets Manager with the following command.
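The secret name is a placeholder; use the name created by the blueprint:

```bash
aws secretsmanager get-secret-value \
  --secret-id <grafana-admin-secret-name> \
  --query SecretString --output text
```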
Open the Strimzi Kafka dashboard.
Test Kafka disruption
When you deploy Kafka on Amazon Elastic Compute Cloud (Amazon EC2) machines, you can configure storage in two primary ways: Amazon Elastic Block Store (Amazon EBS) and instance storage. With Amazon EBS, a volume is attached to an instance over the network, whereas instance storage is directly attached to the instance.
Amazon EBS volumes offer consistent I/O performance and flexibility in deployment. This is crucial when considering Kafka's built-in fault tolerance of replicating partitions across a configurable number of brokers. In the event of a broker failure, a new replacement broker fetches all the data the original broker previously stored from the other brokers in the cluster that host the other replicas. Depending on your application, this could involve copying tens of gigabytes or terabytes of data, which takes time and increases network traffic, potentially impacting the performance of the Kafka cluster. A properly designed Kafka cluster based on Amazon EBS storage can virtually eliminate the re-replication overhead that would otherwise be triggered by an instance failure, as the Amazon EBS volumes can be reassigned to a new instance quickly and easily.
For instance, if an underlying Amazon EC2 instance fails or is terminated, the broker’s on-disk partition replicas remain intact and can be mounted by a new Amazon EC2 instance. By using Amazon EBS, most of the replica data for the replacement broker will already be in the Amazon EBS volume and hence won’t need to be transferred over the network.
When replacing a Kafka broker, it is recommended to reuse the failed broker's ID in the replacement broker. This approach is operationally simple, as the replacement broker automatically resumes the work the failed broker was doing.
You can simulate a node failure by terminating an Amazon EKS node from the Kafka node group that hosts one of the brokers.
Create a topic and send some data to it:
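From inside the kafka-cli pod. The single partition is an assumption that keeps the leader easy to track, and the describe output is what the next paragraph refers to:

```bash
bin/kafka-topics.sh --bootstrap-server cluster-kafka-bootstrap:9092 \
  --create --topic test-topic-failover --partitions 1 --replication-factor 3
echo "hello from before the failure" | bin/kafka-console-producer.sh \
  --bootstrap-server cluster-kafka-bootstrap:9092 --topic test-topic-failover
bin/kafka-topics.sh --bootstrap-server cluster-kafka-bootstrap:9092 \
  --describe --topic test-topic-failover
```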
Leader: 0 in the output above means the leader for topic test-topic-failover is broker 0, which corresponds to pod cluster-kafka-0.
Find which node the pod cluster-kafka-0 is running on.
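For example:

```bash
kubectl -n kafka get pod cluster-kafka-0 \
  -o jsonpath='{.spec.nodeName}{"\n"}'
```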
Note: Your node will be different.
Drain the node where the pod cluster-kafka-0 is running:
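For example:

```bash
NODE=$(kubectl -n kafka get pod cluster-kafka-0 -o jsonpath='{.spec.nodeName}')
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
```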
Get the instance ID and terminate it:
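This sketch assumes the Kubernetes node name matches the instance's private DNS name, which is the default on Amazon EKS:

```bash
INSTANCE_ID=$(aws ec2 describe-instances \
  --filters "Name=private-dns-name,Values=$NODE" \
  --query "Reservations[].Instances[].InstanceId" --output text)
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
```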
The cluster-kafka-0 pod will be recreated on a new node.
Verify the message is still available under the test-topic-failover.
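From inside the kafka-cli pod:

```bash
bin/kafka-console-consumer.sh \
  --bootstrap-server cluster-kafka-bootstrap:9092 \
  --topic test-topic-failover --from-beginning --max-messages 1
```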
Cleaning up
Delete the resources created in this post to avoid incurring costs. You can run the following script:
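Assuming the blueprint directory from the clone step and that it ships a cleanup script (otherwise, terraform destroy achieves the same result):

```bash
cd data-on-eks/streaming/kafka   # assumed blueprint directory
./cleanup.sh                     # or: terraform destroy -auto-approve
```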
Conclusion
In this post, I showed you how to deploy Apache Kafka on Amazon EKS while ensuring high availability across multiple nodes with automatic failover. Additionally, I explored the benefits of using Data on EKS blueprints to quickly deploy infrastructure using Terraform. If you would like to learn more, then you can visit the Data on EKS GitHub repository to collaborate with others and access additional resources.