Patterns for TargetGroupBinding with AWS Load Balancer Controller
Although provisioning load balancers directly from clusters has been the Kubernetes native method for exposing services, in some cases this creates a provisioning process that doesn’t align with the architecture of the applications, so another mechanism is needed. For those use cases, which we describe in this post, the AWS Load Balancer Controller provides the functionality to route traffic from an existing Application Load Balancer (ALB) or Network Load Balancer (NLB) directly to your services without provisioning a new one. This functionality, called TargetGroupBinding, is a custom resource (CR) that can expose your pods using an existing ALB target group or NLB target group. The AWS Load Balancer Controller internally uses TargetGroupBinding to support the Ingress and Service resources as well. TargetGroupBinding lets users decouple the creation and deletion of load balancers from the lifecycle of a Service or Ingress. In turn, this allows users to abstract and decouple AWS infrastructure load balancers from native Kubernetes resources.
TargetGroupBinding supports target groups of either the instance or IP target types. If the target type is not explicitly specified, then a mutating webhook automatically calls the AWS Elastic Load Balancing API to look up the target type for your target group and sets it to the correct value.
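For illustration, a minimal TargetGroupBinding manifest might look like the following; the Service name, namespace, and target group ARN are placeholders for your own resources.

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb
  namespace: default
spec:
  serviceRef:
    name: my-service    # existing Service whose endpoints become targets
    port: 80            # Service port to bind to the target group
  targetGroupARN: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-tg/6d0ecf831eec9f09
  targetType: ip        # optional; if omitted, the controller looks it up
```

Once applied, the controller keeps the target group’s registered targets in sync with the Service’s endpoints.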
For use cases where load balancer infrastructure is explicitly provisioned and managed, decoupling from the Kubernetes Ingress lifecycle is required, and TargetGroupBinding can be used to achieve a fully dynamic solution for routing traffic to the respective services.
TargetGroupBinding patterns
In this post, we explore use cases and architectural patterns where managing the load balancer lifecycle with Kubernetes native resources is not ideal. In those cases, the load balancer infrastructure is managed outside of Kubernetes, and TargetGroupBinding keeps the routing configuration dynamic. For a hands-on guide, refer to this Containers post, which offers a deeper look at TargetGroupBinding and Ingress resource sharing in the AWS Load Balancer Controller.
Distributing traffic globally
Users looking for a global load balancing solution for their Kubernetes workloads need a way to manage the load balancer integration with AWS Global Accelerator. AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to global users.
To use it, users need to create and manage the AWS Global Accelerator and load balancers outside of the Kubernetes resource provisioning cycle, and they need a way to route traffic to the Kubernetes workloads. To achieve a fully dynamic solution, TargetGroupBinding can be used to bridge the externally managed AWS Global Accelerator, load balancers, and the Kubernetes services. Users can provision the AWS Global Accelerator, load balancer, and Amazon Elastic Kubernetes Service (Amazon EKS) infrastructure with their choice of Infrastructure as Code (IaC) tool and use TargetGroupBinding to dynamically associate the load balancer target group with Amazon EKS.
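As a minimal sketch, the following CloudFormation template fronts an existing ALB with Global Accelerator; the AlbArn parameter, resource names, and the Region are illustrative placeholders, and the ALB’s target group would then be bound to the cluster with a TargetGroupBinding as shown earlier.

```yaml
# Sketch: front an externally managed ALB with Global Accelerator.
Parameters:
  AlbArn:
    Type: String   # ARN of the existing ALB (placeholder)

Resources:
  Accelerator:
    Type: AWS::GlobalAccelerator::Accelerator
    Properties:
      Name: global-app
      Enabled: true

  Listener:
    Type: AWS::GlobalAccelerator::Listener
    Properties:
      AcceleratorArn: !Ref Accelerator
      Protocol: TCP
      PortRanges:
        - FromPort: 443
          ToPort: 443

  EndpointGroup:
    Type: AWS::GlobalAccelerator::EndpointGroup
    Properties:
      ListenerArn: !Ref Listener
      EndpointGroupRegion: us-west-2      # Region of the existing ALB
      EndpointConfigurations:
        - EndpointId: !Ref AlbArn         # ALB managed outside Kubernetes
          Weight: 100
```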
Refer to this Containers post for details on operating a multi-Region global application using Amazon EKS and AWS Global Accelerator; here too, TargetGroupBinding can be used to make the solution fully dynamic.
Single endpoint for L4 and L7 requests
Users might need to set up an NLB in front of an ALB for different reasons, as described in this Networking and Content Delivery post. In this kind of setup, your application might need to share a single endpoint for both L4 and L7 traffic, expose a static IP address for your applications, or provide AWS PrivateLink support for your ALB. That means you have to make sure that both the NLB and the ALB are provisioned and configured regardless of the management lifecycle of your Kubernetes services, including configuring routing from the NLB to the ALB. In this case, TargetGroupBinding provides the functionality for having a static or predefined load balancing configuration while dynamically registering pods as targets.
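As a minimal sketch under these assumptions, the following CloudFormation snippet registers an existing ALB as the target of an NLB listener using an ALB-type target group; the ARNs and the VPC ID are placeholders. The pods themselves are then registered into the ALB’s own target group with a TargetGroupBinding.

```yaml
# Sketch: route NLB traffic to an existing ALB (ALB-type target group).
Resources:
  AlbTypeTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      TargetType: alb        # the target is an ALB, not instances or IPs
      Protocol: TCP          # ALB-type target groups require TCP
      Port: 80
      VpcId: vpc-0123456789abcdef0
      Targets:
        - Id: arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/my-alb/50dc6c495c0c9188

  NlbListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/net/my-nlb/abcdef0123456789
      Protocol: TCP
      Port: 80
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref AlbTypeTargetGroup
```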
Cluster blue/green upgrades
In many cases, you can upgrade an EKS cluster in place by using rolling updates. However, to reduce downtime and retain the ability to quickly roll back to a previous state, users might prefer blue/green upgrades. Users can achieve the blue/green upgrade strategy by creating a brand new EKS cluster and keeping the incoming traffic load balancer the same for both clusters. Users can use the same load balancer with two target groups, one for each of the EKS clusters. Then, using the TargetGroupBinding CR, each EKS cluster can be dynamically associated with its target group. Within the load balancer rules, configure the weight of requests sent to each target group to control the migration of requests between the clusters. The advantage of this solution is that users don’t rely on Domain Name System (DNS) records, time to live (TTL), or caching on their client machines.
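As a sketch of the weighted routing, the following CloudFormation listener splits traffic between hypothetical blue and green target groups; the ARNs are placeholders, and each cluster binds its Service to its own target group with a TargetGroupBinding.

```yaml
# Sketch: weighted forwarding between the blue (existing) and green (new)
# clusters' target groups on a shared ALB.
Resources:
  WeightedListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/shared-alb/50dc6c495c0c9188
      Protocol: HTTP
      Port: 80
      DefaultActions:
        - Type: forward
          ForwardConfig:
            TargetGroups:
              - TargetGroupArn: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/blue-cluster-tg/73e2d6bc24d8a067
                Weight: 90   # existing cluster keeps most traffic
              - TargetGroupArn: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/green-cluster-tg/83f3e7cd35e9b178
                Weight: 10   # shift gradually toward the new cluster
```

Shifting traffic between clusters is then a matter of updating the weights, with no DNS changes involved.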
Hybrid deployments
Just as in the case of blue/green upgrades, this pattern can be helpful when there is a need to run Amazon EKS based applications in parallel with non-Amazon EKS based applications (such as Amazon Elastic Compute Cloud (Amazon EC2) based applications, or applications running outside of AWS). Users can route traffic to the new Amazon EKS workload through the same load balancer by adding a new target group, and the TargetGroupBinding CR takes care of dynamically binding the service to that target group.
This same strategy can be applied when planning a workload traffic migration from non-containerized, on-premises, or other container platforms to Amazon EKS.
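As an illustrative sketch, the manifest below binds a Service in the new EKS cluster to a target group added to the shared load balancer; the names, ARN, and security group ID are placeholders. The optional networking section explicitly allows traffic from the externally managed load balancer’s security group to reach the pods.

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: migrated-workload-tgb
  namespace: default
spec:
  serviceRef:
    name: migrated-service        # Service running in the new EKS cluster
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/eks-migration-tg/93a4f8de46fac289
  networking:
    ingress:
      - from:
          - securityGroup:
              groupID: sg-0123456789abcdef0   # security group of the shared load balancer
        ports:
          - protocol: TCP
```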
Summary
In this post we’ve covered common use cases where it’s best to decouple the ALB/NLB lifecycle from the Kubernetes service lifecycle. This helps you choose when to use the TargetGroupBinding feature of the AWS Load Balancer Controller. We encourage you to design the architecture that fits your application best, rather than “forcing” it to align with the Kubernetes Service/Ingress configuration lifecycle. However, whichever routing configuration you choose for your application, note that every configuration has pros and cons. With TargetGroupBinding, you benefit from a clean method for binding Kubernetes services to ALB/NLB configurations that are managed outside of Kubernetes, at the cost of relying on a custom resource that is not native to every Kubernetes installation (such as local development clusters). For detailed information about all potential configurations for ALB and TargetGroupBinding, refer to the documentation.