Using AWS Load Balancer Controller for blue/green deployment, canary deployment and A/B testing
In the past, our customers have commonly used solutions such as Flagger, service mesh, or CI/CD pipelines to enable blue/green deployment, A/B testing, and traffic management. The AWS Load Balancer Controller (formerly known as ALB Ingress Controller) enables EKS users to realize blue/green deployments, canary deployments, and A/B testing through Kubernetes ingress resources, backed natively by the AWS Application Load Balancer.
In this blog post, we introduce the concept of AWS Application Load Balancer weighted target groups, advanced request routing, and how to manage these configurations via Kubernetes ingress resources.
Solution overview
Weighted target group
To help AWS customers adopt blue/green and canary deployments and A/B testing strategies, AWS announced weighted target groups for Application Load Balancers in November 2019. Multiple target groups can be attached to the same forward action of a listener rule, with a weight specified for each group. This allows developers to control how traffic is distributed across multiple versions of their application. For example, when you define a rule with two target groups weighted 8 and 2, the load balancer routes 80 percent of the traffic to the first target group and 20 percent to the other.
Advanced request routing
In addition to the weighted target group, AWS announced the advanced request routing feature in 2019. Advanced request routing gives developers the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the request path, the query string, and the source IP address. This new feature simplifies the application architecture by eliminating the need for a proxy fleet for routing, blocks unwanted traffic at the load balancer, and enables the implementation of A/B testing.
AWS Load Balancer Controller
AWS Load Balancer Controller is a controller to help manage Elastic Load Balancers for a Kubernetes cluster. It satisfies Kubernetes ingress resources by provisioning Application Load Balancers. Annotations can be added to Kubernetes ingress objects to customize the behavior of the provisioned Application Load Balancers. This allows developers to configure the Application Load Balancer and realize blue/green, canary, and A/B deployments using Kubernetes native semantics. For example, the following ingress annotation configures the Application Load Balancer to split the traffic between two versions of applications:
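A minimal sketch of what such an annotation looks like, using the controller's actions.${action-name} JSON format (the service names and the 50/50 weights here are illustrative; the sample services are created later in this walkthrough):

```yaml
annotations:
  # Custom "forward" action splitting traffic between two weighted target groups
  alb.ingress.kubernetes.io/actions.blue-green: |
    {
      "type": "forward",
      "forwardConfig": {
        "targetGroups": [
          { "serviceName": "hello-kubernetes-v1", "servicePort": "80", "weight": 50 },
          { "serviceName": "hello-kubernetes-v2", "servicePort": "80", "weight": 50 }
        ]
      }
    }
```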
Walkthrough
Prerequisites
- A good understanding of AWS Application Load Balancer, Amazon EKS, and Kubernetes.
- The Amazon EKS command-line tool, eksctl.
- The Kubernetes command-line tool, kubectl, and the Helm package manager, helm.
Create your EKS cluster with eksctl
Create your Amazon EKS cluster with the following command. You can replace the cluster name dev with your own value, and replace region-code with any Region that is supported by Amazon EKS.
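A sketch of the command, with dev and region-code as placeholders:

```bash
eksctl create cluster --name dev --region region-code
```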
More details can be found in Getting started with Amazon EKS – eksctl.
Install AWS Load Balancer Controller
Install the latest version of AWS Load Balancer Controller as documented here: Installing the AWS Load Balancer Controller add-on.
Verify the AWS Load Balancer Controller has been deployed:
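For example, with the controller installed into the kube-system namespace (the default in the linked documentation):

```bash
kubectl get deployment -n kube-system aws-load-balancer-controller
```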
Deploy the sample application version 1 and version 2
The sample application used here is hello-kubernetes. Deploy two versions of the application with custom messages and set the service type to ClusterIP.
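A minimal sketch of version 1, assuming the public paulbouwer/hello-kubernetes image (which reads its message from the MESSAGE environment variable and listens on port 8080); version 2 is identical apart from the name and message:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-kubernetes-v1
  template:
    metadata:
      labels:
        app: hello-kubernetes-v1
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.10.1
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: "You are reaching hello-kubernetes-v1"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-v1
spec:
  type: ClusterIP                 # ClusterIP works with target-type: ip on the load balancer side
  selector:
    app: hello-kubernetes-v1
  ports:
    - port: 80
      targetPort: 8080
```

Repeat the same manifest for hello-kubernetes-v2 with MESSAGE set to "You are reaching hello-kubernetes-v2".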
Deploy ingress and test the blue/green deployment
Ingress annotation alb.ingress.kubernetes.io/actions.${action-name} provides a method for configuring custom actions on the listener of an Application Load Balancer, such as a redirect action or a forward action. With a forward action, multiple target groups with different weights can be defined in the annotation. AWS Load Balancer Controller provisions the target groups and configures the listener rules as per the annotation to direct the traffic. For example, the following ingress resource configures the Application Load Balancer to forward all traffic to the hello-kubernetes-v1 service (weight: 100 vs. 0).
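A minimal sketch of this ingress, assuming the controller's default alb ingress class and an internet-facing load balancer (with the networking.k8s.io/v1 Ingress API, serviceName and servicePort correspond to backend.service.name and backend.service.port.name):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # required for ClusterIP backends
    alb.ingress.kubernetes.io/actions.blue-green: |
      {
        "type": "forward",
        "forwardConfig": {
          "targetGroups": [
            { "serviceName": "hello-kubernetes-v1", "servicePort": "80", "weight": 100 },
            { "serviceName": "hello-kubernetes-v2", "servicePort": "80", "weight": 0 }
          ]
        }
      }
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blue-green          # must match the action name in the annotation
                port:
                  name: use-annotation    # tells the controller to use the custom action
```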
Note that the action-name in the annotation must match the serviceName in the ingress rules, and servicePort must be use-annotation, as in the previous code snippet.
Deploy this ingress resource and wait two minutes for the Application Load Balancer to be created and configured by the AWS Load Balancer Controller.
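One way to do this, assuming the manifest above is saved as hello-kubernetes-ingress.yaml (an illustrative file name):

```bash
kubectl apply -f hello-kubernetes-ingress.yaml

# Wait until the ADDRESS column shows the Application Load Balancer DNS name
kubectl get ingress hello-kubernetes
```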
Verify that responses from the load balancer endpoint are always from application version 1:
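A simple way to check, sketched as a curl loop; the grep pattern relies on the custom messages used in the sample deployments above:

```bash
LB_ENDPOINT=$(kubectl get ingress hello-kubernetes \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Every response should report hello-kubernetes-v1
for i in $(seq 1 10); do
  curl -s "http://${LB_ENDPOINT}" | grep -o 'hello-kubernetes-v[12]' | head -n 1
done
```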
Blue/green deployment
To perform the blue/green deployment, update the ingress annotation to move all weight to version 2:
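Continuing the sketch above, only the weights inside the actions.blue-green annotation change:

```yaml
    alb.ingress.kubernetes.io/actions.blue-green: |
      {
        "type": "forward",
        "forwardConfig": {
          "targetGroups": [
            { "serviceName": "hello-kubernetes-v1", "servicePort": "80", "weight": 0 },
            { "serviceName": "hello-kubernetes-v2", "servicePort": "80", "weight": 100 }
          ]
        }
      }
```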
Deploy the following ingress resource:
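As before, assuming the updated manifest is kept in hello-kubernetes-ingress.yaml:

```bash
kubectl apply -f hello-kubernetes-ingress.yaml
```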
Verify that the responses from the load balancer endpoint are now always from application version 2:
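Reusing the LB_ENDPOINT variable captured earlier:

```bash
# Every response should now report hello-kubernetes-v2
for i in $(seq 1 10); do
  curl -s "http://${LB_ENDPOINT}" | grep -o 'hello-kubernetes-v[12]' | head -n 1
done
```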
Deploy ingress and test the canary deployment
Instead of moving all traffic to version 2 at once, we can shift traffic towards version 2 gradually by increasing its weight step by step. This allows version 2 to be verified against a small portion of the production traffic before more traffic is moved over. The following example shifts 10 percent of the traffic to version 2, while 90 percent of the traffic remains with version 1.
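A sketch of the corresponding annotation, again changing only the weights:

```yaml
    alb.ingress.kubernetes.io/actions.blue-green: |
      {
        "type": "forward",
        "forwardConfig": {
          "targetGroups": [
            { "serviceName": "hello-kubernetes-v1", "servicePort": "80", "weight": 90 },
            { "serviceName": "hello-kubernetes-v2", "servicePort": "80", "weight": 10 }
          ]
        }
      }
```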
Deploy the following ingress resource:
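Applying the updated manifest as before:

```bash
kubectl apply -f hello-kubernetes-ingress.yaml
```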
Verify the responses from the load balancer endpoint:
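Counting the responses over a number of requests should show roughly the configured 90/10 split:

```bash
for i in $(seq 1 50); do
  curl -s "http://${LB_ENDPOINT}" | grep -o 'hello-kubernetes-v[12]' | head -n 1
done | sort | uniq -c
```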
Argo Rollouts
When performing a canary deployment in a production environment, the traffic is typically shifted in small increments, usually with some level of automation behind it. Various performance monitoring systems can also be integrated into this process, making sure that at every step there are no errors, or that errors stay below an acceptable threshold. This is where progressive delivery mechanisms such as Argo Rollouts are very beneficial.
Argo Rollouts offers first class support for using the annotation-based traffic shaping abilities of AWS Load Balancer Controller to gradually shift traffic to the new version during an update. Additionally, Argo Rollouts can query and interpret metrics from various providers to verify key KPIs and drive automated promotion or rollback during an update. More information is available at Argo Rollouts integration with Application Load Balancer.
Deploy ingress and test A/B testing
Ingress annotation alb.ingress.kubernetes.io/conditions.${conditions-name} provides a method for specifying routing conditions in addition to the original host/path condition on the ingress spec. The additional routing conditions can be based on http-header, http-request-method, query-string, and source-ip. This gives developers multiple advanced routing options for their A/B testing implementation, without the need to set up and manage a separate routing system, such as a service mesh.
AWS Load Balancer Controller configures the listener rules as per the annotation to direct a portion of incoming traffic to a specific backend. In the following example, all requests are directed to version 1 by default. The following ingress resource directs the traffic to version 2 when the request contains a custom HTTP header: HeaderName=HeaderValue1.
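A minimal sketch of this ingress; the forward-to-v2 name is illustrative, and the conditions annotation must use the same name as the action and the backend service it applies to:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # Custom action: forward matching requests to version 2
    alb.ingress.kubernetes.io/actions.forward-to-v2: |
      {
        "type": "forward",
        "forwardConfig": {
          "targetGroups": [
            { "serviceName": "hello-kubernetes-v2", "servicePort": "80", "weight": 100 }
          ]
        }
      }
    # Custom condition: the rule only matches requests carrying HeaderName=HeaderValue1
    alb.ingress.kubernetes.io/conditions.forward-to-v2: |
      [
        {
          "field": "http-header",
          "httpHeaderConfig": { "httpHeaderName": "HeaderName", "values": ["HeaderValue1"] }
        }
      ]
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: forward-to-v2        # matches the actions./conditions. annotation name
                port:
                  name: use-annotation
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-kubernetes-v1  # default rule: all other requests go to version 1
                port:
                  number: 80
```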
Deploy the following ingress resource:
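Applying the manifest above, saved over the same illustrative file name:

```bash
kubectl apply -f hello-kubernetes-ingress.yaml
```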
Verify the responses from the load balancer endpoint:
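A quick check with and without the custom header, reusing the LB_ENDPOINT variable:

```bash
# Without the header, requests are served by version 1
curl -s "http://${LB_ENDPOINT}" | grep -o 'hello-kubernetes-v[12]' | head -n 1

# With the header, requests are routed to version 2
curl -s -H 'HeaderName: HeaderValue1' "http://${LB_ENDPOINT}" | grep -o 'hello-kubernetes-v[12]' | head -n 1
```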
Cleanup
To avoid orphaned resources in your VPC that would prevent you from deleting the VPC, delete the Ingress resource first; this decommissions the Application Load Balancer.
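A sketch of the cleanup, assuming the resource, cluster, and Region names used in this walkthrough:

```bash
# Delete the ingress first so the controller decommissions the Application Load Balancer
kubectl delete ingress hello-kubernetes

# Then remove the sample applications and, if no longer needed, the cluster
kubectl delete deployment,service hello-kubernetes-v1 hello-kubernetes-v2
eksctl delete cluster --name dev --region region-code
```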
Conclusion
There are various ways of adopting blue/green deployment, canary deployment, and A/B testing. This article demonstrated how to achieve these deployment strategies with the native AWS Application Load Balancer by managing Kubernetes ingress resources with the AWS Load Balancer Controller.