Containers

Using AWS Load Balancer Controller for blue/green deployment, canary deployment and A/B testing

In the past, our customers have commonly used solutions such as Flagger, a service mesh, or their CI/CD pipelines to enable blue/green deployment, A/B testing, and traffic management. The AWS Load Balancer Controller (formerly known as the ALB Ingress Controller) lets Amazon EKS users realize blue/green deployments, canary deployments, and A/B testing through Kubernetes ingress resources, with native support from the AWS Application Load Balancer.

In this blog post, we introduce AWS Application Load Balancer weighted target groups and advanced request routing, and show how to manage these configurations through Kubernetes ingress resources.

Solution overview

Weighted target group

To help AWS customers adopt blue/green and canary deployments and A/B testing strategies, AWS announced weighted target groups for Application Load Balancers in November 2019. Multiple target groups can be attached to the same forward action of a listener rule, each with its own weight, which lets developers control how traffic is distributed across multiple versions of their application. For example, when you define a rule with two target groups weighted 8 and 2, the load balancer routes 80 percent of the traffic to the first target group and 20 percent to the other.
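To make the weighting concrete, the following stand-alone Bash sketch simulates how a forward action with weights 8 and 2 distributes requests. The target group names are hypothetical, and this only illustrates the resulting proportions, not the load balancer's internal algorithm:

```shell
#!/usr/bin/env bash
# Simulate 1000 requests against a forward action whose two (hypothetical)
# target groups carry weights 8 and 2: each request draws a number in 0..9
# and lands on "blue" for 0..7 (weight 8) or "green" for 8..9 (weight 2).
blue=0; green=0
for i in $(seq 1 1000); do
  if [ $((RANDOM % 10)) -lt 8 ]; then
    blue=$((blue + 1))
  else
    green=$((green + 1))
  fi
done
echo "blue=$blue green=$green"   # roughly 800 / 200
```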

Advanced request routing

In addition to weighted target groups, AWS announced the advanced request routing feature in 2019. Advanced request routing gives developers the ability to write rules (and route traffic) based on standard and custom HTTP headers and methods, the request path, the query string, and the source IP address. This feature simplifies application architectures by eliminating the need for a proxy fleet dedicated to routing, blocks unwanted traffic at the load balancer, and enables the implementation of A/B testing.
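Under the hood, this maps to listener rules with additional conditions. As an illustration only, a rule that matches a custom HTTP header could be created directly with the AWS CLI; the listener and target group ARNs below are placeholders, and the header name and value are arbitrary examples:

```shell
aws elbv2 create-rule \
  --listener-arn <your-listener-arn> \
  --priority 10 \
  --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"HeaderName","Values":["HeaderValue1"]}}]' \
  --actions '[{"Type":"forward","TargetGroupArn":"<your-target-group-arn>"}]'
```

Later in this post, the AWS Load Balancer Controller creates an equivalent rule for us from an ingress annotation.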

AWS Load Balancer Controller

The AWS Load Balancer Controller manages Elastic Load Balancers for a Kubernetes cluster. It satisfies Kubernetes ingress resources by provisioning Application Load Balancers, and annotations on the ingress objects customize the behavior of the provisioned load balancers. This allows developers to configure an Application Load Balancer and realize blue/green, canary, and A/B deployments using Kubernetes-native semantics. For example, the following ingress annotation configures the Application Load Balancer to split traffic between two versions of an application:

annotations:
  ...
  alb.ingress.kubernetes.io/actions.blue-green: |
    {
      "type":"forward",
      "forwardConfig":{
        "targetGroups":[
          {
            "serviceName":"hello-kubernetes-v1",
            "servicePort":"80",
            "weight":50
          },
          {
            "serviceName":"hello-kubernetes-v2",
            "servicePort":"80",
            "weight":50
          }
        ]
      }
    }

Diagram of ALB managed by AWS Load Balancer Controller via ingress

Walkthrough

Prerequisites

  • A good understanding of AWS Application Load Balancer, Amazon EKS, and Kubernetes.
  • The Amazon EKS command-line tool, eksctl.
  • The Kubernetes command-line tool, kubectl, and the Helm package manager.

Create your EKS cluster with eksctl

Create your Amazon EKS cluster with the following command. You can replace the cluster name dev with your own value, and ap-southeast-2 with any Region that is supported by Amazon EKS.

More details can be found in Getting started with Amazon EKS – eksctl.

$ eksctl create cluster --name dev --region ap-southeast-2
......
2021-12-31 16:15:04 [ℹ]  using region ap-southeast-2
2021-12-31 16:15:05 [ℹ]  setting availability zones to [ap-southeast-2a ap-southeast-2c ap-southeast-2b]
2021-12-31 16:15:05 [ℹ]  subnets for ap-southeast-2a - public:192.168.0.0/19 private:192.168.96.0/19
2021-12-31 16:15:05 [ℹ]  subnets for ap-southeast-2c - public:192.168.32.0/19 private:192.168.128.0/19
2021-12-31 16:15:05 [ℹ]  subnets for ap-southeast-2b - public:192.168.64.0/19 private:192.168.160.0/19
......
2021-12-31 16:31:36 [ℹ]  node "ip-192-168-23-165.ap-southeast-2.compute.internal" is ready
2021-12-31 16:31:36 [ℹ]  node "ip-192-168-79-33.ap-southeast-2.compute.internal" is ready
2021-12-31 16:31:38 [ℹ]  kubectl command should work with "/Users/<username>/.kube/config", try 'kubectl get nodes'
2021-12-31 16:31:38 [✔]  EKS cluster "dev" in "ap-southeast-2" region is ready

Install AWS Load Balancer Controller

Install the latest version of AWS Load Balancer Controller as documented here: Installing the AWS Load Balancer Controller add-on.

Verify the AWS Load Balancer Controller has been deployed:

$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           5m34s

Deploy the sample application version 1 and version 2

The sample application used here is hello-kubernetes. Deploy two versions of the applications with custom messages and set the service type to ClusterIP:

$ git clone https://github.com/paulbouwer/hello-kubernetes.git
$ helm install --create-namespace --namespace hello-kubernetes v1 \
  ./hello-kubernetes/deploy/helm/hello-kubernetes \
  --set message="You are reaching hello-kubernetes version 1" \
  --set ingress.configured=true \
  --set service.type="ClusterIP"
NAME: v1
LAST DEPLOYED: Sat Jan  1 15:12:57 2022
NAMESPACE: hello-kubernetes
STATUS: deployed
REVISION: 1
TEST SUITE: None

$ helm install --create-namespace --namespace hello-kubernetes v2 \
  ./hello-kubernetes/deploy/helm/hello-kubernetes \
  --set message="You are reaching hello-kubernetes version 2" \
  --set ingress.configured=true \
  --set service.type="ClusterIP"
NAME: v2
LAST DEPLOYED: Sat Jan  1 15:13:26 2022
NAMESPACE: hello-kubernetes
STATUS: deployed
REVISION: 1
TEST SUITE: None

Deploy ingress and test the blue/green deployment

The ingress annotation alb.ingress.kubernetes.io/actions.${action-name} provides a method for configuring custom actions, such as redirect and forward actions, on the listener of an Application Load Balancer. With a forward action, multiple target groups with different weights can be defined in the annotation, and the AWS Load Balancer Controller provisions the target groups and configures the listener rules accordingly to direct the traffic. For example, the following ingress resource configures the Application Load Balancer to forward all traffic to the hello-kubernetes-v1 service (weights 100 and 0).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "hello-kubernetes"
  namespace: "hello-kubernetes"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/actions.blue-green: |
      {
        "type":"forward",
        "forwardConfig":{
          "targetGroups":[
            {
              "serviceName":"hello-kubernetes-v1",
              "servicePort":"80",
              "weight":100
            },
            {
              "serviceName":"hello-kubernetes-v2",
              "servicePort":"80",
              "weight":0
            }
          ]
        }
      }
  labels:
    app: hello-kubernetes
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blue-green
                port:
                  name: use-annotation

Note that the action-name in the annotation (blue-green here) must match the backend service name in the ingress rules, and the backend service port name must be use-annotation, as in the previous code snippet.

Save the preceding ingress resource as ingress.yaml, deploy it, and wait about two minutes for the Application Load Balancer to be created and configured by the AWS Load Balancer Controller.

Diagram of ALB managed by AWS Load Balancer Controller via ingress

Verify that responses from the load balancer endpoint are always from application version 1:

$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/hello-kubernetes configured

$ ELB_URL=$(kubectl get ingress -n hello-kubernetes -o=jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
$ while true; do curl -s $ELB_URL | grep version; sleep 1; done
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1

Blue/green deployment

To perform the blue/green deployment, update the ingress annotation to move all weight to version 2:

Diagram showing Application Load Balancer sends all traffic to application version 2

Update the actions.blue-green annotation in the ingress resource as follows:

alb.ingress.kubernetes.io/actions.blue-green: |
  {
    "type":"forward",
    "forwardConfig":{
      "targetGroups":[
        {
          "serviceName":"hello-kubernetes-v1",
          "servicePort":"80",
          "weight":0
        },
        {
          "serviceName":"hello-kubernetes-v2",
          "servicePort":"80",
          "weight":100
        }
      ]
    }
  }

Verify that the responses from the load balancer endpoint are now always from application version 2:

$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/hello-kubernetes configured

$ while true; do curl -s $ELB_URL | grep version; sleep 1; done
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2

Deploy ingress and test the canary deployment

Instead of moving all traffic to version 2 at once, we can shift traffic towards version 2 gradually by increasing its weight step by step. This allows version 2 to be verified with a small portion of the production traffic before more traffic is moved over. In the following example, 10 percent of the traffic is shifted to version 2, while 90 percent remains with version 1.

Diagram of canary deployment with 10 percent of traffic to version 2

Update the actions.blue-green annotation in the ingress resource as follows:

alb.ingress.kubernetes.io/actions.blue-green: |
  {
    "type":"forward",
    "forwardConfig":{
      "targetGroups":[
        {
          "serviceName":"hello-kubernetes-v1",
          "servicePort":"80",
          "weight":90
        },
        {
          "serviceName":"hello-kubernetes-v2",
          "servicePort":"80",
          "weight":10
        }
      ]
    }
  }

Verify the responses from the load balancer endpoint:

$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/hello-kubernetes configured

$ while true; do curl -s $ELB_URL | grep version; sleep 1; done
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
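Rather than eyeballing individual responses, the split can be roughly quantified by tallying the responses over, say, 100 requests (reusing the ELB_URL variable captured earlier; the observed counts will vary around 90/10):

```shell
for i in $(seq 1 100); do curl -s "$ELB_URL" | grep -o 'version [12]'; done | sort | uniq -c
```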

Argo Rollouts

When performing a canary deployment in a production environment, traffic is typically shifted in small increments, usually with some level of automation behind it. Performance monitoring systems can also be integrated into this process to make sure that, at every step, there are no errors or that errors remain below an acceptable threshold. This is where progressive delivery mechanisms such as Argo Rollouts are very beneficial.

Argo Rollouts offers first-class support for using the annotation-based traffic-shaping abilities of the AWS Load Balancer Controller to gradually shift traffic to the new version during an update. Additionally, Argo Rollouts can query and interpret metrics from various providers to verify key performance indicators (KPIs) and drive automated promotion or rollback during an update. More information is available at Argo Rollouts integration with Application Load Balancer.
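As a sketch of what this integration looks like, a Rollout resource can point Argo Rollouts at the ingress managed by the AWS Load Balancer Controller and step through canary weights automatically. The manifest below is illustrative only: it assumes the service and ingress names from this walkthrough, omits the pod template, and uses arbitrary weights and pause durations; refer to the Argo Rollouts documentation for the full specification.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: hello-kubernetes
  namespace: hello-kubernetes
spec:
  # selector and pod template omitted for brevity
  strategy:
    canary:
      stableService: hello-kubernetes-v1   # stable (blue) service from this walkthrough
      canaryService: hello-kubernetes-v2   # canary (green) service from this walkthrough
      trafficRouting:
        alb:
          ingress: hello-kubernetes        # ingress managed by AWS Load Balancer Controller
          servicePort: 80
      steps:
        - setWeight: 10                    # shift 10% of traffic to the canary
        - pause: {duration: 10m}           # wait (and verify metrics) before continuing
        - setWeight: 50
        - pause: {duration: 10m}
```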

Deploy ingress and test A/B testing

The ingress annotation alb.ingress.kubernetes.io/conditions.${conditions-name} provides a method for specifying routing conditions in addition to the original host/path conditions on the ingress spec. The additional routing conditions can be based on http-header, http-request-method, query-string, and source-ip. This gives developers multiple advanced routing options for their A/B testing implementation, without the need to set up and manage a separate routing system, such as a service mesh.

The AWS Load Balancer Controller configures the listener rules as per the annotation to direct a portion of incoming traffic to a specific backend. In the following example, all requests are directed to version 1 by default; the ingress resource directs traffic to version 2 only when the request contains a custom HTTP header HeaderName: HeaderValue1.

Diagram of managed ALB showing HeaderName: HeaderValue1

Deploy the following ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "hello-kubernetes"
  namespace: "hello-kubernetes"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/conditions.ab-testing: >
      [{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "HeaderName", "values":["HeaderValue1"]}}]
    alb.ingress.kubernetes.io/actions.ab-testing: >
      {"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"hello-kubernetes-v2","servicePort":80}]}}
  labels:
    app: hello-kubernetes
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ab-testing
                port:
                  name: use-annotation
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-kubernetes-v1
                port:
                  name: http

Verify the responses from the load balancer endpoint:

$ kubectl apply -f ingress-ab.yaml
ingress.networking.k8s.io/hello-kubernetes configured

$ while true; do curl -s $ELB_URL | grep version; sleep 1; done
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1
  You are reaching hello-kubernetes version 1

$ while true; do curl -s -H "HeaderName: HeaderValue1" $ELB_URL | grep version; sleep 1; done
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2
  You are reaching hello-kubernetes version 2

Cleanup

To avoid orphaned resources in your VPC that would prevent you from deleting the VPC, delete the ingress resource first; this decommissions the Application Load Balancer.

$ kubectl delete ing hello-kubernetes -n hello-kubernetes
ingress.networking.k8s.io "hello-kubernetes" deleted

$ eksctl delete cluster --name dev
2022-01-03 19:33:48 [ℹ]  eksctl version 0.79.0
2022-01-03 19:33:48 [ℹ]  using region ap-southeast-2
2022-01-03 19:33:48 [ℹ]  deleting EKS cluster "dev"
......
2022-01-03 19:37:59 [ℹ]  will delete stack "eksctl-dev-cluster"
2022-01-03 19:37:59 [✔]  all cluster resources were deleted

Conclusion

There are various ways of adopting blue/green deployments, canary deployments, and A/B testing. This article demonstrates how to achieve these deployment strategies with the native features of the AWS Application Load Balancer by managing Kubernetes ingress resources with the AWS Load Balancer Controller.

Xin Chen


Xin Chen is a Cloud Architect at AWS, focusing on containers and serverless platforms. He engages with customers to create innovative solutions that address their business problems and accelerate the adoption of AWS services. In his spare time, Xin enjoys spending time with his family, reading books, and watching movies.

Haofei Feng


Haofei is a Senior Cloud Architect at AWS with 16+ years of experience in containers, DevOps, and IT infrastructure. He enjoys helping customers with their cloud journey and is keen to help them design and build scalable, secure, and optimized container workloads on AWS. In his spare time, he spends time with his family and his lovely Border Collies. Haofei is based in Sydney, Australia.

Anish Kumar


Anish Kumar is a Cloud Architect at AWS with years of experience in containers, DevOps, and infrastructure development. Anish helps customers build secure and scalable solutions by adopting AWS services.