Exposing Kubernetes Applications, Part 2: AWS Load Balancer Controller
Introduction
The Exposing Kubernetes Applications series focuses on ways to expose applications running in a Kubernetes cluster for external access.
In Part 1 of the series, we explored Service and Ingress resource types that define two ways to control the inbound traffic in a Kubernetes cluster. We discussed the handling of these resource types via Service and ingress controllers, followed by an overview of advantages and drawbacks of some of the controllers’ implementation variants.
In this post, Part 2, we provide an overview of the AWS open-source implementation of an ingress controller, the AWS Load Balancer Controller. We demonstrate the controller's setup, configuration, possible use cases, and limitations.
Part 3 is dedicated to a similar walkthrough of an additional open-source implementation of an ingress controller, Ingress-Nginx Controller, and some of the ways it’s different from its AWS counterpart.
AWS Load Balancer Controller Overview
In Part 1 of the series, we focused on two approaches to exposing Kubernetes applications: an external load balancer that routes the traffic directly to the application's Pods and an in-cluster reverse proxy that serves as the single entry point for the application and routes the traffic to the Pods.
AWS Load Balancer Controller represents the first approach, which we showed schematically in the following diagram:
Note that the AWS Load Balancer Controller also contains a Service controller. We will see examples of its usage in Part 3 of the series.
In the diagram below, we see the step-by-step process of exposing an application behind an Ingress:
- The alb-ingress-controller watches for Ingress events.
- An ALB is managed for each Ingress object. It is created, configured, and deleted as required.
- Target groups are created, with instance (ServiceA and ServiceB) or ip (ServiceC) modes.
- The ALB listeners are created and configured.
- Rules are configured to forward traffic from the listeners to the target groups.
In Part 1, we outlined the benefits of outsourcing the work to a managed, highly available, and scalable load balancing service like AWS Load Balancer. In this post (Part 2), we walk through the setup, configuration, and code examples that illustrate the usage of the AWS Load Balancer Controller.
Walkthrough
Prerequisites
1. Obtain Access to an AWS Account
You will need an AWS account and the ability to communicate with it from your terminal, using the AWS Command Line Interface (AWS CLI) and similar tools.
In the code examples below, we encounter several tokens that can't be given synthetic values (e.g., those referring to your AWS account ID or Region). These should be replaced with values that match your environment, as illustrated below.
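A minimal sketch of setting them as environment variables; the Region value is only an example, while the account ID is looked up via the AWS CLI:

# Example Region: replace with the one you use
export AWS_REGION=eu-west-1
# Current account ID, resolved through AWS STS
export AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)

Later commands in the walkthrough substitute these variables into the manifests with envsubst.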
2. Create AWS Load Balancer Controller Identity and Access Management (AWS IAM) Policy
Create the AWSLoadBalancerControllerIAMPolicy using the following instructions (steps #2 and #3 only), which set up IAM Roles for Service Accounts to provide permissions for the controller; a sketch of those steps appears after the note below.
Note that the OIDC IAM provider registration and the AWS Load Balancer Controller service account creation are handled automatically by eksctl, based on the configuration below, and do not need to be performed explicitly.
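As a sketch of those two steps, you download the IAM policy document published with the controller and create the policy from it. The release tag below is an assumption; use the version that matches your controller:

# Download the IAM policy document shipped with the controller (version tag is an assumption)
curl -o iam_policy.json \
  https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json

# Create the policy referenced by the eksctl configuration below
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json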
3. Create the Cluster
We use eksctl to provision an Amazon EKS cluster. In addition to creating the cluster itself, it also provisions and configures the necessary network resources: a virtual private cloud (VPC), subnets, and security groups (see here for installation instructions).
The following eksctl configuration file defines the Amazon EKS cluster and its settings:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: aws-load-balancer-controller-walkthrough
  region: ${AWS_REGION}
  version: '1.23'

iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      attachPolicyARNs:
        - arn:aws:iam::${AWS_ACCOUNT}:policy/AWSLoadBalancerControllerIAMPolicy

managedNodeGroups:
  - name: main-ng
    instanceType: m5.large
    desiredCapacity: 1
    privateNetworking: true
Put the code above in the config.yml file.
Verify that the AWS_REGION and AWS_ACCOUNT environment variables are set, then create the cluster:
envsubst < config.yml | eksctl create cluster -f -
Note that this walkthrough uses Amazon EKS platform version eks.3 for Kubernetes version 1.23.
For brevity, the configuration above doesn't cover many aspects of Kubernetes cluster provisioning and management, such as security and monitoring. For more information and best practices, explore the Amazon EKS and eksctl documentation.
Verify that the cluster is up and running:
kubectl get nodes
kubectl get pods -A
The commands above should return a single Amazon EKS node and four running Pods.
4. Install Helm
We use Helm, a popular package manager for Kubernetes, to install and configure the controller. Follow Helm installation instructions here.
Install the AWS Load Balancer Controller
1. Install the CustomResourceDefinitions (CRDs)
The following installs custom resource definitions necessary for the controller to function:
kubectl apply -k \
"github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
2. Install the Controller Using Helm
Note that we use the service account created automatically by eksctl.
helm repo add eks https://aws.github.io/eks-charts
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
--namespace kube-system \
--set clusterName=aws-load-balancer-controller-walkthrough \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
kubectl -n kube-system rollout status deployment aws-load-balancer-controller
kubectl get deployment -n kube-system aws-load-balancer-controller
Deploy the Testing Services
1. Create the Services’ Namespace
kubectl create namespace apps
2. Create the Service Manifest File
Place the following code in the service.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${SERVICE_NAME}
  namespace: ${NS}
  labels:
    app.kubernetes.io/name: ${SERVICE_NAME}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ${SERVICE_NAME}
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ${SERVICE_NAME}
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: ${SERVICE_NAME}
          image: hashicorp/http-echo
          imagePullPolicy: IfNotPresent
          args:
            - -listen=:3000
            - -text=${SERVICE_NAME}
          ports:
            - name: app-port
              containerPort: 3000
          resources:
            requests:
              cpu: 0.125
              memory: 50Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ${SERVICE_NAME}
  namespace: ${NS}
  labels:
    app.kubernetes.io/name: ${SERVICE_NAME}
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: ${SERVICE_NAME}
  ports:
    - name: svc-port
      port: 80
      targetPort: app-port
      protocol: TCP
The Service above, based on the http-echo image, answers any request with the name of the Service, as defined by the ${SERVICE_NAME} token. We also define a single replica for simplicity.
3. Deploy and Verify the Services
Execute the following commands (we will use these Services throughout the post):
SERVICE_NAME=first NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=second NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=third NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=fourth NS=apps envsubst < service.yml | kubectl apply -f -
SERVICE_NAME=error NS=apps envsubst < service.yml | kubectl apply -f -
Verify that all the resources are deployed:
kubectl get pod,svc -n apps
This should produce an output similar to the following:
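Optionally, before creating an Ingress, you can check one of the Services from inside the cluster. A minimal sketch, assuming the curlimages/curl image is available in your environment:

# Run a throwaway Pod in the apps namespace and query the ClusterIP Service named "first"
kubectl -n apps run curl-check --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -s http://first

The command should print first, the text configured for that Service.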
Deploy a Simple Ingress
1. Create the Ingress Manifest File
Copy the following code into the ingress.yml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${NS}-ingress
  namespace: ${NS}
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: ${NS}-ingress
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /first
            pathType: Prefix
            backend:
              service:
                name: first
                port:
                  name: svc-port
          - path: /second
            pathType: Prefix
            backend:
              service:
                name: second
                port:
                  name: svc-port
For a brief overview of Ingress, you can refer to Part 1 of the series or explore the full specification and overview in the official Kubernetes documentation.
The code above does several things that provide additional information for the AWS Load Balancer Controller:
- We set the ingressClassName to alb, which indicates to the controller that it should handle this Ingress resource.
- We define, via annotations:
  - the name of the load balancer the controller will create
  - the load balancer target type to be ip (i.e., the Pods themselves are registered as targets)
  - the load balancer to be internet-facing
  - the health check path
Before Kubernetes 1.19, you could also define the Ingress class via an Ingress annotation of the form kubernetes.io/ingress.class: alb, which has since been deprecated. The annotation is still operational, but ingressClassName takes precedence over it.
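For illustration, the deprecated form looks like this (abridged; the resource name is hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-class-ingress   # hypothetical name
  annotations:
    # deprecated since Kubernetes 1.19; ingressClassName takes precedence if both are set
    kubernetes.io/ingress.class: alb
spec:
  ...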
See here for the full list of available annotations.
2. Deploy the Ingress Resource
Execute:
NS=apps envsubst < ingress.yml | kubectl apply -f -
After a while, we can view the state of the deployed Ingress resource (we replaced the ID and the region with a placeholder):
kubectl get ingress -n apps
This produces a result similar to the following:
Store the Application Load Balancer URL:
export ALB_URL=$(kubectl get -n apps ingress/apps-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
After a couple of minutes, the load balancer is provisioned and we can send requests:
curl ${ALB_URL}/first
curl ${ALB_URL}/second
This result indicates that the Services and the Ingress are functioning properly:
As it does for each Ingress resource it handles, the AWS Load Balancer Controller provisioned an Application Load Balancer named apps-ingress, configured its listener rules, and created the target groups with their registered targets:
Best practices for the AWS Load Balancer Controller require at least two public and two private subnets across different Availability Zones. The nodes of our Amazon EKS cluster reside in the private subnets, while the public subnets are used by the internet-facing load balancer, which routes traffic to the Pod IPs in the private subnets.
The controller can automatically discover these subnets based on their tags:
- kubernetes.io/role/elb with a value of 1 or '' for public subnets
- kubernetes.io/role/internal-elb with a value of 1 or '' for private subnets
We don't need to explicitly create these tags, because the eksctl tool, which we used to provision the cluster, did it automatically. We can also explicitly set the relevant subnets using the alb.ingress.kubernetes.io/subnets annotation.
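If you manage the VPC yourself, tagging the subnets for discovery might look like the following sketch (the subnet IDs are hypothetical):

# Tag a public subnet for internet-facing load balancers
aws ec2 create-tags --resources subnet-0aaa1111bbb22222c \
  --tags Key=kubernetes.io/role/elb,Value=1

# Tag a private subnet for internal load balancers
aws ec2 create-tags --resources subnet-0ddd3333eee44444f \
  --tags Key=kubernetes.io/role/internal-elb,Value=1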
If you were to open one of the target groups above, you would see the registered targets, the Pod port, and the targets' health check configuration, as defined by the alb.ingress.kubernetes.io/healthcheck-path annotation in the Ingress definition above, reflected in the provisioned target group. The controller has also updated the cluster security groups to allow traffic from the load balancer nodes to our Pods.
You may have noticed in the listener configuration above that, for requests that don't match either of the specific paths, there is a default, catch-all rule that returns a fixed 404 response.
You can configure this further via additional annotations, or you can explicitly define a default backend in the Ingress configuration.
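As a sketch of the annotation-based approach, the controller supports actions.<action-name> annotations that can return a fixed response; the action name fallback-404 below is hypothetical, and a rule references it as a backend with the special port name use-annotation:

metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.fallback-404: >
      {"type":"fixed-response","fixedResponseConfig":{"contentType":"text/plain","statusCode":"404","messageBody":"not found"}}
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fallback-404      # must match the action name in the annotation
                port:
                  name: use-annotation  # tells the controller to use the action defined above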
Default Backend
1. Update the Ingress
Add the defaultBackend
configuration to the ingress.yml
file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${NS}-ingress
  namespace: ${NS}
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: ${NS}-ingress
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: error
      port:
        name: svc-port
  rules:
    - http:
        paths:
          - path: /first
            pathType: Prefix
            backend:
              service:
                name: first
                port:
                  name: svc-port
          - path: /second
            pathType: Prefix
            backend:
              service:
                name: second
                port:
                  name: svc-port
2. Deploy the Updated Ingress
NS=apps envsubst < ingress.yml | kubectl apply -f -
After a very short while, the Application Load Balancer listener rules configuration is updated accordingly:
Running the following commands verifies the listener rules configuration:
curl ${ALB_URL}
curl ${ALB_URL}/first
curl ${ALB_URL}/first/something
curl ${ALB_URL}/second
curl ${ALB_URL}/third
curl ${ALB_URL}/something
This produces the following output:
Hostname-Based Routing
In the Ingress definition above, we didn't specify any host setting for any of the rules, which means that any request, with any value of the Host header, sent to the Application Load Balancer is matched against these rules.
We can combine path-based and host-based routing in the Ingress definition.
1. Alter the Ingress Definition
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${NS}-ingress
  namespace: ${NS}
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: ${NS}-ingress
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: error
      port:
        name: svc-port
  rules:
    - host: a.example.com
      http:
        paths:
          - path: /first
            pathType: Prefix
            backend:
              service:
                name: first
                port:
                  name: svc-port
          - path: /second
            pathType: Prefix
            backend:
              service:
                name: second
                port:
                  name: svc-port
    - host: '*.example.com'
      http:
        paths:
          - path: /third
            pathType: Prefix
            backend:
              service:
                name: third
                port:
                  name: svc-port
          - path: /fourth
            pathType: Prefix
            backend:
              service:
                name: fourth
                port:
                  name: svc-port
We used a wildcard so that requests for hosts other than a.example.com are routed according to the rules defined under *.example.com (see here for additional wildcard considerations).
2. Deploy the Updated Ingress
NS=apps envsubst < ingress.yml | kubectl apply -f -
The load balancer configuration is updated as expected:
We can test the setup using curl (note the incorrect domain on the second request):
curl ${ALB_URL}/first -H 'Host: a.example.com'
curl ${ALB_URL}/first -H 'Host: a.example.net'
curl ${ALB_URL}/third -H 'Host: b.example.com'
curl ${ALB_URL}/fourth -H 'Host: c.example.com'
The expected result is:
In a production setup, we would, of course, point the DNS records for the subdomains above to the ALB DNS name.
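As a sketch of that step, assuming a Route 53 hosted zone for example.com (the hosted zone ID below is hypothetical), a record could be upserted with the AWS CLI:

# Write a change batch that points a.example.com at the ALB with a CNAME record
# (an alias record is the more idiomatic choice for ALBs)
cat > dns-record.json <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "a.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "${ALB_URL}"}]
    }
  }]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch file://dns-record.json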
Note that from this point onwards the code examples are for illustration purposes only and are not a part of the walkthrough. You can deploy them as you see fit and explore their impact.
Multiple Ingress Resources
So far, we’ve dealt with a single Ingress resource using ALB to route traffic to backend Services in the same namespace. What if we need multiple Ingress resources, spread across many namespaces? Do we have to provision an ALB for each one of them?
The answer is NO.
We can configure multiple Ingress resources to be handled by the same load balancer. This is done by using the alb.ingress.kubernetes.io/group.name annotation with the same value on all of the Ingress resources. We can also control the merging order by adding the alb.ingress.kubernetes.io/group.order annotation.
One possible use case for such a consolidation is to reduce dependencies between teams, each responsible for a subset of Services, while still being able to use the same centrally provisioned load balancer.
For example, consider the following Ingress resources (abridged for clarity):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
  namespace: apps
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: apps-ingress
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/group.name: common-ingress-group
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: error
      port:
        name: svc-port
  rules:
    - host: a.example.com
      ...
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ops-ingress
  namespace: ops
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: common-ingress-group
    alb.ingress.kubernetes.io/group.order: '10'
spec:
  ingressClassName: alb
  rules:
    - host: a.example.com
      ...
    - host: b.example.com
      ...
They will be merged under the same load balancer:
The order we defined for the second Ingress resource is 10, which is higher (and thus has lower priority) than the default order of 0 used by the first Ingress resource. Hence, the a.example.com routes from the second Ingress end up lower on the rules list (see /first pointing to the first.apps Service in rule #1 and to the third.ops Service in rule #3).
Note that we need to define target-type on both Ingress resources, and we can only define a default backend once.
IngressClass
The ingressClassName property in all of the Ingress resource definitions above is a reference to a cluster-wide IngressClass resource. Indeed, alb is the default value of the controller's ingressClass configuration setting, and an IngressClass with that name is installed together with the controller.
It refers to a definition similar to the following:
It refers to a definition similar to the following:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
  ...
spec:
  controller: ingress.k8s.aws/alb
We can define our own Ingress classes in the same manner.
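For example, a hypothetical second class that is still reconciled by the AWS Load Balancer Controller could look like this:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-secondary   # hypothetical class name
spec:
  # any IngressClass whose controller value matches is handled by the AWS Load Balancer Controller
  controller: ingress.k8s.aws/alb

Ingress resources would then reference it via ingressClassName: alb-secondary.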
Default IngressClass
In addition to being able to reference an IngressClass from within an Ingress resource, we can also define a default IngressClass and remove the need to reference one altogether. This can be done by adding the ingressclass.kubernetes.io/is-default-class annotation to an IngressClass definition:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
  annotations:
    ingressclass.kubernetes.io/is-default-class: 'true'
  ...
spec:
  controller: ingress.k8s.aws/alb
All Ingress resources that don't reference any IngressClass are handled by the controller defined in the controller field above.
IngressClassParams
Among the custom resource definitions that we installed along with the controller is IngressClassParams. Its purpose is to pass additional parameters to the controller, some of which were previously handled via annotations. In fact, we can move the group and scheme annotations into it as well. Additionally, we can define a namespaceSelector that controls which Ingress resources, based on their namespace, are allowed to use the specific IngressClassParams:
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: alb-ingress-class-params
  ...
spec:
  namespaceSelector:
    matchLabels:
      team: some-team
  group:
    name: common-ingress-group
  scheme: internet-facing
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
  ...
spec:
  controller: ingress.k8s.aws/alb
  parameters:
    apiGroup: elbv2.k8s.aws
    kind: IngressClassParams
    name: alb-ingress-class-params
Limiting the Controller Scope
When we deployed the AWS Load Balancer Controller at the beginning of the walkthrough, we didn't explicitly limit which Ingress and Service resources it would handle. Alternatively, the controller can be scoped to watch Ingress and Service resources in a specific namespace:
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=aws-load-balancer-controller-walkthrough \
--set serviceAccount.create=false \
--set watchNamespace=apps \
--set serviceAccount.name=aws-load-balancer-controller
The AWS Load Balancer Controller can either watch a specific namespace, by providing a watchNamespace configuration value, or watch all namespaces, by omitting it. Currently, there is no option to watch several specific namespaces.
Multiple AWS Load Balancer Controller Instances
Given the above ability to limit the controller to a specific namespace, it may make sense to deploy multiple instances of the AWS Load Balancer Controller, each limited to a specific namespace and with a different configuration. Currently, the AWS Load Balancer Controller doesn't support multiple instances, but you can track the following issue for progress.
Cleanup
To remove the resources that you created during the walkthrough, execute the following:
NS=apps envsubst < ingress.yml | kubectl delete -f -
helm uninstall -n kube-system aws-load-balancer-controller
envsubst < config.yml | eksctl delete cluster -f -
aws iam delete-policy --policy-arn arn:aws:iam::${AWS_ACCOUNT}:policy/AWSLoadBalancerControllerIAMPolicy
Conclusion
The AWS Load Balancer Controller reduces operational complexity by offloading ingress traffic handling to a highly available and elastically scalable managed service, the AWS Application Load Balancer. It translates Ingress resources into load balancer provisioning and configuration, giving applications control over all aspects of handling their ingress traffic, independently of infrastructure provisioning processes.
Because it provides a Service controller as well as an ingress controller, the AWS Load Balancer Controller is a complete solution for exposing Kubernetes applications to external traffic.