Containers

Serve distinct domains with TLS powered by ACM on Amazon EKS


Introduction

AWS Elastic Load Balancers provide native ingress solutions for workloads deployed on Amazon Elastic Kubernetes Service (Amazon EKS) clusters at both L4 and L7 with the Network Load Balancer (NLB) and the Application Load Balancer (ALB). The AWS Load Balancer Controller, formerly called the AWS ALB Ingress Controller, satisfies Kubernetes Ingress resources with an ALB and Service resources of type LoadBalancer with an NLB. When a Kubernetes Ingress is created, an ALB is provisioned that load balances HTTP or HTTPS traffic to different nodes or pods within the cluster. By default, the AWS Load Balancer Controller creates a new ALB for each Ingress resource that matches its requirements via specific Kubernetes annotations.

When an Ingress resource with the annotation kubernetes.io/ingress.class: alb is detected by the AWS Load Balancer Controller, it creates various AWS resources based on the configuration specified as annotations on the Ingress resource, including the ALB, target groups, listeners, and rules. A few noteworthy annotations from the full list available here include alb.ingress.kubernetes.io/scheme, which creates an internet-facing or internal ALB; alb.ingress.kubernetes.io/subnets, which specifies a list of target subnets for the ALB; and alb.ingress.kubernetes.io/certificate-arn, which specifies one or more certificates managed by AWS Certificate Manager (ACM) and enables HTTPS traffic on the ALB. This annotation-based configuration provides a flexible and powerful mechanism to provision the desired AWS resources.

The challenge

Many organizations, such as web hosting providers, need to serve hundreds of distinct domain names with Transport Layer Security (TLS). Having a dedicated load balancer per application can become costly and cumbersome to manage.

Solution overview

The AWS Load Balancer Controller addresses this challenge by serving traffic to distinct domains through the ALB’s host-based routing, paired with Server Name Indication (SNI). Using the annotation-based configuration, this post shows how a single ALB is provisioned and configured to securely serve three demonstration websites aimed at pet lovers of rabbits, chipmunks, and hamsters.

Below is a high-level illustration of how traffic is served to different applications using a single Application Load Balancer.

Serving 3 distinct domains with TLS on EKS via single ALB

In the diagram above:

  1. A user issues a GET request to https://hamster.local/
  2. The ALB receives the request, selects the matching certificate based on the SNI server name, and terminates TLS
  3. Using host-based routing on the request's Host header, the ALB maps the request to the proper target group
  4. The target group maps to the corresponding application Pods for your service

At a high level, the steps of the solution are:

  1. Deploy AWS Load Balancer Controller
  2. Set up and import TLS certificates into AWS Certificate Manager (ACM)
  3. Deploy three distinct applications
  4. View Application Load Balancer SNI rules
  5. Create Route 53 hosted zones and records
  6. Validate Kubernetes resources
  7. Validate the solution

Note: Currently, there is a default quota of 25 certificates per ALB.
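If you expect to approach this limit, you can check the quota applied to your account from the CLI. A minimal sketch that simply filters the Elastic Load Balancing quotas for certificate-related entries:

# List ELB quotas whose names mention certificates (includes the per-ALB certificate limit).
# For the service defaults, list-aws-default-service-quotas can be used instead.
aws service-quotas list-service-quotas \
  --service-code elasticloadbalancing \
  --query "Quotas[?contains(QuotaName, 'Certificates')].{Name:QuotaName,Value:Value}" \
  --output table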

Prerequisites

You’ll need these tools to follow the walkthrough:

  • AWS CLI
  • eksctl
  • kubectl
  • Helm (v3)
  • openssl
  • envsubst

Lastly, this post assumes that you already have an existing Amazon EKS cluster configured with an IAM OIDC provider.
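If you’re not sure whether the cluster already has an IAM OIDC provider associated with it, the following sketch is one way to check, and to create one with eksctl if needed (replace the example cluster name with your own):

# Extract the OIDC issuer ID for the cluster and look for a matching IAM OIDC provider.
oidc_id=$(aws eks describe-cluster --name <MY-EKS-CLUSTER-NAME> \
  --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep "$oidc_id" || echo "No IAM OIDC provider found"

# Associate an OIDC provider with the cluster if none was found.
eksctl utils associate-iam-oidc-provider --cluster <MY-EKS-CLUSTER-NAME> --approve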

Walkthrough

Step A: Deploy AWS Load Balancer Controller

In this step, we’ll create and set up the AWS Load Balancer Controller. For in-depth instructions, please see the AWS Load Balancer Controller installation documentation.

Replace the example values with your own values.

Fetch the AWS IAM policy:

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.7/docs/install/iam_policy.json

Create an AWS IAM policy using the policy downloaded in the previous step:

aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

Create an AWS IAM role and the associated Kubernetes service account:

Replace MY-EKS-CLUSTER-NAME with the name of your Amazon EKS cluster, 111122223333 with your account ID, and then run the command.

eksctl create iamserviceaccount \
  --cluster=<MY-EKS-CLUSTER-NAME> \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::<111122223333>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

Using Helm (v3), install the AWS Load Balancer Controller:

helm repo add eks https://aws.github.io/eks-charts

helm repo update eks

Replace MY-EKS-CLUSTER-NAME with the name of your Amazon EKS cluster:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<MY-EKS-CLUSTER-NAME> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

Verify that the AWS Load Balancer Controller is installed:

kubectl get deployment -n kube-system aws-load-balancer-controller

Example output:

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           92s
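If the deployment doesn’t become ready, the controller logs are usually the quickest way to spot problems such as missing IAM permissions. A quick check:

kubectl logs -n kube-system deployment/aws-load-balancer-controller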

Step B: Set up TLS certificates for ACM

We’ll create TLS certificates for the rabbit.local, hamster.local, and chipmunk.local domains and upload them, along with their private keys, into AWS Certificate Manager (ACM).

Create the TLS certificate for rabbit.local:

openssl req -new -x509 -sha256 -nodes -newkey rsa:4096 -keyout private_rabbit.key -out certificate_rabbit.crt -subj "/CN=rabbit.local"

Note: The -nodes flag is passed to the openssl command to skip encrypting the private key, for simplicity.

Upload the certificate to ACM:

aws acm import-certificate --certificate fileb://certificate_rabbit.crt --private-key fileb://private_rabbit.key

Repeat the same process for the hamster.local and chipmunk.local domains.
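For example, a minimal loop that repeats the openssl and aws acm import-certificate steps above for the two remaining domains (assuming the same working directory):

# Generate a self-signed certificate for each remaining domain and import it into ACM.
for pet in hamster chipmunk; do
  openssl req -new -x509 -sha256 -nodes -newkey rsa:4096 \
    -keyout private_${pet}.key -out certificate_${pet}.crt -subj "/CN=${pet}.local"
  aws acm import-certificate \
    --certificate fileb://certificate_${pet}.crt \
    --private-key fileb://private_${pet}.key
done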

Validate:

aws acm list-certificates --includes keyTypes=RSA_4096

Example output:

{
    "CertificateSummaryList": [
        {
            "CertificateArn": "arn:aws:acm:us-east-2:12345678:certificate/4zzzzzzzcxxxxzxzxzx",
            "DomainName": "hamster.local",
            "SubjectAlternativeNameSummaries": [
                "hamster.local"
            ],
            "HasAdditionalSubjectAlternativeNames": false,
            "Status": "ISSUED",
            "Type": "IMPORTED",
            "KeyAlgorithm": "RSA-4096",
            "KeyUsages": [
                "ANY"
            ],
            "ExtendedKeyUsages": [
                "NONE"
            ],
            "InUse": false,
            "RenewalEligibility": "INELIGIBLE",
            "NotBefore": "2023-03-17T14:58:17+00:00",
            "NotAfter": "2024-03-16T14:58:17+00:00",
            "CreatedAt": "2023-03-17T15:21:00.430000+00:00",
            "ImportedAt": "2023-03-17T15:21:00.439000+00:00"
        },……

Step C: Deploy three distinct applications into the Amazon EKS cluster

Now that the edge is deployed, let’s deploy the Kubernetes application backends and their resources. Once they’re deployed, we’ll wire the edge to the proper applications.

To provision the necessary resources, the manifest template below creates:

  • An NGINX server Pod, hosting the landing HTML page
  • A ConfigMap that holds the NGINX configuration for the static HTML content
  • A Kubernetes Service object that exposes the Pod as a service
  • An Ingress object providing HTTPS access to the service via the ALB

By default, satisfying an ingress results in the creation of one ALB per Ingress resource. In our case, we’re looking to consolidate multiple Ingress resources into a single ALB (i.e., each Ingress represents a distinct domain name and points to the same ALB, with its own SNI certificate).

To achieve this, we set the ingress annotation:

alb.ingress.kubernetes.io/group.name: frontend

Note: IngressGroup security best practices state that the IngressGroup feature should only be used when all of the Kubernetes users are within a trusted boundary. See this link for more details.

Create the template-based manifest:

cat << 'EoF' > manifest-template.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${CURRENT_PET}-ingress
  namespace: myapplications-ns
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: frontend
    alb.ingress.kubernetes.io/certificate-arn: ${ACM_ARN}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
  labels:
    app: ${CURRENT_PET}-ingress
spec:
  ingressClassName: alb
  rules:
    - host: ${CURRENT_PET}.local
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: ${CURRENT_PET}-service
              port:
                number: 80

---
apiVersion: v1
kind: Pod
metadata:
  name: ${CURRENT_PET}
  namespace: myapplications-ns
  labels:
    app.kubernetes.io/name: ${CURRENT_PET}-proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
      - containerPort: 80
        name: ${CURRENT_PET}podsvc
    volumeMounts:
    - name: index-nginx
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: index-nginx
    configMap:
      name: configmap-${CURRENT_PET}
---
apiVersion: v1
kind: Service
metadata:
  name: ${CURRENT_PET}-service
  namespace: myapplications-ns
spec:
  selector:
    app.kubernetes.io/name: ${CURRENT_PET}-proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: ${CURRENT_PET}podsvc

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-${CURRENT_PET}
  namespace: myapplications-ns
data:
  index.html: |
    <html>
      <body>
        <h1>Welcome to ${CURRENT_PET}.local! </h1>
      </body>
    </html>
EoF

Create rabbit.yaml:

export CURRENT_PET=rabbit
export ACM_ARN=$(aws acm list-certificates --query "CertificateSummaryList[?DomainName=='rabbit.local'].CertificateArn" --includes keyTypes=RSA_4096 --output text)
envsubst < "manifest-template.yaml" > "rabbit.yaml"

Create chipmunk.yaml:

export CURRENT_PET=chipmunk
export ACM_ARN=$(aws acm list-certificates --query "CertificateSummaryList[?DomainName=='chipmunk.local'].CertificateArn" --includes keyTypes=RSA_4096 --output text)
envsubst < "manifest-template.yaml" > "chipmunk.yaml"

Create hamster.yaml:

export CURRENT_PET=hamster
export ACM_ARN=$(aws acm list-certificates --query "CertificateSummaryList[?DomainName=='hamster.local'].CertificateArn" --includes keyTypes=RSA_4096 --output text)
envsubst < "manifest-template.yaml" > "hamster.yaml"
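The manifests target the myapplications-ns namespace. If it doesn’t exist in your cluster yet, create it before applying the manifests:

kubectl create namespace myapplications-ns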

Apply the manifests:

kubectl apply -f ./hamster.yaml -f ./chipmunk.yaml -f ./rabbit.yaml

Validate that the hamster, chipmunk, and rabbit variables were substituted correctly:

kubectl get pods -n myapplications-ns -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

Example output:

chipmunk
hamster
rabbit
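Because all three Ingress resources share the alb.ingress.kubernetes.io/group.name: frontend annotation, the controller reconciles them into a single ALB. A quick way to confirm this is to check that all three ingresses report the same ADDRESS (the controller-generated ALB DNS name):

kubectl get ingress -n myapplications-ns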

Step D: View Application Load Balancer SNI rules

Head over to the AWS Management Console, navigate to EC2 > Load Balancers, and select the name of the load balancer.

Viewing Load Balancer name

Navigate to the listener rules. We can see the three rules created for our applications, each pointing to its associated target group, as shown in the following screenshot.

Checking the ALB Listener Rules
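You can also inspect the same listener rules from the command line. A sketch (the ALB ARN placeholder comes from your own account, and the query expressions are illustrative):

# Find the ALB created by the controller and note its ARN and DNS name.
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[].{Name:LoadBalancerName,DNS:DNSName,Arn:LoadBalancerArn}" --output table

# Using the ALB ARN, find the HTTPS listener and print its host-based rules.
LISTENER_ARN=$(aws elbv2 describe-listeners --load-balancer-arn <ALB-ARN> \
  --query "Listeners[?Port==\`443\`].ListenerArn" --output text)
aws elbv2 describe-rules --listener-arn "$LISTENER_ARN" \
  --query "Rules[].{Priority:Priority,Hosts:Conditions[].HostHeaderConfig.Values[]}" --output table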

Step E: Create Route 53 hosted zones and records

For demonstration purposes, we’ll create three private hosted zones in Route 53 to serve traffic in a private setup. In the real world, it is highly likely that you’ll create public hosted zones, which point to the public-facing Application Load Balancer.

Create three private hosted zones in Route 53 using the AWS CLI:

Input the VPC-ID (the one that the Amazon EKS cluster is deployed into).

aws route53 create-hosted-zone --name rabbit.local \
--caller-reference my-private-zone-rabbit \
--hosted-zone-config Comment="my private zone",PrivateZone=true \
--vpc VPCRegion=us-east-1,VPCId=<VPC-ID>

Repeat the command above, using a unique --caller-reference value for each request, to create private zones for the chipmunk.local and hamster.local domains.

Validate that all three zones were created:

aws route53 list-hosted-zones

Example output:

{    "HostedZones": [
        {
            "Id": "/hostedzone/Z082794337ILOZR8XXXX",
            "Name": "rabbit.local.",
            "CallerReference": "07faae71-ba67-4d1b-a3b4-e1e5e527XXXX",
            "Config": {
                "Comment": "my private zone",
                "PrivateZone": true
            },
            "ResourceRecordSetCount": 3
        },
……………..

Pointing the domain to the Application Load Balancer:

Replace Private-hosted-zone-id with the ID of your private hosted zone, zone-id-of-ALB with the canonical hosted zone ID of the ALB, and DNS-of-ALB with the DNS name of the Application Load Balancer created in Step C.

aws route53 change-resource-record-sets --hosted-zone-id <Private-hosted-zone-id> --change-batch '{"Changes": [ { "Action": "CREATE", "ResourceRecordSet": { "Name": "rabbit.local", "Type": "A", "AliasTarget": { "HostedZoneId": "<zone-id-of-ALB>", "DNSName": "<DNS-of-ALB>", "EvaluateTargetHealth": false } } } ]}'

Repeat the prior step for hamster.local and chipmunk.local domains.
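If you need to look up the ALB values referenced above, one way is the following sketch (rabbit-ingress is the Ingress name generated from the manifests earlier):

# DNS name of the shared ALB, as reported on any of the three ingresses.
kubectl get ingress rabbit-ingress -n myapplications-ns \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Canonical hosted zone ID and DNS name of the ALB from the ELB API.
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[].{DNS:DNSName,CanonicalHostedZoneId:CanonicalHostedZoneId}" --output table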

Step F: Validate Kubernetes resources

Validate the Kubernetes resources:

kubectl get ingress,pod,service,configmap -n myapplications-ns

Example output:

Listing Kubernetes resources for ingress, pod, service, Configmap

Step G: Validate the solution

With the applications deployed and all relevant resources up and running, let’s test our applications.

  1. Use AWS Systems Manager to connect to one of your Amazon EKS worker nodes
  2. Execute the following commands:
curl -k https://hamster.local

Showing curl output of hamster.local domain

curl -k https://rabbit.local

Showing curl output of rabbit.local domain

curl -k https://chipmunk.local

Showing curl output of chipmunk.local domain
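To confirm that SNI is returning the right certificate for each host name, you can also check which certificate the ALB presents. A quick sketch using openssl from the same node:

# Print the subject of the certificate presented for the hamster.local SNI name.
openssl s_client -connect hamster.local:443 -servername hamster.local </dev/null 2>/dev/null \
  | openssl x509 -noout -subject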

Cleaning up

To avoid incurring future charges, delete the application resources:

kubectl delete -f ./rabbit.yaml -f ./hamster.yaml -f ./chipmunk.yaml

Uninstall the AWS Load Balancer Controller:

helm uninstall aws-load-balancer-controller -n kube-system

Delete the service account for AWS Load Balancer Controller. Replace MY-EKS-CLUSTER-NAME with the name of your cluster:

eksctl delete iamserviceaccount \
    --cluster <MY-EKS-CLUSTER-NAME> \
    --name aws-load-balancer-controller \
    --namespace kube-system \
    --wait

Delete the AWS IAM Policy for the AWS Load Balancer Controller. Replace 111122223333 with your account ID.

aws iam delete-policy \
    --policy-arn arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy
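The walkthrough also created imported ACM certificates and private Route 53 hosted zones, which you may want to remove as well. A sketch (substitute the certificate ARNs and hosted zone IDs from the earlier steps, and delete the alias records you created before deleting each zone):

# Delete each imported certificate (repeat for rabbit.local, hamster.local, and chipmunk.local).
aws acm delete-certificate --certificate-arn <CERTIFICATE-ARN>

# Delete each private hosted zone once its alias records have been removed.
aws route53 delete-hosted-zone --id <Private-hosted-zone-id>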

Conclusion

In this post, we showed you how to serve multiple domains with a single AWS Application Load Balancer using host-based routing with SNI. We created Ingress resources with annotations for IngressGroup, TLS (via ACM), and the traffic listener to serve HTTPS traffic for distinct domains from a single ALB. This solution reduces management overhead and is a cost-effective option because it doesn’t require a dedicated ALB per domain.

Samir Khan

Samir Khan is a Lead Enterprise Account Engineer at Amazon Web Services (AWS) in the Worldwide Public Sector. Samir's areas of interest include containers, DevSecOps, and serverless technologies. He focuses on helping customers build and develop highly scalable and resilient architectures in AWS.

Insoo Jang

Insoo Jang is an Enterprise Account Engineer at Amazon Web Services (AWS). He helps Worldwide Public Sector customers build, scale, and optimize container workloads on AWS. In his spare time, he enjoys fishing, soccer, and spending time with his family.

Umair Ishaq

Umair Ishaq is a Specialist Solutions Architect for containers and serverless at AWS. He works with customers to help them build cloud-native, distributed systems. Always in search of better solutions, he is constantly evaluating new tools and techniques to add to his tech tool belt.