Using ALB Ingress Controller with Amazon EKS on Fargate

In December 2019, we announced the ability to use Amazon Elastic Kubernetes Service to run Kubernetes pods on AWS Fargate. Fargate eliminates the need for you to create or manage EC2 instances for your Kubernetes applications. When your pods start, Fargate automatically allocates compute resources on-demand to run them.

Fargate is great for running and scaling microservices – especially those with spiky, unpredictable traffic patterns. Application Load Balancer (ALB), part of Amazon Elastic Load Balancing, is a popular AWS service that load balances incoming traffic at the application layer (layer 7) across multiple targets, such as pods running on a Kubernetes cluster, and is a great way to get traffic to such microservices.

In this blog, we’ll show you how to set up an AWS Application Load Balancer (ALB) with your EKS cluster for ingress-based load balancing to Fargate pods using the open source ALB Ingress Controller. You can learn more about the details of Kubernetes ingress with the ALB Ingress Controller in our post here.

To get started, we’ll create an Amazon EKS cluster and a Fargate profile (which allows us to launch pods on Fargate), implement IAM roles for service accounts on our cluster in order to give fine-grained IAM permissions to our ingress controller pods, deploy a simple nginx service, and expose it to the internet using an ALB.

The following diagram shows our final architecture:

Prerequisites

To execute these steps successfully, follow the EKS getting started guide (but don’t create a cluster yet) and make sure you have the following components installed:

  • The EKS CLI, eksctl (for example, on macOS with brew tap weaveworks/tap and brew install weaveworks/tap/eksctl)
  • The latest version of the AWS CLI.
  • The Kubernetes CLI, kubectl.
    Note: if you used Homebrew to install eksctl on macOS, then kubectl has already been installed on your system
  • jq
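
To confirm the tools are available on your path, you can check their versions (the exact output will vary with the versions you have installed):

# Each command should print a version rather than "command not found"
eksctl version
aws --version
kubectl version --client
jq --version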

Now that everything is properly installed in your environment, we can go ahead and start building.


Cluster provisioning

The first step is to create an Amazon EKS cluster using eksctl. Note that AWS Fargate for Amazon EKS is currently available in the following Regions: US East (N. Virginia), US East (Ohio), Europe (Ireland), and Asia Pacific (Tokyo), so make sure you’re creating your cluster in one of these regions.
Create a cluster by running the following commands:

Note: remember to replace <aws_region> with the Region that you are using (e.g., us-east-1, us-east-2, eu-west-1, or ap-northeast-1)

AWS_REGION=<aws_region>
CLUSTER_NAME=eks-fargate-alb-demo
eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION --fargate

When you create a cluster using eksctl and pass the --fargate flag, eksctl creates not only the cluster but also a Fargate profile, which allows the cluster administrator to specify which pods run on Fargate. The default profile created by eksctl maps everything in the default and kube-system namespaces to Fargate. You can separate the controller from the apps you run by creating new Fargate profiles, which gives you finer-grained control over how your pods are deployed on Fargate.
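
For example, here is a sketch of creating an additional Fargate profile that targets a hypothetical apps namespace (the profile and namespace names are purely illustrative):

# Run pods from the illustrative "apps" namespace on Fargate
eksctl create fargateprofile \
--name apps-profile \
--namespace apps \
--cluster $CLUSTER_NAME \
--region $AWS_REGION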

Once the cluster creation is complete, you can validate that everything went well by running the following command:

kubectl get svc

You should get the following response:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   16h

This response means that the cluster is running and that you are able to communicate with the Kubernetes API.
Set up OIDC provider with the cluster and create the IAM policy used by the ALB Ingress Controller

Now that our cluster is up and running, let’s set up the OIDC identity provider (IdP) in AWS. This step is needed to give IAM permissions to a Fargate pod running in the cluster using the IAM Roles for Service Accounts feature. Set up the OIDC provider for your cluster with the following command:

eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve

Note: If you want to learn more about IAM Roles for Service Accounts, please refer to this blog post.
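
If you want to double-check that the association worked, you can print the cluster’s OIDC issuer URL with the AWS CLI:

# Prints the issuer URL that the IAM OIDC provider was created for
aws eks describe-cluster --name $CLUSTER_NAME --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text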

The next step is to create the IAM policy that will be used by the ALB Ingress Controller deployment. This policy will later be associated with the Kubernetes service account and will allow the ALB Ingress Controller pods to create and manage the ALB’s resources in your AWS account for you. Download the example IAM policy document and create the policy:

wget -O alb-ingress-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/master/docs/examples/iam-policy.json
aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://alb-ingress-iam-policy.json
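
Optionally, you can look up the new policy’s ARN with the AWS CLI instead of constructing it from your account ID later (the ALB_POLICY_ARN variable name is just for illustration):

# Look up the ARN of the customer-managed policy we just created
ALB_POLICY_ARN=$(aws iam list-policies --scope Local --query "Policies[?PolicyName=='ALBIngressControllerIAMPolicy'].Arn" --output text)
echo $ALB_POLICY_ARN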

Create a cluster role, role binding, and a Kubernetes service account

Let’s start by populating some environment variables that we will be using:

STACK_NAME=eksctl-$CLUSTER_NAME-cluster
VPC_ID=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" | jq -r '[.Stacks[0].Outputs[] | {key: .OutputKey, value: .OutputValue}] | from_entries' | jq -r '.VPC')
AWS_ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
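
Before moving on, it’s worth confirming that both variables were actually populated; empty output here usually means the stack name or Region doesn’t match your cluster:

# Both values should be non-empty
echo "VPC ID: $VPC_ID"
echo "Account ID: $AWS_ACCOUNT_ID"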

Now, create the Cluster Role and Role Binding:

cat > rbac-role.yaml <<-EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
EOF

kubectl apply -f rbac-role.yaml

These commands will create two resources for us, and the output should be similar to this:

clusterrole.rbac.authorization.k8s.io/alb-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/alb-ingress-controller created

And finally, create the Kubernetes service account:

eksctl create iamserviceaccount \
--name alb-ingress-controller \
--namespace kube-system \
--cluster $CLUSTER_NAME \
--attach-policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy \
--approve

This eksctl command will deploy a new CloudFormation stack with an IAM role. Wait for it to finish before executing the next steps.

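You can verify that the service account was created and annotated with the new IAM role by describing it:

# The eks.amazonaws.com/role-arn annotation should point to the role created by eksctl
kubectl describe serviceaccount alb-ingress-controller -n kube-system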

Deploy the ALB Ingress Controller

Let’s now deploy the ALB Ingress Controller to our cluster.

This blog post uses the ALB Ingress Controller version 1.1.6. You can find more information about the ALB Ingress Controller and the deployment process in the official GitHub repository.

cat > alb-ingress-controller.yaml <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --cluster-name=$CLUSTER_NAME
        - --aws-vpc-id=$VPC_ID
        - --aws-region=$AWS_REGION
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.6
      serviceAccountName: alb-ingress-controller
EOF
kubectl apply -f alb-ingress-controller.yaml
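
Before deploying the application, you can confirm that the controller pod starts up and is scheduled on Fargate:

# The controller pod should reach the Running state on a fargate-* node
kubectl get pods -n kube-system -o wide | grep alb-ingress-controller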

Deploy sample application to the cluster

Now that we have our ingress controller running, we can deploy the application to the cluster and create an ingress resource to expose it.

Let’s start with a deployment:

cat > nginx-deployment.yaml <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "nginx-deployment"
  namespace: "default"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "nginx"
  template:
    metadata:
      labels:
        app: "nginx"
    spec:
      containers:
      - image: nginx:latest
        imagePullPolicy: Always
        name: "nginx"
        ports:
        - containerPort: 80
EOF

kubectl apply -f nginx-deployment.yaml

The output should be similar to:

deployment.apps/nginx-deployment created

Then, let’s create a service so we can expose the NGINX pods. Note the alb.ingress.kubernetes.io/target-type: ip annotation, which tells the controller to register the pod IPs directly with the ALB; this is required on Fargate, where there are no EC2 worker nodes to use as instance targets:

cat > nginx-service.yaml <<-EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
EOF

kubectl apply -f nginx-service.yaml

The output will be similar to:

service/nginx-service created

Finally, let’s create our ingress resource:

cat > nginx-ingress.yaml <<-EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "nginx-service"
          servicePort: 80
EOF

kubectl apply -f nginx-ingress.yaml

The output will be:

ingress.extensions/nginx-ingress created

Once everything is done, you will be able to get the ALB URL by running the following command:

kubectl get ingress nginx-ingress

The output of this command will be similar to this one:

NAME            HOSTS   ADDRESS                                                                PORTS   AGE
nginx-ingress   *       5e07dbe1-default-ngnxingr-2e9-113757324.us-east-2.elb.amazonaws.com   80      9s

Note that the ALB URL is presented under the ADDRESS field. It will take a couple of minutes for the ALB health checks to mark all the pods as healthy and start sending traffic to them. With the following commands, you can check whether all targets are healthy:

LOADBALANCER_PREFIX=$(kubectl get ingress nginx-ingress -o json | jq -r '.status.loadBalancer.ingress[0].hostname' | cut -d- -f1)
TARGETGROUP_ARN=$(aws elbv2 describe-target-groups | jq -r '.TargetGroups[].TargetGroupArn' | grep $LOADBALANCER_PREFIX)
aws elbv2 describe-target-health --target-group-arn $TARGETGROUP_ARN | jq -r '.TargetHealthDescriptions[].TargetHealth.State'

The output should be:

healthy
healthy
healthy

Make sure that you can reach the NGINX welcome page by opening the ingress address in your web browser.
You can also confirm that every pod is running on AWS Fargate with the following command:

kubectl get pods -o wide

Note that all the pods for your application are running on Fargate hosts:

NAME                                READY   STATUS    RESTARTS   AGE     IP                NODE                                                    NOMINATED NODE   READINESS GATES
nginx-deployment-64fc4c755d-d7cjg   1/1     Running   0          4m23s   192.168.142.15    fargate-ip-192-168-142-15.us-east-2.compute.internal    <none>           <none>
nginx-deployment-64fc4c755d-gcrgv   1/1     Running   0          4m23s   192.168.121.4     fargate-ip-192-168-121-4.us-east-2.compute.internal     <none>           <none>
nginx-deployment-64fc4c755d-xdjng   1/1     Running   0          4m23s   192.168.117.179   fargate-ip-192-168-117-179.us-east-2.compute.internal   <none>           <none>

With that, you can run your applications in containers on Amazon EKS without having to manage any infrastructure, and expose them to the internet or to other applications using an AWS Application Load Balancer.


Conclusion

In order to use the ALB Ingress Controller with Fargate on Amazon EKS, you need to follow these steps:

  1. Set up OIDC provider with the cluster and create the IAM policy with proper permissions so the ALB Ingress Controller can manage the AWS resources for you;
  2. Create a cluster role, role binding and a Kubernetes service account that will be attached to the ALB Ingress Controller running pod;
  3. Deploy your application and create the Service and Ingress resources.

You can also find more information about AWS Fargate on the official product page and about the ALB Ingress Controller in the official GitHub repository.
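
When you’re done experimenting, you may want to remove everything this walkthrough created to avoid ongoing charges. A minimal cleanup sketch, assuming the file names and variables used above (delete the ingress first so the controller can remove the ALB):

# Remove the ingress (and its ALB), service, and deployment
kubectl delete -f nginx-ingress.yaml -f nginx-service.yaml -f nginx-deployment.yaml
# Delete the cluster, which also removes the eksctl-managed IAM service account stack
eksctl delete cluster --name $CLUSTER_NAME --region $AWS_REGION
# Finally, delete the now-unattached IAM policy
aws iam delete-policy --policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy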

Bruno Emer

Bruno is a Solutions Architect based out of São Paulo, Brazil. When he is not working with customers or writing content, he likes to travel and listen to music, especially samba and R&B.

Nathan Taber

Nathan is a Principal Product Manager for Amazon EKS. When he’s not writing and creating, he loves to sail, row, and roam the Pacific Northwest with his Goldendoodles, Emma & Leo.