How can I set up Cluster Autoscaler on Amazon EKS?

Last updated: 2019-06-24

How can I set up Cluster Autoscaler on Amazon Elastic Kubernetes Service (Amazon EKS)?

Short Description

Cluster Autoscaler is a tool that automatically adjusts the size of a Kubernetes cluster when one of the following conditions is true:

  • There are pods that failed to run in the cluster due to insufficient resources.
  • There are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.

The Cluster Autoscaler on AWS scales worker nodes within any specified Auto Scaling group and runs as a deployment in your cluster.

Note: The following solution assumes that you have an active Amazon EKS cluster with associated worker nodes created by an AWS CloudFormation template. The following example uses the auto-discovery setup. You can also configure Cluster Autoscaler by specifying one or more Auto Scaling groups.

Resolution

Set up Auto-Discovery

1.    Open the AWS CloudFormation console, select your stack, and then choose the Resources tab.

2.    To find the Auto Scaling group resource created by your stack, find the NodeGroup in the Logical ID column. For more information, see Launching Amazon EKS Worker Nodes.

3.    Open the Amazon EC2 console, and then choose Auto Scaling Groups from the navigation pane.

4.    Choose the Tags tab, and then choose Add/Edit tags.

5.    In the Add/Edit Auto Scaling Group Tags window, enter the following tags by replacing awsExampleClusterName with the name of your EKS cluster. Then, choose Save.

Key: k8s.io/cluster-autoscaler/enabled
Key: k8s.io/cluster-autoscaler/awsExampleClusterName

Note: The keys for the tags that you entered don't have values. Cluster Autoscaler ignores any value set for the keys.
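If you prefer to script steps 3-5, the same tags can be added with the AWS CLI. This is a sketch: the Auto Scaling group name (awsExampleNodeGroup) is a placeholder that you must replace with the group name found in step 2, and awsExampleClusterName with your cluster name.

```shell
# Placeholder names: replace with your Auto Scaling group and EKS cluster names.
ASG_NAME="awsExampleNodeGroup"
CLUSTER_NAME="awsExampleClusterName"

# Tag the Auto Scaling group so that Cluster Autoscaler auto-discovery can find it.
# The tag values are left empty because Cluster Autoscaler ignores them.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=${ASG_NAME},ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=,PropagateAtLaunch=true" \
  "ResourceId=${ASG_NAME},ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/${CLUSTER_NAME},Value=,PropagateAtLaunch=true"
```

Setting PropagateAtLaunch=true also applies the tags to instances launched by the group, which is harmless here because only the Auto Scaling group tags matter for auto-discovery.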

Create an IAM Policy

1.    Create an IAM policy called ClusterAutoScaler based on the following example to give the worker nodes running Cluster Autoscaler access to the required resources and actions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}

Note: By adding this policy to the worker node instance role, you enable all pods or applications running on those EC2 instances to use the additional IAM permissions.

2.    Attach the new policy to the instance role that's attached to your Amazon EKS worker nodes.
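As a sketch of steps 1 and 2 with the AWS CLI, assuming the policy JSON above is saved as cluster-autoscaler-policy.json and that eksWorkerNodeInstanceRole is a placeholder for the instance role attached to your worker nodes:

```shell
# Placeholder role name: replace with the instance role attached to your worker nodes.
ROLE_NAME="eksWorkerNodeInstanceRole"

# Create the ClusterAutoScaler policy from the JSON document shown above.
POLICY_ARN=$(aws iam create-policy \
  --policy-name ClusterAutoScaler \
  --policy-document file://cluster-autoscaler-policy.json \
  --query 'Policy.Arn' --output text)

# Attach the new policy to the worker node instance role.
aws iam attach-role-policy --role-name "${ROLE_NAME}" --policy-arn "${POLICY_ARN}"
```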

Deploy the Cluster Autoscaler

1.    To download a deployment example file provided by the Cluster Autoscaler project on GitHub, run the following command:

wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

2.    Open the downloaded YAML file, and set your EKS cluster name (in place of awsExampleClusterName) and the AWS_REGION environment variable (for example, us-east-1), as shown in the following example. Then, save your changes.

...          
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/awsExampleClusterName
          env:
            - name: AWS_REGION
              value: us-east-1
...
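Instead of editing the cluster name by hand, you can script the substitution. This sketch assumes the downloaded manifest still contains the upstream placeholder <YOUR CLUSTER NAME> in the --node-group-auto-discovery flag; verify the placeholder in your copy of the file before running it, and edit the AWS_REGION value separately.

```shell
# Placeholder: replace with your EKS cluster name.
CLUSTER_NAME="awsExampleClusterName"

# Substitute the upstream placeholder into the
# k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME> auto-discovery tag.
sed -i "s|<YOUR CLUSTER NAME>|${CLUSTER_NAME}|g" cluster-autoscaler-autodiscover.yaml
```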

3.    To create a Cluster Autoscaler deployment, run the following command:

kubectl apply -f cluster-autoscaler-autodiscover.yaml

4.    To check the Cluster Autoscaler deployment logs for deployment errors, run the following command:

kubectl logs -f deployment/cluster-autoscaler -n kube-system

Test the scale out of the EKS worker nodes

1.    To see the current number of worker nodes, run the following command:

kubectl get nodes

2.    To increase the number of worker nodes, run the following commands:

kubectl create deployment autoscaler-demo --image=nginx
kubectl scale deployment autoscaler-demo --replicas=50

Note: These commands create a deployment named autoscaler-demo using an NGINX image directly on the Kubernetes cluster, and then scale it to 50 replicas.

3.    To check the status of your deployment and see the number of pods increasing, run the following command:

kubectl get deployment autoscaler-demo --watch

4.    When the number of available pods equals 50, check the number of worker nodes by running the following command:

kubectl get nodes

Clean up the test deployment

1.    To scale down the worker nodes, delete the autoscaler-demo deployment that you created earlier by running the following command:

kubectl delete deployment autoscaler-demo

2.    To see the reduced number of worker nodes, wait about 10 minutes (the default scale-down delay), and then run the following command:

kubectl get nodes
