How can I check, scale, delete, or drain my worker nodes in Amazon EKS?

Last updated: 2020-01-27

After I launch my Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes using eksctl or the AWS Management Console, I want to check, scale, drain, or delete my worker nodes.

Short Description

Complete the steps in the appropriate section based on your needs:

  • Check your worker nodes
  • Scale your worker nodes
  • Drain your worker nodes
  • Delete your worker nodes

Resolution

Check your worker nodes

To list the worker nodes registered to the Amazon EKS control plane, run the following command:

kubectl get nodes -o wide

The output returns the name, Kubernetes version, operating system, and IP address of the worker nodes.
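
The output is similar to the following. The node names, IP addresses, and version values shown here are placeholders:

$ kubectl get nodes -o wide
NAME                             STATUS   ROLES    AGE    VERSION              INTERNAL-IP      EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-XXX-XXX-XX-XXX.ec2.internal   Ready    <none>   6d4h   v1.14.7-eks-1861c5   192.168.XX.XXX   <none>        Amazon Linux 2   4.14.XXX-XXX.XXX.amzn2.x86_64   docker://XX.X.X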

To get additional information on a single worker node, run the following command:

kubectl describe node/node_name

Note: Replace node_name with your value. For example: ip-XX-XX-XX-XX.us-east-1.compute.internal

The output shows more information about the worker node, including labels, taints, system information, and status.
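
If you need only a specific field from that output, you can also query it with a jsonpath expression. For example, the following command prints a node's kubelet version, which is the same field that the drain scripts later in this article filter on. Replace node_name with your value:

kubectl get node node_name -o jsonpath='{.status.nodeInfo.kubeletVersion}'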

Scale your worker nodes

Note: If your node groups appear in the Amazon EKS console, then you have a managed node group. Otherwise, you have an unmanaged node group. This determines which of the following options apply to you.

(Option 1) To scale your managed or unmanaged worker nodes using eksctl, run the following command:

eksctl scale nodegroup --cluster=clusterName --nodes=desiredCount --name=nodegroupName

Note: Replace clusterName, desiredCount, and nodegroupName with your values.
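
For example, to scale a hypothetical node group named standard-workers in a cluster named my-eks-cluster to three nodes, run a command similar to the following:

eksctl scale nodegroup --cluster=my-eks-cluster --nodes=3 --name=standard-workers

Depending on your eksctl version, you can also pass the --nodes-min and --nodes-max flags if the new desired count falls outside the node group's current minimum or maximum size.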

--or--

(Option 2) To scale your managed worker nodes without eksctl, complete the steps in the "To edit a node group configuration" section of Updating a Managed Node Group.
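
If you prefer the AWS CLI over the console for this step, you can update the scaling configuration of a managed node group with the update-nodegroup-config command. The cluster name, node group name, and sizes in the following example are placeholders:

aws eks update-nodegroup-config --cluster-name my-eks-cluster --nodegroup-name standard-workers --scaling-config minSize=1,maxSize=4,desiredSize=3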

--or--

(Option 3) To scale your unmanaged worker nodes using AWS CloudFormation, complete the following steps:

1.    Use an AWS CloudFormation template to launch your worker nodes for Windows or Linux.

2.    Modify the NodeAutoScalingGroupDesiredCapacity, NodeAutoScalingGroupMinSize, or NodeAutoScalingGroupMaxSize parameters in your AWS CloudFormation stack.
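
For example, you can update these parameters from the AWS CLI with a command similar to the following sketch. The stack name and size values are placeholders. Depending on your template, you might also need to add ParameterKey=parameterName,UsePreviousValue=true entries for the stack's other parameters so that their current values are kept, and the --capabilities flag is required when the stack creates IAM resources (which the Amazon EKS worker node templates typically do):

aws cloudformation update-stack \
    --stack-name eks-worker-nodes \
    --use-previous-template \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
                 ParameterKey=NodeAutoScalingGroupDesiredCapacity,ParameterValue=3 \
                 ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=4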

Drain your worker nodes

Important: The drain action isolates the worker node so that Kubernetes stops scheduling any new pods on it, and then evicts the pods that are already running on the node. The evicted pods are stopped. Consider the impact this can have on your production environment.

You can either drain an entire node group or a single worker node. Choose the appropriate option.

(Option 1) Drain the entire node group:

If you're using eksctl to launch your worker nodes, then run the following command:

eksctl drain nodegroup --cluster=clusterName --name=nodegroupName

Note: Replace clusterName and nodegroupName with your values.

To undo the draining action of a node group, run the following command:

eksctl drain nodegroup --cluster=clusterName --name=nodegroupName --undo

Note: Replace clusterName and nodegroupName with your values.

If you're not using eksctl to launch your worker nodes, then use the following code to identify and drain all the nodes of a particular Kubernetes version (in this case, 1.14.7-eks-1861c5):

#!/bin/bash
K8S_VERSION=1.14.7-eks-1861c5
# Select the names of all nodes whose kubelet reports the target version
nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==\"v$K8S_VERSION\")].metadata.name}")
for node in ${nodes[@]}
do
    echo "Draining $node"
    # Evict the pods on the node, ignoring DaemonSet-managed pods and deleting emptyDir data
    kubectl drain $node --ignore-daemonsets --delete-local-data
done

To undo the draining action of a node group, use the following code to identify and uncordon all the nodes of a particular Kubernetes version (in this case, 1.14.7-eks-1861c5):

#!/bin/bash
K8S_VERSION=1.14.7-eks-1861c5
# Select the names of all nodes whose kubelet reports the target version
nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==\"v$K8S_VERSION\")].metadata.name}")
for node in ${nodes[@]}
do
    echo "Undo draining $node"
    # Mark the node as schedulable again; evicted pods don't automatically move back
    kubectl uncordon $node
done

Note: To get the version of your worker node, run the following command:

$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE     VERSION
ip-XXX-XXX-XX-XXX.ec2.internal            Ready    <none>   6d4h    v1.14.7-eks-1861c5
ip-XXX-XXX-XX-XXX.ec2.internal            Ready    <none>   6d4h    v1.14.7-eks-1861c5

Note: The version number is displayed in the VERSION column.

(Option 2) Drain a single worker node:

If you're not using eksctl to launch your worker nodes or you want to drain only a specific node, then run the following command to gracefully isolate your worker node:

kubectl drain node_name --ignore-daemonsets

Note: Replace node_name with your value.

To undo the isolation, run the following command:

kubectl uncordon node_name

Note: Replace node_name with your value.
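
To confirm that a node is cordoned, or that the uncordon took effect, check the STATUS column in the output of kubectl get nodes. A drained or cordoned node shows SchedulingDisabled, similar to the following (node names are placeholders):

$ kubectl get nodes
NAME                                      STATUS                     ROLES    AGE    VERSION
ip-XXX-XXX-XX-XXX.ec2.internal            Ready,SchedulingDisabled   <none>   6d4h   v1.14.7-eks-1861c5
ip-XXX-XXX-XX-XXX.ec2.internal            Ready                      <none>   6d4h   v1.14.7-eks-1861c5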

To migrate your existing applications to a new worker node group, see Migrating to a New Worker Node Group.

Delete your worker nodes

Important: The delete action is unrecoverable. Consider the impact this can have on your production environment.

If you're using eksctl, then run the following command:

eksctl delete nodegroup --cluster=clusterName --name=nodegroupName

Note: Replace clusterName and nodegroupName with your values.

If you have a managed node group, then complete the steps in Deleting a Managed Node Group.

If you have an unmanaged node group and you launched your worker nodes with an AWS CloudFormation template, then delete the AWS CloudFormation stack that you created for your node group for Windows or Linux.
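
For example, you can delete the stack from the AWS CLI with a command similar to the following. The stack name is a placeholder:

aws cloudformation delete-stack --stack-name eks-worker-nodes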

If you have an unmanaged node group and didn't use an AWS CloudFormation template to launch your worker nodes, then delete the Auto Scaling group for your worker nodes. Or, terminate the instance directly if you didn't use an Auto Scaling group.
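
For example, the following AWS CLI commands delete a hypothetical Auto Scaling group (terminating its instances) or terminate a single standalone instance. The group name and instance ID are placeholders:

aws autoscaling delete-auto-scaling-group --auto-scaling-group-name eks-worker-nodes-asg --force-delete

aws ec2 terminate-instances --instance-ids i-0123456789abcdef0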

