Run kOps Kubernetes clusters for less with Amazon EC2 Spot Instances
30 minute tutorial
Introduction
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.
Spot Instances are a great fit for stateless, containerized workloads running on your Kubernetes clusters, because the approach to containers and Spot Instances are similar – ephemeral and autoscaled capacity. This means they can both be added and removed while adhering to SLAs, without impacting performance or availability of your applications.
In this tutorial you will learn how to add Spot Instances to your kOps Kubernetes clusters, while adhering to Spot Instance best practices. This will allow you to run applications without compromising performance or availability. Kubernetes Operations (kOps) is an open source project that provides a cohesive set of tools for provisioning, operating, and deleting Kubernetes clusters in the cloud. As part of the tutorial, you will deploy a kOps Kubernetes deployment and autoscale it on your Spot Instance worker nodes by using Kubernetes Cluster-Autoscaler.
What You Will Learn
- How to set up and use the kOps CLI to create a Kubernetes cluster with On-Demand nodes
- How to add Instance Groups with Spot Instances to your cluster, automatically leveraging best practices
- How to deploy the AWS Node Termination Handler
- How to deploy the Kubernetes Cluster Autoscaler
- How to deploy a sample application, test that it is running on Spot Instances and that it properly scales
- How to clean up your resources
AWS Experience
Intermediate
Time to Complete
30 minutes
Cost to Complete
Less than $10
Services Used
- Amazon EC2 (Spot Instances), Amazon S3
Step 1: Set up AWS CLI, kOps, and kubectl
In this step we will install all the dependencies that we will need during this tutorial.
- 1.1 —
- Install version 2 of the AWS CLI by running the following commands if you're using Linux, or follow the instructions in the AWS CLI installation guide for other operating systems.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install
- 1.2 —
- kOps requires that you have AWS credentials configured in your environment. The aws configure command is the fastest way to set up your AWS CLI installation for general use. Run the command and follow the prompts. You can use the Administrator IAM policy, but if you want to limit the permissions required by kOps, the minimum IAM privileges you will need are the following (a scripted sketch of attaching them appears after the list):
- AmazonEC2FullAccess
- AmazonRoute53FullAccess
- AmazonS3FullAccess
- IAMFullAccess
- AmazonVPCFullAccess
- Events:
- DeleteRule
- ListRules
- ListTargetsByRule
- ListTagsForResource
- PutEvents
- PutRule
- PutTargets
- RemoveTargets
- TagResource
- SQS:
- CreateQueue
- DeleteQueue
- GetQueueAttributes
- ListQueues
- ListQueueTags
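- If you prefer to script the IAM setup rather than use the console, the following is a minimal sketch of creating a dedicated IAM user and attaching the managed policies listed above. The user name kops-cli and the inline policy file name are illustrative assumptions; adapt them to your environment and your organization's IAM practices.
# Hypothetical example: create a dedicated IAM user for kOps and attach the managed policies listed above.
aws iam create-user --user-name kops-cli
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess
do
  aws iam attach-user-policy --user-name kops-cli \
    --policy-arn arn:aws:iam::aws:policy/${policy}
done
# The EventBridge (Events) and SQS permissions above are not covered by managed policies; they would go
# into an inline policy document that you author yourself, e.g. kops-events-sqs.json (hypothetical file):
aws iam put-user-policy --user-name kops-cli \
  --policy-name kops-events-sqs \
  --policy-document file://kops-events-sqs.json
# Finally, create access keys for the user and feed them to "aws configure".
aws iam create-access-key --user-name kops-cli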
- 1.3 —
- Install kOps in your environment. You can also follow the kOps installation guide for other architectures and platforms. This tutorial uses kOps v1.28.4, which is set in the KOPS_VERSION variable below; check the kOps releases page if you want a newer version.
export KOPS_VERSION=v1.28.4
curl -LO https://github.com/kubernetes/kops/releases/download/${KOPS_VERSION}/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
kops version
- 1.4 —
- Install kubectl. You can also follow the kubectl installation guide for other architectures and platforms. Use a kubectl version that matches the Kubernetes version deployed by the selected kOps release as closely as possible.
export KUBECTL_VERSION=v1.29.2
sudo curl --silent --location -o /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl
sudo chmod +x /usr/local/bin/kubectl
kubectl version
- 1.5 —
- In addition to kOps and kubectl, install yq, a portable command-line YAML processor. You can follow the yq installation instructions for your system. On Cloud9 and Linux, we can install yq by downloading the release binary with the following command.
sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq &&\
sudo chmod +x /usr/bin/yq
Step 2: Set up kOps cluster environment and state store
In this step we will configure some of the environment variables used to set up our environment, and create and configure the S3 bucket that kOps will use as its state store.
- 2.1 —
- The name of our cluster will be “spot-kops-cluster”. To reduce dependencies on other services, in this tutorial we will create our cluster using Gossip DNS, so the cluster domain will be k8s.local and the fully qualified name of the cluster will be spot-kops-cluster.k8s.local.
- We will also create an S3 bucket where kOps configuration and the cluster's state will be stored. We will use uuidgen to generate a unique S3 bucket name.
- With the following commands, we set the environment variables that will be used across the rest of the session.
export NAME=spot-kops-cluster.k8s.local
export KOPS_STATE_PREFIX=spot-kops-$(uuidgen)
export KOPS_STATE_STORE=s3://${KOPS_STATE_PREFIX}
- 2.2 —
- Additionally, we will set a few other environment variables that define the region and Availability Zones where our cluster will be deployed. In this tutorial the region is eu-west-1; you can change this to the region where you would prefer to run your cluster.
export AWS_REGION=eu-west-1
export AWS_REGION_AZS=$(aws ec2 describe-availability-zones \
  --region ${AWS_REGION} \
  --query 'AvailabilityZones[0:3].ZoneName' \
  --output text | \
  sed 's/\t/,/g')
- 2.3 —
- Now that we have the name of our cluster and S3 State Store bucket defined, let's create the S3 bucket.
aws s3api create-bucket \
  --bucket ${KOPS_STATE_PREFIX} \
  --region ${AWS_REGION} \
  --create-bucket-configuration LocationConstraint=${AWS_REGION}
- 2.4 —
- Once the bucket has been created, we can apply one of kOps' best practices by enabling S3 Versioning on the bucket. S3 acts as the state store, and with versioning enabled we can revert the cluster to a previous state and configuration if needed.
aws s3api put-bucket-versioning \
  --bucket ${KOPS_STATE_PREFIX} \
  --region ${AWS_REGION} \
  --versioning-configuration Status=Enabled
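- As an illustration of what versioning buys you: once kOps has written the cluster configuration to the bucket (later in this tutorial), you can list the stored versions of the config object and, if needed, restore an earlier one. This is an optional sketch; the object key assumes the cluster name used in this tutorial.
# List all stored versions of the cluster configuration object. The object only exists
# after "kops create cluster" has written state to the bucket.
aws s3api list-object-versions \
  --bucket ${KOPS_STATE_PREFIX} \
  --prefix ${NAME}/config \
  --query 'Versions[].{Key:Key,VersionId:VersionId,LastModified:LastModified}' \
  --output table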
Step 3: Cluster creation and On-Demand node configuration
In this step we create the cluster control plane and a kOps InstanceGroup with OnDemand instances. We will also add some labels to the group, so that we can place pods accordingly later on.
- 3.1 —
- It is now time to create the cluster. We will build a Highly Available (HA) cluster using m5.large instances for the Kubernetes control plane, spread across three Availability Zones. Additionally, we create an InstanceGroup with two t3.large On-Demand worker nodes, which we will use to demonstrate how you can configure your applications to run on Spot or On-Demand Instances, depending on the type of workload.
kops create cluster \
  --name ${NAME} \
  --state ${KOPS_STATE_STORE} \
  --cloud aws \
  --control-plane-size m5.large \
  --control-plane-count 3 \
  --control-plane-zones ${AWS_REGION_AZS} \
  --zones ${AWS_REGION_AZS} \
  --node-size t3.large \
  --node-count 2 \
  --dns private
- 3.2 —
- Great! The output of the command displays all the resources that will be created. We can check that the cluster configuration has been written to the kOps state S3 bucket.
- The following command shows the cluster state and should yield output similar to the following:
aws s3 ls --recursive ${KOPS_STATE_STORE}
2020-06-17 13:36:02 5613 spot-kops-cluster.k8s.local/cluster.spec
2020-06-17 13:36:02 1516 spot-kops-cluster.k8s.local/config
2020-06-17 13:36:02 359 spot-kops-cluster.k8s.local/instancegroup/master-eu-west-1a
2020-06-17 13:36:02 359 spot-kops-cluster.k8s.local/instancegroup/master-eu-west-1b
2020-06-17 13:36:02 359 spot-kops-cluster.k8s.local/instancegroup/master-eu-west-1c
2020-06-17 13:36:02 363 spot-kops-cluster.k8s.local/instancegroup/nodes
2020-06-17 13:36:02 406 spot-kops-cluster.k8s.local/pki/ssh/public/admin/55ecc7ffb2f113a7ac354cc7b7c8adf2
- 3.3 —
- As for the two nodes in the InstanceGroup that we created, we should label those as OnDemand nodes by adding a lifecycle label. kOps created an instance group per AZ for our nodes, so we will apply the changes to each of them. To merge the new configuration attributes to the cluster nodes, we will use yq.
for availability_zone in $(echo ${AWS_REGION_AZS} | sed 's/,/ /g')
do
  NODEGROUP_NAME=nodes-${availability_zone}
  echo "Updating configuration for group ${NODEGROUP_NAME}"
  cat << EOF > ./nodes-extra-labels.yaml
spec:
  nodeLabels:
    kops.k8s.io/lifecycle: OnDemand
EOF
  kops get instancegroups --name ${NAME} ${NODEGROUP_NAME} -o yaml > ./${NODEGROUP_NAME}.yaml
  yq ea -i 'select(fileIndex == 0) *+ select(fileIndex == 1)' ./${NODEGROUP_NAME}.yaml ./nodes-extra-labels.yaml
  aws s3 cp ${NODEGROUP_NAME}.yaml ${KOPS_STATE_STORE}/${NAME}/instancegroup/${NODEGROUP_NAME}
done
- 3.4 —
- We can validate the result of our changes by running the following command, and verifying that the labels have been added to the spec.nodeLabels section.
- The output of this command should be:
- Instancegroup nodes-eu-west-1a contains label kops.k8s.io/lifecycle: OnDemand
- Instancegroup nodes-eu-west-1b contains label kops.k8s.io/lifecycle: OnDemand
- Instancegroup nodes-eu-west-1c contains label kops.k8s.io/lifecycle: OnDemand
for availability_zone in $(echo ${AWS_REGION_AZS} | sed 's/,/ /g')
do
  NODEGROUP_NAME=nodes-${availability_zone}
  kops get ig --name ${NAME} ${NODEGROUP_NAME} -o yaml | grep "lifecycle: OnDemand" > /dev/null
  if [ $? -eq 0 ]
  then
    echo "Instancegroup ${NODEGROUP_NAME} contains label kops.k8s.io/lifecycle: OnDemand"
  else
    echo "Instancegroup ${NODEGROUP_NAME} DOES NOT contain label kops.k8s.io/lifecycle: OnDemand"
  fi
done
- 3.5 —
- Aside from validating that the lifecycle label is set, we encourage you to inspect the configuration of one of the nodegroups. Run the following command to view it.
kops get ig --name ${NAME} nodes-$(echo ${AWS_REGION_AZS}|cut -d, -f 1) -o yaml
- 3.1 —
-
Step 4: Adding Spot workers with kops toolbox instance-selector
Until recently, to adhere to Spot best practices with kOps, users had to manually select a diversified group of Spot instance types and then configure a MixedInstancesPolicy InstanceGroup to apply that diversification within the group. kOps now ships a new tool, kops toolbox instance-selector, as part of the standard kOps distribution; it simplifies the creation of kOps Instance Groups by generating groups that fully adhere to Spot Instances best practices.
- 4.1 —
- In order to tap into multiple Spot capacity pools, you will create two Instance Groups, each containing multiple instance types. Diversifying into more capacity pools increases the chances of achieving the desired scale, and maintaining it if some of the capacity pools get interrupted (when EC2 needs the capacity back). Each Instance Group (EC2 Auto Scaling group) will launch instances using Spot pools that are optimally chosen based on the available Spot capacity.
- The following command creates an Instance Group called spot-group-base-4vcpus-16gb. To create the group, we use kops toolbox instance-selector, which saves us the effort of manually configuring the new group for diversification. In this case, we pass "--base-instance-type m5.xlarge" as our base instance, and the group is made up of pools from the latest generations (gen4 and gen5). You can get more information about the parameters kops toolbox instance-selector accepts by running "kops toolbox instance-selector --help".
kops toolbox instance-selector "spot-group-base-4vcpus-16gb" \ --usage-class spot --cluster-autoscaler \ --base-instance-type "m5.xlarge" --burst-support=false \ --deny-list '^?[1-3].*\..*' --gpus 0 \ --node-count-max 5 --node-count-min 1 \ --name ${NAME}
- 4.2 —
- Now let’s create the second Instance Group. This time, we will create the group “spot-group-base-2vcpus-8gb”, following the same approach as in the previous step.
kops toolbox instance-selector "spot-group-base-2vcpus-8gb" \ --usage-class spot --cluster-autoscaler \ --base-instance-type "m5.large" --burst-support=false \ --deny-list '^?[1-3].*\..*' --gpus 0 \ --node-count-max 5 --node-count-min 1 \ --name ${NAME}
- 4.3 —
- Before we proceed with the final instantiation of the cluster, let’s validate and review the newly created Instance Group's configuration. Run the following command to display the configuration of the “spot-group-base-2vcpus-8gb" Instance Group.
kops get ig spot-group-base-2vcpus-8gb --name $NAME -o yaml
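- If you only want to review the diversification settings, you can filter the output with yq. This assumes the Instance Group spec contains a mixedInstancesPolicy section, which is what kops toolbox instance-selector generates; adjust the path if your kOps version structures it differently.
# Show only the mixed instances policy (instance types and Spot allocation strategy) of the group.
kops get ig spot-group-base-2vcpus-8gb --name ${NAME} -o yaml | yq '.spec.mixedInstancesPolicy'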
- 4.4 —
- Our cluster is now configured with all the resources we have described so far.
- However, we have only configured the cluster up to this point. To actually instantiate it, we must execute the following command:
- Note: If your environment previously had a kubeconfig file, you may need to run `kops export kubecfg --name ${NAME}` to export the cluster's kubeconfig and point kubectl at the new cluster.
kops update cluster --state=${KOPS_STATE_STORE} --name=${NAME} --yes --admin
- 4.5 —
- The command in the previous step starts creating all the cluster resources and ends with output similar to the one below. This may take around five minutes.
- You can run the kops validate cluster command a few times per minute to evaluate the state of the cluster and follow the progress of its creation.
- Once the cluster is in a healthy state, you can run kubectl get nodes --show-labels to check that the cluster and all its associated resources are up and running.
NODE STATUS
NAME                                           ROLE    READY
ip-172-20-113-157.eu-west-1.compute.internal   node    True
ip-172-20-49-151.eu-west-1.compute.internal    master  True
ip-172-20-64-43.eu-west-1.compute.internal     node    True
ip-172-20-64-52.eu-west-1.compute.internal     master  True
ip-172-20-99-157.eu-west-1.compute.internal    master  True

Your cluster spot-kops-cluster.k8s.local is ready

kops validate cluster --wait 10m
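- Because we added the kops.k8s.io/lifecycle label to the On-Demand Instance Groups earlier, a convenient extra check (optional, on top of the --show-labels command above) is to print that label as a column:
# Display each node's lifecycle label; worker nodes from the "nodes-*" groups should show OnDemand,
# while control plane nodes will show an empty value.
kubectl get nodes -L kops.k8s.io/lifecycle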
Step 5: Deploying the aws-node-termination-handler
When an interruption happens, EC2 sends a Spot interruption notice to the instance, giving the application two minutes to gracefully handle the interruption and minimize the impact on its availability or performance. More recently, the Instance Rebalance Recommendation signal was introduced, which notifies you when a Spot Instance is at elevated risk of interruption; it can arrive sooner than the Spot interruption notice, giving you extra time to proactively manage the Spot Instance by rebalancing to new or existing Spot Instances that are not at risk. To gracefully handle either scenario on Kubernetes, we will deploy the aws-node-termination-handler in this section.
- 5.1 —
- Let's install the aws-node-termination-handler in Queue Processor mode with the help of kOps. The handler continuously polls an Amazon SQS queue, which receives events emitted by Amazon EventBridge that can lead to the termination of nodes in our cluster (Spot interruption and rebalance recommendation events, maintenance events, Auto Scaling group lifecycle hooks, and more). This enables the handler to cordon and drain the node, issuing a SIGTERM to the Pods and containers running on it so the application terminates gracefully.
- kOps facilitates the deployment of the aws-node-termination-handler by letting you add its configuration as an addon to the kOps cluster spec. The addon also takes care of deploying the necessary AWS infrastructure for you: the SQS queue, the EventBridge rules, and the required Auto Scaling group lifecycle hooks.
kops get cluster --name ${NAME} -o yaml > ~/environment/cluster_config.yaml
cat << EOF > ~/environment/node_termination_handler_addon.yaml
spec:
  nodeTerminationHandler:
    enabled: true
    enableSQSTerminationDraining: true
    managedASGTag: "aws-node-termination-handler/managed"
EOF
yq ea -i 'select(fileIndex == 0) *+ select(fileIndex == 1)' ~/environment/cluster_config.yaml ~/environment/node_termination_handler_addon.yaml
aws s3 cp ~/environment/cluster_config.yaml ${KOPS_STATE_STORE}/${NAME}/config
kops update cluster --state=${KOPS_STATE_STORE} --name=${NAME} --yes --admin
- 5.2 —
- To check that the aws-node-termination-handler has been deployed successfully, execute the following command.
kubectl get deployment aws-node-termination-handler -n kube-system -o wide
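- If you also want to confirm the supporting AWS infrastructure that the addon created, you can list the SQS queues and EventBridge rules in your region and look for entries related to your cluster. The exact resource names are managed by kOps, so treat this as a quick visual check rather than an authoritative query.
# List SQS queues and EventBridge rules in the region; the entries created for the
# node termination handler should reference the cluster name.
aws sqs list-queues --region ${AWS_REGION}
aws events list-rules --region ${AWS_REGION} --query 'Rules[].Name' --output table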
Step 6: (Optional) Deploy the Kubernetes Cluster Autoscaler
Cluster Autoscaler is a Kubernetes controller that dynamically adjusts the size of the cluster. If there are pods that can't be scheduled in the cluster due to insufficient resources, Cluster Autoscaler issues a scale-out action. When there are nodes in the cluster that have been under-utilized for a period of time, Cluster Autoscaler scales the cluster in. Internally, Cluster Autoscaler evaluates a set of instance groups to scale up the cluster; when it runs on AWS, instance groups are implemented using Auto Scaling groups. To calculate the number of nodes to scale out or in when required, Cluster Autoscaler assumes all the instances in an instance group are homogeneous (i.e. have the same number of vCPUs and memory size).
- 6.1 —
- Cluster Autoscaler requires access to an additional set of IAM permissions. Before we proceed with its installation, we need to add these extra policies, which allow the nodes in the cluster to make the API calls required to manage Auto Scaling groups. Once the extra policies have been added, we update the cluster for them to take effect.
kops get cluster --name ${NAME} -o yaml > ./cluster_config.yaml
cat << EOF > ./extra_policies.yaml
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:DescribeTags",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:TerminateInstanceInAutoScalingGroup",
            "ec2:DescribeLaunchTemplateVersions"
          ],
          "Resource": "*"
        }
      ]
EOF
yq ea -i 'select(fileIndex == 0) *+ select(fileIndex == 1)' ./cluster_config.yaml ./extra_policies.yaml
aws s3 cp ./cluster_config.yaml ${KOPS_STATE_STORE}/${NAME}/config
kops update cluster --state=${KOPS_STATE_STORE} --name=${NAME} --yes
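- As an optional sanity check, you can confirm that the extra policies were merged into the stored cluster spec:
# Print the additionalPolicies section of the cluster spec; it should contain the Auto Scaling permissions above.
kops get cluster --name ${NAME} -o yaml | yq '.spec.additionalPolicies'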
- 6.2 —
- We recommend using Helm to deploy the Cluster Autoscaler. Helm is a package manager for Kubernetes applications; it allows a set of Kubernetes resources to be deployed as a single logical unit called a Chart. The following command installs Helm in your environment. If you are not running on Linux, you can follow the Helm documentation to install it for your operating system. The last line of this command validates that Helm version 3 was installed successfully by showing the installed version.
curl -sSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm repo add stable https://charts.helm.sh/stable
helm version --short
- 6.3 —
- We are now ready to install and set up Cluster Autoscaler. There are a few parameters that we pass in its configuration. One of them is autoDiscovery.clusterName, which matches the tag that kops toolbox instance-selector set on the Instance Groups we created earlier; this lets Cluster Autoscaler auto-discover and take ownership of those groups.
- One parameter in this configuration deserves highlighting: we have set expander=random. The Expanders configuration defines how Cluster Autoscaler should scale up when there are pending pods. The random expander selects at random which Instance Group to scale. Random allocation across Instance Groups is useful, for example, in production clusters, where we want to diversify the allocation of instances across multiple pools. In test or development environments, you may want to change this setting to least-waste; least-waste selects the node group that will have the least idle CPU and memory after the scaling activity takes place, thus right-sizing the Instance Group and optimizing the utilization of the EC2 instances (a sketch of switching the expander follows the installation command below).
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm upgrade --install cluster-autoscaler autoscaler/cluster-autoscaler \
  --set fullnameOverride=cluster-autoscaler \
  --set nodeSelector."kops\.k8s\.io/lifecycle"=OnDemand \
  --set cloudProvider=aws \
  --set extraArgs.scale-down-enabled=true \
  --set extraArgs.expander=random \
  --set extraArgs.balance-similar-node-groups=true \
  --set extraArgs.scale-down-unneeded-time=2m \
  --set extraArgs.scale-down-delay-after-add=2m \
  --set autoDiscovery.clusterName=${NAME} \
  --set rbac.create=true \
  --set awsRegion=${AWS_REGION} \
  --wait
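- If you later decide that least-waste is a better fit, for example in a test cluster, the sketch below changes just that argument on the release installed above while keeping the other values; it assumes the release name cluster-autoscaler used in the previous command.
# Switch the expander strategy on the existing release, reusing all other values unchanged.
helm upgrade cluster-autoscaler autoscaler/cluster-autoscaler \
  --reuse-values \
  --set extraArgs.expander=least-waste \
  --wait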
- 6.4 —
- You can check the logs and the steps taken by Cluster Autoscaler with the following command.
kubectl logs -f deployment/cluster-autoscaler --tail=10
Step 7: Deploy a sample app
Finally, let's deploy a test application and scale our cluster. To scale our application, we will use a Deployment. Deployments define a set of replicas to be deployed; we can increase the number of replicas so that some of them stay pending because there is not enough capacity in the cluster to schedule them.
- 7.1 — Deploy the sample nginx app:
kubectl create deployment nginx-app --image=nginx
- 7.2 — The output of this command should be:
deployment.apps/nginx-app created
- 7.3 — To confirm that the application is deployed and running one replica of the Nginx web server, run the following command:
kubectl get deployment/nginx-app
- 7.4 —
- Scale the deployment (increase the number of replicas).
kubectl scale --replicas=20 deployment/nginx-app
- 7.5 —
- Check that some pods are in Status=Pending. The pending status is used as a signal by Cluster Autoscaler to trigger a scale-out event.
- Expected output:
bash-4.2$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cluster-autoscaler-5b9d46ffcb-pj4jn 1/1 Running 0 13m
kube-ops-view-5d455db74f-j7v4t 1/1 Running 0 6m23s
nginx-app-746b9b4bbc-57v7b 0/1 Pending 0 27s
…
nginx-app-746b9b4bbc-72zsr 0/1 Pending 0 27s
nginx-app-746b9b4bbc-bz9g2 0/1 Pending 0 27s
kubectl get pods
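- As an optional shortcut, kubectl's field selector lets you list only the pods that are still waiting for capacity:
# Show only pods in the Pending phase, i.e. those waiting for Cluster Autoscaler to add capacity.
kubectl get pods --field-selector=status.phase=Pending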
- 7.6 —
- Check in Cluster Autoscaler logs that it has identified the pending pods and is triggering a scale-out activity, increasing the size of the selected Instance Group.
- Expected output:
kubectl logs -f deployment/cluster-autoscaler | grep -I scale_up
I0810 11:34:33.384647 1 scale_up.go:431] Best option to resize: spot-group-base-2vcpus-8gb.spot-kops-cluster.k8s.local
I0810 11:34:33.384656 1 scale_up.go:435] Estimated 4 nodes needed in spot-group-base-2vcpus-8gb.spot-kops-cluster.k8s.local
I0810 11:34:33.384698 1 scale_up.go:539] Final scale-up plan: [{spot-group-base-2vcpus-8gb.spot-kops-cluster.k8s.local 1->5 (max: 5)}]
- 7.7 —
- Confirm in the AWS Management Console that the selected EC2 Auto Scaling Group now contains more Spot Instances.
- After some time (around 1 to 3 minutes), confirm that new Spot Instance nodes have joined the cluster. The output should show more than two workers with the role "node, spot-worker".
kubectl get nodes -L node.kubernetes.io/instance-type
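- To see which Instance Group each node came from, you can additionally print the kops.k8s.io/instancegroup label. That label name is the one kOps normally applies to its nodes; treat it as an assumption and check your node labels if the column comes back empty.
# Show instance type and kOps instance group for every node; the new Spot nodes
# should belong to one of the spot-group-base-* groups.
kubectl get nodes -L node.kubernetes.io/instance-type -L kops.k8s.io/instancegroup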
- 7.8 —
- Once the nodes join the cluster, confirm that all the pending pods have been scheduled.
kubectl get pods
Step 8: Clean up
- 8.1 —
- Remove the test application.
kubectl delete deployment/nginx-app
- 8.2 —
- Remove the Cluster Autoscaler.
helm delete cluster-autoscaler
- 8.3 —
- Remove the kOps cluster; delete cluster state and all associated resources.
kops delete cluster --name ${NAME} --yes
- 8.4 —
- In the console, remove the S3 bucket.
- Read "Deleting a single object" section of the AWS Documentation to find out how to delete a bucket from the console.
Congratulations
You deployed a kOps cluster with Spot Instances, using the right tools to follow best practices and easily handle interruptions. Spot Instances are a great choice to cost-optimize your fault-tolerant workloads running on Kubernetes.
Additional resources
- Read more about the kops toolbox instance-selector
- Read more about the AWS Node Termination Handler
- See a more advanced Kubernetes tutorial using EKS and eksctl in the Using Spot Instances with EKS workshop
- Learn how to run other types of workloads on Spot with self-paced labs on the Spot workshops website