AWS Open Source Blog
Using the K3s Kubernetes distribution in an Amazon EKS CI/CD pipeline
A modern microservices application stack, a CI/CD pipeline, Kubernetes as the orchestrator, and hundreds or thousands of deployments per day: this all sounds good, until you realize that those deployments keep breaking your Kubernetes development or staging environments, and that changes made by one development team affect another team's Kubernetes environment. In this post, we will walk through why these external changes affect our Kubernetes environments and how to prevent it.
This problem happens because, although we typically run code checks and image scans in our pipeline before pushing images to the repository and deploying our resources, no proper unit or integration tests run inside the pipeline itself, because no Kubernetes cluster is available there. Effectively, we are testing our changes only after they are deployed.
One solution is to provision a clean Kubernetes cluster during each build, test changes, and then tear it down. However, doing this with a full cluster is time consuming and not cost effective. Instead, we can solve this problem using K3s, an open source, lightweight Kubernetes distribution from Rancher, together with Amazon Elastic Kubernetes Service (Amazon EKS) and AWS CodePipeline.
What is K3s?
K3s is an open source, lightweight, and fully compliant Kubernetes distribution that is less than 100 MB in size and is designed for IoT, edge, and CI/CD environments. Startup takes only about 40 seconds.
What is even more interesting, especially for the CI/CD use case, is that we can run K3s inside a Docker container. Rancher provides another tool called k3d, a lightweight wrapper that runs K3s in a Docker container. In this case, the package is about 10 MB and startup is even faster, at around 15-20 seconds.
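To give a feel for the k3d v1.x workflow used later in this post, here is a quick local illustration; the cluster name demo is arbitrary, and the commands assume k3d v1.x and a running Docker daemon.

# Create a single-node K3s cluster inside a Docker container
k3d create --name demo
sleep 20
# Point kubectl at the new cluster and verify that it is up
export KUBECONFIG="$(k3d get-kubeconfig --name='demo')"
kubectl get nodes
# Tear the cluster down when finished
k3d delete --name demo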
Let’s get started and learn how to implement this solution.
Prerequisites
To complete this tutorial, we need:
- An AWS account
- A GitHub account
- To install and configure the AWS Command Line Interface (AWS CLI), kubectl, and eksctl tools. Follow the instructions provided in the “Getting started with eksctl” user guide.
Provision Amazon EKS cluster
There are many ways to provision an Amazon EKS cluster, including the AWS Management Console and the AWS CLI. We recommend using eksctl, but use whichever method you prefer, and modify the node type and region to your preference. Cluster provisioning typically takes around 15 minutes.
eksctl create cluster \
--name k3s-lab \
--version 1.16 \
--nodegroup-name k3s-lab-workers \
--node-type t2.medium \
--nodes 2 \
--alb-ingress-access \
--region us-west-2
For the purpose of this exercise, we use the t2.medium instance family. Remember to choose an appropriate instance type if you are spinning up an Amazon EKS cluster for a production environment.
After the cluster is provisioned, we verify that it is up and that kubectl is properly configured, using the command:
kubectl get nodes
Our output should look like this:
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-12-121.ec2.internal   Ready    <none>   82s   v1.16.8-eks-e16311
ip-192-168-38-246.ec2.internal   Ready    <none>   80s   v1.16.8-eks-e16311
Set up AWS CodePipeline
We set up CodePipeline by doing the following:
1. Set the ACCOUNT_ID variable:
ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')
2. In CodePipeline, we use AWS CodeBuild to deploy a sample Kubernetes service. This requires an AWS Identity and Access Management (IAM) role capable of interacting with the Amazon EKS cluster. In this step, we create that IAM role and attach an inline policy to it for use in the CodeBuild stage. The policy allows AWS CodeBuild to interact with the Amazon EKS cluster via kubectl. Execute the following commands to create the role and attach the policy.
TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::${ACCOUNT_ID}:root\" }, \"Action\": \"sts:AssumeRole\" } ] }"
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "eks:Describe*", "Resource": "*" } ] }' > /tmp/iam-role-policy
aws iam create-role --role-name EksWorkshopCodeBuildKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'
aws iam put-role-policy --role-name EksWorkshopCodeBuildKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy
3. Now that the IAM role is created, we will add it to the aws-auth ConfigMap for the Amazon EKS cluster. Once added, this role allows the CodeBuild stage to interact with the Amazon EKS cluster via kubectl.
ROLE=" - rolearn: arn:aws:iam::$ACCOUNT_ID:role/EksWorkshopCodeBuildKubectlRole\n username: build\n groups:\n - system:masters"
kubectl get -n kube-system configmap/aws-auth -o yaml | awk "/mapRoles: \|/{print;print \"$ROLE\";next}1" > /tmp/aws-auth-patch.yml
kubectl patch configmap/aws-auth -n kube-system --patch "$(cat /tmp/aws-auth-patch.yml)"
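After the patch is applied, the mapRoles section of the aws-auth ConfigMap should contain an entry similar to the following (111122223333 is a placeholder account ID):

mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/EksWorkshopCodeBuildKubectlRole
      username: build
      groups:
        - system:masters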
4. Next we will fork the sample Kubernetes service so that we can modify the repository and trigger builds. Log in to GitHub and fork the sample service to the account of choice. Refer to the sample Kubernetes service for more information. After the repository is forked, clone it to the local environment so we can work with files using our favorite IDE or text editor.
git clone https://github.com/YOUR-USERNAME/eks-workshop-sample-api-service-go.git
5. In order for CodePipeline to receive callbacks from GitHub, we must generate a personal access token. (For more information, see the CodePipeline documentation.) Once created, an access token is stored in a secure enclave and reused. This step is only required during the first run, or when there is a need to generate new keys.
6. Next we will create the CodePipeline using AWS CloudFormation. Navigate to the AWS Management Console to create the CloudFormation stack. After the console is open, enter the GitHub user name, personal access token (created in the previous step), and Amazon EKS cluster name (k3s-lab). Then, select the acknowledge box and select Create stack. This step takes about 10 minutes to complete.
After the CodePipeline creation, we can check the status in the CodePipeline console and verify that the deployment was applied to our cluster using the command:
kubectl describe deployment hello-k8s
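Optionally, if the sample manifest exposes the deployment through a LoadBalancer service with the same hello-k8s name, we can also call the service endpoint directly (the load balancer may take a few minutes to become reachable):

# Look up the external endpoint of the hello-k8s service and call it
ELB=$(kubectl get service hello-k8s -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -m 5 http://$ELB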
Add k3d to AWS CodePipeline
Now let’s modify the buildspec.yml file in our forked repository and add unit testing using k3d.
We will walk through the required modifications, which can be done manually; alternatively, the full buildspec.yml file is provided at the end of this section.
1. Install k3d in the CodeBuild environment.
- curl -sS https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v1.7.0 bash
2. Create the k3s cluster during the build phase and wait 20 seconds for the cluster to spin up.
- k3d create
- sleep 20
3. Configure kubectl for the k3s cluster.
- export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
4. By default, Amazon EKS cluster nodes are configured by eksctl to have access to pull images from the Amazon Elastic Container Registry (Amazon ECR) image repository. Non-Amazon EKS clusters, however, require additional configuration for this; find the instructions in the documentation. Because there are a few steps, we've moved them into a separate script (create_secret.sh) and call it inside the buildspec.yml file.
- ./create_secret.sh
Add the file create_secret.sh to the working folder of the forked repository with the following content:
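A minimal sketch of such a script is shown below. It assumes the AWS CLI and kubectl are already available in the CodeBuild environment, that the default service account is used for image pulls, and that the secret name regcred is simply a name chosen here for illustration.

#!/bin/bash
# create_secret.sh (sketch): make images in this account's Amazon ECR registry
# pullable from the k3d cluster by creating a docker-registry secret and
# attaching it to the default service account.

ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')
REGION=${AWS_DEFAULT_REGION:-us-west-2}
ECR_REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Amazon ECR issues a temporary token (valid for 12 hours); the username is always AWS
ECR_TOKEN=$(aws ecr get-authorization-token --output text \
  --query 'authorizationData[].authorizationToken' | base64 -d | cut -d: -f2)

# Create the image pull secret (regcred is an illustrative name) in the k3d cluster
kubectl create secret docker-registry regcred \
  --docker-server="https://${ECR_REGISTRY}" \
  --docker-username=AWS \
  --docker-password="${ECR_TOKEN}"

# Allow the default service account to use the secret for image pulls
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'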
5. Deploy our pipeline application resources to the k3d cluster and wait 20 seconds for resources to come up.
- kubectl apply -f hello-k8s.yml
- sleep 20
Configure testing
During this step, we run our unit and/or integration tests. For this example, we've provided a simple script that hits the endpoint of our service. We could also deploy any other microservices from our stack that are needed for integration testing.
- ./unit_test.sh
Add the file unit_test.sh to the working folder of the forked repository with the following content:
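A minimal sketch of such a test script is shown below; it assumes the Kubernetes service created by hello-k8s.yml is named hello-k8s and serves HTTP on port 80, and it simply fails the build step if the endpoint does not return HTTP 200.

#!/bin/bash
# unit_test.sh (sketch): hit the sample service inside the k3d cluster and
# fail the build step if it does not answer with HTTP 200.

# Forward a local port to the service running in the k3d cluster
kubectl port-forward service/hello-k8s 8080:80 &
PF_PID=$!
sleep 5

# Call the endpoint and capture the HTTP status code
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/)
kill $PF_PID

echo "Service returned HTTP ${STATUS}"
if [ "${STATUS}" != "200" ]; then
  exit 1
fi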
Check whether testing was successful
The last step is to check whether testing was successful and to deploy our application to the Amazon EKS cluster. If testing failed, we fail our CodePipeline and do not deploy to Amazon EKS. CodeBuild has a built-in variable, CODEBUILD_BUILD_SUCCEEDING, that indicates the status of the build phase. Let's use it in our code.
- bash -c "if [ /"$CODEBUILD_BUILD_SUCCEEDING/" == /"0/" ]; then exit 1; fi"
- echo Build stage successfully completed on `date`
buildspec.yml
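Putting the pieces together, a complete buildspec.yml could look roughly like the following sketch. The image tagging, ECR login, and Amazon EKS deployment steps, along with the REPOSITORY_URI, EKS_KUBECTL_ROLE_ARN, and EKS_CLUSTER_NAME environment variables and the CONTAINER_IMAGE placeholder in hello-k8s.yml, are assumptions based on the sample repository and the CodeBuild project created by the CloudFormation stack; adjust them to match your fork.

version: 0.2
phases:
  install:
    commands:
      # The sample repository's original install commands (kubectl,
      # aws-iam-authenticator) stay here, followed by the k3d install from step 1
      - curl -sS https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v1.7.0 bash
  pre_build:
    commands:
      # Tag the image with the commit hash and point the manifest at it
      - TAG="$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | head -c 8)"
      - sed -i 's@CONTAINER_IMAGE@'"$REPOSITORY_URI:$TAG"'@' hello-k8s.yml
      # Log in to Amazon ECR (classic helper available in AWS CLI v1)
      - $(aws ecr get-login --no-include-email)
  build:
    commands:
      # Build and push the image so the k3d cluster can pull it from Amazon ECR
      - docker build --tag $REPOSITORY_URI:$TAG .
      - docker push $REPOSITORY_URI:$TAG
      # Steps 2-5: disposable k3s cluster, ECR pull secret, deploy, and test
      - k3d create
      - sleep 20
      - export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
      - ./create_secret.sh
      - kubectl apply -f hello-k8s.yml
      - sleep 20
      - ./unit_test.sh
  post_build:
    commands:
      # Fail the pipeline if anything above failed; otherwise deploy to Amazon EKS
      - bash -c "if [ \"$CODEBUILD_BUILD_SUCCEEDING\" == \"0\" ]; then exit 1; fi"
      - echo Build stage successfully completed on `date`
      - CREDENTIALS=$(aws sts assume-role --role-arn $EKS_KUBECTL_ROLE_ARN --role-session-name codebuild-kubectl --duration-seconds 900)
      - export AWS_ACCESS_KEY_ID="$(echo $CREDENTIALS | jq -r '.Credentials.AccessKeyId')"
      - export AWS_SECRET_ACCESS_KEY="$(echo $CREDENTIALS | jq -r '.Credentials.SecretAccessKey')"
      - export AWS_SESSION_TOKEN="$(echo $CREDENTIALS | jq -r '.Credentials.SessionToken')"
      - export KUBECONFIG=$HOME/.kube/config
      - aws eks update-kubeconfig --name $EKS_CLUSTER_NAME
      - kubectl apply -f hello-k8s.yml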
After all the changes are complete and the new files are in our local forked repository, we need to commit the changes so CodePipeline can pick them up and apply them to our pipeline.
git add .
git commit -m "k3d modified pipeline"
git push
After we push the changes, we can go to the CodePipeline console and check the pipeline status and logs.
Navigate to Details in the Build section. Here, under Build Logs, we can inspect what happened during our pipeline run.
Cleaning up
To avoid incurring future charges, we need to perform a few clean-up steps.
1. Delete the CloudFormation stack created for CodePipeline. Open the CloudFormation management console, select the box next to the eksws-codepipeline stack, select Delete, and then confirm deletion in the pop-up window.
2. Delete the Amazon ECR repository. Open the Amazon ECR management console and select the box next to the repository whose name starts with eksws. Select Delete, and then confirm deletion.
3. Empty and delete the Amazon S3 bucket used by CodeBuild for build artifacts. The bucket name begins with eksws-codepipeline. Select the bucket, then select Empty. Select Delete to finish deleting the bucket.
4. Finally, delete the Amazon EKS cluster using the command:
eksctl delete cluster --name=k3s-lab
Conclusion
In this blog post, we explored how to add unit and integration testing to an Amazon EKS CI/CD pipeline, using the open source, lightweight K3s Kubernetes distribution. If you are using different CI/CD tooling for your Amazon EKS deployments, you can easily incorporate K3s there as well.
Get involved
You can join the open source K3s community, where you can ask questions, collaborate, and contribute to the project.