
Continuous Delivery of Amazon EKS Clusters Using AWS CDK and CDK Pipelines

This blog is no longer up to date. We recommend reviewing the Amazon EKS Blueprints for CDK pipeline module, which makes it easier to create infrastructure continuous delivery pipelines via AWS CodePipeline.

Customers are looking for ways to automate the deployment of their Amazon EKS clusters across different versions, environments, accounts, and Regions. The deployment of these clusters involves tasks like creating your clusters with the desired networking and logging configuration, selecting Amazon EKS add-on versions, and, once the cluster is ready, deploying other infrastructure components.

This post shows the use of AWS CDK and CDK Pipelines to deploy Amazon EKS clusters.

Overview

In this post, we will show a sample pipeline using CDK Pipelines, a high-level construct library that makes it easy to set up a continuous deployment pipeline for your CDK applications, powered by AWS CodePipeline.

This pipeline creates two different Amazon EKS clusters, one on version 1.20 and another on 1.21, with a managed node group for each one. It also deploys Controllers and Operators such as AWS Load Balancer Controller, Calico for Network Policies, ExternalDNS, Cluster Autoscaler, Metrics Server, and Container Insights.

It also includes a stage that shifts users of a sample application between clusters using a blue/green strategy. It leverages an existing Amazon Route 53 domain and two subdomains, each managed by its cluster's ExternalDNS, to route users.

Pipeline stages:

  • Source: This stage fetches the source of your CDK app from your forked GitHub repo and triggers the pipeline every time you push new commits to it.
  • Build: This stage compiles your code (if necessary) and performs a CDK synth. The output of that step is a cloud assembly, which is used to perform all actions in the rest of the pipeline.
  • UpdatePipeline: This stage changes the pipeline if necessary. For example, if you update your code to add a new deployment stage to the pipeline or add a new asset to your application, it automatically updates the pipeline to reflect the changes you made.
  • PublishAssets: This stage prepares and publishes all file assets you are using in your app to Amazon Simple Storage Service (Amazon S3) and all Docker images to Amazon Elastic Container Registry (Amazon ECR) in every account and Region from which it’s consumed so that they can be used during the subsequent deployments.
  • Deploy (DeployEKSClusters): This stage deploys your CDK applications in two different Stacks that describe your Amazon EKS clusters, their configuration, and components.
  • Validate: This stage validates that the workloads you deployed on the previous stage are functional.
  • Release (PromoteEnvironment): This stage updates your Route 53 record to point to the cluster specified as the production environment in your code. It requires manual approval to execute. (A sketch of how the Validate and Release behaviors map to CDK Pipelines steps follows this list.)
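
Under the hood, the Validate and Release behaviors map onto CDK Pipelines steps that are attached when a stage is added to the pipeline or to a wave. The following is a minimal, hedged sketch, assuming CDK v2 (aws-cdk-lib) imports and an illustrative smoke-test command; it reuses the wave and stage objects defined later in this post and is not the repository's exact code:

import * as pipelines from "aws-cdk-lib/pipelines";

// Hedged sketch: run a smoke test after the blue cluster stage deploys.
// eksClusterWave and eksClusterStageA are defined later in this post;
// the curl target is illustrative and should match your own domain.
eksClusterWave.addStage(eksClusterStageA, {
    post: [
        new pipelines.ShellStep("Validate", {
            commands: ["curl -Ssf http://echoserver.blue.my.domain"],
        }),
    ],
});

The Release stage's manual approval can be modeled the same way with a ManualApprovalStep, as sketched later in the promotion section.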

At a high level, we will use the following steps for deploying the infrastructure above:

  1. Fork the sample repository.
  2. Create environment-specific parameters and secrets.
  3. Create subdomains for each cluster (e.g., blue.my.domain and green.my.domain) in Amazon Route 53.
  4. Change your fork and push changes.
  5. Deploy your CDK stack(s).

Prerequisites:

Getting started

  • To start, fork our sample repository (https://github.com/aws-samples/aws-cdk-pipelines-eks-cluster) and clone it.
    $ git clone https://github.com/YOUR-USERNAME/aws-cdk-pipelines-eks-cluster
  • Create a GitHub personal access token with the repo and admin:repo_hook scopes (see GitHub's documentation for step-by-step instructions).
  • Store the token created in the previous step to AWS Secrets Manager using:
    $ aws secretsmanager create-secret --name github-oauth-token --description "Secret for GitHub" --secret-string TOKEN-GENERATED-PREVIOUS-STEP
  • Store hosted zone ID and domain name to AWS Systems Manager Parameter Store using:
    $ aws ssm put-parameter --name '/eks-cdk-pipelines/hostZoneId' --type String --value YOUR-HOSTED-ZONE-ID
    $ aws ssm put-parameter --name '/eks-cdk-pipelines/zoneName' --type String --value YOUR-ZONE-NAME

    Navigate to Route 53 in the AWS Management Console to access your hosted zones, and use the domain name as zoneName and the hosted zone ID as hostZoneId.

  • You can use parameters.sh inside your fork to provide the configuration above interactively:
    $ chmod +x parameters.sh; ./parameters.sh
  • Create a subdomain for each cluster (e.g. blue.my.domain and green.my.domain) in your DNS:

We will create a subdomain per cluster, so each cluster application has its own domain name and we can swap traffic using the pipeline. For this example, we will create blue and green subdomains for the domain name we used in the previous step.

For Amazon Route 53, you can use the following steps (a CDK-based alternative is sketched after the list):

  1. First, you create a hosted zone that has the same name as the subdomain that you want to route traffic for, such as blue.example.org (replace example.org with your own domain).
  2. You get the name servers that Route 53 assigned to the new hosted zone when you created it.
  3. You create a new NS record in the hosted zone for your parent domain (example.org), and you specify the four name servers you got in step 2.
  4. Repeat steps 1–3 for green.example.org.
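
If you prefer to keep this delegation in code as well, the same steps can be modeled with CDK's Route 53 constructs. This is an optional, hedged sketch, assuming CDK v2 (aws-cdk-lib) and placeholder zone values; it is not part of the sample repository, which expects you to create the subdomains yourself:

import * as cdk from "aws-cdk-lib";
import * as route53 from "aws-cdk-lib/aws-route53";

// Inside a Stack's constructor: look up the parent zone (example.org).
const parentZone = route53.HostedZone.fromHostedZoneAttributes(this, "ParentZone", {
    hostedZoneId: "YOUR-HOSTED-ZONE-ID",
    zoneName: "example.org",
});

// Create the subdomain's hosted zone (repeat for green.example.org).
const blueZone = new route53.PublicHostedZone(this, "BlueZone", {
    zoneName: "blue.example.org",
});

// Delegate the subdomain: add an NS record in the parent zone pointing at
// the name servers Route 53 assigned to the new hosted zone.
new route53.NsRecord(this, "BlueDelegation", {
    zone: parentZone,
    recordName: "blue.example.org",
    values: blueZone.hostedZoneNameServers!,
    ttl: cdk.Duration.minutes(5),
});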

In the end, the hosted zone for your parent domain should contain an NS record for each subdomain (blue and green).

Change your fork

There is only one mandatory change to your code:

  • Update your pipeline definition in lib/eks-pipeline-stack.ts to point to your own forked repository, replacing aws-samples with your GitHub username.
const pipeline = new CodePipeline(this, "Pipeline", {
    synth: new ShellStep("Synth", {
        input: CodePipelineSource.gitHub(
            "aws-samples/aws-cdk-pipelines-eks-cluster",
            "main",
            {
                authentication:
                    cdk.SecretValue.secretsManager("github-oauth-token"),
            }
        ),
        commands: ["npm ci", "npm run build", "npx cdk synth"],
    }),
    pipelineName: "EKSClusterBlueGreen",
});

Note: Make sure you commit and push your code after changing it; otherwise, the pipeline will update itself to match the latest pushed commit during the SelfMutate (UpdatePipeline) stage.

The code above creates the Source, Build, UpdatePipeline, and PublishAssets stages; you only have to define the stages that follow. In our sample code, there are two Stages, one for each EKS cluster (blue and green), and they run in parallel using a Wave:

const clusterANameSuffix = "blue";
const clusterBNameSuffix = "green";
const eksClusterStageA = new EksClusterStage(this, "EKSClusterA", {
    clusterVersion: eks.KubernetesVersion.V1_20,
    nameSuffix: clusterANameSuffix,
});
const eksClusterStageB = new EksClusterStage(this, "EKSClusterB", {
    clusterVersion: eks.KubernetesVersion.V1_21,
    nameSuffix: clusterBNameSuffix,
});
const eksClusterWave = pipeline.addWave("DeployEKSClusters");
// Add both Stages to the Wave so their stacks deploy in parallel.
eksClusterWave.addStage(eksClusterStageA);
eksClusterWave.addStage(eksClusterStageB);
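
For reference, EksClusterStage itself is a thin wrapper that instantiates the cluster Stack. A hedged sketch of its shape, assuming CDK v2 (aws-cdk-lib) and a stack class named EksClusterStack (the actual code in the repository may differ), could look like this:

import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import * as eks from "aws-cdk-lib/aws-eks";

export interface EksClusterStageProps extends cdk.StageProps {
    clusterVersion: eks.KubernetesVersion;
    nameSuffix: string;
}

export class EksClusterStage extends cdk.Stage {
    constructor(scope: Construct, id: string, props: EksClusterStageProps) {
        super(scope, id, props);

        // Each Stage deploys one cluster Stack with the id "EKSCluster",
        // which is why the deployed stacks are named EKSClusterA-EKSCluster
        // and EKSClusterB-EKSCluster. EksClusterStack is the stack whose
        // contents are shown in the next snippet (class name assumed).
        new EksClusterStack(this, "EKSCluster", {
            clusterVersion: props.clusterVersion,
            nameSuffix: props.nameSuffix,
        });
    }
}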

We are reusing EksClusterStage with two parameters: clusterVersion and nameSuffix. The nameSuffix is also used as the subdomain that ExternalDNS manages from within the Kubernetes cluster. This Stage deploys a Stack, which contains the definition of our cluster and its major components:

const cluster = new eks.Cluster(this, `acme-${props.nameSuffix}`, {
    clusterName: `acme-${props.nameSuffix}`,
    version: props.clusterVersion,
    defaultCapacity: 0,
    vpc,
    vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE }],
});

new EksManagedNodeGroup(this, "EksManagedNodeGroup", {
    cluster: cluster,
    nameSuffix: props.nameSuffix,
});

new AWSLoadBalancerController(this, "AWSLoadBalancerController", {
    cluster: cluster,
});

new ExternalDNS(this, "ExternalDNS", {
    cluster: cluster,
    hostZoneId: hostZoneId,
    domainFilters: [`${props.nameSuffix}.${zoneName}`],
});

new ClusterAutoscaler(this, "ClusterAutoscaler", {
    cluster: cluster,
});

new ContainerInsights(this, "ContainerInsights", {
    cluster: cluster,
});

new Calico(this, "Calico", {
    cluster: cluster,
});

new Prometheus(this, "Prometheus", {
    cluster: cluster,
});

new Echoserver(this, "EchoServer", {
    cluster: cluster,
    nameSuffix: props.nameSuffix,
    domainName: zoneName,
}); 
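
The hostZoneId and zoneName values referenced above are not declared in this excerpt; they are the values you stored in Parameter Store during setup. A hedged sketch of how the stack can resolve them, assuming CDK v2 (aws-cdk-lib); the repository's lookup may differ:

import * as ssm from "aws-cdk-lib/aws-ssm";

// Inside the stack, resolve the parameters created earlier with
// `aws ssm put-parameter`.
const hostZoneId = ssm.StringParameter.valueForStringParameter(
    this,
    "/eks-cdk-pipelines/hostZoneId"
);
const zoneName = ssm.StringParameter.valueForStringParameter(
    this,
    "/eks-cdk-pipelines/zoneName"
);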

You can explore each construct's code in lib/infrastructure and lib/app; constructs are the basic building blocks of AWS CDK apps. A construct represents a “cloud component” and encapsulates everything AWS CloudFormation needs to create that component. Each construct used above (managed node group, AWS Load Balancer Controller, ExternalDNS, Cluster Autoscaler, Container Insights, Calico, Prometheus, and the echoserver sample app) follows the same pattern.
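
As an illustration of that pattern (not a copy of the repository's code), a minimal construct that installs a cluster add-on through a Helm chart could be sketched like this, assuming CDK v2 (aws-cdk-lib) and the upstream metrics-server chart:

import { Construct } from "constructs";
import * as eks from "aws-cdk-lib/aws-eks";

export interface MetricsServerProps {
    cluster: eks.Cluster;
}

// Illustrative construct: encapsulates everything needed to install
// Metrics Server, so the cluster stack only has to instantiate it once.
export class MetricsServer extends Construct {
    constructor(scope: Construct, id: string, props: MetricsServerProps) {
        super(scope, id);

        props.cluster.addHelmChart("MetricsServer", {
            chart: "metrics-server",
            repository: "https://kubernetes-sigs.github.io/metrics-server/",
            namespace: "kube-system",
            release: "metrics-server",
        });
    }
}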

Once you’re done changing your code, you should bootstrap your environment:

npm install
cdk bootstrap

After it is done, commit and push the changes to your repository using:

git add .
git commit -m "Update cluster configuration."
git push

First deploy

The first time, you will have to deploy your pipeline manually, using cdk deploy. After that, each change you push to your repository will trigger your pipeline, which will update itself and execute.

Your first execution will take a while since some resources, like EKS cluster(s) and managed node groups, may take a few minutes to be ready. You can keep track of the progress by accessing the pipeline through AWS CodePipeline.

If you check your AWS CloudFormation stacks, you will find a stack for the pipeline itself (EksPipelineStack) and one stack (with nested stacks) for each EKS cluster.

In the output of your EKSCluster stack(s), you will find the command to set up your kubeconfig and access your cluster; copy it and execute it.

Then you can execute regular kubectl commands.

You can access the application directly from your browser using echoserver.subdomain.my.domain or using curl:

curl -H "Host: app.my.domain" elb.<AWS_REGION>.elb.amazonaws.com

The final infrastructure provisioned by this pipeline comprises the pipeline itself, the blue and green EKS clusters with their managed node groups and add-ons, and the Route 53 records used to route users.

The pipeline definition uses the prodEnv variable to define the target cluster, in other words, the cluster that echoserver's users will reach. Once you set it and push to your fork, you have to manually approve the change in CodePipeline.
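
A hedged sketch of how this promotion can be wired in the pipeline definition follows; the prodEnv variable and the UpdateDNS/AppDns names come from this post and the stack names in the cleanup section, while AppDnsStage and its envName prop are assumptions:

import * as pipelines from "aws-cdk-lib/pipelines";

// Point production traffic at the green cluster; switch to
// clusterANameSuffix ("blue") and push to shift users back.
const prodEnv = clusterBNameSuffix;

// The ManualApprovalStep gates the Route 53 record update for app.my.domain.
// AppDnsStage is assumed to deploy the AppDns stack that owns that record.
pipeline.addStage(new AppDnsStage(this, "UpdateDNS", { envName: prodEnv }), {
    pre: [new pipelines.ManualApprovalStep("Promote-" + prodEnv + "-Environment")],
});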

After the pipeline finishes updating the DNS record and the change propagates, you can check where the app.my.domain record points.

Cleaning up

To clean up after this tutorial, log in to the AWS Management Console for each account you used, go to the AWS CloudFormation console in the Region(s) where you deployed, and delete the following stacks: EksPipelineStack, EKSClusterA-EKSCluster, EKSClusterB-EKSCluster, UpdateDNS-AppDns, and CDKToolkit.

The pipeline stack (EksPipelineStack) and the bootstrapping stack (CDKToolkit) each contain an AWS Key Management Service (AWS KMS) key for which you will be charged $1/month if you do not delete these stacks.

To remove this sample using the command line you can use:

$ cdk destroy --force
$ aws cloudformation delete-stack --stack-name EksPipelineStack
$ aws cloudformation delete-stack --stack-name EKSClusterA-EKSCluster
$ aws cloudformation delete-stack --stack-name EKSClusterB-EKSCluster
$ aws cloudformation delete-stack --stack-name UpdateDNS-AppDns
$ aws cloudformation delete-stack --stack-name CDKToolkit

Conclusion

In this post, we showed how to use AWS CDK and CDK Pipelines to deploy and manage the entire lifecycle of your Amazon EKS cluster(s). This Infrastructure as Code approach delivers changes through an automated and standardized pipeline that is also defined in AWS CDK, allowing you to deploy and upgrade clusters consistently across different versions, environments, accounts, and Regions while keeping track of your cluster and pipeline changes through Git. The sample code provides several examples of components commonly installed in EKS clusters, such as AWS Load Balancer Controller, Calico for Network Policies, ExternalDNS, Cluster Autoscaler, Metrics Server, and Container Insights.

This example project can be used as a starting point to build your own solution to automate the deployment of your own Amazon EKS cluster(s). Visit the AWS CDK Pipelines documentation and Amazon EKS Construct Library documentation for more information on using these libraries.