Integration & Automation

Simplify integration of your Amazon EKS cluster with Amazon EKS Blueprints for CDK

With the recent deprecation of the Amazon Elastic Kubernetes Service (Amazon EKS) Quick Start based on AWS CloudFormation, customers and partners need to achieve similar results with alternative solutions. Specifically, they need an approach that simplifies integration of common tooling and provisioning of complete, opinionated EKS clusters that meet specific application requirements.

The extensible nature of Kubernetes (and EKS by extension) allows customers to use a wide range of popular commercial and open-source tools, commonly referred to as add-ons. With so many tooling and design choices available, building a tailored EKS cluster that meets your application’s specific needs can take a significant amount of time. It involves integrating a wide range of open-source tools and AWS services, often requiring deep expertise in AWS and Kubernetes.

In this article, we introduce the Amazon EKS Blueprints for CDK framework as the recommended replacement for the deprecated Amazon EKS Quick Start. We cover common usage scenarios, provide examples and sample code, and present available resources for full implementations. We use the AWS Cloud Development Kit (AWS CDK) flavor of Amazon EKS Blueprints, but you can also use Amazon EKS Blueprints for Terraform based on HashiCorp Terraform to achieve similar results, as described in the EKS Blueprints launch blog.

About this blog post
Time to read: ~10 min
Learning level: Expert (400)
AWS services: AWS Cloud Development Kit (AWS CDK), AWS CloudFormation, Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate

Solution overview

Amazon EKS Blueprints for AWS CDK is a collection of Infrastructure as Code (IaC) modules that are available in public repositories on GitHub. The collection can help you configure and deploy consistent, ready-to-use EKS clusters across accounts and Regions.

The main repository contains the framework itself, packaged as an npm (Node Package Manager) module for external consumption. In addition, the patterns repository provides a collection of patterns that serve as usage examples, along with complete solutions that you can use out of the box. A pattern implementation is a codified reference architecture that conveys architectural and educational value.

You can use Amazon EKS Blueprints to bootstrap EKS clusters with Amazon EKS add-ons. You can also use common open-source add-ons, such as Prometheus, Karpenter, NGINX, AWS Load Balancer Controller, Fluent Bit, KEDA, and Argo CD. The framework helps implement security controls that are required to operate workloads in a shared environment.

Many customers use GitOps to decouple application deployment on Kubernetes from the IaC components that define the infrastructure. To facilitate these patterns, you can use our example workloads repository, which can be bootstrapped with the Argo CD add-on, as the sketch below shows.
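
The following is a minimal sketch of that bootstrapping. The repository URL points at the example workloads repository; the path is an illustrative placeholder that depends on your repository layout:

import * as blueprints from '@aws-quickstart/eks-blueprints';

// Argo CD add-on pointed at a workloads repository. The path below is an
// assumed location for environment manifests; adjust it to your layout.
const argoCdAddOn = new blueprints.ArgoCDAddOn({
    bootstrapRepo: {
        repoUrl: 'https://github.com/aws-samples/eks-blueprints-workloads.git',
        path: 'envs/dev' // hypothetical path
    }
});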

You define Amazon EKS Blueprints for CDK using TypeScript and Node.js tooling. The programming language enabled us to create domain-oriented APIs for programmatic configuration, while Node.js provides a stable, well-maintained runtime.

To define a blueprint, use the builder design pattern as a starting point. This pattern simplifies complex object construction, such as the cluster blueprint or pipeline.

The following is an example of a simple starter blueprint:

import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

blueprints.EksBlueprint.builder()
    .version("auto")
    .addOns(
        new blueprints.AwsLoadBalancerControllerAddOn(),
        new blueprints.VpcCniAddOn(),
        new blueprints.MetricsServerAddOn(),
        new blueprints.ClusterAutoScalerAddOn(),
    )
    .build(app, "my-first-blueprint");

This blueprint example performs the following tasks:

  • Creates a new Amazon Virtual Private Cloud (Amazon VPC) with three public and three private subnets
  • Creates an Amazon EKS cluster in the Amazon VPC
  • Creates a managed node group in the private subnets
  • Adds four add-ons so that the blueprint is ready to accept customer applications

To see a complete version of the starter blueprint, see the AWS samples repository.

Use cases

Let’s review a few common use cases that customers and partners have implemented with the deprecated Amazon EKS Quick Start to see how they compare to Amazon EKS Blueprints for CDK. First, we introduce you to an example blueprint (see Figure 1) to highlight a few high-level capabilities of the framework and concepts described later in this article:

Figure 1: Example blueprint

The Clusters (bottom) layer represents the available AWS compute options and cluster configurations. The Add-ons (middle) layer shows a mix of add-ons, including open-source, AWS-managed, and commercial tools. The Teams (top) layer represents application teams that can be onboarded onto the cluster to run applications within the infrastructure and guardrails supplied by the lower layers.

Add-ons

One of the most popular features of Amazon EKS Blueprints for CDK is the portfolio of the supported add-ons. These add-ons represent AWS components, such as the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver, popular open-source solutions like Metrics Server or External DNS, and partner-created add-ons.

You can choose add-ons and create a blueprint from the framework blocks. You can also extend the list of supported add-ons through the framework's extensibility options. Several add-ons, such as Calico Operator, Grafana, and Prometheus (based on Amazon Managed Service for Prometheus), were also available in the original Amazon EKS Quick Start.

Cluster configuration

By default, the framework assumes a simple cluster configuration with a new Amazon VPC and a managed node group with a minimum of one and a maximum of three nodes of m5.large instances.

You can override the default parameters by setting the following values in one of the CDK context sources (such as cdk.json), as the sketch after this list shows:

eks.default.instance-type
eks.default.min-size
eks.default.max-size
eks.default.desired-size
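
As a minimal sketch, the following supplies these context values programmatically when instantiating the CDK app; placing the same keys in the context section of cdk.json is equivalent. The values shown are illustrative:

import * as cdk from 'aws-cdk-lib';

// Context values can also live in cdk.json; passing them here is equivalent.
const app = new cdk.App({
    context: {
        "eks.default.instance-type": "m5.xlarge", // illustrative values
        "eks.default.min-size": 2,
        "eks.default.max-size": 6,
        "eks.default.desired-size": 3
    }
});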

This configuration is useful for demonstration and prototyping. For more complex use cases, you can choose a more explicit cluster configuration, controlled by the implementations of the ClusterProvider interface. Out of the box, the framework provides implementations for managed node groups, autoscaling (self-managed) node groups, AWS Fargate, and a generic cluster provider that can combine all of the compute options.

Managed node groups

You can configure a cluster that contains a single managed node group for your workloads with the MngClusterProvider class. With this cluster provider, you can apply version upgrades to the EKS control plane and specify the Amazon Machine Image (AMI) for the worker nodes. For more information, see Managed Node Group Cluster Provider.

Note: Upgrading the control plane (specifically, the version attribute of the cluster provider) impacts only the control plane version and not the worker nodes in the node group. For upgrades, provide both the version of the control plane in the version attribute and the AMI release version for the worker nodes in the amiReleaseVersion attribute.

The following example creates a new EKS cluster with a minimum of 1 and a maximum of 10 worker nodes and a desired size of 4. To control cluster upgrades, the example explicitly specifies the versions of the control plane and the AMI release.

const mngClusterProvider = new blueprints.MngClusterProvider({
    minSize: 1,
    maxSize: 10,
    desiredSize: 4,
    instanceTypes: [new ec2.InstanceType('m5.large')],
    amiType: eks.NodegroupAmiType.AL2_X86_64,
    nodeGroupCapacityType: eks.CapacityType.ON_DEMAND,
    version: eks.KubernetesVersion.V1_27,
    amiReleaseVersion: "1.27.3-20230728" // this will upgrade kubelet to 1.27.3
});

blueprints.EksBlueprint.builder()
    .clusterProvider(mngClusterProvider)
    .build(app, "my-mng-stack");

To upgrade the cluster, change the version and amiReleaseVersion attributes to the desired values and run the cdk deploy command for your stack. If needed, look up AMI release versions for your EKS cluster.

Self-managed node groups

Self-managed nodes use Amazon EC2 Auto Scaling groups in EKS. The framework provides the AsgClusterProvider class for adding self-managed nodes to the cluster:

const asgClusterProvider = new blueprints.AsgClusterProvider({
    id: "my-asg-group",
    minSize: 1,
    maxSize: 10,
    desiredSize: 4,
    instanceType: new ec2.InstanceType('m5.large'),
    machineImageType: eks.MachineImageType.AMAZON_LINUX_2,
    updatePolicy: UpdatePolicy.Rolling
});
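
As with the managed node group example, you pass the provider to the blueprint builder; a minimal sketch follows (the stack name is illustrative):

blueprints.EksBlueprint.builder()
    .clusterProvider(asgClusterProvider)
    .build(app, "my-asg-stack"); // stack name is illustrative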

Note that this is a self-managed option: you are responsible for the worker nodes' AMI (for example, patching), operating system, and kubelet. You can still use Amazon EKS optimized Amazon Linux AMIs with self-managed nodes and upgrade them when needed, either in place or by migrating to new nodes.

AWS Fargate

For a serverless Amazon EKS cluster, use the FargateClusterProvider class to create a control plane and additional AWS Fargate profiles for the kube-system and default namespaces. This type of cluster can be useful to teams that adopt a workload-per-cluster approach. Extend the cluster configuration with additional Fargate profiles to target additional user namespaces, for example:

const clusterProvider = new blueprints.FargateClusterProvider({
    fargateProfiles: {
      "team1": { selectors: [{ namespace: "team1" }] }
    },
    version: eks.KubernetesVersion.V1_27
});

Combining multiple node groups

Customers working in production-level scenarios typically combine multiple node groups and Fargate profiles in the same cluster. This option especially applies to a shared cluster scenario, where workloads running on the cluster come from multiple teams with different requirements. The following code snippet includes a node group for generic workloads, a node group of Amazon EC2 Spot Instances to run shorter, less mission-critical workloads, and a couple of Fargate profiles.

const clusterProvider = new blueprints.GenericClusterProvider({
    version: eks.KubernetesVersion.V1_27,
    managedNodeGroups: [
        {
            id: "mng-ondemand",
            amiType: eks.NodegroupAmiType.AL2_X86_64,
            instanceTypes: [new ec2.InstanceType('m5.2xlarge')]
        },
        {
            id: "mng2-spot",
            instanceTypes: [ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.MEDIUM)],
            nodeGroupCapacityType: eks.CapacityType.SPOT
        }
    ],
    fargateProfiles: {
        "fp1": {
            fargateProfileName: "fp1",
            selectors:  [{ namespace: "serverless1" }] 
        },
        "fp2": {
            fargateProfileName: "fp2",
            selectors:  [{ namespace: "serverless2" }] 
        }
    }
});

Network configuration

Amazon VPCs for EKS clusters with Amazon EKS Blueprints

VpcProvider is a resource provider that creates a new Amazon VPC with default values or, optionally, allows you to specify your primary and secondary CIDR ranges and map them to individual subnets. This resource provider can also import an existing Amazon VPC into your blueprint if you provide the VPC ID. If the VPC ID is set to "default", the resource provider looks up the default Amazon VPC in your account.

Configuring Amazon VPC with options

Create a new Amazon VPC with a primary CIDR with VpcProvider:

blueprints.EksBlueprint.builder()
  .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider({primaryCidr: "10.0.0.0/16"}))
  ...
  .build();

Create a new Amazon VPC with primary and secondary CIDRs, including secondary subnet CIDRs, with VpcProvider:

blueprints.EksBlueprint.builder()
    .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(undefined, {
        primaryCidr: "10.2.0.0/16", 
        secondaryCidr: "100.64.0.0/16",
        secondarySubnetCidrs: ["100.64.0.0/24","100.64.1.0/24","100.64.2.0/24"]
    }))

Use an external VPC with a VPC ID with VpcProvider:

blueprints.EksBlueprint.builder()
  .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider("<vpc-id>"))
  ...
  .build();

Provision a custom VPC in a separate CDK stack and pass it over to the blueprint using DirectVpcProvider:

const vpcStack = new VPCStack(app, 'eks-blueprint-vpc', { env: { account, region } }); // contains myVpc member variable
blueprints.EksBlueprint.builder()
  .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.DirectVpcProvider(vpcStack.myVpc))
  ...
  .build();

Look up an existing (for example, secondary) subnet by ID and register it with the blueprint under the provided name using LookupSubnetProvider; a sketch of consuming the named resource follows the snippet:

blueprints.EksBlueprint.builder()
  .resourceProvider('my-subnet', new blueprints.LookupSubnetProvider("subnet-id"))
  ...
  .build();
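
Once registered, the named resource can be consumed elsewhere in the blueprint, for example by team or add-on code. A minimal sketch, assuming the name 'my-subnet' matches the registration above:

// Retrieve the subnet registered under 'my-subnet'; the value is a proxy
// that the framework resolves when the blueprint is built.
const mySubnet: ec2.ISubnet = blueprints.getNamedResource('my-subnet');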

Windows support

You can use the WindowsBuilder construct to create EKS clusters with Windows node groups. The WindowsBuilder construct applies the required configuration using a builder pattern to set up your EKS cluster with Windows support. It creates an EKS cluster with a Linux managed node group for standard software and a Windows managed node group to schedule Windows workloads.

The following example demonstrates how to use the WindowsBuilder construct to configure a Windows-managed node group on a new EKS cluster:

// Create a role for the worker nodes with the required policies
const nodeRole = new blueprints.CreateRoleProvider("blueprint-node-role", new iam.ServicePrincipal("ec2.amazonaws.com"),
    [
        iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonEKSWorkerNodePolicy"),
        iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonEC2ContainerRegistryReadOnly"),
        iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonSSMManagedInstanceCore"),
        iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonEKS_CNI_Policy")
    ]);

WindowsBuilder.builder({ // passing WindowsOptions here
        kubernetesVersion: eks.KubernetesVersion.V1_27,
        instanceClass: ec2.InstanceClass.M5,
        instanceSize: ec2.InstanceSize.XLARGE4
    })
    .addOns(new WindowsVpcCni())
    .account(account)
    .region(region)
    .resourceProvider("node-role", nodeRole)
    .build(app, "my-windows-blueprint");

To learn about input parameters and see a demonstration of using Windows with Amazon EKS, see the Windows Builder documentation.

Working with existing clusters

Amazon EKS Blueprints can import existing clusters and add or configure additional software on top of them. This can be handy when a clear split exists between infrastructure, platform, and site reliability engineering (SRE) teams that contribute separate aspects to the cluster. For example, the infrastructure team can provision the Amazon EKS cluster's compute capacity and control the network configuration across multiple Availability Zones. This team may also control ingress and supply the AWS Load Balancer Controller. The platform and SRE teams may focus on observability and developer tooling to support CI/CD processes, such as Prometheus, Grafana, or GitOps engines (such as Flux or Argo CD).

The ImportClusterProvider construct can import an existing Amazon EKS cluster into your blueprint, which means that you can apply add-ons and limited team capabilities to it. The blueprints framework provides the following set of convenience methods to instantiate the ImportClusterProvider resource provider using an SDK API call to describe the cluster before importing:

Option 1: Use this option to retrieve the cluster information through the DescribeCluster API (requires eks:DescribeCluster permission at build time), instantiate the ImportClusterProvider class to import the cluster, and set up the blueprint VPC based on the discovered VPC configuration, for example:

const clusterName = "quickstart-cluster";
const region = "us-east-2";

const kubectlRoleName = "MyClusterAuthConfigRole"; // this is the role registered in the aws-auth config map in the target cluster 
const sdkCluster = await blueprints.describeCluster(clusterName, region); // get cluster information using EKS APIs

const importClusterProvider = blueprints.ImportClusterProvider.fromClusterAttributes(
    sdkCluster, 
    blueprints.getResource(context => new blueprints.LookupRoleProvider(kubectlRoleName).provide(context))
);

const vpcId = sdkCluster.resourcesVpcConfig?.vpcId;

blueprints.EksBlueprint.builder()
    .clusterProvider(importClusterProvider)
    .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId)) // this is required with import cluster provider
    .build(app, 'imported-cluster'); // stack name is illustrative

Option 2: Use this option if you already know the VPC ID of the target cluster (requires the eks:DescribeCluster permission at build time):

const clusterName = "quickstart-cluster";
const region = "us-east-2";

const kubectlRole: iam.IRole = blueprints.getNamedResource('my-role');

const importClusterProvider2 = await blueprints.ImportClusterProvider.fromClusterLookup(clusterName, region, kubectlRole); // note await here

const vpcId = ...; // you can always get it with blueprints.describeCluster(clusterName, region);

blueprints.EksBlueprint.builder()
    .clusterProvider(importClusterProvider2)
    .resourceProvider('my-role', new blueprints.LookupRoleProvider('my-role'))
    .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId))
    .build(app, 'imported-cluster-2'); // stack name is illustrative

Option 3: Use this option when you want to avoid providing special permissions at build time. You must pass an OpenID Connect (OIDC) provider if you use IAM Roles for Service Accounts (IRSA) with your blueprint:

const importClusterProvider3 = new ImportClusterProvider({
    clusterName: 'my-existing-cluster',
    version: KubernetesVersion.V1_27,
    clusterEndpoint: 'https://B792B88BC60999B1AD.gr7.us-east-2.eks.amazonaws.com',
    openIdConnectProvider: getResource(context =>
        new LookupOpenIdConnectProvider('https://oidc.eks.us-east-2.amazonaws.com/id/B792B88BC60999B1A3').provide(context)),
    clusterCertificateAuthorityData: 'S0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCasdd234................',
    kubectlRoleArn: 'arn:...',
    clusterSecurityGroupId: 'sg...'
});

const vpcId = ...; 

blueprints.EksBlueprint.builder()
    .clusterProvider(importClusterProvider3)
    .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId))
    .build(app, 'imported-cluster-3'); // stack name is illustrative

The AWS CDK Observability Accelerator provides a pattern to achieve AWS Native Observability and a pattern for open-source (OSS)-based monitoring and logging on top of existing clusters. See Announcing AWS CDK Observability Accelerator for Amazon EKS to learn more about the AWS CDK Observability Accelerator.

Pipelines

After you provision your first set of clusters with Amazon EKS Blueprints, Git-based pipeline processes can help you provision and maintain your clusters through Git. With pipelines, you can model your environments across multiple AWS accounts and Regions, covering the complete enterprise landscape for software delivery. Typically, customers refer to such environments as development, testing, staging, and production.

The following code defines development, testing, and production environments that deploy a specific blueprint. This enables consistency and standardized maintenance for more complex scenarios. Use the stage construct to deploy a single step (for example, applying a blueprint to an account or Region) and the wave construct to group several stages that should execute in parallel:

const blueprint = blueprints.EksBlueprint.builder()
    .addOns(...)
    .teams(...)
    .clusterProvider(...);

blueprints.CodePipelineStack.builder()
    .name("eks-blueprints-pipeline")
    .owner("aws-samples")
    .repository({
        //...
    })
    .stage({
        id: 'dev-single-cluster',
        stackBuilder: blueprint.clone('us-west-1')
    })
    .wave( {  // adding two clusters for test env
        id: "test",
        stages: [
            { id: "test-west-1", stackBuilder: blueprint.clone('us-west-1').account(TEST_ACCOUNT)}, // requires trust relationship with the code pipeline role
            { id: "test-east-2", stackBuilder: blueprint.clone('us-east-2').account(TEST_ACCOUNT)}, // See https://docs.aws.amazon.com/cdk/api/v1/docs/pipelines-readme.html#cdk-environment-bootstrapping

        ]
    })
    .wave({
        id: "prod",
        stages: [
            { id: "prod-west-1", stackBuilder: blueprint.clone('us-west-1')}, // add prod level customizations
            { id: "prod-east-2", stackBuilder: blueprint.clone('us-east-2')}, // add prod level customizations
        ]
    })
    .build(app, "eks-blueprints-pipeline-stack"); // stack name is illustrative

Extending Amazon EKS Blueprints with third-party and partner add-ons

Amazon EKS Blueprints for CDK is extensible, allowing you to add new capabilities to the framework (or solutions based on the framework). You can also modify or override the existing behavior.

Use the following abstractions to add new features to the framework:

  • Add-on – Implement new add-ons that are used the same way as the core add-ons supplied by the framework. Implementing add-ons is the most common way to extend the framework (a minimal sketch follows below). Support exists for Helm and non-Helm add-ons, and for add-ons that use GitOps-based distribution (such as Argo CD or Flux).
  • Resource provider – Create reusable resources and apply them across multiple clusters, add-ons, and teams. These resources include AWS Identity and Access Management (IAM) roles, Amazon VPCs, hosted zones, Amazon Relational Database Service (Amazon RDS) databases, and more. This construct is reserved for AWS resources.
  • Cluster providers – Create custom code that provisions an Amazon EKS cluster with node groups. Use this abstraction to extend behavior such as control plane customization and custom settings for node groups.
  • Teams – Create team templates for application and platform teams. Use this abstraction to model your team namespaces with network isolation rules, policies (network and security), software wiring (for example, auto-injection of proxies), and other features.

You can make your extensions private or public. Additionally, you can have your extension (such as an add-on) validated by the AWS team and published in the Add-ons documentation. For a complete guide, see Extensibility.
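
To illustrate the add-on abstraction, the following is a minimal sketch of a custom Helm-based add-on. The chart name, repository URL, namespace, and versions are hypothetical placeholders for your own chart:

import { Construct } from 'constructs';
import * as blueprints from '@aws-quickstart/eks-blueprints';

// Hypothetical defaults: replace the chart, repository, and namespace
// with the values for your own Helm chart.
const defaultProps: blueprints.HelmAddOnProps = {
    name: 'my-addon',
    namespace: 'my-addon-ns',
    chart: 'my-chart',
    version: '0.1.0',
    release: 'my-addon-release',
    repository: 'https://charts.example.org'
};

export class MyAddOn extends blueprints.HelmAddOn {
    constructor(props?: blueprints.HelmAddOnUserProps) {
        super({ ...defaultProps, ...props });
    }

    deploy(clusterInfo: blueprints.ClusterInfo): Promise<Construct> {
        // addHelmChart applies the chart to the target cluster; the second
        // argument carries the Helm values.
        const chart = this.addHelmChart(clusterInfo, {});
        return Promise.resolve(chart);
    }
}

Once implemented, the add-on is used like any core add-on, for example with blueprints.EksBlueprint.builder().addOns(new MyAddOn()).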

Conclusion

In this post, we provided examples of how you can replicate the cluster setup that was supported by the now-deprecated Amazon EKS Quick Start with Amazon EKS Blueprints for CDK. We encourage you to explore the patterns repository, which contains usage examples and complete solutions.

Scaling your EKS provisioning and maintenance across the entire organizational structure (including accounts and Regions) is an important feature for enterprise adoption. We recommend that you become familiar with our pipeline support, which enables consistent, centralized cluster configuration management across your enterprise environments and lets you control the rollout of changes with a clear promotion strategy.

The range of features supported by the Amazon EKS Quick Start was very broad, so this article focused on the most common use cases. Use the main GitHub repository for support and feedback, for example, if you have a use case that's missing. In addition, use issues and discussions to ask questions and create feature requests. We also welcome community contributions.

About the authors

Mikhail Shapirov

Mikhail is a Principal Partner Solutions Architect at AWS, focusing on container services, application modernization, and cloud management services. Mikhail helps partners and customers drive their products and services on AWS with AWS Container services, serverless compute, development tools, and cloud management services. He is also a software engineer.

Elamaran Shanmugam

Elamaran (Ela) Shanmugam is a Sr. Container Specialist Solutions Architect with AWS. Ela is a Container, Observability and Multi-Account Architecture SME and helps customers design and build scalable, secure, and optimized container workloads on AWS. His passion is building and automating infrastructure so customers can focus more on their business. He is based out of Tampa, Florida and can be reached on Twitter @IamElaShan and on GitHub.