Containers

Streamline production-grade clusters with AWS Account Factory for Terraform (AFT) and Terraform EKS Blueprints

AWS users need to continuously enhance their infrastructure and security processes. This typically involves a two-phase approach: discovery and design, followed by implementation. In the discovery phase, an assessment of the current infrastructure is conducted, leading to the creation of architecture documents and patterns for the subsequent implementation phase. This process delves into Account Structure, Networking, DNS, Security, and Operations. During the implementation phase, AWS Control Tower plays a crucial role, offering an easy way to set up and govern a secure, multi-account AWS environment.

To further streamline this process, AWS introduced the Account Factory for Terraform (AFT), an advancement over the Customizations for AWS Control Tower (CFCT), allowing seamless integration between account launching and GitOps management while maintaining AWS Control Tower’s workflow efficiency. It significantly reduces the time and effort required for provisioning thanks to its automation capabilities. AFT’s introduction reflects the growing preference for Terraform in automating AWS Control Tower customizations.

Once the foundational setup is complete, the focus shifts to creating workloads within the accounts, with Amazon Elastic Kubernetes Service (EKS) clusters being a popular choice. The Terraform EKS Blueprints simplify the lifecycle management of Kubernetes add-ons, facilitating a range of tasks such as Ingress Controller setup, Secrets Management, Network setup, and GitOps integration, among others. Termed as “Day 2 Operations”, this process allows users to swiftly configure and deploy purpose-built EKS clusters, accelerating the onboarding of workloads from months to hours, or even minutes! In summary, AFT and Terraform EKS Blueprints together represent a robust solution for efficiently setting up and managing production-level AWS environments and EKS clusters.

Solution overview

Figure 1. Production ready vended account

Prerequisites

The following prerequisites are necessary before continuing:

  • AWS Command Line Interface (AWS CLI) (Linux, Mac and Windows)
  • kubectl (Linux, Mac and Windows)
  • Terraform (>=1.5.0) (Install)
  • Familiarity with Terraform, continuous integration/continuous delivery (CI/CD) concepts and tools, Git, and Kubernetes.
  • A fully working AWS environment with the following considerations:
    • Administrator access in the AFT account
    • AWS Control Tower (>=3.2) (docs)
    • AFT (>=1.11.1)
    • You can set up AFT following installation best practices using this pattern.

Walkthrough

The following steps outline the process of this post:

  1. (REQUIRED FOR NEW AFT DEPLOYMENTS ONLY) Create the Account Provisioning and Account Request Pipelines.
  2. Create the eks-prd on aft-account-customizations repository.
  3. Launch a new account on aft-account-request referencing the eks-prd customization.
  4. Run the account pipeline to apply the customization.
  5. Test the EKS cluster access and resources.

Note that the steps described in this post incur costs.

(REQUIRED FOR NEW AFT DEPLOYMENTS ONLY) Create the Account Provisioning and Account Request Pipelines

This step is optional. If you already have the Account Provisioning Customizations and Account Request pipelines in place in your environment, then jump to the Create the eks-prd on aft-account-customizations repository section.

Make sure you have the AWS_REGION environment variable set, define your BASE_DIR, and clone this AWS Samples repository. Within it you should find the baseline for creating account provisioning and requests using AFT.

export BASE_DIR=$(pwd)
echo $BASE_DIR $AWS_REGION
git clone https://github.com/aws-samples/control-tower-aft-with-eks-blueprints

  1. Clone the aft-account-provisioning-customizations repository, and populate it with the baseline code provided in the AWS Samples repository to perform automated account provisioning using AFT.
cd $BASE_DIR
git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/aft-account-provisioning-customizations
cp -a control-tower-aft-with-eks-blueprints/baseline/aft-account-provisioning-customizations/* aft-account-provisioning-customizations/
cd aft-account-provisioning-customizations/
git checkout -b main
git add -A
git commit -m "Initial commit"
git push --set-upstream origin main

The git push command triggers the ct-aft-account-provisioning-customizations pipeline.

  2. Clone the aft-account-request repository and populate it with the baseline code provided in the AWS Samples repository to perform the automated request for new accounts provisioning using AFT.
cd $BASE_DIR
git clone https://git-codecommit.$AWS_REGION.amazonaws.com/v1/repos/aft-account-request
cp -a control-tower-aft-with-eks-blueprints/baseline/aft-account-request/* aft-account-request/
cd aft-account-request/
git checkout -b main
git add -A
git commit -m "Initial commit"
git push --set-upstream origin main

The git push command triggers the ct-aft-account-request pipeline.

  3. Clone the aft-global-customizations repository and populate it with the baseline code provided in the AWS Samples repository to apply customizations that are common to all accounts provisioned by AFT.
cd $BASE_DIR
git clone https://git-codecommit.$AWS_REGION.amazonaws.com/v1/repos/aft-global-customizations
cp -a control-tower-aft-with-eks-blueprints/baseline/aft-global-customizations/* aft-global-customizations/
cd aft-global-customizations/
git checkout -b main
git add -A
git commit -m "Initial commit"
git push --set-upstream origin main

Create the eks-prd on aft-account-customizations repository

This section guides you through the creation of eks-prd, an account customization that deploys a production-ready EKS cluster using AFT.

  1. Begin by cloning the AWS Samples repository.

If not done yet, make sure you have the AWS_REGION environment variable set, and define your BASE_DIR.

Clone this AWS Samples repository; within it you should find some available customizations. For this guide, we’re focusing on eks-prd, which has the Terraform EKS Blueprints content to deploy EKS clusters with the account provisioning.

export BASE_DIR=$(pwd)
echo $BASE_DIR $AWS_REGION
git clone https://github.com/aws-samples/control-tower-aft-with-eks-blueprints

  2. Populate the AFT repository with the required structure.

In the delegated AFT Management account, clone the aft-account-customizations repository from AWS CodeCommit.

Copy the referenced customizations from the public samples repository and push them to the aft-account-customizations CodeCommit repository.

cd $BASE_DIR
git clone https://git-codecommit.$AWS_REGION.amazonaws.com/v1/repos/aft-account-customizations
cp -a control-tower-aft-with-eks-blueprints/eks-prd aft-account-customizations/
cd aft-account-customizations
git checkout -b main
git add -A
git commit -m "Adding Amazon EKS Cluster customization"
git push --set-upstream origin main

Note that we need to push to the main branch, since this is the branch that is watched by AFT.

Code details

In this segment, we dive deep into the specifics of the eks-prd Terraform configuration, examining the files that we pushed to the eks-prd/terraform directory of the aft-account-customizations repo.

Providers and backend definition

AFT manages the AWS provider and the backend configuration through Jinja templates, making sure the correct AWS account and settings are used for Terraform operations.

## Auto generated providers.tf ##
## Updated on: {{ timestamp }} ##

provider "aws" {
  region = "{{ provider_region }}"
  assume_role {
    role_arn    = "{{ target_admin_role_arn }}"
  }
}

Furthermore, in the backend configuration, AFT populates an encrypted Amazon S3 backend with a DynamoDB table to guarantee Terraform state locking and consistency checking during account management.

## Auto generated backend.tf ##
## Updated on: {{ timestamp }} ##

{% if tf_distribution_type == "oss" -%}
terraform {
  required_version = ">= 0.15.0"
  backend "s3" {
    region         = "{{ region }}"
    bucket         = "{{ bucket }}"
    key            = "{{ key }}"
    dynamodb_table = "{{ dynamodb_table }}"
    encrypt        = "true"
    kms_key_id     = "{{ kms_key_id }}"
    role_arn       = "{{ aft_admin_role_arn }}"
  }
}
{% else -%}
terraform {
    backend "remote" {
        organization = "{{ terraform_org_name }}"
        workspaces {
        name = "{{ terraform_workspace_name }}"
        }
    }
}
{% endif %}

Cluster definition

In the eks.tf file, we have two main sections. The first defines both the Kubernetes and Helm providers, which are essential for interacting with our EKS cluster and managing Kubernetes applications.

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}
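
The kubernetes and helm providers above rely on an authentication token for the cluster. The following is a minimal sketch of the data source they assume; the actual definition lives in the eks-prd/terraform directory and may differ slightly.

# Assumed data source used by the kubernetes and helm providers above to
# obtain a short-lived authentication token for the EKS cluster
data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}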

The second section of the eks.tf file uses the Amazon EKS module to configure a simple yet robust Kubernetes cluster with a managed node group, and bootstraps admin permissions for the platform-team so that it can execute cluster management tasks.

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name                   = local.name
  cluster_version                = "1.29"
  cluster_endpoint_public_access = true

  cluster_addons = {
    coredns    = {}
    kube-proxy = {}
    vpc-cni    = {}
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  enable_cluster_creator_admin_permissions = true
  access_entries = {
    platform-team = {
      kubernetes_groups = []
      principal_arn     = try(data.aws_ssm_parameter.platform_team_arn[0].value, data.aws_iam_roles.platform_team.arns)

      policy_associations = {
        cluster_admin = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"
          access_scope = {
            type       = "cluster"
          }
        }
      }
    }
  }

  eks_managed_node_groups = {
    initial = {
      instance_types = ["m5.large"]

      min_size     = 1
      max_size     = 3
      desired_size = 2
    }
  }
}
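
The platform-team access entry above resolves its principal_arn from either an AWS Systems Manager (SSM) parameter or an IAM role lookup. The following is a minimal sketch of how those references could be defined, assuming that AFT exposes each custom_fields entry as an SSM parameter under /aft/account-request/custom-fields/ in the vended account; the exact definitions live in the repository's variables.tf and data sources and may differ. The developers_team lookups used in teams.tf follow the same pattern.

# Flags set in variables.tf when the corresponding custom_fields are provided
variable "custom_platform_team" {
  description = "Set to true when platform_team_arn is passed through custom_fields"
  type        = bool
  default     = false
}

variable "custom_developers_team" {
  description = "Set to true when dev_team_arn is passed through custom_fields"
  type        = bool
  default     = false
}

# Custom fields from the account request are exposed by AFT as SSM parameters
data "aws_ssm_parameter" "platform_team_arn" {
  count = var.custom_platform_team ? 1 : 0
  name  = "/aft/account-request/custom-fields/platform_team_arn"
}

data "aws_ssm_parameter" "dev_team_arn" {
  count = var.custom_developers_team ? 1 : 0
  name  = "/aft/account-request/custom-fields/dev_team_arn"
}

# Fallback lookup (hypothetical pattern) used when no custom role ARN is provided
data "aws_iam_roles" "platform_team" {
  name_regex  = "AWSReservedSSO_AWSAdministratorAccess_.*"
  path_prefix = "/aws-reserved/sso.amazonaws.com/"
}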

EKS Blueprints Teams

The development team, defined in the teams.tf file, focuses on setting up permissions for developers on the EKS cluster using the EKS Blueprints Teams module. This step is vital for defining which users get granular privileges to access the cluster within their own specific Namespaces. In the following example, a new Team with the provided AWS Identity and Access Management (IAM) Roles, and a Namespace for it, are created.

module "development_team" {
  source  = "aws-ia/eks-blueprints-teams/aws"
  version = "~> 1.1.0"

  name = "development-team"

  cluster_arn       = module.eks.cluster_arn
  oidc_provider_arn = module.eks.oidc_provider_arn

  users = try([ data.aws_ssm_parameter.dev_team_arn[0].value ], data.aws_iam_roles.developers_team.arns) 

  labels = {
    team = "development"
  }

  annotations = {
    team = "development"
  }

  namespaces = {
    default = {
      create = false
    }

    app01 = {
      labels = {
        projectName = "app01",
      }
    }
  }

  tags = {
    Environment = "PRODUCTION"
  }
}
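
To make the team's role visible in the Account Customizations log referenced later in this post, an output along the following lines could be declared in outputs.tf. This is a hedged sketch; the repository's actual outputs may differ, and iam_role_arn is assumed to be exposed by the EKS Blueprints Teams module.

# Hypothetical output exposing the IAM role created for the development team,
# so that its ARN is printed in the Account Customizations log
output "development_team_role_arn" {
  description = "IAM role assumed by developers to access their Namespaces"
  value       = module.development_team.iam_role_arn
}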

EKS Blueprints add-ons

In the addons.tf file, the eks_blueprints_addons module enriches the EKS cluster with key functionalities, such as load balancing, certificate management, and monitoring.

module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.16"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  enable_aws_load_balancer_controller = true
  aws_load_balancer_controller = {
    set = [{
      name  = "enableServiceMutatorWebhook"
      value = "false"
    }]
  }
  enable_metrics_server = true
  enable_cert_manager   = true
  cert_manager = {
    wait = true
  }
  enable_kube_prometheus_stack = true

  tags = local.tags
}

VPC definition

Finally, the vpc.tf file holds the VPC module and sets up the network environment for the EKS cluster, with public and private subnets for enhanced security and connectivity.

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway = true
  single_nat_gateway = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags
}
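
The eks.tf and vpc.tf files above also rely on a handful of locals (name, vpc_cidr, azs, and tags). The following is a minimal sketch with assumed values; the actual definitions in the repository may differ.

# Assumed locals referenced by the EKS and VPC modules; the values shown are illustrative
data "aws_availability_zones" "available" {}

locals {
  name     = "amazon-eks-prod-01" # assumed to match the cluster name used later in this post
  vpc_cidr = "10.0.0.0/16"        # assumed VPC CIDR
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Blueprint   = local.name
    Environment = "PRODUCTION"
  }
}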

In case your requirements extend beyond the scope of the provided configuration, the Terraform EKS Blueprints public repository is a rich resource, offering a wide array of examples for various add-ons.
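
For example, ExternalDNS could be enabled by extending the same eks_blueprints_addons module block, assuming a Route 53 hosted zone is available. This is a hedged sketch and is not part of the eks-prd customization as provided.

module "eks_blueprints_addons" {
  # ... existing configuration shown above ...

  enable_external_dns            = true
  external_dns_route53_zone_arns = ["arn:aws:route53:::hostedzone/<HOSTED-ZONE-ID>"]
}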

Note that when deploying a private EKS cluster using AFT, you must make sure that your AFT VPC has the necessary network connectivity to the EKS cluster’s network. This connectivity is essential for seamless communication and operation between your AFT environment and the private EKS cluster.

Launch a new account on aft-account-request referencing the eks-prd customization

  1. Create the terraform/amazon-eks-prd-01.tf file in the aft-account-request repository with the following content.
cd $BASE_DIR
vim aft-account-request/terraform/amazon-eks-prd-01.tf
module "amazon-eks-prd-01" {
  source = "./modules/aft-account-request/"

  control_tower_parameters = {
    AccountEmail = "<ACCOUNT-EMAIL>"
    AccountName  = "amazon-eks-prd-01"
    ManagedOrganizationalUnit = "<ACCOUNT-OU>"
    SSOUserEmail     = "<ACCOUNT-EMAIL>"
    SSOUserFirstName = "amazon-eks-prd-01"
    SSOUserLastName  = "account"
  }

  account_tags = {
    "ABC:Environment" = "Prd"
  }

  change_management_parameters = {
    change_requested_by = "<REQUESTER NAME>"
    change_reason       = "Production ready account with EKS Blueprints"
  }
  
 # These custom fields are optional.
  custom_fields = {
    platform_team_arn = "<PLATFORM-TEAM-IAM-ROLE-ARN>" 
    dev_team_arn = "<DEVELOPER-TEAM-IAM-ROLE-ARN>" 
  }

  account_customizations_name = "eks-prd"
}

  2. Make sure to replace the following values:
  • <ACCOUNT-EMAIL> in both the AccountEmail and SSOUserEmail parameters with the new account’s email.
  • <ACCOUNT-OU> with the respective Organizations ManagedOrganizationalUnit where this account belongs.
  • <REQUESTER NAME> with the appropriate name in the change_requested_by parameter.
  • If declaring the following IAM Roles, you must change the aft-account-customizations/eks-prd/terraform/variables.tf, setting the respective values of custom_platform_team or custom_developers_team to true.
    • <PLATFORM-TEAM-IAM-ROLE-ARN> specify the correct ARN in the platform_team_arn field under custom_fields to give administrative access to the cluster.
    • <DEVELOPER-TEAM-IAM-ROLE-ARN> specify the correct ARN in the dev_team_arn field under custom_fields to give developer access to the cluster, scoped to their Namespaces.

  3. Once done, validate the content of the file and push the changes:

cd $BASE_DIR/aft-account-request
cat terraform/amazon-eks-prd-01.tf
git add -A
git commit -m "Adding the new AWS Account customized with Amazon EKS cluster"
git push origin main

Run the account pipeline to apply the customization

Once you’ve pushed your code changes to the aft-account-request repository, AFT automatically initiates the ct-aft-account-request pipeline. This process is a crucial step in deploying your customized configurations.

This action triggers the AWS Control Tower account creation process. Note that this step is time-intensive, typically taking up to 35 minutes to complete. Patience is key here, as this process lays the foundational setup for your new AWS account.

Upon successful completion of the AWS Control Tower account creation, AFT proceeds to create a dedicated pipeline for the new account. This pipeline is named following the convention <ACCOUNT-ID>-customizations-pipeline.

An important aspect to remember is that during its initial setup, this pipeline runs automatically. However, subsequent runs of this pipeline need manual initiation. This design makes sure that future customizations are applied deliberately and with oversight, maintaining control over the account’s configuration.

Test the EKS cluster access and resources

After the customization process is complete, it’s crucial to verify that you have proper access to your newly configured EKS cluster. For this, you want to consult the Account Customizations log. Within this log, you should find detailed output that includes specific commands necessary for testing access to your cluster.

Before running tests, confirm that you’re logged in with the AWS role that you specified in the aft-account-request. You can verify your current AWS identity with the following command:

aws sts get-caller-identity --query Arn

The output should be:

"arn:aws:sts::444455556666:assumed-role/Admin/AROA123456789EXAMPLE"

Make sure this ARN matches the role you intended to use for accessing the EKS cluster.

Next, update your Kubernetes configuration file (kubeconfig) with the correct context for your EKS cluster. Run the command provided in the output of the Account Customizations log:

aws eks --region $AWS_REGION update-kubeconfig --name amazon-eks-prod-01 --role-arn <ADMIN-TEAM-ROLE>

Replace the <ADMIN-TEAM-ROLE> with the ARN of the admin team role you have set up.

Finally, you can view the resources running in your cluster using kubectl. This command lists all the pods across all namespaces:

kubectl get pods -A

You should see output similar to the following:

NAMESPACE               NAME                                                        READY   STATUS    RESTARTS   AGE
cert-manager            cert-manager-6d988558d6-v25l7                               1/1     Running   0          16h
cert-manager            cert-manager-cainjector-6976895488-gv4nz                    1/1     Running   0          16h
cert-manager            cert-manager-webhook-fcf48cc54-qr7m5                        1/1     Running   0          16h
kube-prometheus-stack   alertmanager-kube-prometheus-stack-alertmanager-0           2/2     Running   0          16h
kube-prometheus-stack   kube-prometheus-stack-grafana-5b549598d-8pl29               3/3     Running   0          16h
kube-prometheus-stack   kube-prometheus-stack-kube-state-metrics-68d977bb59-tzhr8   1/1     Running   0          16h
kube-prometheus-stack   kube-prometheus-stack-operator-767dcccb8d-nr9sr             1/1     Running   0          16h
kube-prometheus-stack   kube-prometheus-stack-prometheus-node-exporter-cx5lw        1/1     Running   0          16h
kube-prometheus-stack   kube-prometheus-stack-prometheus-node-exporter-tmhfb        1/1     Running   0          16h
kube-prometheus-stack   prometheus-kube-prometheus-stack-prometheus-0               2/2     Running   0          16h
kube-system             aws-load-balancer-controller-84c6cf67c6-gbr2p               1/1     Running   0          16h
kube-system             aws-load-balancer-controller-84c6cf67c6-gf7rn               1/1     Running   0          16h
kube-system             aws-node-fw2l9                                              2/2     Running   0          16h
kube-system             aws-node-kl2vt                                              2/2     Running   0          16h
kube-system             coredns-5b8cc885bc-rvzrl                                    1/1     Running   0          16h
kube-system             coredns-5b8cc885bc-v6hbd                                    1/1     Running   0          16h
kube-system             kube-proxy-s9gvv                                            1/1     Running   0          16h
kube-system             kube-proxy-wqfmj                                            1/1     Running   0          16h
kube-system             metrics-server-5dc9dbbd5b-pt265                             1/1     Running   0          16h

This output confirms that various services, such as cert-manager, kube-prometheus-stack, and metrics-server, are operational within your cluster.

It’s also possible to check access to the EKS cluster through the AWS Management Console. Access the created account through the AWS access portal, open the Amazon EKS console, then select the amazon-eks-prod-01 cluster and explore the tabs.

Cleaning up

By design, AFT does not include a feature to delete resources created by the Account Request process; this avoids unintentional account deletions and reduces the blast radius of impacts. However, it is possible to remove the Terraform resources from the aft-account-customizations repository, which triggers Terraform to delete the respective resources from the target account.

Because there are different providers in this example, and both the kubernetes and helm providers require RBAC access to the cluster to remove their resources, the process needs to be executed twice for a complete clean-up.

  1. Remove the addons.tf and teams.tf files from the aft-account-customizations/eks-prd repository directory to delete the resources deployed by the kubernetes and helm providers inside the EKS cluster, and push the code.
cd $BASE_DIR/aft-account-customizations
rm eks-prd/terraform/addons.tf
rm eks-prd/terraform/teams.tf
git add -A
git commit -m "Removing EKS Blueprints Addons and Teams"
git push origin main

  2. Run the <ACCOUNT-ID>-customizations-pipeline pipeline to trigger Terraform to destroy the resources.
  3. Remove the eks.tf, vpc.tf, and outputs.tf files to delete the AWS resources for Amazon EKS and Amazon VPC, and push the code.
cd $BASE_DIR/aft-account-customizations
rm eks-prd/terraform/eks.tf
rm eks-prd/terraform/vpc.tf
rm eks-prd/terraform/outputs.tf
git add -A
git commit -m "Removing EKS Cluster and VPC"
git push origin main

  4. Run the <ACCOUNT-ID>-customizations-pipeline pipeline again to trigger Terraform to destroy the remaining resources.
  5. If you don’t want to use the created account anymore, then delete the amazon-eks-prd-01.tf file from the aft-account-request repository and push the code to remove this account from AFT.
cd $BASE_DIR/aft-account-request
rm terraform/amazon-eks-prd-01.tf
git add -A
git commit -m "Removing Account PRD-01"
git push origin main

  6. Proceed with the account deletion in the Organizations management console.

Conclusion

In this post, we showed the process of deploying a production-ready Amazon EKS cluster using AWS Account Factory for Terraform (AFT). This journey has not only demonstrated the capabilities of AFT in simplifying complex AWS configurations, but also showed the strategic approach required to successfully manage and deploy a robust Kubernetes environment using the same customizations code.

This pattern is a ready-to-use starting point for Amazon EKS users who manage multiple clusters across environments with GitOps-based cluster delivery.

Rodrigo Bersa

Rodrigo is a Specialist Solutions Architect for Containers and AppMod, with a focus on Security and Infrastructure-as-Code automation. In this role, Rodrigo aims to help customers achieve their business goals by leveraging best practices on AWS Containers Services, such as Amazon EKS, Amazon ECS, and Red Hat OpenShift on AWS (ROSA) during their Cloud Journey, when building new environments, or migrating existing technologies.

Edgar Costa Filho

Edgar is a Senior Cloud Infrastructure Architect with a focus on Foundations and Containers, including expertise in integrating Amazon EKS with open source tooling like Crossplane, Terraform, and GitOps. In his role, Edgar is dedicated to assisting customers in achieving their business objectives by implementing best practices in cloud infrastructure design and management.