Amazon EKS on AWS Outposts now supports local clusters

Introduction

Since its release, Amazon Elastic Kubernetes Service (Amazon EKS) has made it easier to run Kubernetes and container applications reliably at scale. With Amazon EKS on AWS Outposts, you can simplify application delivery onto on-premises AWS Outposts infrastructure by using the same application programming interfaces (APIs), console, and tools you use to run Amazon EKS clusters in the cloud. Now with the local clusters deployment option for Amazon EKS on AWS Outposts, you can run the entire Amazon EKS cluster locally on AWS Outposts to mitigate the risk of temporary network disconnects to the cloud, such as those caused by fiber cuts or weather events.

Local clusters maintain the benefits of Amazon EKS on AWS Outposts for a consistent, managed Kubernetes experience across on-premises and cloud environments. Because the entire Amazon EKS cluster runs locally on AWS Outposts, applications remain available, and you can perform cluster operations during network disconnects to the cloud. This post provides an overview of the local cluster architecture, how to use local clusters, and a step-by-step procedure for deploying local clusters, nodes, and applications on AWS Outposts.

This post assumes you’re familiar with AWS Outposts; for more information, see the AWS Outposts User Guide. If you prefer a video explanation of how local clusters work, check out the video below.

Overview

Understanding the local clusters architecture

An AWS Outpost functions as an extension of the AWS Region where it’s anchored, and you can extend Virtual Private Clouds (VPCs) and subnets from the parent AWS Region to your AWS Outpost. Like other resources on AWS Outposts, local clusters are deployed into the AWS Outposts subnet that you pass during cluster creation. At launch, the AWS Outposts subnet used for local clusters must have outbound internet access; fully private clusters (i.e., clusters with no internet access) aren’t yet supported. You must use a private AWS Outposts subnet for local clusters, with a public subnet in the AWS Region to host the Network Address Translation (NAT) gateway that routes traffic to an internet gateway. This allows local clusters to connect to AWS service endpoints outside of the VPC. This traffic route enables outbound internet access while preventing unsolicited inbound connections from the internet.
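
To make that routing concrete, here’s a minimal AWS CLI sketch of the route described above. The route table and NAT gateway IDs are placeholders for resources you’ve already created.

# Hypothetical IDs: use your private AWS Outposts subnet's route table
# and a NAT gateway hosted in a public subnet in the parent AWS Region.
aws ec2 create-route \
  --route-table-id rtb-EXAMPLE \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-EXAMPLE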

Because the capacity on your AWS Outposts is slotted according to your configuration, during local cluster creation you have the option to choose the Amazon Elastic Compute Cloud (Amazon EC2) instance type to use for the Kubernetes control plane. The local cluster runs across three Amazon EC2 instances on your AWS Outposts in a stacked configuration, which means components such as the Kubernetes API server and etcd run on each Kubernetes control plane instance.

During AWS Outposts provisioning, a service link connection is created that connects your Outpost back to its parent AWS Region. The service link is an encrypted set of VPN connections used whenever the Outpost communicates with the parent Region. Amazon EKS uses this service link connection to manage the local clusters on the Outpost. You can check the networking requirements for the service link in the AWS Outposts documentation.

Unlike Amazon EKS clusters in a region, the Kubernetes control plane instances run in your account on the AWS Outposts. As the Kubernetes control plane and worker nodes operate under the same customer account, cross-account Elastic Network Interfaces (ENIs) are not used, and the local VPC network enables communication between the Kubernetes control plane and worker nodes. Local clusters support private cluster endpoint access only, and the Kubernetes API server is exposed over the AWS Outposts rack’s local gateway (LGW) via Direct VPC Routing for connectivity over the local network.

Architecture diagram of the AWS cloud

Understanding how to use local clusters

Creating clusters

You can create local clusters using the AWS Management Console, eksctl command line interface (CLI), AWS CLI, AWS APIs, AWS Software Development Kits (AWS SDKs), or Terraform. Each of these interfaces has a new option for passing your AWS Outposts Amazon Resource Name (ARN) and Kubernetes control plane instance type, which tell the Amazon EKS service where and how to deploy your local cluster on AWS Outposts. Guidance for Kubernetes control plane instance type selection can be found in the Amazon EKS documentation. At launch, local clusters support Kubernetes minor version 1.21 and will progress toward parity with Amazon EKS in the cloud in subsequent releases.
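
As an illustration, an AWS CLI invocation with the new options looks roughly like the following; the account ID, IAM role, subnet ID, and Outpost ARN are placeholders.

# Placeholder account ID, IAM role, subnet ID, and Outpost ARN.
aws eks create-cluster \
  --name my-local-cluster \
  --kubernetes-version 1.21 \
  --role-arn arn:aws:iam::111122223333:role/EKSLocalClusterRole \
  --resources-vpc-config subnetIds=subnet-EXAMPLE \
  --outpost-config controlPlaneInstanceType=m5d.large,outpostArns=arn:aws:outposts:us-west-2:111122223333:outpost/op-EXAMPLE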

Amazon EKS uses an AWS Identity and Access Management (AWS IAM) service-linked role to manage AWS resources on your behalf. To deploy a local cluster, you must create and provide your own cluster IAM role with the managed policy AmazonEKSLocalOutpostClusterPolicy. The IAM principal that creates the cluster must be able to pass this cluster role. For detailed instructions on how to create this IAM role, reference the documentation.
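
A minimal sketch of creating that role with the AWS CLI follows. The role name is hypothetical, and the ec2.amazonaws.com trust policy reflects that the control plane instances run as EC2 instances in your account; double-check the documentation for the exact trust policy before using it.

# Hypothetical role name; verify the trust policy against the Amazon EKS
# local clusters documentation.
cat > cluster-role-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name EKSLocalClusterRole \
  --assume-role-policy-document file://cluster-role-trust-policy.json

aws iam attach-role-policy \
  --role-name EKSLocalClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSLocalOutpostClusterPolicy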

As the local cluster is being created, you can describe the cluster with any of these interfaces to understand its state via the status field and observe any issues that occur during creation via the health field. When local cluster creation completes, the Kubernetes control plane instances appear in the console and when you run kubectl get nodes. This gives you visibility into the Kubernetes control plane, but these nodes are managed by the Amazon EKS service and are tainted to prevent workloads from running on them. They also report a status of NotReady, which discourages workload scheduling. At this time, running workloads on these nodes and modifying the Kubernetes control plane components are not supported.
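
For example, you can poll both fields with the AWS CLI (the cluster name here is a placeholder):

aws eks describe-cluster \
  --name my-local-cluster \
  --query "cluster.{status: status, health: health}"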

The Amazon EKS service is continuously monitoring the state of local clusters, and performs automatic management actions such as security patches or repair of unhealthy control plane instances. When local clusters are disconnected from the cloud, Amazon EKS carries out all necessary actions to ensure the cluster is repaired to a healthy state upon reconnect.

Connecting to clusters

To connect to your local cluster’s Kubernetes API server, you must have access to the LGW or connect from within the VPC. For connecting an AWS Outposts rack to your on-premises network, see how local gateways work. If you use Direct VPC Routing, the private IPs of the Kubernetes control plane instances are automatically advertised over your local network, provided the AWS Outposts subnet has a route to your LGW. The local cluster’s Kubernetes API server endpoint is hosted in Amazon Route 53 and can be resolved by public Domain Name System (DNS) servers to the Kubernetes API servers’ private IP addresses.

Local clusters’ Kubernetes control plane instances are configured with static ENIs with fixed private IPs that don’t change throughout the cluster lifecycle. If machines that interact with the Kubernetes API server don’t have connectivity to Amazon Route 53 during network disconnects, we recommend configuring /etc/hosts with the static private IP addresses for continued operations. We also recommend setting up local DNS servers and connecting them to your AWS Outposts as described in the documentation.
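
Here’s a sketch of such /etc/hosts entries. The hostname and IPs are hypothetical; substitute your cluster’s API endpoint hostname and the fixed private IPs of the three control plane instances. Note that most resolvers use only the first matching entry.

# Hypothetical endpoint and private IPs; most resolvers use the first match.
sudo tee -a /etc/hosts <<'EOF'
10.0.3.10 EXAMPLE1234567890.gr7.us-west-2.eks.amazonaws.com
10.0.3.11 EXAMPLE1234567890.gr7.us-west-2.eks.amazonaws.com
10.0.3.12 EXAMPLE1234567890.gr7.us-west-2.eks.amazonaws.com
EOF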

Worker nodes

Local clusters currently support self-managed node groups with Amazon EKS optimized Amazon Linux Amazon Machine Images (AMIs). You can choose the AMI ID for the corresponding Kubernetes version, AWS Region, and processor type according to your AWS Outposts environment. If you require a custom AMI, then follow the instructions in the documentation.
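
One way to look up the AMI ID is through the public SSM parameter for Amazon EKS optimized AMIs; adjust the Kubernetes version, Region, and architecture to match your environment.

aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.21/amazon-linux-2/recommended/image_id \
  --region us-west-2 \
  --query "Parameter.Value" \
  --output text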

Add-ons

Local clusters currently support self-managed add-ons. Amazon EKS automatically installs the self-managed VPC CNI, CoreDNS, and kube-proxy add-ons during local cluster creation. Other add-ons, such as the Amazon EBS CSI driver and the AWS Load Balancer Controller, can provide additional operational support if your use case requires it.

Observability

Local clusters support Amazon EKS control plane logging, and you can select the log types you want to send directly from the cluster to Amazon CloudWatch Logs in your account. These logs make it easy for you to secure and monitor your clusters. To learn more about Amazon EKS control plane logging, refer to the Amazon EKS User Guide.
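
For instance, you can enable selected log types with the AWS CLI; the log types shown here are an arbitrary subset.

aws eks update-cluster-config \
  --name my-local-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'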

The Amazon EKS control plane logs are cached locally on the Kubernetes control plane instances during network disconnects. Upon reconnect, the logs are sent to Amazon CloudWatch Logs in the parent AWS Region. Additionally, local clusters support the Kubernetes API server /metrics endpoint, and you can use your choice of tools such as Prometheus, Grafana, and Amazon EKS Partner solutions to monitor the cluster locally.
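
As a quick check, you can read the raw Kubernetes API server metrics through kubectl:

kubectl get --raw /metrics | head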

Preparing for network disconnects

Similar to Amazon EKS clusters in the cloud, local clusters use AWS IAM as the default authentication mechanism using the AWS IAM authenticator for Kubernetes. As IAM isn’t available during disconnect, local clusters support an alternative authentication mechanism using x509 certificates. For more information on how to use the client x509 certificates, refer to the Preparing for network disconnects topic in the documentation.
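
As a rough sketch, once you’ve issued a client certificate and key per that topic, you can wire them into a dedicated kubeconfig user and context; the names and file paths here are hypothetical.

# Hypothetical user, context, cluster entry, and file names.
kubectl config set-credentials local-cluster-cert-user \
  --client-certificate=admin-cert.pem \
  --client-key=admin-key.pem
kubectl config set-context local-cluster-cert \
  --cluster=my-local-cluster \
  --user=local-cluster-cert-user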

To enable mutating operations (create, update, and scale) for workload resources (Deployments, Jobs, CronJobs, and Services) your application’s container images must be accessible over the local network, and your cluster must have enough capacity. Local clusters don’t host a container registry for you. Container images are cached on the nodes if the pods have previously run on those nodes. If you typically pull your application’s container images from Amazon ECR in the cloud, consider running a local cache or registry if you require mutating operations for workload resources during network disconnects. If you anticipate increases in traffic during network disconnects, then you can provision spare worker node capacity in your cluster when connected to the cloud.

Local clusters use the VPC Container Network Interface (CNI) plugin in secondary IP mode, and Pods receive IP addresses from the AWS Outposts subnet’s Classless Inter-Domain Routing (CIDR) range. The VPC CNI maintains a warm pool of IPs for faster Pod launch times. Consider changing the warm pool values according to your scaling needs during a disconnected state, as shown below.
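
For example, the WARM_IP_TARGET setting on the aws-node DaemonSet controls how many IPs the CNI keeps warm per node; the target value here is arbitrary.

kubectl set env daemonset aws-node -n kube-system WARM_IP_TARGET=10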

Local clusters use Amazon Elastic Block Store (Amazon EBS) as the default storage class for persistent volumes and the Amazon EBS CSI driver to manage the lifecycle of Amazon EBS persistent volumes. During network disconnects, Pods backed by Amazon EBS can’t be created, updated, or scaled because this requires calls to the Amazon EBS API in the cloud. If you’re deploying stateful workloads on local clusters and require mutating operations during network disconnects, then consider using an alternative storage mechanism. Similarly, deploying new Ingress resources backed by an AWS Application Load Balancer (ALB) isn’t supported during network disconnects.

Walkthrough

Getting started with local clusters

In this section, we provision a local cluster on an AWS Outposts rack, deploy a sample application, and demonstrate cluster communication from an on-premises host over the AWS Outposts rack’s local gateway.

Prerequisites

  • AWS CLI version 2.7.32 or later or 1.25.72 or later with appropriate credentials
  • Amazon EKS vended kubectl
  • eksctl version 0.112.0 or later
  • An existing AWS Outpost installed and configured in your on-premises data center
  • An existing VPC and subnet that meet the local cluster requirements
  • A host with access to the on-premises local network and AWS Outposts rack’s LGW

Create a local cluster

The eksctl and Amazon EKS APIs support new AWS Outposts-specific parameters for local clusters. Refer to the Amazon EKS API documentation and eksctl documentation for a complete list of the new parameters. The AWS Outpost used in this walkthrough is configured (i.e., slotted) with the m5d.large instance type, so both the Kubernetes control plane and worker nodes use m5d.large instances.

In this post, we use eksctl to create a local cluster. Make sure you’re using eksctl version 0.112.0 or later. When you don’t explicitly specify a VPC and subnets, eksctl creates these resources on your behalf. However, eksctl doesn’t automatically add a route to the AWS Outposts rack’s local gateway in the subnet route table. To access your Kubernetes API server over your local network, after local cluster creation you must associate the VPC with the local gateway route table and configure the local gateway as a route target for the AWS Outposts subnet. Follow the AWS Outposts user guide to create the VPC association and the subnet user guide to add a local gateway route target, as sketched below.
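
A hedged sketch of those two steps with the AWS CLI, using placeholder IDs and an example on-premises CIDR:

# Associate the VPC with the local gateway route table (placeholder IDs).
aws ec2 create-local-gateway-route-table-vpc-association \
  --local-gateway-route-table-id lgw-rtb-EXAMPLE \
  --vpc-id vpc-EXAMPLE

# Route traffic destined for your on-premises CIDR through the local gateway.
aws ec2 create-route \
  --route-table-id rtb-EXAMPLE \
  --destination-cidr-block 192.168.0.0/16 \
  --local-gateway-id lgw-EXAMPLE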

For this walkthrough, we use a VPC and AWS Outposts subnet that are already created and configured. In this example, eksctl creates worker nodes in the same AWS Outposts subnet as the Kubernetes control plane instances.

We run the eksctl commands from the on-premises bastion host. Save the following settings to a file called local-cluster.yaml, and replace the VPC, subnet, and Outpost identifiers with your own.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-local-cluster
  version: "1.21"
  region: us-west-2

vpc:
  clusterEndpoints:
    privateAccess: true
  id: <vpc-id>
  subnets:
    private:
      outpost-subnet-1:
        id: <subnet-id>
            
outpost:
  controlPlaneOutpostARN: <outpost-arn>
  controlPlaneInstanceType: m5d.large
  
nodeGroups:
  - name: outpost-worker-nodes
    amiFamily: AmazonLinux2
    instanceType: m5d.large
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    volumeSize: 50
    volumeType: gp2
    volumeEncrypted: true
    privateNetworking: true

Create a cluster:

eksctl create cluster -f local-cluster.yaml --without-nodegroup

Cluster creation takes a few minutes. You can query the status of your cluster with the Amazon EKS APIs.

aws eks describe-cluster \
  --name my-local-cluster \
  --query "cluster.status"

Wait for the cluster status to be ACTIVE before proceeding to the next steps.

Connecting to the cluster

Local clusters support private endpoint access only. Therefore, you need to edit the inbound rules of the cluster security group (eks-cluster-sg-my-local-cluster-*) to allow Kubernetes API endpoint access from the on-premises bastion host.
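
A sketch of that rule with the AWS CLI, using a placeholder group ID and bastion CIDR:

aws ec2 authorize-security-group-ingress \
  --group-id sg-EXAMPLE \
  --protocol tcp \
  --port 443 \
  --cidr 10.0.0.0/24

With the rule in place, you can use kubectl to access the Kubernetes API server.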

kubectl get nodes -o wide

Create worker nodes

Create a self-managed node group.

eksctl create nodegroup -f local-cluster.yaml

Before deploying the application, make sure that the nodes have joined the cluster and are in the Ready state, as shown below.
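
One way to wait is with kubectl wait, filtering on the nodegroup label that eksctl applies so the control plane instances (which intentionally report NotReady) are excluded. The label value matches this walkthrough’s nodegroup, and the timeout is an arbitrary choice.

kubectl wait --for=condition=Ready node \
  -l alpha.eksctl.io/nodegroup-name=outpost-worker-nodes \
  --timeout=10m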

Install the AWS Load Balancer Controller add-on

AWS Outposts currently supports the AWS Application Load Balancer (ALB), and the AWS Load Balancer Controller manages AWS Elastic Load Balancers for Amazon EKS clusters. Currently, local clusters don’t support IAM roles for service accounts (IRSA), so we recommend adding the AWSLoadBalancerControllerIAMPolicy to the node role.

helm repo add eks https://aws.github.io/eks-charts

helm repo update

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=my-local-cluster \
--set serviceAccount.create=true \
--set enableShield=false \
--set enableWaf=false \
--set enableWafv2=false

Deploy a sample workload

Copy the content below to eks-sample-app.yaml. The sample application uses a Kubernetes Ingress to allow access to the application from outside the cluster. Make sure the subnet you’re using for the load balancer has the tag kubernetes.io/role/internal-elb: 1, and replace the subnet ID in the alb.ingress.kubernetes.io/subnets annotation with your own AWS Outposts subnet ID.

---
apiVersion: v1
kind: Namespace
metadata:
  name: "eks-sample-app"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-sample-deployment
  namespace: eks-sample-app
  labels:
    app: eks-sample-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: eks-sample-app
  template:
    metadata:
      labels:
        app: eks-sample-app
    spec:
      containers:
        - name: nginx
          image: public.ecr.aws/nginx/nginx:1.21
          ports:
            - containerPort: 80
---

apiVersion: v1
kind: Service
metadata:
  namespace: eks-sample-app
  name: eks-sample-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: eks-sample-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: eks-sample-app
  name: eks-sample-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/subnets: subnet-0f3aff5463c9efa5e
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: eks-sample-svc
              port:
                number: 80

Create the eks-sample-app application.

kubectl apply -f eks-sample-app.yaml

The load balancer takes a few minutes to become active. You can access your newly deployed sample application via the load balancer endpoint using the following commands.

export SAMPLE_APP=$(kubectl get ingress/eks-sample-ingress -n eks-sample-app -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo http://${SAMPLE_APP}
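
From a machine on your local network with a route to the AWS Outposts rack’s LGW, you can then verify that the application responds:

curl -s http://${SAMPLE_APP}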

Cleanup

To avoid incurring future charges, you can delete all resources created during this exercise.

eksctl delete cluster -f local-cluster.yaml

Conclusion

In this post, we showed you how to run Amazon EKS on AWS Outposts. Local clusters were the most requested feature by AWS Outposts customers to run Kubernetes applications on-premises, in co-location facilities, and at the edge. We’re excited to see how you use this new feature and would love to hear your feedback directly on the Containers Roadmap GitHub repo or through your account teams.

Sheetal Joshi

Sheetal Joshi is a Principal Developer Advocate on the Amazon EKS team. Sheetal worked for several software vendors before joining AWS, including HP, McAfee, Cisco, Riverbed, and Moogsoft. For about 20 years, she has specialized in building enterprise-scale, distributed software systems, virtualization technologies, and cloud architectures. At the moment, she is working on making it easier to get started with, adopt, and run Kubernetes clusters in the cloud, on-premises, and at the edge.

Chris Splinter

Chris is a Senior Product Manager for Amazon EKS. When he’s not building products and writing, he loves both playing and watching sports, and exploring the world with family and friends.