Deliver Namespace as a Service multi-tenancy for Amazon EKS using Karpenter
Introduction
Karpenter is an open-source, high-performance Kubernetes cluster autoscaler that automatically provisions new nodes in response to unschedulable pods. Customers choose Karpenter for many reasons, such as improving the efficiency and cost of running workloads in their clusters. Karpenter is configured through a custom resource called a Provisioner. The Provisioner sets constraints on the nodes that Karpenter can create and the pods that can run on those nodes.
Customers who are considering a multi-tenant Amazon Elastic Kubernetes Service (Amazon EKS) cluster are looking to share cluster resources across different teams (i.e., tenants). However, they still require node isolation for critical and highly regulated workloads. When deploying Karpenter on a multi-tenant Amazon EKS cluster, keep in mind that Karpenter doesn’t support namespaces as metadata on its own custom resources (both AWSNodeTemplate and Provisioner). This post shows how we used Karpenter to provision nodes and scale the cluster up and down per tenant without impacting other tenants.
Walkthrough
This solution uses an admission controller, Open Policy Agent (OPA) Gatekeeper, to enforce taints/tolerations and node selectors on the nodes that Karpenter creates for specific namespaces.
In this example, we use the following scenario:
- We have two tenants, TenantA and TenantB, that need to run deployments in different namespaces
- TenantA will use a namespace named tenant-a and TenantB will use a namespace named tenant-b
- TenantA workloads must run on nodes that are part of PoolA, and TenantB workloads must run on nodes that are part of PoolB
- As consumers of the Amazon EKS cluster, tenants are completely isolated, and they don’t need to change their pod specs to have their pods scheduled on the correct node pool
Prerequisites
- An Amazon EKS Cluster
- Enable VPC-CNI Network Policy
- Follow the Getting Started section in the Amazon EKS documentation to install the AWS CLI, kubectl, and eksctl on your machine
- Karpenter (v0.31 or earlier) (Installation Guide). Note: Karpenter has graduated to beta, so the APIs have changed since the writing of this post. Please ensure you use the alpha APIs with this post. If you would like to test with beta, you can review the beta changes here.
- Open Policy Agent (OPA) Gatekeeper (v3.13.0 or later) (Installation Guide)
Create two namespaces
Create two namespaces called tenant-a and tenant-b with the commands:
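For example, using kubectl:

```bash
kubectl create namespace tenant-a
kubectl create namespace tenant-b
```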
Confirm you have two newly created namespaces with the following command:
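```bash
kubectl get namespaces
```

You should see tenant-a and tenant-b listed alongside the default namespaces.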
Create a default deny-all network policy
By default, pods aren’t isolated for egress and ingress traffic: all inbound and outbound connections are allowed. In a multi-tenant environment, where users share the same Amazon EKS cluster, they require isolation between their namespaces, pods, or external services. Kubernetes NetworkPolicy helps control traffic flow at the IP address or port level. Please check this section of the Amazon EKS best practices guide to build a multi-tenant EKS cluster.
VPC-CNI supports network policies natively starting from version 1.14 on Amazon EKS 1.25 or later. It integrates with the upstream Kubernetes Network Policy Application Programming Interface (API), ensuring compatibility and adherence to Kubernetes standards. You can define policies using the different identifiers supported by the upstream API. As a best practice, network policies should follow the principle of least privilege. First, we create a deny-all policy that restricts all inbound and outbound traffic across namespaces, and then we start allowing traffic, such as Domain Name System (DNS) queries. For more details, you can check this section of the Amazon EKS best practices guide.
In this example, we’ll use network policies that deny all traffic across namespaces and allow DNS queries for service name resolution:
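Here is a minimal sketch of such policies for the tenant-a namespace (repeat the same for tenant-b); the policy names are illustrative, and the kube-system selector assumes CoreDNS runs in its default namespace:

```yaml
# Deny all ingress and egress traffic for every pod in tenant-a
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all        # illustrative name
  namespace: tenant-a
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
---
# Allow DNS queries to CoreDNS so service names still resolve
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns               # illustrative name
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```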
Install Karpenter with Provisioner files for node pool configuration
After installing Karpenter on the Amazon EKS cluster, create an AWSNodeTemplate and two Provisioner files as shown below. We’ll create the node pools ourselves using a combination of taints/tolerations and node labels. Use the following schema when creating the Provisioners:
- Nodes in PoolA will have:
- A NoSchedule taint with key node-pool and value pool-a
- A label with key node-pool and value pool-a
- Nodes in PoolB will have:
- A NoSchedule taint with key node-pool and value pool-b
- A label with key node-pool and value pool-b
Create a manifest called default-awsnodetemplate.yaml:
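A minimal sketch using the alpha karpenter.k8s.aws/v1alpha1 API; the karpenter.sh/discovery tags are an assumption based on the standard Karpenter getting-started setup, so replace <CLUSTER_NAME> with your cluster name:

```yaml
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  # Assumes your subnets and security groups are tagged for Karpenter discovery
  subnetSelector:
    karpenter.sh/discovery: <CLUSTER_NAME>
  securityGroupSelector:
    karpenter.sh/discovery: <CLUSTER_NAME>
```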
Create a manifest called pool-a.yaml:
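A sketch of the PoolA Provisioner using the alpha karpenter.sh/v1alpha5 API, applying the taint and label from the schema above and referencing the default AWSNodeTemplate; the ttlSecondsAfterEmpty value is an illustrative assumption:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: pool-a
spec:
  labels:
    node-pool: pool-a
  taints:
    - key: node-pool
      value: pool-a
      effect: NoSchedule
  providerRef:
    name: default            # the AWSNodeTemplate created above
  ttlSecondsAfterEmpty: 30   # illustrative: reclaim empty nodes quickly
```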
Create a manifest called pool-b.yaml:
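pool-b.yaml is identical except that every pool-a value becomes pool-b:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: pool-b
spec:
  labels:
    node-pool: pool-b
  taints:
    - key: node-pool
      value: pool-b
      effect: NoSchedule
  providerRef:
    name: default            # the AWSNodeTemplate created above
  ttlSecondsAfterEmpty: 30   # illustrative: reclaim empty nodes quickly
```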
You can save and apply the AWSNodeTemplate and Provisioners to your cluster by running the following command:
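```bash
kubectl apply -f default-awsnodetemplate.yaml -f pool-a.yaml -f pool-b.yaml
```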
Deploy OPA Gatekeeper policies
Confirm OPA Gatekeeper is deployed and running in your cluster with this command:
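```bash
kubectl get pods -n gatekeeper-system
```

All of the Gatekeeper pods should be in the Running state (gatekeeper-system is the default installation namespace).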
Forcing deployments in a specific namespace onto the proper node pool
Using OPA Gatekeeper, we can force our deployments to land on the proper node pool based on their namespace. Using an admission controller, we can mutate the pod to add a nodeSelector and tolerations to its spec. Using a nodeSelector still allows teams to define their own nodeAffinity to provide additional guidance on how Karpenter should provision nodes. Rather than writing our own admission controller, we use OPA Gatekeeper and its mutation capability.
Here are the Assign policies that we use for Pool A and Pool B; a similar pair is needed for each namespace (node pool).
Create a policy called nodepool-selector-pool-a:
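A sketch using Gatekeeper’s Assign mutation CRD, which sets spec.nodeSelector on every pod created in the tenant-a namespace:

```yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: nodepool-selector-pool-a
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
      - apiGroups: ["*"]
        kinds: ["Pod"]
    namespaces: ["tenant-a"]
  location: "spec.nodeSelector"
  parameters:
    assign:
      value:
        node-pool: pool-a
```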
Create a policy called nodepool-selector-pool-b:
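The Pool B policy is the same, with tenant-b and pool-b substituted:

```yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: nodepool-selector-pool-b
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
      - apiGroups: ["*"]
        kinds: ["Pod"]
    namespaces: ["tenant-b"]
  location: "spec.nodeSelector"
  parameters:
    assign:
      value:
        node-pool: pool-b
```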
The pods also need tolerations for the node-pool taints so they can be scheduled on the tainted worker nodes that Karpenter creates. Create the tolerations using manifests like the ones shown below:
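A sketch of the toleration mutation for tenant-a (repeat for tenant-b with pool-b); the policy name is illustrative:

```yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: nodepool-toleration-pool-a   # illustrative name
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  match:
    scope: Namespaced
    kinds:
      - apiGroups: ["*"]
        kinds: ["Pod"]
    namespaces: ["tenant-a"]
  location: "spec.tolerations"
  parameters:
    assign:
      value:
        - key: node-pool
          operator: Equal
          value: pool-a
          effect: NoSchedule
```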
Testing it out
Now that we have our node pools defined and the mutation capability in place, let’s create a deployment for each of our tenants and make sure everything functions as expected.
Run the following commands to create the deployments:
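For example (the deployment names and the nginx image are illustrative):

```bash
kubectl create deployment nginx-tenant-a --image=nginx -n tenant-a
kubectl create deployment nginx-tenant-b --image=nginx -n tenant-b
```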
As you can see, when creating a deployment in the tenant-a namespace, the nodeSelector and tolerations are added to the pod spec through OPA.
In the following pod specification, note the nodeSelector and tolerations:
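Abridged, this is the shape you should see in the mutated pod spec (for example via kubectl get pod -n tenant-a -o yaml):

```yaml
spec:
  nodeSelector:
    node-pool: pool-a
  tolerations:
    - key: node-pool
      operator: Equal
      value: pool-a
      effect: NoSchedule
```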
Let’s confirm the new nodes are running with the following command:
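```bash
kubectl get nodes -L node-pool
```

The -L flag prints the node-pool label as a column, so you can verify that each new node carries the expected pool label.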
Last, let’s make sure that ingress traffic to both namespaces is blocked:
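One way to verify is with a throwaway curl pod; the curlimages/curl image and the pod IP placeholder are illustrative, and the request should time out because of the deny-all policy:

```bash
# Find the IP of a pod in tenant-b
kubectl get pods -n tenant-b -o wide

# From the tenant-a namespace, try to reach that pod IP; this should time out
kubectl run curl-test -n tenant-a --image=curlimages/curl --rm -it --restart=Never \
  -- curl -m 5 http://<TENANT_B_POD_IP>
```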
Now you should see the proper worker nodes starting up, each dedicated to its own namespace and isolated from the others, thanks to Karpenter and the network policies implemented by VPC-CNI!
Note: it might take a moment or two for the nodes to start up and join the cluster.
Cleaning up
After you complete this experiment, delete the Kubernetes deployments and their respective resources.
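Assuming the illustrative names used earlier:

```bash
kubectl delete deployment nginx-tenant-a -n tenant-a
kubectl delete deployment nginx-tenant-b -n tenant-b
kubectl delete -f pool-a.yaml -f pool-b.yaml -f default-awsnodetemplate.yaml
kubectl delete namespace tenant-a tenant-b
```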
Delete your Amazon EKS cluster (the method depends on how you created it)
Conclusion
In this post, we showed you how to use Karpenter along with an admission controller like OPA Gatekeeper. We were able to add to our pods the nodeSelector and tolerations that match the labels and taints assigned by Karpenter to the newly created nodes through different Provisioners (i.e., one Provisioner per namespace). Together with the network policies provided by VPC-CNI, we built a scalable multi-tenant environment on top of Amazon EKS in which each workload is isolated from the others.