Set up soft multi-tenancy with Kiosk on Amazon Elastic Kubernetes Service
Achieving complete isolation between multiple tenants running in the same Kubernetes cluster is impossible today. This is because Kubernetes was designed with a single control plane per cluster, and all tenants running in the cluster share that control plane. Hosting multiple tenants in a single cluster brings some advantages, the main ones being efficient resource utilization and sharing, reduced cost, and reduced configuration overhead.
However, a multi-tenant Kubernetes setup creates special challenges when it comes to resource sharing and security. Let’s understand these better. In a shared cluster, one of the goals is for each tenant to get a fair share of the available resources to match its requirements. A possible side effect, the noisy neighbor problem, needs to be mitigated by ensuring the right level of resource isolation among tenants. The second challenge, and the main one, is security. Isolation between tenants is mandatory to prevent a malicious tenant from compromising others. Depending on the security level implemented by the isolation mechanisms, the industry divides shared tenancy models into hard and soft multi-tenancy.
Hard multi-tenancy implies no trust between tenants: one tenant cannot access anything from the others. This approach suits, for example, service providers that host multiple tenants unknown to each other, where the main focus is to completely isolate the tenants’ business from one another. In the open-source community, there is ongoing work to solve this challenge, but this approach is not widely used in production workloads yet.
On the other end of the spectrum is soft multi-tenancy. This implies a trust relationship between tenants, which could be part of the same organization or team; the main focus in this approach is not security isolation but fair utilization of resources among tenants.
There are a few initiatives in the open-source community to implement soft multi-tenancy and one of them is Kiosk. Kiosk is an open source framework for implementing soft multi-tenancy in a Kubernetes cluster. In this post, you will see a step-by-step guide to implement it in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
Before proceeding with the setup, make sure you fulfill the following prerequisites:
- Log in to your AWS account.
- Create an Amazon EKS cluster in the AWS Management Console.
- Connect to the Amazon EKS cluster from the local machine.
- Install kubectl on the local machine. This is the command line tool for controlling Kubernetes clusters.
- Install helm version 3 on the local machine. This is a package manager for Kubernetes.
- Ensure the cluster runs Kubernetes version 1.14 or higher, which Kiosk requires.
In order to demonstrate how to set up Kiosk on Amazon EKS, the following architecture will be deployed: a single Kubernetes cluster shared between two tenants, a Node.js application and a Redis data store.
Before starting with the setup, here are some of the basic building blocks of Kiosk:
- Cluster Admin – has administrator permissions to perform any operation across the cluster.
- Account – a resource associated with a tenant. This is defined and managed by the cluster admin.
- Account User – can be a Kubernetes user, group, or service account. This is managed by the cluster admin and can be associated with multiple accounts.
- Space – is a virtual representation of a regular Kubernetes namespace and can belong to a single account.
- Account Quota – defines cluster-wide aggregated limits for an account.
- Template – is used to initialize a space with a set of Kubernetes resources. A template is enforced through account configurations. This is defined and managed by the cluster admin.
- TemplateInstance – is an actual instance of a template when it is applied to a space. This contains information about the template and parameters used to instantiate it.
Account, space, account quota, template, and template instance are custom resources created in the cluster when the kiosk chart is installed. Granular permissions can be added to these resources, and this enables tenant isolation.
1. Verify that you can view the worker nodes in the node group of the EKS cluster. The EKS cluster used in this guide consists of 3 x m5.large (2 vCPU and 8 GiB) instances.
$kubectl get nodes
NAME                              STATUS   ROLES    AGE   VERSION
ip-192-168-xxx-xxx.ec2.internal   Ready    <none>   48m   v1.16.8-eks-e16311
ip-192-168-xxx-xxx.ec2.internal   Ready    <none>   48m   v1.16.8-eks-e16311
ip-192-168-xxx-xxx.ec2.internal   Ready    <none>   48m   v1.16.8-eks-e16311
2. Create a dedicated namespace and install Kiosk using helm.
$kubectl create namespace kiosk
$helm install kiosk --repo https://charts.devspace.sh/ kiosk --namespace kiosk --atomic
This step creates a pod in the kiosk namespace. You will create two IAM (Identity and Access Management) users, dev and dba, each managing a separate tenant. Because EKS supports integration of Kubernetes RBAC (Role-Based Access Control) with the IAM service through AWS IAM Authenticator for Kubernetes, the next step is to add RBAC access for the two users.
1. Create the users dev and dba by following these steps. Because the IAM service is used for authentication only, you don’t need to grant any permissions during user creation. Permissions for each user in the Kubernetes cluster will be granted through the RBAC mechanism in the next steps.
The IAM user that created the EKS cluster in the initial setup phase is automatically granted administrator permissions for the cluster, so you will use it as the cluster admin in this guide. If IAM access keys have not already been created for the cluster admin, follow these steps to do so and include them under the kube-cluster-admin named profile in the credentials file as described here.
Note: all commands in this guide are executed as the cluster admin unless explicitly stated otherwise in the kubectl command. To use the cluster admin IAM credentials, override the AWS_PROFILE environment variable.
Linux or macOS
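On Linux or macOS, the profile can be selected for the current shell session as follows, assuming the named profile kube-cluster-admin created above:

```shell
# Point the AWS CLI and the IAM authenticator at the cluster admin profile
export AWS_PROFILE=kube-cluster-admin
```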
2. Add RBAC access to the two users by updating the aws-auth ConfigMap in the kube-system namespace.
$kubectl edit configmap aws-auth -n kube-system
3. Add the two users under data.mapUsers. The user ARN can be copied from the IAM console.
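As a sketch, the relevant part of the aws-auth ConfigMap could look like the following; <account-id> is a placeholder you must replace with your own AWS account ID:

```yaml
# Fragment of the aws-auth ConfigMap in the kube-system namespace.
# Replace <account-id> with your AWS account ID.
data:
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/dev
      username: dev
    - userarn: arn:aws:iam::<account-id>:user/dba
      username: dba
```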
Note: the IAM entity that creates the cluster is automatically granted system:masters permissions in the cluster’s RBAC configuration. Users dev and dba will have read-only permissions by default, as they haven’t been added to any group.
Kubernetes allows a user to act as another user when running kubectl commands from the command line, through user impersonation. To do this, the impersonating user must have permission to perform the impersonate action on the type of attribute being impersonated, in this case user. As the cluster admin has system:masters permissions by default, it can impersonate users dev and dba. To impersonate a user, use the --as=<username> flag in the kubectl command.
Create a Kiosk account for each tenant
1. Create a definition file for the Node.js application’s account.
An account defines subjects, which are the account users that can access the account. An account user can be a Kubernetes user, group, or service account. In this case, the account user is dev, which was previously added to the aws-auth ConfigMap.
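A minimal node-account.yml might look like the following sketch, assuming the tenancy.kiosk.sh/v1alpha1 API version; check the version shipped with your Kiosk chart:

```yaml
# node-account.yml – Account for the Node.js tenant, owned by user dev
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: node-account
spec:
  subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: dev
```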
2. Create the account as a cluster admin.
$kubectl apply -f node-account.yml
3. Repeat the step for the Redis application, updating the account name to redis-account and the account user to dba.
4. View the created accounts as a cluster admin.
$kubectl get accounts
5. View the created accounts as user dev. You will only be able to view the accounts associated with this user.
$kubectl get accounts --as=dev
Create a Kiosk space for each account
1. By default, only the cluster admin can create spaces. To allow an account user to create spaces, create a Kubernetes RBAC ClusterRoleBinding. Let’s allow users dev and dba to create spaces in their accounts.
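A cluster-role-binding.yml along these lines would work; the binding name here is a hypothetical choice:

```yaml
# cluster-role-binding.yml – lets dev and dba manage spaces via kiosk-edit
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kiosk-space-creators   # hypothetical name, choose your own
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kiosk-edit
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: dev
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: dba
```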
kiosk-edit is a ClusterRole created when the Kiosk chart was installed in the cluster; it allows create, update, and delete actions on space resources by the subjects included in the ClusterRoleBinding configuration. The full configuration of the kiosk-edit role can be seen by running:
$kubectl get clusterrole kiosk-edit -o yaml
2. Create the ClusterRoleBinding as a cluster admin.
$kubectl apply -f cluster-role-binding.yml
3. Create a space for the Node.js application. First, create the definition file.
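A space definition is short; a sketch of node-space.yml, again assuming the tenancy.kiosk.sh/v1alpha1 API version:

```yaml
# node-space.yml – Space belonging to node-account
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: node-space
spec:
  account: node-account
```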
4. Impersonate user dev to create the space in the node-account.
$kubectl apply -f node-space.yml --as=dev
5. Repeat the step for the Redis application, updating the space name and account, and impersonating user dba.
6. Note that trying to create a space in the redis-account as user dev will result in an error.
7. View the current spaces as a cluster admin. You will see both node-space and redis-space.
$kubectl get spaces
8. View the current spaces as user dev. Note that you only have access to the spaces owned by user dev, in this case node-space, which belongs to node-account.
$kubectl get spaces --as=dev
9. Spaces are a virtual representation of Kubernetes namespaces, so the same syntax can be used in the command line. For example, to list all pods in a space:
$kubectl get pods -n redis-space
Apply restrictions on the Kiosk accounts
Limit the number of spaces per account
1. Limit the number of spaces that can be associated with an account. Let’s update the definition file node-account.yml and add the space limit.
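The updated account could look like this sketch, which caps the account at two spaces via spec.space.limit:

```yaml
# node-account.yml – updated with a limit of two spaces
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: node-account
spec:
  space:
    limit: 2
  subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: dev
```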
2. Apply the changes to the node-account as a cluster admin.
$kubectl apply -f node-account.yml
Now the node-account can only have two spaces. Attempting to create a third space throws an error.
3. Apply the same limit to the second account by updating its definition file, redis-account.yml.
Apply account quotas to existing accounts
1. Define the compute resource limits for an account by creating an account quota. First, create the quota definition file.
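As a sketch, node-quota.yml could look like the following; the specific hard limits are illustrative values, not requirements:

```yaml
# node-quota.yml – aggregated limits across all spaces of node-account
apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: node-quota
spec:
  account: node-account
  quota:
    hard:
      limits.cpu: "2"      # illustrative values
      limits.memory: 4Gi
```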
2. Create the account quota as a cluster admin.
$kubectl apply -f node-quota.yml
AccountQuotas are very similar to Kubernetes resource quotas and restrict the same resource types, with the added benefit that the restrictions apply across all spaces of the account, unlike resource quotas, which apply to a single namespace.
3. AccountQuotas can be created by cluster admins only. Trying to create an account quota as an account user, results in an error.
$kubectl apply -f node-quota.yml --as=dev
User "dev" cannot get resource "accountquotas" in API group "config.kiosk.sh" at the cluster scope
4. View the account quotas across the cluster as a cluster admin.
$kubectl get accountquotas
Create templates for spaces
1. A template in kiosk serves as a blueprint for a space. Templates are defined and managed by cluster admins by default.
2. Let’s create a template that limits every container deployed in a space to a CPU request of 500 millicores and a CPU limit of 1 CPU.
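A sketch of template-definition.yml follows; the template name space-restrictions is a hypothetical choice, and the manifest it carries is a standard Kubernetes LimitRange:

```yaml
# template-definition.yml – creates a LimitRange in every space it initializes
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: space-restrictions   # hypothetical name
resources:
  manifests:
  - apiVersion: v1
    kind: LimitRange
    metadata:
      name: space-limit-range
    spec:
      limits:
      - type: Container
        default:            # limit applied when a container sets none
          cpu: "1"
        defaultRequest:     # request applied when a container sets none
          cpu: 500m
```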
3. Create the template as a cluster admin.
$kubectl apply -f template-definition.yml
4. By default, templates are optional. To enforce that space creation follows the template rules, the template needs to be added to the account configuration. Let’s update the redis-account to enforce the template when spaces are created within the account.
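The updated redis-account.yml could look like this sketch, assuming the mandatory template is named space-restrictions (a hypothetical name):

```yaml
# redis-account.yml – enforces a template for every new space
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: redis-account
spec:
  space:
    templateInstances:
    - spec:
        template: space-restrictions   # hypothetical template name
  subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: dba
```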
5. Apply the changes to the account as a cluster admin.
$kubectl apply -f redis-account.yml
6. Let’s test this by creating a space within the redis-account. First, create the definition file.
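The definition file redis-mandatory-space.yml is analogous to the earlier space sketch:

```yaml
# redis-mandatory-space.yml – Space in redis-account
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: redis-mandatory-space
spec:
  account: redis-account
```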
7. Create the space as a cluster admin.
$kubectl apply -f redis-mandatory-space.yml
8. Once the space is created, you can verify that the LimitRange resource has been created.
$kubectl get limitrange -n redis-mandatory-space
9. For each space created from a template, a template instance is created. Template instances can be used to track resources created from templates. View the instances in the new space.
$kubectl get templateinstances -n redis-mandatory-space
10. To test that the template is enforced, you can deploy a test pod in the new space and verify if the limit ranges are applied.
$kubectl run nginx --image nginx -n redis-mandatory-space --restart=Never
11. Check the pod configuration and verify the resource limits applied.
$kubectl describe pod nginx -n redis-mandatory-space
12. Delete the pod to continue the setup.
$kubectl delete pod nginx -n redis-mandatory-space
Deploy applications in the two accounts
1. Deploy the Node.js application in the first account as user dev. Because an account quota has been created for node-account, the required compute resources need to be specified in the deployment’s definition file.
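A sketch of node-deployment.yml; the image, command, and resource values are illustrative placeholders, not the original application:

```yaml
# node-deployment.yml – sketch; image and resource values are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
    spec:
      containers:
      - name: node
        image: node:14-alpine                            # placeholder image
        command: ["node", "-e", "setInterval(() => {}, 1000)"]  # keep-alive stub
        resources:                     # required because of the AccountQuota
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
```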
$kubectl apply -f node-deployment.yml -n node-space --as=dev
2. Deploy the Redis data store in the second account as user dba.
$kubectl create deploy redis --image redis -n redis-space --as=dba
Verify account isolation
1. Check the resources accessible to account user dev.
$kubectl get all -n node-space --as=dev
2. Check whether account user dev has access to any resources in the redis-space. You will get a series of errors.
$kubectl get all -n redis-space --as=dev
Verify access between tenants
1. View the pod in the node-account. Note the name of the pod.
$kubectl get pods -n node-space --as=dev
2. View the pod in the redis-account. Note the IP address of the pod.
$kubectl get pods -n redis-space -o wide --as=dba
3. Test the connection between the two pods.
$kubectl exec -n node-space <pod-name> --as=dev -- ping <ip-address>
When you are finished, remove the Amazon Elastic Kubernetes Service cluster to avoid further costs.
Multi-tenancy in Kubernetes is a hot topic in the open-source community these days, due to the platform’s evolution and the complexity of implementing this feature. To get the latest updates, you can follow the Kubernetes Multi-Tenancy Special Interest Group at kubernetes-sigs/multi-tenancy.
In this post, you have seen how easy it is to set up soft multi-tenancy in a single Kubernetes cluster with Kiosk and the added benefits over the native Kubernetes functionality. You achieved resource isolation across the cluster through account quotas and implemented security boundaries through primitives like accounts, account users, and spaces.