AWS Open Source Blog
Using Pod Security Policies with Amazon EKS Clusters
You asked for it, and with Kubernetes 1.13 we have enabled it: Amazon Elastic Container Service for Kubernetes (EKS) now supports Pod Security Policies. In this post we will review what PSPs are, how they are enabled in the Kubernetes control plane, and how to use them, from both the cluster admin and the developer perspective.
What is a Pod Security Policy and why should I care?
As a cluster admin, you may have wondered how to enforce certain policies concerning runtime properties for pods in a cluster. For example, you may want to prevent developers from running a pod with containers that don’t define a user (hence, run as root). You may have documentation for developers about setting the security context in a pod specification, and developers may follow it … or they may choose not to. In any case, you need a mechanism to enforce such policies cluster-wide.
The solution is to use Pod Security Policies (PSP) as part of a defense-in-depth strategy.
As a quick reminder, a pod’s security context defines privileges and access control settings, such as discretionary access control (for example, access to a file based on a certain user ID), Linux capabilities (for example, adding or dropping capabilities such as NET_ADMIN), AppArmor profiles, SECCOMP profiles (filtering certain system calls), as well as mandatory access control (through SELinux).
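To make this concrete, here is a minimal sketch of a pod spec that sets a security context (the pod name and the specific values are illustrative, not part of any EKS default):

apiVersion: v1
kind: Pod
metadata:
  name: secure-example   # hypothetical name, for illustration only
spec:
  securityContext:
    runAsUser: 1000      # run all containers as this non-root user ID
    fsGroup: 2000        # volumes mounted into the pod are group-owned by this GID
  containers:
  - name: app
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      allowPrivilegeEscalation: false  # no setuid binaries gaining extra privileges
      readOnlyRootFilesystem: true     # container cannot write to its root filesystem

Nothing enforces these settings yet; that is exactly the gap PSPs close.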
A PSP, on the other hand, is a cluster-wide resource, enabling you as a cluster admin to enforce the usage of security contexts in your cluster. The enforcement of PSPs is carried out by the API server’s admission controller. In a nutshell: if a pod spec doesn’t meet what you defined in a PSP, the API server will refuse to launch it. For PSPs to work, the respective admission plugin must be enabled, and permissions must be granted to users. An EKS 1.13 cluster now has the PSP admission plugin enabled by default, so there’s nothing EKS users need to do.
In general, you want to define PSPs according to the least-privilege principle: from enforcing rootless containers, to read-only root filesystems, to limitations on what can be mounted from the host (the EC2 instance the containers in a pod are running on).
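As an illustration of that principle, a least-privilege policy might look like the following sketch (the name and the exact field choices are assumptions to adapt to your workloads, not a recommended standard):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: least-privilege-example   # hypothetical name
spec:
  privileged: false               # no privileged containers
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL                           # drop all Linux capabilities by default
  runAsUser:
    rule: MustRunAsNonRoot        # enforce rootless containers
  readOnlyRootFilesystem: true    # enforce read-only root filesystems
  hostNetwork: false              # nothing from the host: no host networking,
  hostPID: false                  # host PID namespace, or host IPC namespace
  hostIPC: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                        # no hostPath mounts from the EC2 instance
  - configMap
  - secret
  - emptyDir
  - projected
  - persistentVolumeClaim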
Usage
A new EKS 1.13 cluster creates a default policy named eks.privileged that has no restriction on what kind of pod can be accepted into the system (equivalent to running the cluster with the PodSecurityPolicy controller disabled).
To check the existing pod security policies in your EKS cluster:
$ kubectl get psp
NAME             PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
eks.privileged   true   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
Now, to describe the default policy we’ve defined for you:
$ kubectl describe psp eks.privileged
As you can see in the output below, anything goes! This policy accepts any sort of pod specification:
Name:  eks.privileged

Settings:
  Allow Privileged:                       true
  Allow Privilege Escalation:             true
  Default Add Capabilities:               <none>
  Required Drop Capabilities:             <none>
  Allowed Capabilities:                   *
  Allowed Volume Types:                   *
  Allow Host Network:                     true
  Allow Host Ports:                       0-65535
  Allow Host PID:                         true
  Allow Host IPC:                         true
  Read Only Root Filesystem:              false
  SELinux Context Strategy: RunAsAny
    User:                                 <none>
    Role:                                 <none>
    Type:                                 <none>
    Level:                                <none>
  Run As User Strategy: RunAsAny
    Ranges:                               <none>
  FSGroup Strategy: RunAsAny
    Ranges:                               <none>
  Supplemental Groups Strategy: RunAsAny
    Ranges:                               <none>
Note that any authenticated user can create any pod on this EKS cluster as currently configured, and here’s the proof:
$ kubectl describe clusterrolebindings eks:podsecuritypolicy:authenticated
The output of the above command shows that the cluster role eks:podsecuritypolicy:privileged is assigned to all system:authenticated users:
Name:         eks:podsecuritypolicy:authenticated
Labels:       eks.amazonaws.com/component=pod-security-policy
              kubernetes.io/cluster-service=true
Annotations:  kubectl.kubernetes.io/last-applied-configuration: ...
Role:
  Kind:  ClusterRole
  Name:  eks:podsecuritypolicy:privileged
Subjects:
  Kind   Name                  Namespace
  ----   ----                  ---------
  Group  system:authenticated
Note that if multiple PSPs are available, the Kubernetes admission controller selects the first policy that validates the pod successfully. Policies are ordered alphabetically by name, and a policy that does not mutate the pod is preferred over mutating policies.
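If you later run several policies side by side, you can review the order in which they will be considered by listing them sorted by name (with only one policy present, as here, the output is the same as a plain kubectl get psp):

$ kubectl get psp --sort-by=.metadata.name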
Now let’s create a new PSP that we will call eks.restrictive. First, create a dedicated namespace as well as a service account. We’ll use this service account for a non-admin user:
$ kubectl create ns psp-eks-restrictive
namespace/psp-eks-restrictive created

$ kubectl -n psp-eks-restrictive create sa eks-test-user
serviceaccount/eks-test-user created

$ kubectl -n psp-eks-restrictive create rolebinding eks-test-editor \
    --clusterrole=edit \
    --serviceaccount=psp-eks-restrictive:eks-test-user
rolebinding.rbac.authorization.k8s.io/eks-test-editor created
Next, create two aliases to highlight the difference between admin and non-admin users:
$ alias kubectl-admin='kubectl -n psp-eks-restrictive'
$ alias kubectl-dev='kubectl --as=system:serviceaccount:psp-eks-restrictive:eks-test-user -n psp-eks-restrictive'
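As a quick sanity check of the two aliases, you can ask the API server what each identity is allowed to do. Note that auth can-i only evaluates RBAC; the developer will additionally be subject to PSP admission when actually creating pods:

$ kubectl-admin auth can-i create pods
yes

$ kubectl-dev auth can-i create pods
yes

Both answers are yes because the edit cluster role bound above grants pod creation in the namespace; the PSP check only happens later, at admission time.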
Now, with the cluster admin role, create a policy that disallows creation of pods using host networking:
$ cat > /tmp/eks.restrictive-psp.yaml <<EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: eks.restrictive
spec:
  hostNetwork: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
EOF

$ kubectl-admin apply -f /tmp/eks.restrictive-psp.yaml
podsecuritypolicy.policy/eks.restrictive created
Also, don’t forget to remove the default (permissive) policy eks.privileged:
$ kubectl delete psp eks.privileged
$ kubectl delete clusterrole eks:podsecuritypolicy:privileged
$ kubectl delete clusterrolebindings eks:podsecuritypolicy:authenticated
WARNING
Deleting the default EKS policy before adding your own PSP can impair the cluster. Once the default policy is gone, no pods can be created anywhere on the cluster except those that satisfy a policy you have defined and authorized, such as the one in the new namespace above. For an existing cluster, be sure to create restrictive policies that cover all of your running pods and namespaces before deleting the default policy.
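Should you ever need to restore the permissive default, a manifest reconstructed from the describe output above would look roughly like the following sketch (consult the Amazon EKS documentation for the authoritative version):

$ kubectl apply -f- <<EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: eks.privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostPID: true
  hostIPC: true
  readOnlyRootFilesystem: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
EOF

You would also have to recreate the eks:podsecuritypolicy:privileged cluster role and the eks:podsecuritypolicy:authenticated cluster role binding deleted above.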
Now, to confirm that the policy has been created:
$ kubectl get psp
NAME              PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
eks.restrictive   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
Finally, try creating a pod that violates the policy, as the unprivileged user (simulating a developer):
$ kubectl-dev apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
EOF
As you might expect, you get the following result:
Error from server (Forbidden): error when creating "STDIN": pods "busybox" is forbidden: unable to validate against any pod security policy: []
The above operation failed because we have not yet given the developer the appropriate permissions. In other words, there is no role binding for the developer user eks-test-user. So let’s change this by creating a role psp:unprivileged for the pod security policy eks.restrictive:
$ kubectl-admin create role psp:unprivileged \
    --verb=use \
    --resource=podsecuritypolicy \
    --resource-name=eks.restrictive
role.rbac.authorization.k8s.io/psp:unprivileged created
Now, create the rolebinding to grant the eks-test-user service account the use verb on the eks.restrictive policy:
$ kubectl-admin create rolebinding eks-test-user:psp:unprivileged \
    --role=psp:unprivileged \
    --serviceaccount=psp-eks-restrictive:eks-test-user
rolebinding.rbac.authorization.k8s.io/eks-test-user:psp:unprivileged created
To verify that eks-test-user can use the PSP eks.restrictive:
$ kubectl-dev auth can-i use podsecuritypolicy/eks.restrictive
yes
At this point, the developer user eks-test-user should be able to create a pod:
$ kubectl-dev apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
EOF
pod/busybox created
Yay, that worked! However, we would expect a pod using host networking to be rejected, because of what we defined in the eks.restrictive PSP above:
$ kubectl-dev apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged
spec:
  hostNetwork: true
  containers:
  - name: busybox
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
EOF
Error from server (Forbidden): error when creating "STDIN": pods "privileged" is forbidden: unable to validate against any pod security policy: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used]
Great! This confirms that the eks.restrictive PSP works as expected, restricting privileged pod creation by the developer.
What’s new
For all new EKS clusters using Kubernetes version 1.13, PSPs are now available. For clusters that have been upgraded from previous versions, a fully permissive PSP is automatically created during the upgrade process. Your main task is to define sensible PSPs that are scoped for your environment and to enable them as described above. By sensible, I mean that (for example) you may choose to be less restrictive in a dev/test environment than in a production environment. Or, equally possible, different projects or teams might require different levels of protection and hence different PSPs.
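For example, one pattern for scoping a policy to an entire team namespace (rather than to a single service account, as in the walkthrough above) is to bind it to the built-in group that covers all service accounts in that namespace. The namespace and role names below are hypothetical:

$ kubectl -n team-prod create role psp:restricted \
    --verb=use \
    --resource=podsecuritypolicy \
    --resource-name=eks.restrictive

$ kubectl -n team-prod create rolebinding psp:restricted:all-sa \
    --role=psp:restricted \
    --group=system:serviceaccounts:team-prod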
Here’s a final tip: as a cluster admin, be sure to educate your developers about security contexts in general and PSPs in particular. Have your CI/CD pipeline test PSPs as part of your smoke tests, alongside other security-related checks such as testing the permissions defined via RBAC roles and bindings.
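A minimal smoke test along those lines might look like the following bash sketch, assuming the psp-eks-restrictive namespace, eks-test-user service account, and eks.restrictive policy from this post. It asserts that the developer identity is allowed to use the PSP and that a policy-violating pod is rejected:

#!/usr/bin/env bash
set -euo pipefail

# Impersonate the developer service account in its namespace.
DEV=(--as=system:serviceaccount:psp-eks-restrictive:eks-test-user -n psp-eks-restrictive)

# 1. The developer must be authorized to use the restrictive policy
#    (auth can-i exits non-zero on "no", failing the test via set -e).
kubectl "${DEV[@]}" auth can-i use podsecuritypolicy/eks.restrictive

# 2. A pod that violates the policy must be rejected at admission.
if kubectl "${DEV[@]}" apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: psp-smoke-test
spec:
  hostNetwork: true
  containers:
  - name: busybox
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
EOF
then
  echo "FAIL: the policy did not block host networking" >&2
  kubectl "${DEV[@]}" delete pod psp-smoke-test
  exit 1
else
  echo "PASS: host networking pod was rejected"
fi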
You can learn more about PSP in the Amazon EKS documentation. Please leave any comments below or reach out to me via Twitter!
— Michael