Containers

Easy as one-two-three policy management with Kyverno on Amazon EKS

This post is contributed by Raj Seshadri and Jimmy Ray

As containers are used in cloud native production environments, DevOps and security teams need to gain real-time visibility into container activity, restrict container access to host and network resources, and detect and prevent exploits and attacks on running containers.

Kyverno is a policy engine for Kubernetes that doesn’t require that users learn a programming language. Kyverno provides an intuitive and Kubernetes-native means to apply policy-enabled governance and compliance to Kubernetes clusters.

Use case

Real-time container runtime security is hard, and until recently there were few open source tools that implemented security best practices for running containers. Kyverno installs as an admission controller, which receives webhook events when an API object is created or changed. In this article, we show how Kubernetes cluster administrators can validate and mutate configurations, all without having to learn Rego or any other policy language. Kyverno makes policy management on your EKS cluster simple and easy.
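As a quick illustration of the mutation side (the demos below focus on validation), the following hypothetical ClusterPolicy adds a default label to pods that lack one. The policy name and label are made up for this sketch, and the exact mutate syntax may vary across Kyverno versions:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-team-label   # hypothetical policy name
spec:
  rules:
  - name: add-team-label
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            # the +(key) anchor adds the label only if it is not already set
            +(team): "unassigned"
```

Validation policies reject or report non-conforming resources, while mutation policies like this one rewrite the incoming resource before it is persisted.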

Prerequisites

We will assume that you already have an EKS or similar Kubernetes cluster up and running. For example, you can follow this link to get started with Amazon EKS. Please note: your Kubernetes cluster must be version v1.14 or later, which adds admission webhook timeouts. Run kubectl version to check.

Step 1: easy-peasy install

Kyverno installation is easy; the output from the install process is shown below.

kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/master/definitions/release/install.yaml

namespace/kyverno created
customresourcedefinition.apiextensions.k8s.io/clusterpolicies.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/clusterpolicyviolations.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/generaterequests.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/policies.kyverno.io created
customresourcedefinition.apiextensions.k8s.io/policyviolations.kyverno.io created
serviceaccount/kyverno-service-account created
clusterrole.rbac.authorization.k8s.io/kyverno:customresources created
clusterrole.rbac.authorization.k8s.io/kyverno:generatecontroller created
clusterrole.rbac.authorization.k8s.io/kyverno:policycontroller created
clusterrole.rbac.authorization.k8s.io/kyverno:userinfo created
clusterrole.rbac.authorization.k8s.io/kyverno:webhook created
clusterrole.rbac.authorization.k8s.io/kyverno:admin-policies created
clusterrole.rbac.authorization.k8s.io/kyverno:edit-policies-policyviolations created
clusterrole.rbac.authorization.k8s.io/kyverno:policyviolations created
clusterrole.rbac.authorization.k8s.io/kyverno:view-clusterpolicyviolations created
clusterrole.rbac.authorization.k8s.io/kyverno:view-policies-policyviolations created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:customresources created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:generatecontroller created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:policycontroller created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:userinfo created
clusterrolebinding.rbac.authorization.k8s.io/kyverno:webhook created
configmap/init-config created
service/kyverno-svc created
deployment.apps/kyverno created

Step 2: demo on validating configurations

Using privileged containers is not an ideal security practice, because a privileged container has all of the capabilities of the host. For example, a compromised container with root capabilities can spawn rogue containers or pry into other containers running on the host. In this demo, we will show you how to add a policy that disallows pods that require "root" privileges. The first step is to create and apply this policy as shown below. Kyverno runs as a dynamic admission controller in our EKS cluster: it receives a validating admission webhook call from the API server and applies the matching policy to enforce admission policies or reject requests. Kyverno also reports policy violations for existing Kubernetes resources.

First, create a YAML file called disallow_privileged.yaml with the policy shown below. This policy will not allow a process to run in privileged mode.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
  annotations:
    policies.kyverno.io/category: Security
    policies.kyverno.io/description: Privileged containers are defined as any
      container where the container uid 0 is mapped to the host’s uid 0.
      A process within a privileged container can get unrestricted host access.
      With `securityContext.allowPrivilegeEscalation` enabled, a process can
      gain privileges from its parent.
spec:
  validationFailureAction: enforce
  rules:
  - name: validate-privileged
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Privileged mode is not allowed. Set privileged to false"
      pattern:
        spec:
          containers:
          - =(securityContext):
              =(privileged): false
  - name: validate-allowPrivilegeEscalation
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Privileged mode is not allowed. Set allowPrivilegeEscalation to false"
      pattern:
        spec:
          containers:
          - securityContext:
              allowPrivilegeEscalation: false

Note: Even though the above policy applies to "pods," Kyverno's "auto-gen" rules for pod controllers will also create the corresponding deployment policy for you. Kyverno inserts the annotation pod-policies.kyverno.io/autogen-controllers=DaemonSet,Deployment,Job,StatefulSet,CronJob by default.
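Because of auto-gen, a Deployment whose pod template escalates privileges should be rejected at admission as well, even though the policy only names Pod. A minimal sketch (the Deployment name and labels are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-privileged-deploy   # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-privileged
  template:
    metadata:
      labels:
        app: nginx-privileged
    spec:
      containers:
      - name: nginx
        image: nginx
        securityContext:
          allowPrivilegeEscalation: true   # violates the auto-generated rule
```

Applying this manifest with kubectl create should be denied by the webhook, just like the bare pod example that follows.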

➜ core_best_practices git:(master) ✗ kubectl create -f disallow_privileged.yaml
clusterpolicy.kyverno.io/disallow-privileged created

You can see from the command below that the disallow-privileged policy was created. cpol is a short name configured in the Custom Resource Definition (CRD) for cluster policies.

➜ core_best_practices git:(master) ✗ kubectl get cpol
NAME                  AGE
disallow-privileged   62s

Now let’s create a NGINX pod with privileged access, as shown in the YAML block below.

➜  core_best_practices git:(master) ✗ cat nginx-privileged.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-privileged
  labels:
    app: nginx-privileged
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      allowPrivilegeEscalation: true
➜  core_best_practices git:(master) ✗ kubectl create -f nginx-privileged.yaml
Error from server: error when creating "nginx-privileged.yaml": admission webhook "nirmata.kyverno.resource.validating-webhook" denied the request:

resource Pod/default/nginx-privileged was blocked due to the following policies

disallow-privileged:
  validate-allowPrivilegeEscalation: 'Validation error: Privileged mode is not allowed. Set allowPrivilegeEscalation to false; Validation rule validate-allowPrivilegeEscalation failed at path /spec/containers/0/securityContext/allowPrivilegeEscalation/'

As shown above, the pod creation fails because the YAML file contains allowPrivilegeEscalation: true. If we change it to allowPrivilegeEscalation: false, the pod creation succeeds. Disallowing privilege escalation on pods is a security best practice.
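For reference, here is the compliant version of the pod spec with the flag set to false (the pod name is our own choice for this sketch):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-unprivileged   # hypothetical name
  labels:
    app: nginx-unprivileged
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false   # satisfies the disallow-privileged policy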

Step 3: demo on disallowing unknown image registries

Using admission controllers, a Kyverno rule can be used to block images from unknown image registries. This sample policy requires that all images come from either registry.k8s.io or gallery.ecr.aws. You can customize this policy to allow other or different image registries that you trust.

Let’s create the following policy and name it unknown-image-registry.yaml as follows.

➜  core_best_practices git:(master) ✗ cat unknown-image-registry.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deployment-pod-valid-registry
  labels:
    app: kyverno
  annotations:
    policies.kyverno.io/category: Compliance
    policies.kyverno.io/description: Rules to enforce correct image source registry
spec:
  validationFailureAction: enforce
  rules:
  - name: validate-registries
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Unknown image registry"
      pattern:
        spec:
          containers:
          - image: "registry.k8s.io/* | gallery.ecr.aws/*"

To test the above policy, let's create a mongo:latest pod as follows. Because of the policy, admission fails: the image would be pulled from Docker Hub, which is not one of the approved registries.

➜  core_best_practices git:(master) ✗ cat mongo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  containers:
  - name: mongo
    image: mongo:latest

The above YAML is rejected with the following unknown-image-registry error.

Error from server: error when creating "mongo.yaml": admission webhook "nirmata.kyverno.resource.validating-webhook" denied the request:

resource Pod/default/mongo was blocked due to the following policies

deployment-pod-valid-registry:
  validate-registries: 'Validation error: Unknown image registry; Validation rule validate-registries failed at path /spec/containers/0/image/'
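Conversely, a pod whose image comes from one of the approved registries is admitted. For example (the image and tag here are illustrative choices from the registry.k8s.io registry):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
spec:
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9   # matches the registry.k8s.io/* pattern
```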

Automating the policies with ArgoCD

In a DevOps world, automation is at the forefront of the thought process, and the best way to manage and maintain these policies in your environment is via a GitOps tool. We will automate all of the above Kyverno policies with ArgoCD, a continuous delivery tool that applies GitOps patterns. In this section, we will show you how to implement some of the core best practices for container security using ArgoCD. To install ArgoCD on your EKS cluster, please refer to this link. Once ArgoCD is installed, log in to the UI and click "New App" as shown below.

Enter the details for the Kyverno app as shown below. The sample YAML files are located at https://github.com/texanraj/kyverno/tree/master/samples/core_best_practices

After this step, click on “Sync” and “Synchronize” as shown below. This will implement all the Kyverno policies that are shown on the right side of the image.
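If you prefer a declarative setup over clicking through the UI, an equivalent ArgoCD Application manifest might look like the following sketch. The application name and sync options are our assumptions; the repository URL and path come from the sample repository above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno-policies   # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/texanraj/kyverno
    path: samples/core_best_practices
    targetRevision: master
  destination:
    server: https://kubernetes.default.svc   # in-cluster destination
    namespace: default
  syncPolicy:
    automated:
      prune: true      # remove policies deleted from Git
      selfHeal: true   # revert manual drift in the cluster
```

Applying this manifest to the argocd namespace has the same effect as creating the app and synchronizing it in the UI.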

This is evident by querying the cluster policies (cpol) as shown below.

➜ aws kubectl get cpol
NAME                         AGE
disallow-bind-mounts         37s
disallow-docker-sock-mount   37s
disallow-helm-tiller         37s
disallow-host-network-port   37s
disallow-new-capabilities    37s
disallow-privileged          37s
disallow-root-user           37s
disallow-sysctls             37s
require-ro-rootfs            37s

To get an idea of the immediate policy violations in your EKS cluster, query the policy violations as shown below and report on the offending deployments. For example, my EKS cluster had the following policy violations (polv) for my deployments.

➜ aws kubectl get polv
NAME                        POLICY                RESOURCEKIND   RESOURCENAME         AGE
disallow-privileged-9bzrp   disallow-privileged   Deployment     octank-fintech-dev   5m43s
disallow-privileged-jqdq7   disallow-privileged   Deployment     jenkins              5m45s
disallow-root-user-lbzvw    disallow-root-user    Deployment     octank-fintech-dev   7m20s
require-ro-rootfs-8k27x     require-ro-rootfs     Deployment     octank-fintech-dev   6m45s
require-ro-rootfs-pmcmn     require-ro-rootfs     Deployment     jenkins              6m46s

Improving operations and user experiences by shifting left

Policy-enabled Kubernetes, through the use of mutating and validating admission controls, enables teams to erect guardrails around their Kubernetes operations. The guardrails reduce unwanted and unauthorized behaviors within clusters. There is also a value-added side effect of reducing the cognitive load associated with cluster operations. Not all users interacting with and deploying to Kubernetes clusters are Kubernetes subject-matter experts (SMEs). When pods do not start as expected, users often enlist the help of cluster operators/admins/SMEs to help them troubleshoot. Something as simple as a Pod Security Policy (PSP) could consume SME cycles when troubleshooting pod start failures.

Policy-enabled Kubernetes reduces this burden by making decisions before the cluster states are changed. The same PSP issues that require SME intervention, can be reduced, if not removed, by requiring that all pod and container specs include the correct security context configurations. A policy that enforces these behaviors in the Kubernetes API request cycle prevents the unwanted state changes from being applied in etcd. The end user, interacting with the cluster API server, is immediately notified that their request is invalid and has failed for reasons defined in the output message. This simple process has enormous potential to both educate users and lessen support burdens for cluster operators/admins. In this model, policy-enabled Kubernetes augments security controls, and improves the user experience for cluster users by shifting the decisions left, as responses to API server requests.

Cleanup

Delete the ArgoCD deployment as follows:
kubectl delete -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Delete Kyverno as follows:

kubectl delete -f https://raw.githubusercontent.com/kyverno/kyverno/master/definitions/release/install.yaml

Conclusion

The Kyverno project streamlines the process of creating proper DevSecOps policies for your Kubernetes cluster. In this article, we saw how easy it is to create policies with Kyverno and implement them with ArgoCD. Using Kubernetes-native components, like Custom Resource Definitions (CRDs), lessens the need to learn new syntaxes or new policy languages. If your choice is to implement and administer policies in your EKS environment the native Kubernetes way, Kyverno is a great option.

References

https://github.com/nirmata/kyverno
https://kyverno.io/docs/

Raj Seshadri

Raj Seshadri is a Senior Partner Solutions Architect with AWS and is a member of the containers and blockchain Technical Field Community. Prior to AWS, he had stints at Aqua Security, Red Hat, Docker, Dell and EMC. In his spare time, he plays tennis and enjoys traveling around the world. You can reach him on Twitter @texanraj.