AWS Open Source Blog

Using Open Policy Agent on Amazon EKS

OPA and Amazon EKS

Open Policy Agent (OPA) is a Cloud Native Computing Foundation (CNCF) sandbox project designed to help you implement automated policies around almost anything, similar to the way AWS Identity and Access Management (IAM) works. With OPA, you write slimmed-down policies in a language called rego, which is based on datalog. You then deploy these rego files to the Open Policy Agent Admission Controller, which can validate or mutate any request made to the Kubernetes API server. This means you could write a policy of only a few lines and have it validate attributes of those requests, for example checking that imagePullPolicy is set to Always, that a Deployment always has at least two replicas, or that an image comes from a whitelisted registry. The rest of this post will walk you through deploying OPA into an Amazon Elastic Container Service for Kubernetes (EKS) cluster and implementing a check that only allows images from your own Amazon Elastic Container Registry (ECR) or the Amazon EKS ECR repository.
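
As an illustration (not deployed in this walkthrough), a policy along those lines might look like the following rego; it uses the same kubernetes.admission package and deny rule shape we'll use later in this post, and flags any container whose imagePullPolicy is not Always:

package kubernetes.admission

deny[msg] {
    input.request.kind.kind = "Pod"
    container = input.request.object.spec.containers[_]
    container.imagePullPolicy != "Always"
    msg = sprintf("container %q must set imagePullPolicy: Always", [container.name])
}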

Getting Started

Before we can get started, we need to set up an Amazon EKS cluster. We’ll use eksctl with its cluster config file mechanism. Start by installing the prerequisites: eksctl, kubectl, and the AWS CLI.
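
As a quick sanity check, you can confirm the tools are installed and on your PATH (exact version output will vary):

eksctl version
kubectl version --client
aws --version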

With all the necessary tools installed, we can get started launching our EKS cluster. In this example, we’re deploying the cluster in us-east-2, the Ohio region; you can replace AWS_REGION with any region that supports Amazon EKS.

Deploy Cluster

export AWS_REGION=us-east-2
export ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

Once we’ve exported the region, we can create the ClusterConfig as follows:

cat >cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: opa
  region: ${AWS_REGION}

nodeGroups:
  - name: ng-1
    desiredCapacity: 2
    allowSSH: true
EOF

After the file has been created, we create the cluster using the eksctl create cluster command:

eksctl create cluster -f cluster.yaml

This will take roughly 15 minutes to complete; once it’s done, we’ll have an Amazon EKS cluster ready to go. In the meantime, we can start preparing the Open Policy Agent requirements. First, we’re going to generate a self-signed certificate authority (CA) for our Admission Controller so that all communication can be done via TLS.

Create Resources

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 100000 -out ca.crt -subj "/CN=admission_ca"
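
If you’d like to sanity-check the CA you just generated, openssl can print its subject and validity window:

openssl x509 -in ca.crt -noout -subject -dates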

With our CA created, we need to create a TLS key and certificate for OPA:

cat >server.conf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
EOF

With server.conf created, we can use openssl again to generate the key and certificate:

openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=opa.opa.svc" -config server.conf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 100000 -extensions v3_req -extfile server.conf
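
As an optional check, you can verify that the server certificate chains back to the CA we created:

openssl verify -CAfile ca.crt server.crt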

Next, we’ll create our Open Policy Agent Admission Controller Manifest:

cat >admission-controller.yaml <<EOF
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: opa-viewer
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts:opa
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: opa
  name: configmap-modifier
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: opa
  name: opa-configmap-modifier
roleRef:
  kind: Role
  name: configmap-modifier
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts:opa
  apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
  name: opa
  namespace: opa
spec:
  selector:
    app: opa
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: opa
  namespace: opa
  name: opa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opa
  template:
    metadata:
      labels:
        app: opa
      name: opa
    spec:
      containers:
        - name: opa
          image: openpolicyagent/opa:0.10.5
          args:
            - "run"
            - "--server"
            - "--tls-cert-file=/certs/tls.crt"
            - "--tls-private-key-file=/certs/tls.key"
            - "--addr=0.0.0.0:443"
            - "--addr=http://127.0.0.1:8181"
          volumeMounts:
            - readOnly: true
              mountPath: /certs
              name: opa-server
        - name: kube-mgmt
          image: openpolicyagent/kube-mgmt:0.6
          args:
            - "--replicate-cluster=v1/namespaces"
            - "--replicate=extensions/v1beta1/ingresses"
      volumes:
        - name: opa-server
          secret:
            secretName: opa-server
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: opa-default-system-main
  namespace: opa
data:
  main: |
    package system

    import data.kubernetes.admission

    main = {
      "apiVersion": "admission.k8s.io/v1beta1",
      "kind": "AdmissionReview",
      "response": response,
    }

    default response = {"allowed": true}

    response = {
        "allowed": false,
        "status": {
            "reason": reason,
        },
    } {
        reason = concat(", ", admission.deny)
        reason != ""
    }
EOF

We’ll then create a ValidatingWebhookConfiguration, which tells Kubernetes to send Pod CREATE and UPDATE events to OPA so our policy can validate them:

cat > webhook-configuration.yaml <<EOF
kind: ValidatingWebhookConfiguration
apiVersion: admissionregistration.k8s.io/v1beta1
metadata:
  name: opa-validating-webhook
webhooks:
  - name: validating-webhook.openpolicyagent.org
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: ["*"]
        apiVersions: ["v1"]
        resources: ["Pod"]
    clientConfig:
      caBundle: $(cat ca.crt | base64 | tr -d '\n')
      service:
        namespace: opa
        name: opa
EOF

We’ll then create our first policy. The policy validates that every Pod‘s images come from a registry in a whitelist; the two entries in our whitelist are our own account’s Amazon ECR registry and the Amazon EKS-specific ECR registry.

cat > image_source.rego <<EOF
package kubernetes.admission

deny[msg] {
    input.request.kind.kind = "Pod"
    input.request.operation = "CREATE"
    image = input.request.object.spec.containers[_].image
    name = input.request.object.metadata.name
    not registry_whitelisted(image,whitelisted_registries)
    msg = sprintf("pod %q has invalid registry %q", [name, image])
}

whitelisted_registries = {registry |
    registries = [
        "602401143452.dkr.ecr.${AWS_REGION}.amazonaws.com",
        "${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
    ]
    registry = registries[_]
}

registry_whitelisted(str, patterns) {
    registry_matches(str, patterns[_])
}

registry_matches(str, pattern) {
    contains(str, pattern)
}
EOF
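
If you have the opa binary installed locally, you can optionally unit test the policy before deploying it. The test file below is a hypothetical example (not part of the original setup) that feeds the policy a fake AdmissionReview for a Docker Hub nginx Pod and expects a deny:

cat > image_source_test.rego <<EOF
package kubernetes.admission

test_docker_hub_nginx_denied {
    deny[_] with input as {
        "request": {
            "kind": {"kind": "Pod"},
            "operation": "CREATE",
            "object": {
                "metadata": {"name": "nginx"},
                "spec": {"containers": [{"image": "nginx"}]}
            }
        }
    }
}
EOF
opa test -v image_source.rego image_source_test.rego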

Lastly, we’ll create the nginx test Pod manifest which we’ll use to test our policy:

cat > nginx.yaml <<EOF
kind: Pod
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: default
spec:
  containers:
  - image: nginx
    name: nginx
EOF

Once the cluster has been created, we can deploy all the resources and test it out.
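
Before deploying, you can confirm the worker nodes have joined the cluster; with the node group defined above, you should see two nodes in the Ready state:

kubectl get nodes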

Deploy Resources

First, we’ll create the opa namespace and set it as the namespace for our current context. This will make it easier to keep track of everything we’re doing:

kubectl create namespace opa
kubectl config set-context --namespace opa --current
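
If you want to double-check that the namespace switch took effect, you can inspect the active context:

kubectl config view --minify | grep namespace: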

Now, we’ll deploy all the manifests previously created, starting with the TLS secret containing our server key and certificate:

kubectl create secret tls opa-server --cert=server.crt --key=server.key

Then we deploy the admission controller:

kubectl apply -f admission-controller.yaml
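
It’s worth confirming the Deployment came up before continuing; the opa pod should report 2/2 containers ready (opa and kube-mgmt):

kubectl get pods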

If you are running your cluster in a multi-tenant or untrusted environment, it’s recommended to read through the TLS and authentication options in the Open Policy Agent documentation.

After we have the admission controller deployed, we can deploy the ValidatingWebhookConfiguration that we created. This tells the Kubernetes API Server to send all Pod CREATE and UPDATE events to the opa service for validation.

kubectl apply -f webhook-configuration.yaml
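
You can verify that the webhook registered with:

kubectl get validatingwebhookconfigurations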

The final piece of configuration is to deploy our custom policy, which restricts Pods from being deployed from untrusted registries:

kubectl create configmap image-source --from-file=image_source.rego

Note: This command, unlike the other kubectl commands we are using, is an imperative configuration. This can make keeping your ConfigMap in sync difficult, especially if you use a deployment strategy such as GitOps. If you’d like to use apply, you’ll need to create a ConfigMap manifest and copy your policy into its data attribute.

Once we have created the policy, we can check that it compiled properly and that the Open Policy Agent successfully loaded the rego script. To do this, inspect the openpolicyagent.org/policy-status annotation on the ConfigMap:

$ kubectl get configmap image-source -o jsonpath="{.metadata.annotations}"
map[openpolicyagent.org/policy-status:{"status":"ok"}]

With that deployed and initialized in the Open Policy Agent, we can then run the nginx pod test.

Test Policy

$ kubectl apply -f nginx.yaml
Error from server (pod "nginx" has invalid registry "nginx"): error when creating "nginx.yaml": admission webhook "validating-webhook.openpolicyagent.org" denied the request: pod "nginx" has invalid registry "nginx"

Now let’s tag the nginx image for our own registry, push the image, and try deploying from there. First, we need to create a repository:

aws ecr create-repository --repository-name nginx

We’ll then get the repository URI from the API so we can tag our nginx image:

export REPOSITORY=$(aws ecr describe-repositories --repository-names nginx --query "repositories[0].repositoryUri" --output text)

Now we’ll pull the public nginx image from Docker Hub locally so that we can retag it:

docker pull nginx

Then we can retag latest with our repository URI and push the image to our own Amazon ECR. We start by logging in to Amazon ECR:

aws ecr get-login --no-include-email | bash -
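
With the login complete, tag the local nginx image with the repository URI we exported earlier so Docker knows where to push it:

docker tag nginx:latest ${REPOSITORY}:latest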

Then, push the latest tag to Amazon ECR:

docker push ${REPOSITORY}:latest

Now we can recreate our nginx manifest, this time referencing our private repository, and apply it to our Amazon EKS cluster:

cat > private-nginx.yaml <<EOF
kind: Pod
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: default
spec:
  containers:
  - image: ${REPOSITORY}:latest
    name: nginx
EOF

Finally, we deploy our nginx Pod to the cluster:

$ kubectl apply -f private-nginx.yaml
pod/nginx created

Clean Up

If you would like to tear down the cluster, you can delete the resources created in it and then delete the Amazon EKS cluster using eksctl:

kubectl delete -f private-nginx.yaml
kubectl delete namespace opa
eksctl delete cluster -f cluster.yaml
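
If you created the nginx repository only for this walkthrough, you can remove it as well; the --force flag deletes the images in the repository along with it:

aws ecr delete-repository --repository-name nginx --force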

More Examples

There are many great examples of how people are using Open Policy Agent today with their Kubernetes clusters to manage their custom policies; the Open Policy Agent documentation and community are good places to find them.

Conclusion

With Open Policy Agent, you no longer need to write custom code to handle your organization’s or team’s custom policies. You can now deploy custom ValidatingAdmissionControllers as we did in this post, or even MutatingAdmissionControllers, which can give your Kubernetes resources sane defaults or set up proper labels. The controls you have are nearly endless. To learn more about Open Policy Agent, check out the Open Policy Agent documentation, and get involved with the Open Policy Agent community.

Read more from Chris.

Chris Hein

Chris Hein is a Sr. Developer Advocate for Kubernetes/EKS at Amazon Web Services. Before Amazon, Chris worked for a number of large and small companies like GoPro, Sproutling, & Mattel. Read more from Chris at https://aws.amazon.com/blogs/opensource/author/heichris/ and follow him at @christopherhein