
AWS Secrets Controller PoC: integrating AWS Secrets Manager with Kubernetes

Introduction

Kubernetes allows you to store and manage sensitive information, such as an API key or certificate, outside of the podSpec using a Secret object. Conceptually, this allows you to treat secrets differently than other types of Kubernetes objects. Nevertheless, many customers have avoided using Kubernetes Secrets for storing secret material because, when the feature was first introduced, it did not include an option for strong encryption with a customer managed key. This was the motivation for creating this PoC, which demonstrates how you can consume secrets from an external service (AWS Secrets Manager) using a Kubernetes dynamic admission controller.

Recent advances in EKS

Recently, EKS added support for KMS envelope encryption of Kubernetes Secrets. With envelope encryption, you can use a customer-managed AWS KMS key to encrypt the data key Kubernetes uses to encrypt secrets. This strengthens your overall security posture because it creates a dependence on a separate key that is stored outside of Kubernetes, in addition to the full volume encryption that AWS already uses to protect data persisted to etcd. For further information about how data encryption for Kubernetes Secrets works, please visit the encrypt data documentation.

Although envelope encryption makes Kubernetes Secrets a viable option for storing secret material, there are still a couple of downsides. First, Kubernetes Pods and Secrets are scoped to a namespace. If pods and secrets share a namespace, pods can read all of the secrets created in that namespace. Second, Kubernetes secrets are not rotated automatically. If you need/want to rotate a secret periodically, you have to do so manually.

Alternatives to Kubernetes Secrets

Historically, customers have addressed the shortcomings of Kubernetes Secrets by using an external secret provider like HashiCorp’s Vault, which supports both granular permissions and the automatic rotation of secrets. It also integrates with Kubernetes by way of Kubernetes Service Accounts and mutating webhooks. The Service Account assigns an identity to a pod, which is used to grant access to secrets in Vault, while the webhook injects an init container into the pod that mounts the secret from Vault to a temporary volume. Together, these make it easier to consume Vault secrets from within Kubernetes.

The proof of concept we’ve developed utilizes a similar approach, only rather than using Vault as the backend, secrets are stored and managed in AWS Secrets Manager. Compared to native Kubernetes Secrets, using Secrets Manager has several advantages. First, it allows you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Second, it offers built-in secret rotation for several AWS services such as Amazon RDS, Amazon Redshift, and Amazon DocumentDB. It’s also extensible in that you can use it to rotate other types of secrets. Lastly, it gives you the ability to control access to secrets using fine-grained permissions and audit secret rotation centrally for resources in the AWS Cloud, as well as third-party services and resources that run on-premises.

Solution overview

This proof of concept (PoC) makes use of the following Kubernetes constructs:

  • Annotations are non-identifying key-value pairs attached to Kubernetes objects. In this instance, we’re using annotations to enable/disable the init container injector and to specify the ARN of the secret.
  • The Downward API is a mechanism to get metadata about a pod. In this solution, we’re using it to retrieve the values of a set of key-value pairs in the annotation field of the pod.
  • A mutating webhook is called when a pod is created. It is implemented as a pod that runs within the cluster. If secrets.k8s.aws/sidecarInjectorWebhook: enabled appears in the annotations field, the webhook injects the init container into the pod.
  • IAM Roles for Service Accounts (IRSA) is a way to assign an IAM role to a Kubernetes pod. This PoC uses IRSA to grant the pod access to retrieve a secret from Secrets Manager and decrypt that secret using a KMS key. It’s through the ServiceAccount that you can grant access to secrets in Secrets Manager.
  • An init container is a container that runs and exits before the application container is started. In our PoC, the init container fetches the secret from Secrets Manager and writes it to an emptyDir (RAM disk) volume that is subsequently mounted by the application container.

When a pod with the requisite annotations is deployed to the cluster, the webhook updates the pod to run the init container. Provided the ServiceAccount specified in the podSpec has access to the secret referenced in the secrets.k8s.aws/secret-arn: <secret arn> annotation, the init container retrieves the secret from Secrets Manager and writes it to a RAM disk. This is done by specifying Memory as the medium for an emptyDir volume, which prevents the secret from being persisted to disk after the pod is terminated. After the init container exits, the application container starts and mounts the RAM disk as a volume. When the application needs to read the secret, it reads it from the mounted volume. The diagram below depicts the webhook process flow the PoC follows:

[Diagram: AWS Secrets Controller webhook process flow]
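
To make the mutation concrete, here is a minimal sketch of what the relevant parts of a mutated podSpec might look like. The init container image shown is a placeholder for illustration; the volume name (secret-vol) and mount path (/tmp) match what the webhook uses in this PoC.

spec:
  initContainers:
  - name: secrets-init                 # injected by the webhook
    image: <init-container-image>      # placeholder; the webhook supplies the actual image
    volumeMounts:
    - name: secret-vol
      mountPath: /tmp
  containers:
  - name: app
    image: <application-image>
    volumeMounts:
    - name: secret-vol
      mountPath: /tmp
  volumes:
  - name: secret-vol
    emptyDir:
      medium: Memory                   # RAM-backed; nothing is persisted to the node's disk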

Deploying AWS Secrets Admission Controller Webhook

AWS Secrets Admission Controller can be deployed via a Helm chart. The Helm chart creates the following Kubernetes objects:

  • A Kubernetes deployment running the admission controller.
  • A Kubernetes service that exposes the above deployment.
  • A Kubernetes secret that contains the TLS certificates for the admission controller.
  • A MutatingWebhookConfiguration object.

If you need instructions on how to install Helm, please refer to the official Helm documentation here.

1. Add the Helm repository that contains the Helm charts for the secret-inject admission controller webhook.

$ helm repo add secret-inject https://aws-samples.github.io/aws-secret-sidecar-injector/

2. Chart repositories change frequently as charts are updated and added. To keep Helm’s local list of charts in sync with these changes, run the repository update command.

$ helm repo update

3. Install the AWS Secrets Controller by installing the Helm chart.

$ helm install secret-inject secret-inject/secret-inject

4. Verify that the relevant Kubernetes objects were created.

$ kubectl get mutatingwebhookconfiguration
NAME                CREATED AT
aws-secret-inject   2020-05-10T04:29:20Z
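
You can also confirm that the deployment, service, and TLS secret were created. The exact resource names depend on the release name you chose; assuming the release was named secret-inject as above, something like the following should list them:

$ kubectl get deployments,services,secrets | grep secret-inject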

Creating secrets

You can create and manage secrets in Secrets Manager using the native AWS APIs; however, you may want to manage AWS Secrets Manager secrets directly from Kubernetes. The Native Secrets (NASE) project is a serverless mutating webhook, implemented as a Lambda function with an HTTP API endpoint that is registered with Kubernetes as part of the mutating webhook object. Calls to create and update native Kubernetes Secrets are “redirected” to the webhook, which writes the secret data from the manifest to Secrets Manager and returns the ARN of the secret to Kubernetes, which stores the ARN as a Secret.
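
For example, with NASE registered, applying an ordinary Secret manifest like the one below would result in the secret material being stored in Secrets Manager rather than in etcd. This is only a sketch of the idea; the exact behavior depends on how the NASE webhook is configured in your cluster.

apiVersion: v1
kind: Secret
metadata:
  name: database-password
type: Opaque
stringData:
  password: <your-password>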

Use case walkthrough: Accessing database credentials from AWS Secrets Manager

For this PoC, we will be deploying a mock webserver that needs a password to access a database server. The password is stored as a secret in AWS Secrets Manager. The actual secret is retrieved by an init container that gets injected into the pod by our mutating webhook.

We have already created a secret in AWS Secrets Manager. If you need directions on how to create secrets in AWS Secrets Manager, please refer to the AWS documentation.
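
If you want to create a similar secret from the command line, a minimal example looks like the following (substitute your own secret string):

$ aws secretsmanager create-secret \
    --name database-password \
    --description "Password for the MySQL database" \
    --secret-string '{"username":"admin","password":"<your-password>"}'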

$ aws secretsmanager list-secrets
SecretList:
- ARN: arn:aws:secretsmanager:us-east-1:123456789012:secret:database-password-hlRvvF
  Description: Password for the MySQL database
  LastChangedDate: '2020-05-18T00:49:46.912000+00:00'
  Name: database-password
  SecretVersionsToStages:
    bc50ebbf-2811-4561-8b6b-7bc1c564267a:
    - AWSCURRENT
  Tags: []

Note the ARN of the secret, which will be used in the subsequent steps.
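
Optionally, you can capture the ARN in an environment variable so it can be reused in later steps, for example:

SECRET_ARN=$(aws secretsmanager describe-secret --secret-id database-password --query ARN --output text)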

Create an AWS role to access secrets in AWS Secrets Manager

Next, we create an IAM role that is used by our webserver to access the secret stored in AWS Secrets Manager. We start by creating an IAM policy to read secrets from AWS Secrets Manager:

aws iam create-policy --policy-name webserver-secrets-policy --policy-document file://policy.json

Below is a sample policy.json file that grants permission to read the database secret from AWS Secrets Manager:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "webserversecret",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:database-password-hlRvvF"
        },
        {
            "Sid": "secretslists",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetRandomPassword",
                "secretsmanager:ListSecrets"
            ],
            "Resource": "*"
        }
    ]
}

In a production environment, you will likely want to scope this to a specific secret.

Note: Use the ARN of the secret created in the previous step as the Resource in the first statement.

Now that we’ve created a policy, we need to create an IAM role, which our pod will assume. For this PoC, we’re going to use IAM Roles for Service Accounts (IRSA) to enable fine-grained access to read the secrets from AWS Secrets Manager. IRSA requires that you create an OIDC identity provider for IAM. The instructions for setting up an OIDC identity provider can be found here.
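
If you created your cluster with eksctl, one way to create the OIDC identity provider is with a single command (replace <cluster-name> with the name of your cluster):

$ eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve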

Creating an IAM role to use with IRSA requires us to perform the following steps:

1. Set the AWS account ID to an environment variable with the following command

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

2. Set the OIDC identity provider to an environment variable

OIDC_PROVIDER=$(aws eks describe-cluster --name <cluster-name> --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

3. Create a trust policy for the role so that the OIDC federated identity can assume it. Execute the following code block; in the Condition element, replace the namespace (default) and the service account name (webserver-service-account) with the values for your environment.

read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:webserver-service-account",
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF
echo "${TRUST_RELATIONSHIP}" > trust.json

This creates a file named trust.json.

4. Create an IAM role:

$ aws iam create-role --role-name webserver-secrets-role --assume-role-policy-document file://trust.json --description "IAM Role to access webserver secret"

5. Attach the webserver-secrets-policy created in the first step to this role.

$ aws iam attach-role-policy --role-name webserver-secrets-role --policy-arn arn:aws:iam::123456789012:policy/webserver-secrets-policy
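
Optionally, verify that the policy is attached to the role:

$ aws iam list-attached-role-policies --role-name webserver-secrets-role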

Creating a Kubernetes Service Account

Now that we’ve created an IAM role to use with IRSA, we need to create a service account and map it to our role. This requires us to perform the following tasks:

1. Create a Kubernetes Service Account.

$ kubectl create sa webserver-service-account

2. Add an annotation to the service account that references the IAM role we created earlier, using the role’s ARN.

$ kubectl edit sa webserver-service-account
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/webserver-secrets-role
  creationTimestamp: "2020-05-08T00:17:04Z"
  name: webserver-service-account
  namespace: default
  resourceVersion: "13330471"
  selfLink: /api/v1/namespaces/default/serviceaccounts/webserver-service-account
  uid: eef8b19d-7bd0-4390-94ab-186a5d677fd0
secrets:
- name: webserver-service-account-token-x5t4q
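
Alternatively, you can add the same annotation without opening an editor:

$ kubectl annotate sa webserver-service-account eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/webserver-secrets-role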

Let’s take it for a spin

Let’s deploy a webserver pod that needs to connect to the database. The credentials to connect to the database are saved as a secret in AWS Secrets Manager.

Create a Kubernetes deployment object for the webserver.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: webserver
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      run: webserver
  template:
    metadata:
      annotations:
        secrets.k8s.aws/sidecarInjectorWebhook: enabled
        secrets.k8s.aws/secret-arn: arn:aws:secretsmanager:us-east-1:123456789012:secret:database-password-hlRvvF
      labels:
        run: webserver
    spec:
      serviceAccountName: webserver-service-account
      containers:
      - image: busybox:1.28
        name: webserver
        command: ['sh', '-c', 'echo $(cat /tmp/secret) && sleep 3600']
EOF

Note that the annotations and service account created above are specified in the pod specification.

The mutating webhook creates an emptyDir volume named secret-vol and mounts it at /tmp for all of the containers in the pod. The decrypted secret is written to /tmp/secret.
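
You can confirm that the webhook mutated the pod by listing the injected init container and volumes. The init container’s name is assigned by the webhook, so it may differ in your environment:

$ kubectl get pod -l run=webserver -o jsonpath='{.items[0].spec.initContainers[*].name}'
$ kubectl get pod -l run=webserver -o jsonpath='{.items[0].spec.volumes[*].name}'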

For demonstration purposes, the pod also prints out the secret to STDOUT. This is not recommended in production.

$ kubectl logs webserver-66f6bb988f-x774k
{"username":"admin","password":"P@$$word1024","engine":"mysql","host":"database-1.cluster.us-east-1.rds.amazonaws.com","port":3306,"dbClusterIdentifier":"database-1"}

Caveats

Although this PoC gives you a Kubernetes-native way to consume secrets from AWS Secrets Manager, it has a few caveats that you ought to be aware of. First, there is a cost for storing and retrieving secrets. Second, Secrets Manager has limits on the size of secrets (64 KB) and on the rate at which they can be retrieved, e.g. the limit for GetSecretValue is 2,000 requests per second. Be sure that you’ve reviewed the costs and limits before implementing this solution.

The PoC is also more complex than native Kubernetes Secrets. For example, you need to install and register the mutating webhook. You also need to correctly annotate the pods that will consume secrets from Secrets Manager, and they have to reference a Service Account that has the necessary permission to fetch the secret referenced in the annotation. That said, if you need a mechanism to provide granular access to secrets or you need to rotate your secrets, the additional overhead may be worth it.

Lastly, the purpose of this PoC is to demonstrate the type of integration that can be achieved between AWS Secrets Manager and Kubernetes. It is not meant to be used in production.

Future directions

The AWS Secrets Controller PoC enables you to access secrets from AWS Secrets Manager by running an init container during pod startup. Future enhancements for this PoC include running a sidecar container in the pod to keep the secret updated whenever it is rotated by AWS Secrets Manager.

The Secrets Store Container Storage Interface (CSI) driver enables mounting secrets, passwords, and certificates from enterprise-grade external secret stores as volumes. This project features a pluggable provider interface, which can be leveraged to integrate AWS Secrets Manager as an external secret store for Kubernetes secrets.

Conclusion

We hope you enjoyed learning about securing secrets for your Kubernetes applications. The source code for this solution can be found here. Feel free to reach out via GitHub issues with any questions, comments, or feedback. PRs are welcome.