AWS Open Source Blog

Managing secrets deployment in Kubernetes using Sealed Secrets

Kubernetes is an open source system for automating the deployment, scaling, and management of containerized applications. It is especially suitable for building and deploying cloud-native applications at massive scale, leveraging the elasticity of the cloud. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service for running a production-grade, highly available Kubernetes cluster on AWS without needing to stand up or maintain the Kubernetes control plane. Amazon EKS is used today by thousands of customers to run containerized applications at scale.

Regardless of whether customers are spinning up a Kubernetes cluster using open source tools such as Kops and managing it on their own or employing a managed service such as Amazon EKS, it is a common practice for a DevOps team to use a continuous delivery pipeline to deploy various resources to their Kubernetes cluster. The YAML manifests pertaining to these cluster resources may be stored and version-controlled in a Git repository.

In this post, we will walk through the details of using tools from the Sealed Secrets open source project that allow users to manage the deployment of sensitive information to their Kubernetes clusters, store it securely in a Git repository, and integrate it into their continuous delivery pipelines.

Continuous delivery using GitOps

GitOps is a term coined by WeaveWorks and is a way to do Kubernetes cluster management and continuous delivery. In this approach, a Git repository is designated as the single source of truth for deployment artifacts, such as YAML files, that provide a declarative way to describe the cluster state. As illustrated in the architecture below, a Weave Flux agent runs in the Kubernetes cluster and watches the Git repository and image registries, such as Amazon Elastic Container Registry (Amazon ECR) and Docker Hub, where the container images pertaining to application workloads reside. If changes to deployment artifacts are pushed to this config repository, or a new image is pushed to the image registry by a continuous integration system such as Jenkins, the Weave Flux agent responds by pulling these changes down and updating the relevant application workloads deployed to the cluster.

GitOps Workflow for Continuous Delivery


The challenge with handling Secrets

A Kubernetes cluster has different types of resources such as Deployments, DaemonSets, ConfigMaps, Secrets, etc. A Secret is a resource that helps cluster operators manage the deployment of sensitive information, such as passwords, OAuth tokens, and SSH keys. These Secrets can be mounted as data volumes or exposed as environment variables to the containers in a Kubernetes Pod, thus decoupling Pod deployment from managing sensitive data needed by the containerized applications within a Pod.

The challenge here is with integrating these Secrets into the GitOps workflow by storing the relevant YAML manifests outside the cluster, in a Git repository. The data in a Secret is obfuscated using merely Base64 encoding. Storing such files in a Git repository is extremely insecure, as it is trivial to decode the Base64-encoded data. Developers often accidentally check these files into their Git repositories, exposing sensitive information such as the credentials to their production databases.
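
To see just how thin this layer of protection is, the following snippet round-trips a made-up password through Base64; recovering the plaintext requires no key material at all:

```shell
# Encode a password the way a Secret manifest stores it.
# 'mypassword' is an illustrative value, not from a real Secret.
encoded=$(printf '%s' 'mypassword' | base64)
echo "$encoded"                          # prints bXlwYXNzd29yZA==

# Decoding it back is a one-liner:
printf '%s' "$encoded" | base64 --decode # prints mypassword
```

This is why a Secret manifest committed to Git is effectively plaintext from a security standpoint.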

The Sealed Secrets open source project addresses this challenge by providing a mechanism to encrypt a Secret object so that it is safe to store in a private or public repository. These encrypted Secrets can also be deployed to a Kubernetes cluster using normal workflows with tools such as kubectl.

How it works

Sealed Secrets comprises the following components:

  1. A controller deployed to the cluster
  2. A CLI tool called kubeseal
  3. A custom resource definition (CRD) called SealedSecret

Upon startup, the controller looks for a cluster-wide private/public key pair, and generates a new 4096-bit RSA key pair if not found. The private key is persisted in a Secret object in the same namespace as that of the controller. The public key portion of this is made publicly available to anyone wanting to use Sealed Secrets with this cluster.

During encryption, each value in the original Secret is symmetrically encrypted using AES-256-GCM with a randomly generated session key. The session key is then asymmetrically encrypted with the controller’s public key using RSA-OAEP (with SHA-256), and the original Secret’s namespace/name is used as the OAEP input parameter. The output of the encryption process is a string that is constructed as: length (2 bytes) of the encrypted session key + encrypted session key + encrypted Secret.
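
The resulting byte layout can be sketched as follows. This Python snippet only illustrates the framing described above (the two-byte length prefix followed by the two concatenated ciphertexts), with placeholder bytes standing in for real ciphertexts produced by kubeseal:

```python
import struct

def split_sealed_value(blob: bytes):
    """Split a sealed value into (encrypted session key, encrypted Secret).

    Layout: a 2-byte big-endian length of the encrypted session key,
    followed by the encrypted session key, followed by the encrypted Secret.
    """
    (key_len,) = struct.unpack(">H", blob[:2])
    return blob[2:2 + key_len], blob[2 + key_len:]

# Placeholder bytes only -- real values come from the encryption process.
rsa_block = b"\xaa" * 512   # a 4096-bit RSA ciphertext is 512 bytes long
payload = b"\xbb" * 40      # the AES-encrypted Secret data
blob = struct.pack(">H", len(rsa_block)) + rsa_block + payload

key_part, secret_part = split_sealed_value(blob)
assert key_part == rsa_block and secret_part == payload
```

The fixed-size length prefix is what allows the controller to cleanly separate the RSA-encrypted session key from the AES-encrypted payload when unsealing.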

When a SealedSecret custom resource is deployed to the Kubernetes cluster, the controller will pick it up, unseal it using the private key, and create a Secret resource. During decryption, the SealedSecret’s namespace/name is used again as the input parameter. This ensures that the SealedSecret and Secret are strictly tied to the same namespace and name.

The companion CLI tool kubeseal is used for creating a SealedSecret custom resource from a Secret resource definition using the public key. kubeseal can communicate with the controller through the Kubernetes API server and retrieve the public key needed for encrypting a Secret at runtime. The public key may also be downloaded from the controller and saved locally to be used offline.

SealedSecrets - How it Works

Installing the kubeseal client

For Linux x86_64 systems, the client tool may be installed into /usr/local/bin with the following commands:

wget -O kubeseal https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.12.1/kubeseal-linux-amd64
sudo install -m 755 kubeseal /usr/local/bin/kubeseal

For macOS systems, the client tool is installed as follows:

brew install kubeseal

Installing the custom controller and CRD for SealedSecret

See the Git repository for the Sealed Secrets project for recent releases and detailed installation instructions. The latest release at the time of writing this post was v0.12.1; it can be installed on a Kubernetes cluster in a single step using kubectl, as shown in the following command. This installs the controller and the SealedSecret custom resource definition in the kube-system namespace, and also creates a Service Account and RBAC artifacts such as Role/ClusterRole and RoleBinding/ClusterRoleBinding.

kubectl apply -f controller.yaml

The logs from the controller reveal the name of the Secret that is created in the kube-system namespace and that contains the key pair used by the controller for unsealing SealedSecrets deployed to the cluster.

kubectl logs sealed-secrets-controller-6b7dcdc847-9l8pz -n kube-system

Output from the above command looks as follows:

2020/04/09 23:06:50 Starting sealed-secrets controller version: v0.12.1
2020/04/09 23:06:50 Searching for existing private keys
2020/04/09 23:06:53 New key written to kube-system/sealed-secrets-keyhvdtf
2020/04/09 23:06:53 Certificate is 

Upon starting, the controller searches for a Secret with the label sealedsecrets.bitnami.com/sealed-secrets-key in its namespace. If it does not find one, it creates a new one in its namespace and prints the public key portion of the key pair to its output logs. The contents of this Secret, which contains the public/private key pair, can be seen in YAML format with the following command:

kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml

apiVersion: v1
items:
- apiVersion: v1
  kind: Secret
  metadata:
    creationTimestamp: "2020-04-09T23:06:53Z"
    generateName: sealed-secrets-key
    labels:
      sealedsecrets.bitnami.com/sealed-secrets-key: active
    name: sealed-secrets-keyhvdtf
    namespace: kube-system
    resourceVersion: "302785"
    selfLink: /api/v1/namespaces/kube-system/secrets/sealed-secrets-keyhvdtf
    uid: cd53c49a-7ab6-11ea-a485-0adbb667fc0b
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Sealing the Secrets

Since 1.14, kubectl supports the management of Kubernetes objects using Kustomize, which provides Resource Generators to create Kubernetes resources, such as Secrets and ConfigMaps. The Kustomize generators should be specified in a kustomization.yaml file. The YAML manifest for a Secret is generated from literal key-value pairs using kubectl and kustomize as shown in the example below:

cat << EOF > kustomization.yaml
namespace: octank
secretGenerator:
- name: database-credentials
  literals:
  - username=admin
  - password=Tru5tN0!
generatorOptions:
  disableNameSuffixHash: true
EOF
kubectl kustomize . > secret.yaml
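
Under the hood, the secret generator simply Base64-encodes each literal value into the Secret's data map. A minimal Python sketch of that transformation, using the same literals as above:

```python
import base64

# The literal key-value pairs supplied to the generator.
credentials = {"username": "admin", "password": "Tru5tN0!"}

# Each value is Base64-encoded into the Secret's data map.
data = {key: base64.b64encode(value.encode()).decode()
        for key, value in credentials.items()}

print(data)  # {'username': 'YWRtaW4=', 'password': 'VHJ1NXROMCE='}
```

These are exactly the encoded values that appear in the generated secret.yaml, which underscores that the generator performs encoding, not encryption.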

Using this Secret, the YAML manifest for the SealedSecret custom resource is created using kubeseal as follows:

kubeseal --format=yaml < secret.yaml > sealed-secret.yaml

kubeseal encrypts the Secret using the public key that it fetches at runtime from the controller running in the Kubernetes cluster. If a user does not have direct access to the cluster, then a cluster administrator may retrieve the public key from the controller logs and make it accessible to the user.

In that case, the SealedSecret custom resource is created using kubeseal with the public key file:

kubeseal --format=yaml --cert=public-key-cert.pem < secret.yaml > sealed-secret.yaml

The generated Secret, with Base64-encoded values for the username and password keys, is as follows:

apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
  namespace: octank
data:
  password: VHJ1NXROMCE=
  username: YWRtaW4=
type: Opaque

The generated SealedSecret with encrypted values is shown below.

Note that the keys in the original Secret (namely, username and password) are not encrypted in the SealedSecret; only their values are. You may change the names of these keys in the SealedSecret YAML file, if necessary, and still deploy it successfully to the cluster. However, you cannot change the name or namespace of the SealedSecret. Doing so invalidates the SealedSecret, because the name and namespace of the original Secret are used as input parameters in the encryption process. This provides an additional safeguard: even if a user gains access to the YAML manifest of a SealedSecret created for a certain namespace in a Kubernetes cluster they do not have access to, they cannot simply edit that manifest and deploy the SealedSecret to a different namespace, even within the same cluster.

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: database-credentials
  namespace: octank
spec:
  encryptedData:
    password: AgBIpx1gp2VsE0gcHzaFgXDdhA1hu0p+D4U1ys1FsQ1h0g4IRVVJqDUh56YRFBkj9bSt9dVXevRiO27mTPwyKXP6vnS2Oe/fM7x0LMQ1oXd6gTe2NB2o8r6ek3AenTghObRn1lbKigAHFtBFYN+Pq6wQptoewsaRhv+fTxRFk1PMjK5PHu/ETwsm74KeZ3VjOzEUkrAaRN2BPML0UvLstEy/yDMHD3+hLaGP/slwd5oeyhbVWutsofpyNNwLiZ5EvKhEr6vuCQ4CF0gvtFhXVF7e84m3SmZcZ1AvYicqW026PxjbL1b42T8Hk9PUboYfydSsBUjFAmXhzHLu+FrBuHIHdbikpd4kPOOqHO9z9wt39lwhiHf6f3YgesMFmVsWpco+ghn79fumfzTCrVHdwZGQ/7oYNDrcBNQB+kKQSOy+p3W2YR6BhhoGKHxELcqMZU9/1gKHIfbHPFoJtsK1SW2KwfpGeh0kvn/NtWj5sBeLrMYD9fev9j+jaDSOkzmB7ftpNx85DhOUVYJ2O/e4qUkzf0xASLq4XzwhknPOAbaRZ8oxBHQBKZKCW6x6RUpxOUXe7uImKEdodbDbCyrcIMH7a1SILohg5jpEDUUbxQBGtt7bFt4EJoF0LKLMDvzO/R02kHhuFiIRUwH2NJM8IpupwfLVdOBEBJ8yh4agmRvAm8bioH+bQELMQvc4DZHbiIOq0L2DlkzTyw==
    username: AgA5lKkkdSFgQbO8NXgEb/ohsWlUH5ngoxa4SAdQt/kN84eowrUev71KLnZsahJ1sDHr0GQHWW32hJlemG+GnD3ASSEjvJABWwMIdpeJzi5vEPtp5fNiHZq5pmlJGnJZxHRvVlLSOfhf473r9Sbo/dS3OUvzsDabnb6hQzSVMJjS6RYTQp6JnsmvjDrOGk0P7Gik6bdgNivqmxrEiddqkanSqjS+MXMsY2HYnECr5EG7QRawq2yWl6UVQRfVLjoB9n8MAGHELkIXo5aiH1MwYdH0jnPukWZrwIsz1BbeBW4jX9wjtjXpZKKxrywd6AOhoodL6YIRGhWvRXTswQfktAdk5Y7Gu7vlnQMIkeaKjd7sXwO+TgoW4T8WTwTY7LPe4vLNI87AX1ZWCmLUexX1YlmwNJUGPIP8+WfFlkQO2YkcOt8Imt5lJI13+CGAPl13RZ5jVVAsO8HnE4GaufWRDHglb1GOAtiwnKRYp5JS0ipzQZspH+tpACFIFKLqGykMPBoRoxEqqtdouYs1b8k2cAj7uhhWaibn3f0T1gYMMuAwu5FD/A1KX0YRtcBXab2VPDOxD0cLwwjimkAjWgXrzVjCHq9yn9CyCsd9Hie56lesphmk+/kiZ0fwr5T9UZnJQrL/REplDgUVnFdLZfdY9PFCeylAeTv6KUjHsmc6O6cooPwXKFhntGDHm4GGYz9T8cfQWI7NFA==
  template:
    metadata:
      creationTimestamp: null
      name: database-credentials
      namespace: octank
    type: Opaque
status: {}

The YAML manifest that pertains to the Secret is no longer needed and may be deleted. The SealedSecret is the only resource that will be deployed to the cluster as follows:

kubectl create namespace octank
kubectl apply -f sealed-secret.yaml 

Once the SealedSecret custom resource is created in the cluster, the controller picks it up, unseals the underlying Secret using the private key, and deploys it to the same namespace. This can be seen in the controller's logs:

2020/04/09 23:06:53 HTTP server serving on :8080
2020/04/09 23:19:16 Updating octank/database-credentials
2020/04/09 23:19:16 Event(v1.ObjectReference{Kind:"SealedSecret", Namespace:"octank", Name:"database-credentials", UID:"882f5e6c-7ab8-11ea-9ebe-1243fd9383c9", APIVersion:"", ResourceVersion:"304045", FieldPath:""}): type: 'Normal' reason: 'Unsealed' SealedSecret unsealed successfully

The YAML file sealed-secret.yaml, which pertains to the SealedSecret, is safe to store in a Git repository along with the YAML manifests for other resources deployed in the cluster, such as DaemonSets, Deployments, and ConfigMaps. You can now use a GitOps workflow to securely manage the deployment of Secret resources to your cluster.

Securing the sealing key

Without the private key that is managed by the controller, there is no way to decrypt the encrypted data within a SealedSecret. Suppose you are trying to restore the original state of a cluster after a disaster, or you want to leverage a GitOps workflow to deploy the same set of Kubernetes resources, including SealedSecrets, from a Git repository into a separate instance of a Kubernetes cluster. In either case, the controller deployed in the new cluster must use the same private key to be able to unseal the SealedSecrets. Absent this private key, all SealedSecrets would have to be regenerated using a new private/public key pair, which could become an onerous task for a deployment containing a large number of Secret resources.

The private key can be retrieved from the controller using the following command. In a production environment, one would typically make use of Kubernetes RBAC to grant the permissions required to perform this operation to a restricted set of clients.

kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml > master.yaml

To test how this works, let’s first delete the installation of the controller, the Secret that it created in the kube-system namespace that contains the private key, the SealedSecret resource named database-credentials, as well as the Secret that was unsealed from it:

kubectl delete secret database-credentials -n octank
kubectl delete sealedsecret database-credentials -n octank
kubectl delete secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key
kubectl delete -f controller.yaml 

Now, let’s reinstate the Secret containing the private key back into the cluster using the master.yaml file:

kubectl apply -f master.yaml 
kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key

Output from the above command shows the new Secret created in the kube-system namespace using the private key from the master.yaml file:

NAME                      TYPE                DATA   AGE
sealed-secrets-keyhvdtf   kubernetes.io/tls   2      5s

Next let’s redeploy the SealedSecret CRD, controller, and RBAC artifacts to the cluster. Note that we are using the same YAML manifest for the SealedSecret that was generated earlier:

kubectl apply -f controller.yaml
kubectl logs sealed-secrets-controller-6b7dcdc847-jkrz8 -n kube-system

As shown in the new controller’s logs, it is able to find the existing Secret sealed-secrets-keyhvdtf in the kube-system namespace and thus does not create a new private/public key pair:

2020/04/09 23:26:07 Starting sealed-secrets controller version: v0.12.1
2020/04/09 23:26:07 Searching for existing private keys
2020/04/09 23:26:07 ----- sealed-secrets-keyhvdtf
2020/04/09 23:26:07 HTTP server serving on :8080

Now let’s redeploy the SealedSecret and verify that the controller is able to successfully unseal it:

kubectl apply -f sealed-secret.yaml 
kubectl logs sealed-secrets-controller-6b7dcdc847-jkrz8 -n kube-system
2020/04/09 23:26:07 Starting sealed-secrets controller version: v0.12.1
2020/04/09 23:26:07 Searching for existing private keys
2020/04/09 23:26:07 ----- sealed-secrets-keyhvdtf
2020/04/09 23:26:07 HTTP server serving on :8080
2020/04/09 23:29:33 Updating octank/database-credentials
2020/04/09 23:29:33 Event(v1.ObjectReference{Kind:"SealedSecret", Namespace:"octank", Name:"database-credentials", UID:"f7d22e88-7ab9-11ea-a485-0adbb667fc0b", APIVersion:"", ResourceVersion:"305137", FieldPath:""}): type: 'Normal' reason: 'Unsealed' SealedSecret unsealed successfully

If the master.yaml file that contains the public/private key pair generated by the controller is compromised, then all the SealedSecret manifests can be unsealed and the encrypted sensitive information they store revealed. Hence, this file must be guarded by granting least privilege access. For additional guidance on sealing key renewal, manual sealing key management, and so on, consult the documentation.

One option to secure the private key is to store the contents of the master.yaml file as a SecureString parameter in AWS Systems Manager Parameter Store. The parameter can be encrypted using an AWS Key Management Service (AWS KMS) customer managed key, and you can use the key policy to restrict the set of AWS Identity and Access Management (IAM) principals that are allowed to use this key to retrieve the parameter. Additionally, you may enable automatic rotation of this key in KMS. Note that Standard-tier parameters support a maximum parameter value of 4,096 characters; given the size of the master.yaml file, you will have to store it as a parameter in the Advanced tier.
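
As a sketch of this approach, the following AWS CLI commands store and later retrieve the sealing key; the parameter name and KMS key alias shown here are illustrative, not prescribed:

```shell
# Store the sealing key as an encrypted, Advanced-tier parameter.
# The parameter name and KMS key alias are made-up examples.
aws ssm put-parameter \
    --name "/eks/sealed-secrets/master-key" \
    --type SecureString \
    --tier Advanced \
    --key-id alias/sealed-secrets-kms-key \
    --value file://master.yaml

# Retrieve and decrypt it later, e.g., when bootstrapping a new cluster:
aws ssm get-parameter \
    --name "/eks/sealed-secrets/master-key" \
    --with-decryption \
    --query Parameter.Value \
    --output text > master.yaml
```

Only principals granted use of the KMS key by its key policy can run the get-parameter call with decryption, which keeps access to the sealing key narrowly scoped.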

When a Secret resource is created, the Kubernetes API server persists the data in an etcd database that is part of the Kubernetes Control Plane. The default behavior is to persist the Secret data in Base64-encoded format.

When you launch an Amazon EKS cluster using Kubernetes version 1.13 or later, you may choose whether to enable or disable envelope encryption of Kubernetes secrets using AWS KMS. If you enable envelope encryption, the Kubernetes Secrets are encrypted using the customer managed key (KMS Key) that you select and then stored in the etcd database, thus adhering to the security best practice of encrypting data at rest. This feature is discussed in detail in “Using EKS encryption provider support for defense-in-depth”. Using this strategy in conjunction with the one discussed in this post gives users a robust mechanism for managing sensitive data needed for deploying their Kubernetes workloads.

Feature image via Pixabay.

Viji Sarathy

Viji Sarathy is a Principal Specialist Solutions Architect at AWS. He provides expert guidance to customers on modernizing their applications using AWS services that leverage serverless and container technologies. He has been at AWS for about three years and has 20+ years of experience in building large-scale, distributed software systems. His professional journey began as a research engineer in high performance computing, specializing in Computational Fluid Dynamics. From CFD to cloud computing, his career has spanned several business verticals, always with an emphasis on the design and development of applications using scalable architectures. He holds a Ph.D. in Aerospace Engineering from The University of Texas at Austin. He is an avid runner, hiker, and cyclist.