Containers

Use private certificates to enable a container repository in Amazon EKS

Introduction

Containerization has gained popularity as a method for deploying and managing applications, and Kubernetes is a leading container orchestration platform. Many customers choose Amazon Elastic Kubernetes Service (Amazon EKS) for its performance, scalability, availability, security, and integration with other AWS services.

Enterprises across industries opt for private container repositories, such as JFrog Artifactory, to enhance security, maintain compliance, and protect intellectual property. They secure those repositories with certificates issued by a private certificate authority, which helps to optimize cost and to tailor certificates to their unique needs. These enterprises also want to use Amazon EKS to host their applications and securely retrieve images from private repositories.

This post guides you through the process of configuring Amazon EKS worker nodes to securely use a private container image repository. We’ll use JFrog Artifactory as the private container image repository in our example, but you can choose any other repository management software available on the market.

Solution overview

Figure 1. Amazon EKS cluster securely accessing private image repository

  1. The client certificate is stored in an Amazon S3 bucket that is encrypted with an AWS KMS customer managed key.
  2. Amazon EKS nodes, which get access through the attached IAM role, copy the client certificate from the bucket and install it on each node.
  3. Nodes with the installed certificate can securely connect to the container image repository and pull container images.

Walkthrough

  1. Create a private certificate authority (CA) using AWS Private Certificate Authority
  2. Issue an end-entity certificate using AWS Certificate Manager
  3. Install and configure a private container image repository (e.g., JFrog Artifactory) and secure it with the end-entity certificate
  4. Create a private hosted zone using Amazon Route 53 to support a business-friendly domain name (optional)
  5. Upload the Root CA certificate to Amazon Simple Storage Service (Amazon S3)
  6. Create an Amazon EKS cluster that automates the installation of the Root CA certificate on cluster worker nodes
  7. Securely connect to and deploy a container image from a JFrog Artifactory repository

Note: Following the steps described in this post will incur costs.

Prerequisites

1. Create and install Root and subordinate CA

To begin, let’s create the certificate authority using AWS Private Certificate Authority. In contrast to a public certificate, a private certificate is used only internally. We recommend following the best practice of creating a certificate authority hierarchy and issuing the end-entity certificate from a subordinate certificate authority. For detailed information on designing a CA hierarchy, refer to the AWS documentation. In this post, we’ll demonstrate the creation of a Root CA and a Subordinate CA.

Figure 2. Create Root certificate authority

Under Subject distinguished name options, configure the subject name of your private CA. You must enter a value for at least one of the following options (note: we’ll use the myca.local domain name throughout this post):

    • Organization (O) – For example, a company name
    • Organization Unit (OU) – For example, a division within a company
    • Country name (C) – A two-letter country code
    • State or province name – Full name of a state or province
    • Locality name – The name of a city
    • Common Name (CN) – myca.local
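
The same Root CA can also be created from the command line. As a minimal sketch, assuming the AWS CLI is configured, save a configuration such as the following as ca-config.json (the Country and Organization values are illustrative; only the CommonName matches this post):

```json
{
    "KeyAlgorithm": "RSA_2048",
    "SigningAlgorithm": "SHA256WITHRSA",
    "Subject": {
        "Country": "US",
        "Organization": "Example Corp",
        "CommonName": "myca.local"
    }
}
```

Then create the CA with `aws acm-pca create-certificate-authority --certificate-authority-configuration file://ca-config.json --certificate-authority-type ROOT` (use SUBORDINATE for the subordinate CA). A CA created this way still needs its CA certificate installed before it can issue certificates.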

Figure 3. Create Subordinate certificate authority

2. Issue end-entity certificate

The next step is to issue an end-entity certificate using AWS Certificate Manager. An end-entity certificate is a digitally signed statement issued by a CA to a person or a system. It is used to validate the identity of an entity such as a website, business, or person. Select the previously created Subordinate CA as the certificate authority. Additionally, choose a fully qualified domain name (FQDN) for your certificate, such as repo.sub.myca.local. This certificate will be used to enable TLS for JFrog Artifactory.

Figure 4. Create private certificate

After the end-entity certificate is issued, export the certificate bundle so that you can enable Transport Layer Security (TLS) on the JFrog Artifactory server. The export includes three .pem files: the primary TLS certificate, the certificate chain, and the private key.
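
The export can also be done with the AWS CLI. A sketch, assuming jq and openssl are available and `<certificate-arn>` is the ARN of the end-entity certificate (the passphrase in passphrase.txt encrypts the exported private key):

```shell
# Export the certificate bundle from ACM (the private key comes back
# encrypted with the passphrase supplied in passphrase.txt)
aws acm export-certificate \
    --certificate-arn <certificate-arn> \
    --passphrase fileb://passphrase.txt > export.json

# Split the bundle into the three .pem files
jq -r .Certificate export.json > certificate.pem
jq -r .CertificateChain export.json > certificate_chain.pem
jq -r .PrivateKey export.json > private_key_encrypted.pem

# Decrypt the private key for use on the Artifactory server
openssl rsa -in private_key_encrypted.pem -out private_key.pem -passin file:passphrase.txt
```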

3. Install and configure a private container image repository

Next, install and configure JFrog Artifactory on an Amazon Elastic Compute Cloud (Amazon EC2) instance. Refer to the JFrog installation guide for further detail. After the installation, use the certificate bundle to enable TLS for the JFrog Artifactory server.

Once JFrog Artifactory is installed and configured, build and push a sample Docker image to the repository; we will use this image for testing.
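
As a sketch of that step, assuming Docker is installed locally and a Docker repository named test exists in Artifactory (the image name and tag are illustrative and match the pod manifest used later in this post):

```shell
# Authenticate against the private repository
docker login repo.sub.myca.local
# Pull a public nginx image and re-tag it with the repository's FQDN
docker pull nginx:latest
docker tag nginx:latest repo.sub.myca.local/test/nginx:latest
# Push the image to JFrog Artifactory
docker push repo.sub.myca.local/test/nginx:latest
```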

4. Create an Amazon Route 53 private hosted zone for a user-friendly domain name (optional)

The FQDN on the certificate is repo.sub.myca.local, so we need to assign the same domain name to the container image repository. To manage the Domain Name System (DNS), we’ll use Amazon Route 53. Create a private hosted zone in Amazon Route 53. Once the private hosted zone is available, create a DNS record of type A to map repo.sub.myca.local to the repository server’s IP address:

Figure 5. Create DNS record in Amazon Route 53
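
The A record can equivalently be created with the AWS CLI. A sketch, with `<repository-server-ip>` and `<hosted-zone-id>` as placeholders: save the following change batch as record.json:

```json
{
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "repo.sub.myca.local",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{ "Value": "<repository-server-ip>" }]
            }
        }
    ]
}
```

Then apply it with `aws route53 change-resource-record-sets --hosted-zone-id <hosted-zone-id> --change-batch file://record.json`.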

5. Upload Root CA certificate to Amazon S3

Amazon EKS worker nodes need access to the Root CA certificate during bootstrapping. We can use Amazon Simple Storage Service (Amazon S3) as a storage solution for the certificate. To store the certificate, you can use an existing Amazon S3 bucket or create a new one. Use Amazon S3 security best practices to protect the content of your bucket. For better tracking, we recommend creating a dedicated Amazon S3 bucket with server access logging enabled. Then, upload the Root CA certificate (client.pem in this example):

$ aws s3 cp client.pem s3://<bucket-name>/
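
If you create a dedicated bucket, it can be provisioned and locked down with the AWS CLI before the upload. A sketch with placeholders throughout (in regions other than us-east-1, create-bucket also requires --create-bucket-configuration LocationConstraint=<region-name>):

```shell
# Create the dedicated bucket
aws s3api create-bucket --bucket <bucket-name> --region <region-name>
# Block all public access
aws s3api put-public-access-block --bucket <bucket-name> \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
# Enable default encryption with a customer managed AWS KMS key
aws s3api put-bucket-encryption --bucket <bucket-name> \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms","KMSMasterKeyID":"<kms-key-arn>"}}]}'
```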

6. Create an Amazon EKS cluster and automate installation of Root CA certificate

We’re now ready to provision an Amazon EKS cluster and dynamically install the Root CA certificate. The Root CA certificate is used to establish the chain of trust with the server certificate installed on the image repository (e.g., JFrog Artifactory) and to establish the TLS connection. To simplify the process, we use the eksctl tool to provision the Amazon EKS cluster and worker nodes. During the launch of each worker node, the certificate is fetched from the specified Amazon S3 bucket and automatically installed on the node.

Create a configuration file called cluster.yaml and paste the following content:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: basic-cluster
  region: <region-name>
  version: "1.27"

managedNodeGroups:
  - name: ng-1
    iam:
      attachPolicy:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "s3:GetObject"
            Resource: 'arn:aws:s3:::<bucket-name>/*'
    preBootstrapCommands:
      - aws s3 --region <region-name> cp s3://<bucket-name>/client.pem /etc/pki/ca-trust/source/anchors/
      - sudo update-ca-trust extract

Execute the following command to deploy an Amazon EKS cluster and a managed NodeGroup.

$ eksctl create cluster -f cluster.yaml
...
2023-07-09 15:38:56 [✔] all EKS cluster resources for "basic-cluster" have been created
2023-07-09 15:38:56 [ℹ] adding identity "arn:aws:iam::111111111111:role/eksctl-basic-cluster-nodegroup-ng-NodeInstanceRole-GHAYPH7K942E" to auth ConfigMap
2023-07-09 15:38:56 [ℹ] nodegroup "ng-1" has 0 node(s)
2023-07-09 15:38:56 [ℹ] waiting for at least 2 node(s) to become ready in "ng-1"
2023-07-09 15:39:39 [ℹ] nodegroup "ng-1" has 2 node(s)
2023-07-09 15:39:39 [ℹ] node "ip-192-168-16-253.ec2.internal" is ready
2023-07-09 15:39:39 [ℹ] node "ip-192-168-54-78.ec2.internal" is ready
2023-07-09 15:39:40 [ℹ] kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2023-07-09 15:39:40 [✔] EKS cluster "basic-cluster" in "us-east-1" region is ready

Note: When you create Amazon EKS-managed node groups, the standard Amazon EKS optimized Amazon Linux 2 machine image is used by default.

eksctl creates two AWS CloudFormation stacks with all required AWS resources. You can review them in the AWS CloudFormation console. Let’s go over some of the settings in the cluster.yaml file that are relevant to this post:

  • The iam section under managedNodeGroups is translated into an AWS Identity and Access Management (AWS IAM) policy that allows the Amazon EKS nodes to read files from the Amazon S3 bucket where we uploaded the client certificate. The policy is included in the IAM role, along with other required policies, and attached to an Amazon EC2 launch template.
  • preBootstrapCommands is added as user data to the launch template. The commands copy the client certificate onto each node and update the node’s trusted Root CA certificate store.

The same configuration works for unmanaged node groups as well. To use unmanaged node groups, replace managedNodeGroups with nodeGroups in cluster.yaml.

Validate certificate installation

To verify the certificate has been installed on the nodes, proceed with the following instructions.

Get node instance ids first:

$ kubectl get nodes -o custom-columns=INSTANCEID:.spec.providerID
INSTANCEID
aws:///<availability-zone-1>/<instance-id>
aws:///<availability-zone-2>/<instance-id>

Replace <node instance id> in the following command with an instance ID from the previous command’s output:

$ cmdid=$(aws ssm send-command --document-name "AWS-RunShellScript" \
--targets '[{"Key":"InstanceIds","Values":["<node instance id>"]}]' \
--parameters '{"commands":["trust list | grep local"]}' \
--query "Command.CommandId" --output text)

$ aws ssm list-command-invocations --command-id "$cmdid" --details --query "CommandInvocations[*].CommandPlugins[*].Output[]" --output text

The last two commands execute the trust list command on the node and return the output. You should see the following:

label: sub.myca.local

Note that, by default, eksctl creates a separate Virtual Private Cloud (VPC) in your account. If you’d like to use an existing VPC, you need to supply the VPC configuration. Also note that eksctl configures kubectl for us.

7. Securely connect to and deploy image from secure JFrog Artifactory repository

Our container image repository is password protected, so a pod needs to use a Kubernetes Secret to pull an image. Create a JSON file with the secret (e.g., secret.json):

{
    "auths": {
        "repo.sub.myca.local": {
            "auth": "<base64 encoded credentials>"
        }
    }
}

You can generate the base64 encoded credentials by running the following command (the -n flag prevents a trailing newline from being encoded):

echo -n "user:password" | base64

Use the username and password of a container image repository account that is allowed to pull images; you can set any username and password of your choice in the repository.

Now, create a secret manifest file (e.g., secret.yaml).

---
apiVersion: v1
kind: Namespace
metadata:
    name: eks-sample-app
    labels:
        name: eks-sample-app

---
apiVersion: v1
kind: Secret
metadata:
    name: myregistrykey
    namespace: eks-sample-app
data:
    .dockerconfigjson: <base64 encoded secret.json>
type: kubernetes.io/dockerconfigjson

The .dockerconfigjson value is a base64 encoded string of the contents of secret.json. Generate it using the following command:

$ base64 secret.json
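
Putting the two encoding steps together, here is a short shell sketch that builds secret.json and the .dockerconfigjson value end-to-end (user:password is a placeholder credential; -w 0 is GNU base64’s flag to disable line wrapping, which matters because the YAML value must be a single line):

```shell
# Encode the placeholder credentials, avoiding a trailing newline
auth=$(echo -n "user:password" | base64)

# Build secret.json with the encoded credentials
cat > secret.json <<EOF
{
    "auths": {
        "repo.sub.myca.local": {
            "auth": "${auth}"
        }
    }
}
EOF

# Produce the single-line value to paste into .dockerconfigjson in secret.yaml
base64 -w 0 secret.json
```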

Now, deploy the secret:

$ kubectl apply -f secret.yaml
namespace/eks-sample-app created
secret/myregistrykey created

Next, create a pod manifest file (e.g., pod.yaml):

apiVersion: v1
kind: Pod
metadata:
    namespace: eks-sample-app
    name: private-pod
spec:
    containers:
    - name: private-reg-container
      image: repo.sub.myca.local/test/nginx:latest
    imagePullSecrets:
    - name: myregistrykey

The final step is to deploy the pod:

$ kubectl apply -f pod.yaml
pod/private-pod created

Validate image pull and pod creation

To check whether the pod is successfully created, run the following command:

$ kubectl describe pods -n eks-sample-app
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3s    default-scheduler  Successfully assigned eks-sample-app/private-pod to ip-10-0-2-251.ec2.internal
  Normal  Pulling    2s    kubelet            Pulling image "repo.sub.myca.local/test/nginx:latest"
  Normal  Pulled     2s    kubelet            Successfully pulled image "repo.sub.myca.local/test/nginx:latest" in 45.334188ms (45.349959ms including waiting)
  Normal  Created    2s    kubelet            Created container private-reg-container
  Normal  Started    2s    kubelet            Started container private-reg-container

Cleaning up

To avoid incurring further charges, delete the Amazon EKS cluster using the following command:

$ eksctl delete cluster basic-cluster

Additionally, use the console or the command line to clean up and remove the following resources:

  • The Amazon EC2 instance running JFrog Artifactory
  • The Amazon Route 53 private hosted zone
  • The Amazon S3 bucket storing the certificate
  • The end-entity certificate in AWS Certificate Manager
  • The Root and Subordinate CAs in AWS Private Certificate Authority

Conclusion

In this post, we showed you how to configure your Amazon EKS cluster, generate and install the necessary certificates, and deploy nodes that can successfully pull container images from a private registry. This enables seamless integration of private registry images with your Amazon EKS cluster. Enabling Amazon EKS clusters to pull container images from private registries secured with private certificates is crucial for secure and efficient application deployments, and for adherence to the compliance needs of your organization.

Bappaditya Datta

Bappaditya Datta is a Sr. Solutions Architect in AWS North America focusing on data and analytics. He helps customers across different industries design and build secure, scalable, and highly available solutions that address their business needs and bring innovation. Prior to AWS, Bappaditya worked as a Technical Architect helping pharmaceutical customers adopt the AWS Cloud for their data and analytics needs.

Arnab Ghosh

Arnab Ghosh is a Sr. Solutions Architect for AWS in North America helping enterprise customers build resilient and cost-efficient architectures. He has over 15 years of experience in architecting, designing, and developing enterprise applications solving complex business problems.

Dom Bavaro

Dom is a Sr. Solutions Architect at Amazon Web Services (AWS) in New York. Dom brings over 10 years of experience in complex infrastructure and solution design and implementation. He specializes in Storage and AI/ML.

Eugene Kim

Eugene is a Sr. Solutions Architect at Amazon Web Services (AWS) in New York and brings over 15 years of experience in designing and implementing scalable and complex solutions on AWS. He specializes in container and serverless technologies.