Containers

Implementing Pod Security Standards in Amazon EKS

Introduction

Securely adopting Kubernetes includes preventing unwanted changes to clusters. Unwanted changes can disrupt cluster operations and even compromise cluster integrity. Introducing pods that lack correct security configurations is an example of an unwanted cluster change. To control pod security, Kubernetes provided Pod Security Policy (PSP) resources. PSPs specify a set of security settings that pods must meet before they can be created or updated in a cluster. However, as of Kubernetes version 1.21, PSPs have been deprecated, and they have been removed in Kubernetes version 1.25.

The Kubernetes project documented why PSPs were deprecated. Simply put, PSPs were confusing to the majority of users. This confusion resulted in many misconfigurations; clusters were impaired or left unprotected by overly restrictive or overly permissive settings. How PSPs were applied was not obvious to many users. PSPs also lacked capabilities, such as dry-run and audit modes, that would have made them easier to add to existing clusters while gauging cluster impact. Finally, because of PSP implementation details, it wasn’t possible to enable PSPs by default. All of this precipitated the need for a new, more user-friendly, and deterministic solution for pod security, one that remained built into Kubernetes.

In Kubernetes, PSPs are replaced with Pod Security Admission (PSA), a built-in admission controller that implements the security controls outlined in the Pod Security Standards (PSS). PSA and PSS both reached beta states in Kubernetes 1.23, and were enabled in Amazon Elastic Kubernetes Service (Amazon EKS) 1.23 by default.

Note: As an alternative to PSA, you can use Policy-as-Code (PaC) solutions, which are available from the open-source software community. For more information about PaC solutions, including how to select the most appropriate solution for your needs, please see this blog series (Policy-based countermeasures for Kubernetes Part 1 and Part 2).

Kubernetes users can begin moving pod security to PSA or PaC solutions now, because both can coexist with PSPs in the same cluster. As mentioned in our Amazon EKS Best Practices Guide, it’s a best practice to ease adoption and migration by running PSA or PaC solutions alongside PSPs until PSPs are removed from your clusters. For additional guidance on migrating from PSPs to PSA, review the Kubernetes documentation on this topic.

Note: To identify pods in your clusters that are annotated to use PSPs, use the following kubectl snippet:

kubectl get pod -A \
-o jsonpath='{range .items[?(@.metadata.annotations.kubernetes\.io/psp)]}{.metadata.name}{"\t"}{.metadata.annotations.kubernetes\.io/psp}{"\t"}{.metadata.namespace}{"\n"}{end}'

Walkthrough

Pod Security Standards (PSS) and Pod Security Admission (PSA)

According to the Kubernetes PSS documentation, the PSS “define three different policies to broadly cover the security spectrum. These policies are cumulative and range from highly-permissive to highly-restrictive.”

The policy levels are defined in the Kubernetes documentation as:

  • Privileged: Unrestricted policy, providing the widest possible level of permissions. This policy allows for known privilege escalations.
  • Baseline: Minimally restrictive policy which prevents known privilege escalations. Allows the default (minimally specified) pod configuration.
  • Restricted: Heavily restricted policy, following current pod hardening best practices.

The PSA admission controller implements the controls outlined by the PSS policies via three modes of operation:

  • enforce: Policy violations will cause the pod to be rejected.
  • audit: Policy violations trigger the addition of an audit annotation to the event recorded in the audit log, but are otherwise allowed.
  • warn: Policy violations will trigger a user-facing warning, but are otherwise allowed.

Built-in Pod Security admission enforcement

As mentioned above, as of Kubernetes version 1.23, the PodSecurity feature gate is a beta feature that is enabled by default in Amazon EKS. Amazon EKS uses the default PSS and PSA settings from upstream Kubernetes version 1.23, which are listed in the following code:

...
    defaults:
      enforce: "privileged"
      enforce-version: "latest"
      audit: "privileged"
      audit-version: "latest"
      warn: "privileged"
      warn-version: "latest"
    exemptions:
      # Array of authenticated usernames to exempt.
      usernames: []
      # Array of runtime class names to exempt.
      runtimeClasses: []
      # Array of namespaces to exempt.
      namespaces: []
...

The above settings configure the following cluster-wide scenario:

  • No PSA exemptions are configured at Kubernetes API server startup.
  • The Privileged PSS profile is configured by default for all PSA modes, and set to latest versions.

These default settings minimize impact to clusters and reduce negative impact to applications. As we will see, Namespace labels can be used to opt in to more restrictive settings.
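For example, under these defaults, even a pod that requests privileged mode is admitted into an unlabeled Namespace. The following is a minimal sketch that reuses the test image from this post; the pod name is hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: priv-test
  namespace: default
spec:
  containers:
    - name: priv-test
      image: public.ecr.aws/r2l1x4g2/go-http-server:v0.1.0-23ffe0a715
      securityContext:
        # Admitted because the cluster-wide enforce mode defaults to the privileged profile
        privileged: true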

Pod security admission labels for Namespaces

Given the above default configuration, to opt Namespaces into more restrictive pod security, you configure specific PSA modes and PSS profiles at the Kubernetes Namespace level. Using Kubernetes labels on a Namespace, you choose which of the predefined PSS levels to apply to pods in that Namespace, and the labels you select define what action PSA takes if a potential violation is detected. As seen in the following code, you can configure any or all modes, and even set a different level for each mode. For each mode, there are two possible labels that determine the policy used.


# The per-mode level label indicates which policy level to apply for the mode.
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# LEVEL must be one of `privileged`, `baseline`, or `restricted`.
pod-security.kubernetes.io/<MODE>: <LEVEL>

# Optional: per-mode version label that can be used to pin the policy to the
# version that shipped with a given Kubernetes minor version (for example v1.24).
#
# MODE must be one of `enforce`, `audit`, or `warn`.
# VERSION must be a valid Kubernetes minor version, or `latest`.
pod-security.kubernetes.io/<MODE>-version: <VERSION>
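For instance, assuming the policy-test Namespace already exists, the following sketch sets a different level per mode with kubectl alone, enforcing the baseline profile while auditing and warning against the stricter restricted profile:

kubectl label --overwrite ns policy-test \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted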

Below is an example of PSA and PSS Namespace configurations that can be used for testing. Please note that we didn’t include the optional PSA mode-version label; we used the cluster-wide default setting, latest. By uncommenting the desired labels in the following code, you can enable the PSA modes and PSS profiles you need for your respective Namespaces.

apiVersion: v1
kind: Namespace
metadata:
  name: policy-test
  labels:    
    # pod-security.kubernetes.io/enforce: privileged
    # pod-security.kubernetes.io/audit: privileged
    # pod-security.kubernetes.io/warn: privileged
    
    # pod-security.kubernetes.io/enforce: baseline
    # pod-security.kubernetes.io/audit: baseline
    # pod-security.kubernetes.io/warn: baseline
    
    # pod-security.kubernetes.io/enforce: restricted
    # pod-security.kubernetes.io/audit: restricted
    # pod-security.kubernetes.io/warn: restricted

An example of testing PSA and PSS, using the above approach, can be found at this GitHub repository. We will follow this example in this post.

Kubernetes Admission Controllers

In Kubernetes, an admission controller is a piece of code that intercepts requests to the Kubernetes API server before the requested objects are persisted to etcd and used to change the state of the cluster. Admission controllers can be mutating, validating, or both. When a Kubernetes API server starts, it loads the mutating and validating admission controllers configured for the cluster. For example, the following Amazon CloudWatch log indicates the validating admission controllers that are loaded when an Amazon EKS 1.23 API server starts.

Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodSecurityPolicy,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.

The implementation of PSA, listed in the preceding log as PodSecurity, is a validating admission controller, and it checks inbound pod specification requests for conformance to the specified PSS.

Prior to Kubernetes 1.23, PSA could be added to an existing Kubernetes cluster as an admission webhook, via a dynamic admission controller.

Note: For a quick intro into Kubernetes Dynamic Admission Controllers, please watch this short video from Containers from the Couch.

In the following flow, mutating and validating dynamic admission controllers (i.e., admission webhooks) are integrated into the Kubernetes API server request flow. In fact, the preceding Amazon CloudWatch log indicated that the ValidatingAdmissionWebhook admission controller was loaded when the Amazon EKS 1.23 API server started. The webhooks call out to services configured to respond to certain types of API server requests. For example, you can use webhooks to configure dynamic admission controllers to validate that containers in a pod run as non-root users, or that container images are sourced from specific registries.

Diagram of Kubernetes API server request flow
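To make the webhook pattern concrete, the following is a minimal ValidatingWebhookConfiguration sketch that intercepts pod create and update requests. The webhook name, service, namespace, and path are hypothetical placeholders, and a real configuration also needs a CA bundle and a backing service:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy-webhook          # hypothetical name
webhooks:
  - name: pod-policy.example.com    # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
      # Intercept pod CREATE and UPDATE requests
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        # Hypothetical in-cluster service that performs the validation
        namespace: webhook-system
        name: pod-policy-service
        path: /validate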

Using PSA and PSS

PSA is a built-in admission controller, loaded at API server startup, that enforces the policies outlined in PSS. The PSS policies define a set of pod security profiles. In the following diagram, we outline how PSA and PSS work together, with pods and namespaces, to define pod security profiles and apply admission control based on those profiles. As seen in the following diagram, the PSA enforcement modes and PSS policies are defined as labels in the target Namespaces.

Diagram of PSA/PSS in a Kubernetes cluster, with Namespace integration

Setup for testing pod security in your Amazon EKS cluster

The following is a guide designed to help you get started with PSA and PSS in your Amazon EKS cluster. To follow along with our guide, you need an AWS account, an Amazon EKS 1.23 cluster, and the following tools, all of which are used in the steps below: kubectl, eksctl, git, and a bash-compatible shell.

Note: Please see the Amazon EKS documentation for creating a cluster.

The sequence of operations for testing pod security with PSA and PSS is provided in the following list.

  1. Clone the psa-pss-testing GitHub repository.
  2. If you don’t already have one, create an Amazon EKS 1.23 cluster. The default PSA and PSS configurations will be present when the cluster is created. The simplest way to create an Amazon EKS cluster is with the single eksctl command shown below.
    eksctl create cluster --name <CLUSTER_NAME> --region <AWS_REGION> --version 1.23
  3. Once the cluster and node group are created and your kubectl config has been modified to connect to your Amazon EKS cluster, run the following eksctl command to enable Amazon CloudWatch logging for the newly minted cluster.
    eksctl utils update-cluster-logging --enable-types=all --region=<AWS_REGION> \
    --cluster=<CLUSTER_NAME>
  4. Verify that you can connect to your cluster: kubectl version
  5. Within the cloned GitHub repository, modify psa-pss-testing/tests/0-ns.yaml to configure the policy-test Namespace with the PSA and PSS labels for your desired security settings. (A tip for previewing label changes follows this list.)
  6. Run the psa-pss-testing/tests/test.sh bash script to run your tests by applying the test resources to your Amazon EKS cluster.
  7. Verify that the test results match the configured PSA and PSS settings.
  8. Run the psa-pss-testing/tests/clean.sh bash script to clean up any resources created in your cluster during testing.
  9. Repeat steps 5–8 to change the PSA and PSS settings for the policy-test Namespace and re-run the tests.
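Tip: before applying stricter labels to a Namespace that already has running workloads, you can preview the impact with a server-side dry run. PSA evaluates the existing pods against the proposed level and returns warnings without changing anything; a sketch against the policy-test Namespace:

kubectl label --dry-run=server --overwrite ns policy-test \
  pod-security.kubernetes.io/enforce=restricted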

PSA and PSS testing and results

In the test scenarios below, several tests are executed against an Amazon EKS 1.23 cluster, with the Pod Security Admission (PSA) modes and the Pod Security Standards (PSS) Privileged profile enabled by default. The testing is designed to exercise different PSA modes and PSS profiles, while producing the following responses:

  • Allowing pods that meet profile requirements
  • Disallowing pods that don’t meet profile requirements
  • Allowing Deployments, even if Pods did not meet PSS profile requirements
  • Failure (forbidden) responses to Kubernetes API server clients when pods don’t meet profile requirements
  • Deployment resource statuses, reflecting failed (forbidden) pods
  • Kubernetes API server logs with PSA controller responses, reflecting failed (forbidden) Pods
  • Warning messages to Kubernetes API server clients when deployments contained pod specs that failed PSS profile requirements

Test setup

The following resources are used during all test scenarios. Changes are only made to the policy-test Namespace, to adjust the labels for desired PSA modes and PSS profiles. As the Namespace labels are adjusted, the tests yield different results because different levels of pod security are applied to the policy-test Namespace.
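Between test runs, you can confirm which labels are currently active on the Namespace:

kubectl get ns policy-test --show-labels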

Kubernetes resources:

  • policy-test Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: policy-test
  labels:    
    # pod-security.kubernetes.io/enforce: privileged
    # pod-security.kubernetes.io/audit: privileged
    # pod-security.kubernetes.io/warn: privileged
    
    # pod-security.kubernetes.io/enforce: baseline
    # pod-security.kubernetes.io/audit: baseline
    # pod-security.kubernetes.io/warn: baseline
    
    # pod-security.kubernetes.io/enforce: restricted
    # pod-security.kubernetes.io/audit: restricted
    # pod-security.kubernetes.io/warn: restricted
  • Known good (based on documented PSS profiles) Kubernetes deployment created and then deleted
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: policy-test
...
    spec: 
      containers:
      - name: test
        image: public.ecr.aws/r2l1x4g2/go-http-server:v0.1.0-23ffe0a715
        imagePullPolicy: IfNotPresent
        securityContext:  
          allowPrivilegeEscalation: false  
          runAsUser: 1000  
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          capabilities:
            drop: ["ALL"]  
          seccompProfile:
            type: "RuntimeDefault"
        ports:
        - containerPort: 8080
...        
  • Known bad (based on documented PSS profiles) Kubernetes deployment created
    • No securityContext element at the pod level
    • No securityContext element at the container level
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: policy-test
...
    spec: 
      containers:
      - name: test
        image: public.ecr.aws/r2l1x4g2/go-http-server:v0.1.0-23ffe0a715
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
...        
  • Known bad (based on documented PSS profiles) Kubernetes pod created
    • No securityContext element at the pod level
    • No securityContext element at the container level
apiVersion: v1
kind: Pod
metadata:
  name: test
  namespace: policy-test
...
spec:
  containers:
    - name: test
      image: public.ecr.aws/r2l1x4g2/go-http-server:v0.1.0-23ffe0a715
      imagePullPolicy: IfNotPresent
      ports:
      - containerPort: 8080
...
  • Known bad (based on documented PSS profiles) Kubernetes pod created
    • securityContext element at the pod level exists with valid runAsUser and runAsNonRoot
    • No securityContext element at the container level
apiVersion: v1
kind: Pod
metadata:
  name: test2
  namespace: policy-test
...
spec:
  securityContext:  
    runAsUser: 1000  
    runAsNonRoot: true
  containers:
    - name: test
      image: public.ecr.aws/r2l1x4g2/go-http-server:v0.1.0-23ffe0a715
      imagePullPolicy: IfNotPresent
      ports:
      - containerPort: 8080
...
  • Known bad (based on documented PSS profiles) Kubernetes pod created
    • No securityContext element at the pod level
    • securityContext element at the container level exists with disallowed settings for allowPrivilegeEscalation, readOnlyRootFilesystem, and runAsNonRoot
apiVersion: v1
kind: Pod
metadata:
  name: test3
  namespace: policy-test
...
spec:
  containers:
    - name: test
      image: public.ecr.aws/r2l1x4g2/go-http-server:v0.1.0-23ffe0a715
      imagePullPolicy: IfNotPresent
      securityContext:  
        allowPrivilegeEscalation: true  
        runAsUser: 1000  
        readOnlyRootFilesystem: false
        runAsNonRoot: false
        capabilities:
          drop: ["ALL"]  
        seccompProfile:
          type: "RuntimeDefault"
      ports:
...
  • Known bad (based on documented PSS profiles) pod created
    • No securityContext element at the pod level
    • securityContext element at the container level exists with correct settings
    • Pod spec contains disallowed hostNetwork, hostPID, and hostIPC settings
apiVersion: v1
kind: Pod
metadata:
  name: test4
  namespace: policy-test
...
spec:
  hostNetwork: true
  hostPID: true
  hostIPC: true
  containers:
    - name: test
      image: public.ecr.aws/r2l1x4g2/go-http-server:v0.1.0-23ffe0a715
      imagePullPolicy: IfNotPresent
      securityContext:  
        allowPrivilegeEscalation: false  
        runAsUser: 1000  
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        capabilities:
          drop: ["ALL"]  
        seccompProfile:
          type: "RuntimeDefault"
      ports:
      - containerPort: 8080
...
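The test script applies these manifests in sequence. Conceptually, each run reduces to a series of kubectl apply calls like the sketch below, which lists only the pod manifests whose file names appear in the test output later in this post; the known-good and known-bad Deployment manifests are applied the same way:

kubectl apply -f tests/0-ns.yaml    # Namespace with the desired PSA/PSS labels
kubectl apply -f tests/3-pod.yaml   # pod with no securityContext at any level
kubectl apply -f tests/4-pod.yaml   # pod-level securityContext only
kubectl apply -f tests/5-pod.yaml   # container securityContext with disallowed settings
kubectl apply -f tests/6-pod.yaml   # disallowed hostNetwork/hostPID/hostIPC settings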

Testing scenarios

The aforementioned psa-pss-testing GitHub repository contains several scenarios for testing all the PSA modes with all the PSS profiles on a Kubernetes 1.23 cluster. In this post we will explore two scenarios (Scenarios 3 and 4) that test all three PSA modes with the Baseline and Restricted PSS profiles. The scenarios aren’t meant to be an exhaustive list of settings allowed or disallowed by the PSS profiles. Instead, the scenarios are meant to demonstrate how PSA works with PSS to provide desired Pod security.

Scenario 3 – All PSA modes enabled for baseline PSS profile (Namespace level)

  • Namespace Config
apiVersion: v1
kind: Namespace
metadata:
  name: policy-test
  labels:    
    # pod-security.kubernetes.io/enforce: privileged
    # pod-security.kubernetes.io/audit: privileged
    # pod-security.kubernetes.io/warn: privileged
    
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/warn: baseline
    
    # pod-security.kubernetes.io/enforce: restricted
    # pod-security.kubernetes.io/audit: restricted
    # pod-security.kubernetes.io/warn: restricted
  • Test Output – Namespace-level PSS baseline profile applied (4 pods allowed, 1 pod disallowed)
namespace/policy-test created

>>> 1. Good config...
deployment.apps/test created
deployment.apps "test" deleted

>>> 2. Deployment - Missing container security context element...
deployment.apps/test created

>>> 3. Pod - Missing container security context element...
pod/test created

>>> 4. Pod - Pod security context, but Missing container security context element...
pod/test2 created

>>> 5. Pod - Container security context element present, with incorrect settings...
pod/test3 created

>>> 6. Pod - Container security context element present, with incorrect spec.hostNetwork, spec.hostPID, spec.hostIPC settings...
Error from server (Forbidden): error when creating "policy/psa-pss/tests/6-pod.yaml": pods "test4" is forbidden: violates PodSecurity "baseline:latest": host namespaces (hostNetwork=true, hostPID=true, hostIPC=true), hostPort (container "test" uses hostPort 8080)

kubectl -n policy-test get po
NAME                   READY   STATUS    RESTARTS   AGE
test                   1/1     Running   0          46s
test-59955f994-6tbj7   1/1     Running   0          52s
test2                  1/1     Running   0          42s
test3                  1/1     Running   0          37s

Scenario 4 – All PSA modes enabled for restricted PSS profile (Namespace level)

  • Namespace config
apiVersion: v1
kind: Namespace
metadata:
  name: policy-test
  labels:
    # pod-security.kubernetes.io/enforce: privileged
    # pod-security.kubernetes.io/audit: privileged
    # pod-security.kubernetes.io/warn: privileged
    
    # pod-security.kubernetes.io/enforce: baseline
    # pod-security.kubernetes.io/audit: baseline
    # pod-security.kubernetes.io/warn: baseline
    
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
  • Test output: Namespace-level PSS restricted profile applied (0 pods allowed, 5 pods disallowed)
    • 1 deployment created, with 0 pods allowed
namespace/policy-test created


>>> 1. Good config...
deployment.apps/test created
deployment.apps "test" deleted


>>> 2. Deployment - Missing container security context element...
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/test created


>>> 3. Pod - Missing container security context element...
Error from server (Forbidden): error when creating "policy/psa-pss/tests/3-pod.yaml": pods "test" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")


>>> 4. Pod - Pod security context, but Missing container security context element...
Error from server (Forbidden): error when creating "policy/psa-pss/tests/4-pod.yaml": pods "test2" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test" must set securityContext.capabilities.drop=["ALL"]), seccompProfile (pod or container "test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")


>>> 5. Pod - Container security context element present, with incorrect settings...
Error from server (Forbidden): error when creating "policy/psa-pss/tests/5-pod.yaml": pods "test3" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false), runAsNonRoot != true (container "test" must not set securityContext.runAsNonRoot=false)


>>> 6. Pod - Container security context element present, with incorrect spec.hostNetwork, spec.hostPID, spec.hostIPC settings...
Error from server (Forbidden): error when creating "policy/psa-pss/tests/6-pod.yaml": pods "test4" is forbidden: violates PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true, hostIPC=true), hostPort (container "test" uses hostPort 8080)

kubectl -n policy-test get po
No resources found in policy-test namespace.

kubectl -n policy-test get deploy test -oyaml
...
status:
  conditions:
...
  - lastTransitionTime: "2022-07-12T23:56:10Z"
    lastUpdateTime: "2022-07-12T23:56:10Z"
    message: 'pods "test-59955f994-wl8hf" is forbidden: violates PodSecurity "restricted:latest":
      allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false),
      unrestricted capabilities (container "test" must set securityContext.capabilities.drop=["ALL"]),
      runAsNonRoot != true (pod or container "test" must set securityContext.runAsNonRoot=true),
      seccompProfile (pod or container "test" must set securityContext.seccompProfile.type
      to "RuntimeDefault" or "Localhost")'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
...

Testing assumptions

Given the default PSA and PSS settings of the Amazon EKS 1.23 cluster and how PSA and PSS function, the following testing assumptions were made.

  • The PSA enforce mode only affects Pods, and does not affect workload resource controllers (Deployments, etc.) that create Pods.
  • No PSA exemptions are configured at API server startup, for the PSA controller.
  • The Privileged PSS profile is configured by default for all PSA modes and is set to latest versions. This can be changed via Namespace labels.

Testing outcomes

Given our testing setup and scenarios, and the fact that Amazon EKS uses upstream Kubernetes, the following outcomes were observed.

  • PSA modes (audit, enforce, and warn) functioned as expected in Amazon EKS 1.23
  • PSS profiles (Privileged, Baseline, and Restricted) functioned as expected in Amazon EKS 1.23

User experience (UX)

When used independently, the PSA modes have different responses that result in different user experiences. The enforce mode prevents Pods from being created if the respective Pod specs violate the configured PSS profile. However, in this mode, non-Pod Kubernetes objects that create Pods, such as Deployments, won’t be prevented from being applied to the cluster, even if the Pod spec therein violates the applied PSS profile. In this case, the Deployment is applied while the Pods are prevented from being applied.

In some scenarios, this makes for a confusing user experience, as there is no immediate indication that the successfully applied Deployment object conceals failed Pod creation; the offending Pod specs simply never produce Pods. Inspecting the Deployment resource with kubectl get deploy <DEPLOYMENT_NAME> -oyaml exposes the message from the failed Pod(s) in the Deployment’s .status.conditions element, as seen in our testing above.
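If you only want the failure message itself, a jsonpath query against the Deployment’s status conditions is a convenient shortcut; a sketch using the test Deployment from this post:

kubectl -n policy-test get deploy test \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'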

In both the audit and warn PSA modes, the Pod restrictions don’t prevent violating Pods from being created and started. However, in these modes, audit annotations are added to API server audit log events and warnings are returned to API server clients (e.g., kubectl), respectively. This occurs when Pods, as well as objects that create Pods, contain Pod specs with PSS violations. A kubectl Warning message is seen in the following output.

deployment.apps/test created
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "test" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "test" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "test" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "test" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

While the Deployment was created, its Pods were not. For a better user experience, it’s a best practice to keep the warn and audit modes enabled at all times.
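One way to apply this practice is to enforce a less restrictive profile while warning and auditing against a stricter one, so that violations surface before you tighten enforcement; a sketch:

apiVersion: v1
kind: Namespace
metadata:
  name: policy-test
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted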

Cleaning up

If you created an Amazon EKS cluster using the eksctl method from above, then eksctl created two AWS CloudFormation stacks: one for the cluster and one for the node group. Use the following single eksctl command to delete the cluster, the node group, and all other AWS resources created with the original eksctl create cluster command. It also removes the cluster’s entries from your local kubectl config.

eksctl delete cluster --name <CLUSTER_NAME> --wait

Since all the created resources are part of AWS CloudFormation stacks, you can also clean up by deleting the stacks directly; in that case, you would need to explicitly remove the cluster entries from your local kubectl config.
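If you take the direct route, a sketch of the stack deletions follows; the stack names assume eksctl’s default naming convention, so verify them first with aws cloudformation list-stacks:

aws cloudformation delete-stack --stack-name eksctl-<CLUSTER_NAME>-nodegroup-<NODEGROUP_NAME>
aws cloudformation delete-stack --stack-name eksctl-<CLUSTER_NAME>-cluster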

Conclusion

In this post, we showed you how to apply various Pod security standards in Amazon EKS. The Kubernetes project has decided to replace PSP with PSA and PSS; the two work together, with PSA implementing the security controls outlined in PSS. Both PSA and PSS are beta features in Kubernetes 1.23, and you can run them in the same cluster as PSPs. This aids adoption and migration before PSPs are removed from Kubernetes in version 1.25. The default configurations of PSA and PSS are part of Amazon EKS 1.23, and Kubernetes Namespaces can be configured with labels to opt in to Pod security defined by PSS and implemented by PSA. With appropriate policies, you can successfully replace PSPs.

Try PSA and PSS!

If you are looking for a replacement for PSPs, then you should look into PSA and PSS. Even if you are not using Kubernetes 1.23 yet, you can still use PSA and PSS via the aforementioned Pod Security Admission webhook, installed as a dynamic admission controller. Now is the time to try PSA and PSS and discover whether the solution fits your Pod security needs. Take advantage of the testing approach we shared in this post to help you verify whether PSA and PSS work for you. And, if you’re curious about how PSA and PSS compare to Policy-as-Code (PaC) solutions as potential PSP replacements, check out our EKS Best Practices Guide for related Pod security topics.

Check out our Containers Roadmap!

If you have ideas about how we can improve Amazon container services, then please use our Containers Roadmap to provide us feedback and review our existing roadmap items.

Jayaprakash Alawala

Jayaprakash Alawala is a Sr Container Specialist Solutions Architect at AWS. He helps customers with application modernization and building large-scale applications leveraging various AWS services. He has expertise in the areas of containers, microservices, DevOps, security, cost optimization (including EC2 Spot), and technical training. Outside of work, he loves spending time reading and traveling. You can reach him on Twitter @JP_Alawala.

Jimmy Ray

Jimmy Ray is a Developer Advocate on the AWS Kubernetes team. He focuses on container and Software Supply Chain security, and Policy-as-Code.