Amazon EKS now supports Kubernetes version 1.25


The Amazon Elastic Kubernetes Service (Amazon EKS) team is pleased to announce support for Kubernetes version 1.25 for Amazon EKS and Amazon EKS Distro. Amazon EKS Anywhere (release 0.14.2) also supports Kubernetes 1.25. The theme for this version was chosen to recognize both the diverse components that the project comprises and the individuals who contributed to it. Hence, the fitting release name, “Combiner”. Combiner highlights the collaborative nature of open source and its impact, both in general and with specific regard to Kubernetes. The Kubernetes release team stated the following, “With this release, we wish to honor the collaborative, open spirit that takes us from isolated developers, writers, and users spread around the globe to a combined force capable of changing the world.”

Prerequisites for upgrade

Nowadays, it is not uncommon for API versions to be deprecated when a new version of Kubernetes is released. When this happens, it is imperative that you update all manifests and controllers that reference these deprecated APIs. A list of the API versions that were deprecated in version 1.25 can be found in the Deprecated API Migration Guide. Among the API versions removed in 1.25 was the v1beta1 version of the EndpointSlice API (discovery.k8s.io/v1beta1). If you are using the AWS Load Balancer Controller with the enable-endpoint-slices flag, you will have to upgrade the controller to v2.4.7 or later before you upgrade to Kubernetes 1.25. If you are unsure which version of the controller you are currently running, you can run the following command to get the version number:

kubectl get deployment aws-load-balancer-controller -n kube-system -o jsonpath='{.spec.template.spec.containers[*].image}'

The version number will appear after the colon in the image tag, e.g., v2.4.7.

Instructions for installing or upgrading to the latest version of the AWS Load Balancer Controller can be found in the documentation. Failure to upgrade the controller to v2.4.7 or later before upgrading to Amazon EKS 1.25 could cause a disruption to your workloads.

Kubernetes 1.25 highlights

This post covers some of the enhancements in the Kubernetes version 1.25 release. The most notable change is the removal of Pod Security Policies (PSPs). Deprecated since Kubernetes 1.21, PSPs have been succeeded by the Pod Security Admission (PSA) controller, which has graduated to stable in 1.25. In addition, a number of security changes have been introduced or improved upon. Some of the security-related features include Network Policy port ranges and the use of RuntimeDefault as the default seccomp profile. Another security-related development is the removal of wildcard queries in the CoreDNS version that ships with this release. It's also worth noting the ongoing effort to move away from in-tree storage drivers. In Kubernetes 1.25, a number of in-tree storage plugins have been removed and the Container Storage Interface (CSI) migration has graduated to stable.

To review the complete list of enhancements, you can refer to the Kubernetes change log.

Pod Security Admission graduates to Stable and Pod Security Policy removed

PSA graduated to beta in Kubernetes 1.23 as the successor to PSPs for implementing pod security measures. These measures are defined by the Pod Security Standards (PSS) and allow you to restrict the behavior of your pods at the namespace level. In Kubernetes 1.25, PSA is stable and enabled by default in Amazon EKS. Meanwhile, PSP has been completely removed from the Kubernetes API and can no longer be used. For customers looking to migrate their workloads from PSPs to PSA, this post offers guidance around the transition. The official Kubernetes documentation also provides assistance on this migration. You can also refer to our PSP removal FAQ. As documented in the Amazon EKS Best Practices guide, customers can also consider the use of open source Policy-as-Code (PaC) solutions as an alternative to PSA. For additional information on Amazon EKS add-on compatibility with Amazon EKS 1.25, please refer to this documentation.
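As a sketch of how PSA is typically applied, the namespace below opts into the restricted Pod Security Standard through labels. The namespace name is hypothetical; the label keys and values are the standard PSA ones:

```yaml
# Illustrative namespace enforcing the "restricted" Pod Security Standard.
apiVersion: v1
kind: Namespace
metadata:
  name: secure-apps   # hypothetical namespace name
  labels:
    # Reject pods that violate the restricted standard.
    pod-security.kubernetes.io/enforce: restricted
    # Surface warnings for pods that would violate the baseline standard.
    pod-security.kubernetes.io/warn: baseline
```

Because PSA operates at the namespace level, applying labels like these is the typical replacement for a cluster-wide PSP.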

Ephemeral containers reach stable state

Ephemeral containers were first introduced in Kubernetes 1.16 as special-purpose containers designed to enhance the process of debugging running pods. An ephemeral container is added to the running pod that you intend to debug and has access to its containers' file systems and process namespace. This type of container is useful for troubleshooting exercises but isn't meant to be used for normal application deployments. As such, ephemeral containers can't be configured the way you typically would other containers, with fields like ports, readinessProbes, and livenessProbes. This feature is now generally available, allowing you to debug and run inspections on your workloads in Amazon EKS 1.25.
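In practice, ephemeral containers are usually attached with kubectl debug rather than defined in a manifest. A minimal sketch against a live cluster; the pod name, target container name, and image are placeholders:

```shell
# Attach an ephemeral debug container to a running pod, sharing the
# process namespace of the pod's "app" container so its processes
# and file system are visible from the debug shell.
kubectl debug -it my-app-pod --image=busybox:1.36 --target=app
```

When the debug session ends, the ephemeral container stays in the pod's status but can't be restarted, which is why these containers suit one-off inspections rather than long-lived workloads.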

Network Policy port ranges are stable

In previous versions of Kubernetes, when applying port restrictions with a Network Policy, you had to specify each target port that the rules should apply to. With the graduation of endPort to stable, customers can now declare a port range, which simplifies the restriction process. Without a Network Policy, all pods on a Kubernetes cluster can communicate with each other by default. This design simplifies initial adoption and remains the default configuration. However, it's highly encouraged to adopt Network Policies for production workloads to secure east-west network connections between pods. Network Policies implement allow-based firewall rules. Amazon EKS installs the Amazon VPC CNI as the default Container Network Interface (CNI) for pod networking. As of today, Amazon EKS is compatible with policy engines such as Calico, Cilium, and Weave. You can read more about implementing Network Policies in Amazon EKS in the EKS Best Practices guide on networking.
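The endPort field can be sketched as follows; the policy name, labels, and port numbers are illustrative only:

```yaml
# Illustrative policy allowing ingress to a contiguous port range.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-range   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app          # hypothetical workload label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: client # hypothetical client label
      ports:
        - protocol: TCP
          port: 8000
          endPort: 8080    # the rule now covers ports 8000-8080
```

Before endPort, the same policy would have needed a separate ports entry for every port in the range.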

SeccompDefault is now beta

Seccomp is a security feature in the Linux kernel that can be used to restrict the behavior of containers on your nodes. The use of seccomp profiles has become increasingly important given the number of security risks that go hand in hand with running container workloads. In Amazon EKS 1.25, this feature is disabled by default. As such, customers that wish to apply these stricter security profiles on their nodes have to enable the feature and apply the --seccomp-default flag when configuring the kubelet. The nodes will then use the RuntimeDefault seccomp profile rather than the Unconfined (i.e., seccomp disabled) mode. In some scenarios, not all of your containers require the same level of syscall restriction, so you can set a custom seccomp profile for select workloads. Alternatively, you can apply taints to dedicated nodes configured with the RuntimeDefault seccomp profile and deploy workloads that require higher restriction levels with the appropriate tolerations. Customers can also use the Security Profiles Operator to create and propagate seccomp profiles to enforce security on their nodes.
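Individual workloads can also opt into RuntimeDefault through their security context, regardless of the kubelet-level setting. A minimal sketch; the pod name and image are placeholders:

```yaml
# Illustrative pod opting into the runtime's default seccomp profile.
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo        # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault  # use the container runtime's default profile
  containers:
    - name: app
      image: busybox:1.36   # placeholder image
      command: ["sleep", "infinity"]
```

Setting the profile in the pod spec is often a lower-risk first step than flipping the node-wide default, since it lets you validate one workload at a time.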

Cgroups v2 is stable

Control groups (Cgroups) are a Linux kernel feature that allow you to manage resources for running processes. With Cgroups, you can allocate and limit the usage of CPU, memory, network, disk I/O, etc. This particular enhancement relates to Kubernetes compatibility with Cgroups API version 2, which is now stable in 1.25. When working with Cgroups v2 in Amazon EKS 1.25, customers should review the new configuration values to see some of the changes to the ranges of values for resources (e.g., cpu.weight changes from a range of [2-262144] to [1-10000]). At present, Amazon EKS does not offer an optimized AMI that supports cgroups v2; its availability is currently being tracked in this issue.
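If you're unsure which cgroup version a node is running, one common check is to inspect the filesystem type mounted at /sys/fs/cgroup (run on the node itself):

```shell
# Reports the filesystem type backing the cgroup hierarchy.
# "cgroup2fs" indicates cgroups v2; "tmpfs" indicates cgroups v1.
stat -fc %T /sys/fs/cgroup
```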

DaemonSet maxSurge is stable

With this feature, customers can control the maximum number of pods that can be created in place of old ones during a rolling update. To make use of this, you can set the desired number of pods in the optional spec.updateStrategy.rollingUpdate.maxSurge field. This value can also be expressed as a percentage of the existing desired pods. This feature has graduated to stable and is enabled by default in Amazon EKS 1.25.
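A sketch of the field in context; the DaemonSet name, labels, and image are placeholders:

```yaml
# Illustrative DaemonSet allowing one surge pod per node during rolling updates.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent           # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # may also be a percentage, e.g. "25%"
      maxUnavailable: 0      # must be 0 when maxSurge is set
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: busybox:1.36   # placeholder image
          command: ["sleep", "infinity"]
```

With maxSurge set, the new pod starts on a node before the old one is torn down, which avoids a gap in per-node coverage during the rollout.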

Local ephemeral storage capacity isolation is stable

Kubernetes supports both persistent and ephemeral storage. Ephemeral storage is useful for things like caching, sharing transient data between multi-container pods, and logging. This feature, first introduced in Kubernetes 1.7, provides a way for you to isolate and limit the local ephemeral storage consumed by a pod. Similar to CPU and memory resource management, you can set limits as well as reserve ephemeral storage with resource requests. A pod's local ephemeral storage shares the pod's lifecycle and doesn't extend beyond it. Through the use of hard limits, pods can be evicted if they exceed their configured capacity. In Kubernetes 1.25, this feature is now generally available (GA).
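Requests and limits for ephemeral storage are declared alongside CPU and memory. A minimal sketch; the pod name, image, and sizes are placeholders:

```yaml
# Illustrative pod reserving and capping local ephemeral storage.
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo       # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.36    # placeholder image
      command: ["sleep", "infinity"]
      resources:
        requests:
          ephemeral-storage: "1Gi"  # considered at scheduling time
        limits:
          ephemeral-storage: "2Gi"  # exceeding this can trigger eviction
```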

Core CSI migration is stable

Container Storage Interface (CSI) migration was introduced to solve the complexity surrounding in-tree storage plugins that were encoded into Kubernetes. The goal was to replace these in-tree plugins with corresponding out-of-tree CSI drivers (like Amazon EBS CSI). This approach decouples the storage provider plugin from the Kubernetes source code, improving the maintenance of both Kubernetes and the storage provider plugins. CSI migration is now generally available (GA), enabling the use of drivers that interact with workloads on your cluster through the CSI. In Amazon EKS 1.25, Amazon EBS CSI migration is enabled by default. However, to use Amazon EBS volumes for resources like StorageClass, PersistentVolume, and PersistentVolumeClaim, you need to install the corresponding Amazon EBS CSI driver as an Amazon EKS add-on. This particular development, however, doesn't apply to Amazon EKS Anywhere, which doesn't have any in-tree CSI drivers.
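After installing the add-on, new StorageClasses should reference the out-of-tree driver directly. A sketch, assuming a gp3-backed class (the class name is hypothetical):

```yaml
# Illustrative StorageClass backed by the out-of-tree Amazon EBS CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3              # hypothetical name
provisioner: ebs.csi.aws.com # CSI driver, not the legacy in-tree kubernetes.io/aws-ebs
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```

With migration enabled, PersistentVolumeClaims that still reference the in-tree kubernetes.io/aws-ebs provisioner are translated to the CSI driver behind the scenes, which is why the driver must be installed.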

Other updates

Updates to AWS IAM Authenticator

Amazon EKS 1.25 also includes enhancements to cluster authentication. You must add double quotes before and after curly braces in the aws-auth ConfigMap (found in the kube-system namespace) if a YAML value starts with a macro (i.e., the first character is a curly brace). This is required to ensure accurate parsing of the aws-auth ConfigMap by aws-iam-authenticator v0.6.3 used in Amazon EKS 1.25. For example, if you set your username to {{SessionName}} in your aws-auth ConfigMap without any double quotes, like:

username: {{SessionName}}

then you must update it to "{{SessionName}}". However, if the first character is not a curly brace, like:

username: admin:{{SessionName}}

then no action is required. If you don't make this change, it can lead to authentication failures in your cluster.
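Putting this together, a corrected aws-auth entry might look like the following; the role ARN and account ID are placeholders, and the quoted username is the part this release requires:

```yaml
# Illustrative aws-auth ConfigMap fragment. The username value starts
# with a curly brace, so it must be wrapped in double quotes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/example-role
      username: "{{SessionName}}"
      groups:
        - system:masters
```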


Customers can now create EKS clusters running Kubernetes version 1.25, and benefit from the features highlighted in this post, along with the other enhancements documented in the release notes. For customers that are currently running PSPs, it's highly recommended that you plan your workload migration as you move to PSA or a PaC solution. If you need assistance with upgrading your cluster to the latest Amazon EKS version, then you can refer to our documentation here. If you're still running an older version of Kubernetes such as 1.21 or 1.22, please consider upgrading to one of the newer supported versions. The end-of-life support for 1.21 clusters was February 16, 2023, and the end-of-life support for 1.22 clusters will be in May 2023. If you have more questions concerning Amazon EKS version support, refer to our FAQ.

Lukonde Mwila


Lukonde is a Senior Developer Advocate at AWS. He has years of experience in application development, solution architecture, cloud engineering, and DevOps workflows. He is a life-long learner and is passionate about sharing knowledge through various mediums. Nowadays, Lukonde spends the majority of his time contributing to the Kubernetes and cloud-native ecosystem.