Amazon EKS now supports Kubernetes 1.22
The Amazon Elastic Kubernetes Service (Amazon EKS) team is pleased to announce support for Kubernetes 1.22. Amazon EKS, Amazon EKS Distro, and Amazon EKS Anywhere can now run Kubernetes version 1.22. The upstream project theme for this release is “Reaching New Peaks.” Release lead Savitha Raghunathan explained the theme by noting that, in spite of the pandemic and burnout, Kubernetes 1.22 had the highest number of enhancements of any release. This release brings a significant number of API changes, a change to the Kubernetes release cadence, and many other updates. Thank you to the upstream Kubernetes 1.22 Release Team for all the work to bring this release to the greater cloud-native ecosystem.
Our commitments to customers and open source software
Security and reliability are hallmarks that make Amazon EKS the most trusted way for enterprise customers to start, run, and scale Kubernetes. In the release process for Amazon EKS 1.22, the intention was to ship with the latest version of etcd, 3.5.2 (which is recommended for Kubernetes 1.22). As you may recall, etcd is the brains of a Kubernetes cluster, as it is the backing store for all cluster data. In etcd 3.5, the community made many performance and reliability improvements. However, during testing the Amazon EKS team helped verify and consistently reproduce two data inconsistency issues. The Amazon EKS team is working closely with the upstream etcd team to help mitigate these two high-severity issues (1, 2), which could leave cluster data in an inconsistent state, and continues to work with the etcd community toward a fix. The Amazon EKS team prioritizes extensive testing over defaulting to the latest version of every cluster component. Our commitment to rigorous testing and collaboration will have a net positive impact on the open source etcd project and Kubernetes as a whole.
The etcd community has announced that users should hold off on etcd 3.5.1 and 3.5.2 pending a fix coming in etcd 3.5.3. The Amazon EKS team will work with the etcd community to contribute to the fixes, as well as test the upcoming etcd release with the same rigor and priority that our customers have come to expect. Open source cooperation and contributions are key to running any Kubernetes service at scale. As a result of our continued work with etcd developers, we are shipping Amazon EKS 1.22 with etcd v3.4, with plans to upgrade to etcd 3.5.3 (or later) once it is stable.
Kubernetes 1.22 highlights
If you’re interested in all of the notable features, you should read the Kubernetes Blog release post and the full release notes. Some highlights are covered below. For the complete Kubernetes 1.22 changelog, see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.22.md.
Kubernetes API and feature removals in 1.22
Kubernetes 1.22 removes many deprecated APIs. Before upgrading to Kubernetes 1.22, please migrate any manifests and API clients from the removed APIs to their now generally available equivalents. The Kubernetes blog post, Kubernetes API and Feature Removals In 1.22: Here’s What You Need To Know, details what’s been removed and what to use in its place now that these APIs are generally available. Please refer to the latest Amazon EKS Updating a cluster document for specifics on what Amazon EKS customers should do prior to upgrading. One helpful feature that graduated to stable is a warning mechanism for deprecated API use.
The kubectl-convert plugin can help manage some of the needed changes to manifests and applications. It converts config files between different API versions; both YAML and JSON formats are accepted. Check the What to do section of the Here’s What You Need To Know blog post for recommendations for migrating API versions. Here are some of the most notable of the several beta Kubernetes APIs going stable in Kubernetes 1.22:
- Ingress: The networking.k8s.io/v1beta1 API version of Ingress is no longer available in Kubernetes 1.22.
  - On any cluster running Kubernetes v1.19 or later, you can use the networking.k8s.io/v1 API to retrieve or update existing Ingress objects, even if they were created using an older API version.
  - If you are running version 2.4.0 of the aws-load-balancer-controller, the Ingress changes have already been implemented.
  - If you need to upgrade your aws-load-balancer-controller, those instructions can be found in the Installation Guide.
- Webhook resources: Migrate to the admissionregistration.k8s.io/v1 API versions of ValidatingWebhookConfiguration and MutatingWebhookConfiguration. You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version.
- CustomResourceDefinitions (CRDs) have been in use across many industries for quite some time, and it’s good to see them become stable. Writers of CRDs now have some assurance that the API will be much more stable going forward. Because there were so many changes to CRDs going from beta to stable, the Kubernetes community encourages users to upgrade and test their CRDs to make sure they behave in the intended manner.
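As a sketch of the migration workflow (the file names here are illustrative), the kubectl-convert plugin mentioned above can rewrite an old manifest to a supported API version:

```shell
# Convert an Ingress manifest from a deprecated API version to
# networking.k8s.io/v1, writing the result to a new file.
kubectl convert -f ingress-v1beta1.yaml --output-version networking.k8s.io/v1 > ingress-v1.yaml
```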
Kubernetes release cadence change
The Kubernetes upstream project, like most open source projects, is run by humans. The cadence going down to three releases a year is a major change for the project and a sign of its maturing feature set. The cadence change is also significant enough to warrant its own blog post as well: Kubernetes Release Cadence Change: Here’s What You Need To Know.
“Prior to this change, an enhancement could graduate from alpha to stable in nine months. With the change in cadence, this period will stretch to 12 months.” The change will also give regular release team members (SIG Release) much-needed breaks between releases and be more accommodating to post-COVID life. A notable technical advantage of this will be “slightly longer periods“ where security updates and bug fixes can be applied to existing Kubernetes versions. (It’s worth noting this is not a long-term support commitment for the upstream project.) And it means a given release might have more updates as there is more time to work on features before code freezes. As always, AWS is diligent about applying security patches and updates to the control plane automatically. Similarly, managed node groups give customers a mechanism to recycle their worker nodes if and when necessary. Amazon EKS is committed to supporting at least four production-ready versions of Kubernetes at any given time.
There is an ongoing Kubernetes community survey about release cadences. If you are a Kubernetes project contributor, a user of Kubernetes, or staff at a company that uses, resells, or hosts Kubernetes, please consider taking the survey.
Server-Side Apply is now generally available
“Server-Side Apply is a new object merge algorithm, as well as tracking of field ownership, running on the Kubernetes API server.” What does that mean? It helps users and even controllers manage their resources declaratively. Server-Side Apply is meant both as a replacement for the original kubectl apply and as a simpler mechanism for controllers to enact their changes. As a reminder, Amazon EKS add-ons use the Kubernetes Server-Side Apply feature. You can modify specific Amazon EKS-managed configuration fields for Amazon EKS add-ons through the Amazon EKS API. It’s also possible to modify configuration fields not managed by Amazon EKS directly within the Kubernetes cluster once the add-on starts. This includes defining specific configuration fields for an add-on where applicable.
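For example (the manifest file and field manager name here are illustrative), a user or controller can apply a manifest server-side and record ownership of the fields it sets:

```shell
# Apply the manifest on the server, recording "my-controller" as the
# field manager; conflicts with other field owners are reported
# instead of being silently overwritten.
kubectl apply --server-side --field-manager=my-controller -f deployment.yaml
```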
Pod Security Admission
In Kubernetes 1.25, PodSecurityPolicies are being replaced by Pod Security Standards (PSS) and Pod Security Admission (PSA). The Kubernetes Pod Security Standards define different isolation levels for pods. These standards let you define how you want to restrict the behavior of pods in a clear, consistent fashion. The PSA effort includes an admission controller webhook project that implements the controls defined in the PSS. We have updated the Amazon EKS Best Practices Guides accordingly to help customers start testing these new standards.
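To give a flavor of how the Pod Security Standards are applied through namespace labels (the namespace name and chosen levels here are illustrative), PSA is configured like this:

```yaml
# Hypothetical namespace that enforces the "restricted" Pod Security
# Standard and additionally warns on "baseline" violations.
apiVersion: v1
kind: Namespace
metadata:
  name: secure-apps
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: baseline
```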
The Kubernetes control plane sets an immutable label, kubernetes.io/metadata.name, on all namespaces, provided that the NamespaceDefaultLabelName feature gate is enabled. The value of the label is the namespace name. This is a beta feature.
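One practical use of this label (the policy and namespace names here are illustrative): because every namespace now carries it automatically, a NetworkPolicy can select a namespace by name without any manual labeling:

```yaml
# Allow ingress only from pods in the "monitoring" namespace,
# matched via the automatic kubernetes.io/metadata.name label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
```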
DaemonSet updates support maxSurge
With this beta feature, customers can specify the maximum number of nodes that can run an updated DaemonSet pod during a rolling update. The value can be an absolute number or a percentage of desired pods, and the default value is 0. This helps customers minimize disruption during DaemonSet updates.
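A sketch of what this looks like in a DaemonSet spec (the DaemonSet and image are illustrative); note that maxSurge requires maxUnavailable to be 0:

```yaml
# Illustrative DaemonSet that surges one updated pod per node before
# the old pod is removed (maxUnavailable must be 0 when surging).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: public.ecr.aws/example/agent:latest  # placeholder image
```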
Network policies can target a range of ports
When writing a NetworkPolicy, you can target a range of ports instead of a single port by using the endPort field, as in the following example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: port-range-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: web
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768
Amazon EKS is ending support for Dockershim
In version 1.20, Kubernetes deprecated Dockershim, the component that allows Kubernetes to use Docker as a container runtime. Docker is still fully functional in this release, but users will need to migrate to a different container runtime before Amazon EKS removes Dockershim support in version 1.24. At that point you’ll still be able to use Dockershim, but you’ll have to build your own AMI for that purpose; it will no longer be included in the optimized AMI that Amazon EKS vends. For information on how to install and manage clusters with containerd, a graduated CNCF project already in use by Fargate, Bottlerocket, and Windows workers, please see the previous release’s blog post on this topic.
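To check which runtime your nodes are using today, kubectl can report it directly (the column contents will reflect your own nodes):

```shell
# The CONTAINER-RUNTIME column shows docker://... or containerd://...
# for each node in the cluster.
kubectl get nodes -o wide
```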
Detector for Docker Socket (DDS)
The Amazon EKS team realizes that determining where Dockershim is being used in your environment can be difficult. To help our customers and the greater Kubernetes ecosystem, we have developed a tool for this: Detector for Docker Socket (DDS).
This detector tool is a kubectl plugin that can detect whether any of your workloads or manifest files mount the docker.sock volume. DDS inspects every pod in your Kubernetes cluster; if pods are part of a workload (e.g., a Deployment or StatefulSet), it inspects the workload type instead of the pods directly. The tool can also be used to scan Kubernetes manifest files on disk. More details can be found in the project’s README.
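If you want a quick manual spot check before running DDS (this one-liner is a rough hand-rolled equivalent, not part of the tool, and assumes jq is installed), you can list pods whose volumes point at the Docker socket:

```shell
# List namespace/name of pods mounting a hostPath volume at
# /var/run/docker.sock anywhere in the cluster.
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(.spec.volumes[]?.hostPath.path? == "/var/run/docker.sock")
      | "\(.metadata.namespace)/\(.metadata.name)"'
```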
You can learn about how to upgrade your EKS version in our blog post, Planning Kubernetes Upgrades with Amazon EKS.
As a friendly reminder, Amazon EKS provides support for at least four Kubernetes versions at any given time. Given the recurring Kubernetes release cycle, it is critical for all customers to have an ongoing upgrade plan. Amazon EKS support for version 1.18 ended on March 31, 2022, and it is no longer possible to create new 1.18 clusters. See the Amazon EKS Kubernetes release calendar for more information.
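As a sketch of the control plane upgrade itself (the cluster name is illustrative, and you should complete the pre-upgrade API migration steps first), the AWS CLI can initiate and track the update:

```shell
# Start an in-place control plane upgrade to Kubernetes 1.22.
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.22

# Track the update using the ID returned by the previous command
# until it reports a "Successful" status.
aws eks describe-update --name my-cluster --update-id <update-id>
```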
With that in mind, we’re excited for customers to start taking advantage of the numerous enhancements and newly generally available APIs in Kubernetes 1.22. Again, a huge thanks to the Kubernetes 1.22 release team for working on this release through the pandemic. Please make sure you’re ready for Dockershim removal by testing your workloads with containerd. Also, keep in mind that the new upstream release cadence (going from four releases a year to three) will mean fewer but potentially larger Kubernetes releases in the future.