Containers
Amazon EKS now supports Kubernetes version 1.26
Introduction
The Amazon Elastic Kubernetes Service (Amazon EKS) team is pleased to announce support for Kubernetes version 1.26 for Amazon EKS and Amazon EKS Distro. Amazon EKS Anywhere (release 0.15.1) also supports Kubernetes 1.26. The theme for this version was chosen to recognize both the diverse components that the project comprises and the individuals who contributed to it. Hence, the fitting release name, “Electrifying”. In their official release announcement, the Kubernetes release team said the release is, “dedicated to all of those whose individual motion, integrated into the release flow, made all of this possible.” To upgrade your cluster, refer to Updating an Amazon EKS cluster Kubernetes version.
Kubernetes v1.26 highlights
This post covers some of the notable removals, deprecations, and enhancements in the Kubernetes version 1.26 release. First off, it’s hard to miss the news about the redirects to the new faster, cheaper, and Generally Available (GA) registry at registry.k8s.io, which started on March 20, 2023. Images already published to k8s.gcr.io remain available there, but the old registry will not receive any new container image tags, including future v1.26 patch releases. Going forward, all new Kubernetes images are published only to registry.k8s.io, which is now the preferred and sole source for Kubernetes images, so you should start pulling from it to get the latest releases. If you haven’t already changed to the new registry, check out this quick YouTube video by Justin Garrison, one of our Developer Advocates.
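If you want to see whether anything in your cluster still pulls from the old registry, a quick read-only check along the following lines can help. It only inspects regular container images, so init and ephemeral containers would need a similar query.

```bash
# List every container image currently running in the cluster and filter for the old registry.
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr ' ' '\n' | sort -u | grep k8s.gcr.io
```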
Watch out for all the deprecations and removals in Kubernetes version 1.26. Below is a list of the most notable changes. For a complete list, refer to the Kubernetes change log.
Prerequisites to upgrade
Before upgrading to Kubernetes v1.26 in Amazon EKS, there are some important tasks you need to complete. The following section outlines two key changes that you must address before upgrading.
- Virtual Private Cloud (VPC) Container Network Interface (CNI) plugin. You must upgrade your VPC CNI plugin to version 1.12 or higher. Earlier versions of the VPC CNI will crash because they rely on the CRI v1alpha2 API, which has been removed from Kubernetes v1.26. For step-by-step instructions to upgrade the VPC CNI in your cluster, refer to Working with the Amazon VPC CNI plugin for Kubernetes Amazon EKS add-on. A quick way to check your current version appears after this list.
- Containerd runtimes. You must upgrade to containerd version 1.6.0 or higher. Kubernetes v1.26 has dropped support for CRI v1alpha2, which prevents the kubelet from registering nodes whose container runtime does not support CRI v1. This affects containerd minor versions 1.5 and below, which are not supported in Kubernetes 1.26. Similarly, other container runtimes that only support v1alpha2 must also be updated. If you’re using an Amazon EKS optimized AMI, check which version of containerd it uses; the version of containerd included in custom AMIs may vary, and some older versions may not be compatible with Kubernetes v1.26. To ensure compatibility, upgrade to the latest Amazon EKS optimized AMI, which includes containerd version 1.6.19.
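As a quick check before upgrading, the commands below report the container runtime each node is running and the VPC CNI version currently deployed; node names and versions in your output will differ.

```bash
# The CONTAINER-RUNTIME column should show containerd://1.6.x or later on every node.
kubectl get nodes -o wide

# Print the VPC CNI plugin version; it should be v1.12.0 or later.
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni: | cut -d : -f 3
```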
Attention: In EKS 1.26, the AWS-specific logic in the kubelet has been disabled. If you used the EKS optimized AMI prior to 1.26, the kubelet was configured to use the in-tree cloud provider by passing the --cloud-provider=aws flag in the kubelet’s extra args. This caused the kubelet to call the EC2 DescribeInstances API, which returned the PrivateDnsName of the instance. Starting with 1.26, the kubelet is configured with --cloud-provider=external or --cloud-provider="" by default. This can cause issues if you’re using a custom AMI and/or you’ve configured your VPC with a DHCP option set that includes a custom domain suffix, such as example.com. Unless you pass the --hostname-override=$PRIVATE_DNS_NAME flag in the kubelet’s extra args, the kubelet uses the operating system’s hostname as the node name, for example i-0c9e5eff964fb6eea.example.com or ip-192-168-52.example.com. Nodes with names like this fail to register with the cluster because the aws-iam-authenticator expects the node’s name to always match the node’s PrivateDnsName. For additional information about this issue, see Override hostname to match EC2’s PrivateDnsName.
The latest version of the EKS optimized AMI, v20230501, has been updated to include the --hostname-override flag. If your nodes are having issues joining the cluster after upgrading to 1.26, verify that you’re using v20230501 or higher. You can find the latest EKS optimized AMI in your Region by querying Parameter Store, as described in Retrieving Amazon EKS optimized Amazon Linux AMI IDs. Custom AMIs built from the Packer scripts available at awslabs/amazon-eks-ami should be rebuilt using the latest version, AMI Release v20230501.
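For reference, the Parameter Store query mentioned above looks like the following; the Region and the Amazon Linux 2 variant in this sketch are illustrative, so adjust them to your environment.

```bash
# Retrieve the latest recommended EKS optimized Amazon Linux 2 AMI ID for Kubernetes 1.26.
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.26/amazon-linux-2/recommended/image_id \
  --region us-west-2 \
  --query "Parameter.Value" \
  --output text
```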
Retired API versions and features
Nowadays, it’s not uncommon for Application Programming Interface (API) versions to be removed when a new version of Kubernetes is released. When this happens, it’s imperative that you update all manifests and controllers to the newer versions and features listed in this section before upgrading to version 1.26.
- flowcontrol.apiserver.k8s.io/v1beta1 API version. Among the API versions that were removed in 1.26 is the flowcontrol.apiserver.k8s.io/v1beta1 API, which was marked as deprecated in v1.23. This API version is related to Kubernetes’ API priority and fairness feature. If you are using the v1beta1 API version of the FlowSchema and PriorityLevelConfiguration resources, update your Kubernetes manifests and API clients to use the newer flowcontrol.apiserver.k8s.io/v1beta2 API version instead. If you haven’t modified the default settings for API priority and fairness, then you’re probably not affected by the removal and don’t need to take any further action. If you are unsure whether you are using the version being retired, a check along the following lines (a sketch using the API server’s deprecated-API usage metric) can help:
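```bash
# The apiserver_requested_deprecated_apis metric records deprecated API versions that
# clients have requested since the API server started. No flowcontrol.apiserver.k8s.io
# lines in the output means nothing has recently used the retiring version.
kubectl get --raw /metrics \
  | grep apiserver_requested_deprecated_apis \
  | grep flowcontrol.apiserver.k8s.io
```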
- autoscaling/v2beta2 API version. Among the API versions that were removed in Kubernetes v1.26 is the autoscaling/v2beta2 API, which was marked as deprecated in v1.23. If you are currently using the autoscaling/v2beta2 API version of the HorizontalPodAutoscaler, update your applications to use the autoscaling/v2 API version instead. If you are unsure whether you are using the version being retired, checks along the lines shown below can help:
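```bash
# Search the manifests you apply for the retiring API version (the path is a placeholder).
grep -R "autoscaling/v2beta2" /path/to/your/manifests

# The deprecated-API usage metric also shows whether any client has requested
# autoscaling/v2beta2 from this API server since it started.
kubectl get --raw /metrics \
  | grep apiserver_requested_deprecated_apis \
  | grep autoscaling
```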
- Dynamic kubelet configuration. Dynamic kubelet configuration was removed in v1.26 due to stability and security concerns. Previously, this feature allowed you to roll out new kubelet configurations via the Kubernetes API by specifying a ConfigMap containing the configuration data that the kubelet should use for each node. We recommend using a configuration management tool like Puppet, Chef, or Ansible, or switching to static kubelet configurations by passing a configuration file to the kubelet with the --config flag (a short sketch appears after this list). To learn more, refer to Set Kubelet parameters via a config file.
- Bottlerocket cgroupsv1 to cgroupsv2 migration. Bottlerocket will be migrating from cgroupsv1 to cgroupsv2 in version 1.26. This change affects users who run Bottlerocket AMIs in their Amazon EKS cluster, as the current cgroupsv1 implementation will no longer be the default option; instead, Bottlerocket will use cgroupsv2. cgroupsv2 is designed to address previous limitations in Linux’s resource management by providing enhanced controls for memory, Central Processing Unit (CPU), I/O, and networking. This improvement allows for more efficient automation of resource allocation and can provide safer limits on I/O usage and other resource-intensive activities. On that note, Bottlerocket provides tools to automatically update Bottlerocket instances, as well as an API for direct control of updates. To learn more, see Update methods in the Bottlerocket documentation.
- klog logging flags. Numerous flags for the klog logging library, which extends logging over event streams, were removed in v1.26 to simplify logging configuration and maintenance, including --logtostderr. We recommend updating your configuration files and scripts to use alternative logging options. For a complete list of the flags removed, see Removed klog flags.
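As referenced in the dynamic kubelet configuration item above, here is a minimal sketch of a static kubelet configuration file. The file path and field values are illustrative assumptions; on Amazon EKS optimized AMIs this file is managed by the bootstrap script, so you would normally only edit it when building custom AMIs or node configurations.

```bash
# Write a static KubeletConfiguration file (illustrative path and values).
sudo tee /etc/kubernetes/kubelet/kubelet-config.json > /dev/null <<'EOF'
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "maxPods": 110,
  "clusterDNS": ["10.100.0.10"]
}
EOF

# The kubelet then reads this file at startup instead of a ConfigMap, for example:
#   kubelet --config /etc/kubernetes/kubelet/kubelet-config.json ...
```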
In the context of the Kubernetes project, deprecating a feature or flag means that it will be gradually phased out and eventually removed in a future version, and there were a lot of deprecations in Kubernetes version 1.26. For a complete list, refer to all Deprecations and removals in Kubernetes v1.26.
Graduation highlights
There are a lot of cool features graduating to stable in this release that have our technical community pretty excited. Here are the top call-outs in their own words.
Job tracking (without lingering pods) is GA
Justin Garrison — Sr. Developer Advocate at AWS
- #3207 This is great because when you run a lot of jobs, you historically would lose information about job completion after the pods were deleted. Deleting pods matters because finished pods still consume resources on nodes (even when not running) and can add more load to the API server for secret watching.
- This update allows you to delete pods without losing data about their jobs. This is especially important for customers who use AWS Fargate to run jobs: if they don’t delete pods after job completion, then the AWS Fargate node isn’t removed, which means they’re still paying for compute that isn’t being used. The recommendation is to use garbage collection, but historically this meant you couldn’t query job status for pods that had been deleted (see the sketch after this list).
- To learn more, see Job Tracking, to Support Massively Parallel Batch Workloads, Is Generally Available.
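As a hedged illustration of the point above: with job tracking GA, finished pods can be removed (to free node or Fargate capacity) while the Job object keeps accurate completion counts. The job name below is hypothetical.

```bash
# Delete the succeeded pods that belong to a (hypothetical) job named sample-batch-job.
kubectl delete pods -l job-name=sample-batch-job --field-selector=status.phase==Succeeded

# The Job object still reports how many pods completed successfully.
kubectl get job sample-batch-job -o jsonpath='{.status.succeeded}'
```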
GA Support for Kubelet Credential Providers
Justin Garrison — Sr. Developer Advocate at AWS
- #2133 This helps simplify authentication to various container registries and avoids needing to store secrets as Kubernetes resources. It provides more flexibility when authenticating to a registry and enables a separation of ownership between Kubernetes resource management and node/registry/security management (see the sketch after this list).
- To learn more, see GA Support for Kubelet Credential Providers.
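For illustration only, a minimal sketch of a kubelet credential provider configuration is shown below. The provider name, image patterns, cache duration, and paths are assumptions modeled on the Amazon ECR credential provider; the file is passed to the kubelet with the --image-credential-provider-config and --image-credential-provider-bin-dir flags.

```bash
# Illustrative CredentialProviderConfig; values are assumptions, not a drop-in file.
cat <<'EOF' > credential-provider-config.yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider                 # provider binary expected in the bin dir below
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    defaultCacheDuration: "12h"
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"               # private Amazon ECR registries
EOF

# The kubelet would then be started with flags along these lines:
#   kubelet --image-credential-provider-config=/etc/kubernetes/credential-provider-config.yaml \
#           --image-credential-provider-bin-dir=/usr/local/bin ...
```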
CPUManager goes GA
Justin Garrison — Sr. Developer Advocate at AWS
- The CPU Manager gives workloads more control over their CPU and performance requirements. This is important for multi-socket servers where workloads can be scheduled, or moved, between CPUs based on availability. Some workloads require tight control over their CPU scheduling to avoid cache misses or to get higher bandwidth to main memory by staying on the same NUMA node (see the sketch after this list).
- High-Performance Computing (HPC) workloads like image rendering are able to run faster if they are not re-scheduled to different CPUs during execution.
- To learn more, see CPUManager goes GA.
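As a hedged sketch, the static CPU manager policy (enabled on the node with --cpu-manager-policy=static) gives exclusive cores to containers in Guaranteed pods that request whole CPUs, as in the example below; the pod name, image, and resource values are placeholders.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pinned-cpu-workload                                   # hypothetical name
spec:
  containers:
    - name: hpc-task
      image: public.ecr.aws/docker/library/busybox:latest     # placeholder image
      command: ["sh", "-c", "sleep 3600"]
      resources:
        # Equal, whole-number CPU requests and limits give the pod Guaranteed QoS,
        # which the static CPU manager policy requires before pinning exclusive cores.
        requests:
          cpu: "2"
          memory: "1Gi"
        limits:
          cpu: "2"
          memory: "1Gi"
EOF
```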
Support of mixed protocols in Services with type=LoadBalancer
Jeremy Cowan — Sr. Manager, Developer Advocacy at AWS
- #1435 This feature enables the creation of a LoadBalancer Service that has different port definitions with different protocols. With the AWS Load Balancer Controller, you can create a Service that exposes both Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) as long as the ports are different (see the sketch after this list). Creating a Service that exposes the same port over both protocols, for example Domain Name System (DNS), which opens port 53 over TCP and UDP, does not work right now.
- To learn more, see mixed-protocol-lb.
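A minimal sketch of such a Service follows; the name, selector, ports, and load balancer annotations are illustrative assumptions, and the TCP and UDP listeners deliberately use different port numbers.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mixed-protocol-svc                                            # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external       # assumes the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: demo                                                         # placeholder selector
  ports:
    - name: tcp-api
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: udp-metrics
      protocol: UDP
      port: 9090
      targetPort: 9090
EOF
```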
Service Internal Traffic Policy
Jeremy Cowan — Sr. Manager, Developer Advocacy at AWS
- #2086 allows node-local and topology-aware routing for Service traffic. In the past, internal traffic routed to a Service was randomly distributed to all endpoints. With Service Internal Traffic Policy, you can create a Service that always directs traffic to an instance of the service running on the same node (see the sketch after this list). By keeping traffic local to the node/Availability Zone (AZ), you can reduce your cross-AZ network costs while improving performance.
- To learn more, see Service Internal Traffic Policy.
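A minimal sketch with placeholder names and ports: setting internalTrafficPolicy: Local keeps in-cluster traffic on the node it originated from, and traffic is dropped if that node has no ready local endpoint.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: node-local-svc               # hypothetical name
spec:
  internalTrafficPolicy: Local       # only route to endpoints on the same node as the client
  selector:
    app: demo                        # placeholder selector
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
EOF
```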
This is by no means an exhaustive list of the coolest features that graduated to stable in Kubernetes v1.26. For a complete list, refer to Graduations to stable.
Kubernetes tips and tricks
Our technical community had a couple of other tips and tricks up their sleeves to help make upgrading to Kubernetes v1.26 a little easier.
Extract all kubelet settings
Jeremy Cowan — Sr. Manager, Developer Advocacy at AWS
- The kubelet config can be spread across multiple files. This is a handy way to extract all of the kubelet settings at once.
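The commands below are a sketch of this technique; the node name is a placeholder, and the kubelet's configz endpoint returns the effective configuration for that node.

```bash
# Start a local proxy to the Kubernetes API server in the background (port 8001).
kubectl proxy --port=8001 &

# A hypothetical node name for illustration; substitute one of your own nodes.
NODE_NAME="ip-192-168-1-10.us-west-2.compute.internal"

# Fetch the node's effective kubelet configuration and add kind/apiVersion so the
# output is a complete KubeletConfiguration document.
curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" \
  | jq '.kubeletconfig | .kind="KubeletConfiguration" | .apiVersion="kubelet.config.k8s.io/v1beta1"'
```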
- The proxy command starts a proxy server and runs it in the background. The subsequent command sets a shell variable to the name of the node being reconfigured, and then uses curl to fetch that node’s kubelet configuration from the Kubernetes API server through the proxy running on port 8001. The output is piped to jq, which modifies the configuration by setting the kind and apiVersion fields. This updated configuration can then be used to update the kubelet configuration for the specified node.
End of support
If you’re still running an older version of Kubernetes, such as 1.22 or 1.23, then please consider upgrading to one of the newer supported versions. The end of support for 1.22 clusters is June 4, 2023, and the end of support for 1.23 clusters will be in October 2023. If you have more questions concerning Amazon EKS version support, refer to our FAQ.
Conclusion
In this post, I walked through the top changes in Kubernetes version 1.26, and the AWS community highlighted some of the exciting features available. Be sure to check out the other improvements documented in the Kubernetes v1.26 release notes. Note that if you’re currently running an older VPC CNI plugin version, an older containerd version, or any of the retired API versions, it’s highly recommended that you start planning your migration. If you need assistance with upgrading your cluster to the latest Amazon EKS version, refer to our documentation here.