Containers

Preparing for Kubernetes API deprecations when going from 1.15 to 1.16

The way that Kubernetes evolves and introduces new features is via its APIs. These APIs are versioned, and features go through three phases: alpha, beta, and general availability (also known as stable). For users of Kubernetes, this usually takes the form of an apiVersion field at the top of your YAML spec documents, such as apiVersion: extensions/v1beta1 or apiVersion: apps/v1. As features evolve, they may change the parameters that you can specify and therefore the schema of these YAML files. The API version also changes to indicate that a feature is no longer beta and has become stable.
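For example, the same Deployment can be addressed at either a beta or a stable API version as the feature matures (a minimal illustrative fragment, not a complete spec):

```yaml
# A Deployment addressed at a beta API version (deprecated):
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web
---
# The same object addressed at the stable API version:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
```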

To help facilitate cluster and workload updates, there is a period of time when upstream Kubernetes provides both the old and the new API. During this period, the old API is called ‘deprecated’. Eventually, Kubernetes stops including the old versions, and when you upgrade your clusters past that point, your workloads will break if their YAML spec files have not been updated to the new API version.

There are many such deprecated API removals in 1.16

The oldest Kubernetes version supported and available in Amazon Elastic Kubernetes Service (Amazon EKS) is 1.15. This version will no longer be supported starting May 3rd, 2021, and customers will need to upgrade these clusters to 1.16. This upgrade involves major removals of deprecated APIs as described here. These include many common APIs such as NetworkPolicy, PodSecurityPolicy, DaemonSet, Deployment, StatefulSet, and ReplicaSet, and thus will impact many customer workloads.
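Before reaching for a dedicated tool, a quick first pass over a repo of manifests can be as simple as grepping for the group/versions being removed. A minimal sketch (the pattern covers the common workload APIs above; it is illustrative, not exhaustive, and the function name is ours):

```shell
# List YAML files under a directory that still reference API group/versions
# removed in Kubernetes 1.16 (common workload APIs; extend the pattern as needed).
find_removed_apis() {
  grep -rlE 'apiVersion: *(extensions/v1beta1|apps/v1beta1|apps/v1beta2)' \
    --include='*.yaml' --include='*.yml' "$1"
}

# Usage: find_removed_apis path/to/manifests
```

This only catches what is literally written in the files, so it complements rather than replaces the cluster-aware tools below.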

Tools to help check for things that will break post-upgrade to 1.16

While I am sure there are many tools that can help prepare for the 1.16 upgrade, there are two that I have seen customers use successfully:

kube-no-trouble

The first tool that I have seen customers use to prepare for this is kube-no-trouble. This tool will scan a particular cluster’s deployed objects for API versions that will be removed in future Kubernetes versions, tell you which things are going to break, and show what you’ll need to update before starting a cluster upgrade to 1.16. You can run this tool iteratively, fixing each item until it shows that everything is good for you to proceed.

$./kubent
6:25PM INF >>> Kube No Trouble `kubent` <<<
6:25PM INF Initializing collectors and retrieving data
6:25PM INF Retrieved 103 resources from collector name=Cluster
6:25PM INF Retrieved 132 resources from collector name="Helm v2"
6:25PM INF Retrieved 0 resources from collector name="Helm v3"
6:25PM INF Loaded ruleset name=deprecated-1-16.rego
6:25PM INF Loaded ruleset name=deprecated-1-20.rego
__________________________________________________________________________________________
>>> 1.16 Deprecated APIs <<<
------------------------------------------------------------------------------------------
KIND         NAMESPACE     NAME                    API_VERSION
Deployment   default       nginx-deployment-old    apps/v1beta1
Deployment   kube-system   event-exporter-v0.2.5   apps/v1beta1
Deployment   kube-system   k8s-snapshots           extensions/v1beta1
Deployment   kube-system   kube-dns                extensions/v1beta1
__________________________________________________________________________________________
>>> 1.20 Deprecated APIs <<<
------------------------------------------------------------------------------------------
KIND      NAMESPACE   NAME           API_VERSION
Ingress   default     test-ingress   extensions/v1beta1

Deprek8ion / OPA conftest

If you’d prefer to scan your Kubernetes spec files within the pipelines that are building and deploying your applications, or do it manually on an ad-hoc basis against the YAML that you might have in your git repo, there is Deprek8ion. Under the hood, this uses the Open Policy Agent (via a tool called Conftest, which allows you to do one-off local scans against the policies) to compare your YAML against a set of policies and alert you to these to-be-removed API versions. The error messages it returns also tell you which current API version you should be targeting.
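The heart of such a policy is just a mapping from a deprecated kind and group/version to its replacement. A minimal shell sketch of that idea (the mappings follow the 1.16 removal notes; the function name is ours, not part of either tool):

```shell
# Map a (kind, deprecated apiVersion) pair to the API version that replaces
# it in Kubernetes 1.16; prints "ok" if no change is needed.
suggest_replacement() {
  kind="$1"; version="$2"
  case "$version/$kind" in
    extensions/v1beta1/Deployment|apps/v1beta1/Deployment|apps/v1beta2/Deployment) echo "apps/v1" ;;
    apps/v1beta1/StatefulSet|apps/v1beta2/StatefulSet)    echo "apps/v1" ;;
    extensions/v1beta1/DaemonSet|apps/v1beta2/DaemonSet)  echo "apps/v1" ;;
    extensions/v1beta1/ReplicaSet|apps/v1beta2/ReplicaSet) echo "apps/v1" ;;
    extensions/v1beta1/NetworkPolicy)     echo "networking.k8s.io/v1" ;;
    extensions/v1beta1/PodSecurityPolicy) echo "policy/v1beta1" ;;
    *) echo "ok" ;;
  esac
}

suggest_replacement StatefulSet apps/v1beta1   # prints apps/v1
```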

This has been nicely packaged up in a Docker container, so all you need to do is pass or pipe your YAML object in and it’ll tell you if you need to update some things before the upgrade. This can be added as a stage of a CI/CD pipeline to throw either a warning or an error so that going forward you’ll know you need to make these changes early and as part of your normal development processes.

Example of finding and fixing an issue

We’ll start by scanning our 1.15 cluster for potential upgrade troubles with kube-no-trouble:

$kubent
3:25PM INF >>> Kube No Trouble `kubent` <<<
3:25PM INF version 0.3.2 (git sha 919129b596890475965cda2b972cb6fded71f40b)
3:25PM INF Initializing collectors and retrieving data
3:25PM INF Retrieved 2 resources from collector name=Cluster
3:25PM INF Retrieved 0 resources from collector name="Helm v2"
3:25PM INF Retrieved 0 resources from collector name="Helm v3"
3:25PM INF Loaded ruleset name=deprecated-1-16.rego
3:25PM INF Loaded ruleset name=deprecated-1-22.rego
__________________________________________________________________________________________
>>> Deprecated APIs removed in 1.16 <<<

KIND          NAMESPACE   NAME   API_VERSION
StatefulSet   default     web    apps/v1beta1

As we can see, we have a StatefulSet called web that we need to update from apps/v1beta1 to apps/v1. This isn’t as simple as just changing the version at the top of the document, though; the schema has also changed, as described in this blog post. In this case, spec.selector is now required, and spec.updateStrategy.type now defaults to RollingUpdate. We’ll take the easy path and explicitly set it to OnDelete, maintaining the older behavior we’re used to until we can properly test the new RollingUpdate default.

So we’ll change this original spec file from this:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

To this (changed and added lines marked with comments):

apiVersion: apps/v1        # updated from apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:                # added: spec.selector is now required
    matchLabels:
      app: nginx
  updateStrategy:          # added: keep the older default behavior
    type: OnDelete
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

And then apply it and rerun kube-no-trouble to confirm we are now good to go!

$kubent
3:39PM INF >>> Kube No Trouble `kubent` <<<
3:39PM INF version 0.3.2 (git sha 919129b596890475965cda2b972cb6fded71f40b)
3:39PM INF Initializing collectors and retrieving data
3:39PM INF Retrieved 2 resources from collector name=Cluster
3:39PM INF Retrieved 0 resources from collector name="Helm v2"
3:39PM INF Retrieved 0 resources from collector name="Helm v3"
3:39PM INF Loaded ruleset name=deprecated-1-16.rego
3:39PM INF Loaded ruleset name=deprecated-1-22.rego

One thing to note is that Kubernetes 1.15 has actually been converting these old APIs to the new API for us. We can see this if we apply the original deprecated API spec file to the cluster and then run a kubectl edit statefulset web command: we’ll see an object with the new API version, as shown below. It seems that kube-no-trouble scans the last-applied-configuration annotation to determine whether what you deployed pre-conversion used the older APIs. This means:

  • The things you have already deployed to your 1.15 cluster will likely continue to work after the upgrade. The cluster will just refuse any future applies of the deprecated API spec YAML files by your deployment pipelines, etc.
  • If you need help generating a spec file that will still work in 1.16, you can dump the items from the 1.15 cluster with a command like kubectl get statefulset web -o yaml and use the converted output as a starting point.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1beta1","kind":"StatefulSet","metadata":{"annotations":{},"name":"web","namespace":"default"},"spec":{"replicas":2,"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx-slim:0.8","name":"nginx","ports":[{"containerPort":80,"name":"web"}],"volumeMounts":[{"mountPath":"/usr/share/nginx/html","name":"www"}]}]}},"volumeClaimTemplates":[{"metadata":{"name":"www"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}}}}]}}
  creationTimestamp: "2021-02-05T04:20:51Z"
... 

There is a light at the end of the tunnel coming with version 1.19+

There are no deprecated API removals in 1.17, 1.18, or 1.19, so once you get through the 1.15 to 1.16 upgrade, you won’t need to worry about this for a few Kubernetes versions. However, Kubernetes is a fast-paced project, and these practices should be put in place to help prepare for future versions, which might deprecate API versions and again require your attention.

Also, starting with Kubernetes version 1.19, the Kubernetes API server as well as the kubectl CLI tool will throw warnings to alert you that you are using deprecated API versions, and tell you in which future version they’ll be removed.

What this means is that rather than needing to use the above tools in the future, you’ll just need to monitor your deployment logs for these warnings. You can either take action as the warnings come up or search the logs for any that remain before proceeding with an upgrade.