Managed EKS Upgrade 1.21 to 1.22

Vivek Raj
Jan 27, 2023 · 10 min read


Here is a brief outline of the steps to upgrade an Amazon Elastic Kubernetes Service (EKS) cluster from version 1.21 to 1.22:

  1. Verify that your EKS cluster is currently running version 1.21.
  2. Back up important data and resources, including your cluster and node group configurations.
  3. Update your cluster’s control plane to Kubernetes 1.22 using the AWS Management Console or CLI. (Nodes cannot run a newer minor version than the control plane, so the control plane goes first.)
  4. Create a new node group on version 1.22, or perform a managed node group version update, using the AWS Management Console or CLI.
  5. Cordon and drain the old worker nodes to prepare them for replacement.
  6. Terminate the old worker nodes once their workloads have been rescheduled onto the new ones.
  7. Verify that your applications are running as expected on the updated cluster.
  8. Repeat the node replacement for each node group in the cluster.

Note: Before starting the upgrade, check the release notes for the new version to see if there are any compatibility issues or breaking changes that may impact your applications.
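If you manage your cluster with eksctl, the flow above can be sketched in two commands (a sketch, assuming eksctl is installed, with my-cluster and ng-1 as hypothetical cluster and managed node group names):

# upgrade the control plane one minor version
eksctl upgrade cluster --name my-cluster --version 1.22 --approve

# then move each managed node group to the same version
eksctl upgrade nodegroup --cluster my-cluster --name ng-1 --kubernetes-version 1.22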

Kubernetes 1.22 features and removals

API removals for Kubernetes v1.22

Kubernetes 1.22 removed a number of previously deprecated beta APIs, which are no longer served. All of them were deprecated in favor of newer, more stable API versions; the name of the stable replacement is listed next to each removed version below. You may need to make changes to your applications before upgrading to Amazon EKS version 1.22. Follow the Kubernetes version 1.22 prerequisites carefully before updating your cluster.

  • Beta versions of the ValidatingWebhookConfiguration and MutatingWebhookConfiguration API (the admissionregistration.k8s.io/v1beta1 API versions). GA version that must be used: admissionregistration.k8s.io/v1
  • The beta CustomResourceDefinition API (apiextensions.k8s.io/v1beta1). GA version that must be used: apiextensions.k8s.io/v1
  • The beta APIService API (apiregistration.k8s.io/v1beta1). GA version that must be used: apiregistration.k8s.io/v1
  • The beta TokenReview API (authentication.k8s.io/v1beta1). GA version that must be used: authentication.k8s.io/v1
  • Beta API versions of SubjectAccessReview, LocalSubjectAccessReview, SelfSubjectAccessReview (API versions from authorization.k8s.io/v1beta1). GA version that must be used: authorization.k8s.io/v1
  • The beta CertificateSigningRequest API (certificates.k8s.io/v1beta1). GA version that must be used: certificates.k8s.io/v1
  • The beta Lease API (coordination.k8s.io/v1beta1). GA version that must be used: coordination.k8s.io/v1
  • All beta Ingress APIs (the extensions/v1beta1 and networking.k8s.io/v1beta1 API versions). GA version that must be used: networking.k8s.io/v1
  • RBAC: change the API version from rbac.authorization.k8s.io/v1beta1 to rbac.authorization.k8s.io/v1
  • PriorityClass: change the API version from scheduling.k8s.io/v1beta1 to scheduling.k8s.io/v1
  • For CSIDriver, CSINode, StorageClass, and VolumeAttachment: change the API version from storage.k8s.io/v1beta1 to storage.k8s.io/v1

Example

# To list all resource types and generate grep commands for finding v1beta1 API versions, run:
kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get -o name | awk -F '/' '{print $1}' | sort -u | awk '{print "kubectl get "$1" -o yaml | grep v1beta1"}'

# To check resources one by one, run, for example:

kubectl get CustomResourceDefinition -o yaml | grep 'apiextensions.k8s.io/v1beta1'
kubectl get ValidatingWebhookConfiguration -o yaml | grep 'admissionregistration.k8s.io/v1beta1'

# etc. Check all of your resources and run the commands accordingly. Note that the
# API server returns objects in its preferred version, so this grep is a heuristic,
# not a definitive audit.
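For most of these APIs the fix is a one-line apiVersion bump, but a few schemas also changed shape between beta and GA. Ingress is the most common case; below is a minimal before-and-after sketch, with my-app as a hypothetical name:

# before: beta Ingress, no longer served in Kubernetes 1.22
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: 80

# after: GA Ingress, with renamed backend fields and the now-required pathType
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80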

This is a complete list of changes you will need to handle before upgrading to EKS 1.22. However, if you’re looking for some more information, the Kubernetes documentation covers these API removals for v1.22 and explains how each of those APIs changed between beta and stable.

Upgrade EKS cluster Kubernetes version

Compare the Kubernetes version of your cluster control plane with the Kubernetes version of your nodes.

Get the Kubernetes version of your cluster control plane with the kubectl version --short command.

kubectl version --short

Get the Kubernetes version of your nodes with the kubectl get nodes command. This command returns all self-managed and managed Amazon EC2 and Fargate nodes. Each Fargate pod is listed as its own node.

kubectl get nodes

Before updating your control plane to a new Kubernetes version, make sure that the Kubernetes minor version of both the managed nodes and Fargate nodes in your cluster is the same as your control plane’s version.

For example, if your control plane is running version 1.22 and one of your nodes is running version 1.21, you must update your nodes to version 1.22 before you can update your control plane to 1.23. We also recommend that you update your self-managed nodes to the same version as your control plane before updating the control plane. For more information, see Updating a managed node group and Updating self-managed nodes.

To update the version of a Fargate node, first delete the pod that is represented by the node. Then update your control plane. Any remaining pods will be updated to the new version after you redeploy.
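For example, once the control plane is on the new version, a hypothetical Fargate-backed deployment can be recycled so its pods come back with a matching kubelet version:

kubectl rollout restart deployment my-app -n my-namespace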

Update your cluster using the AWS Management Console, or the AWS CLI.

  1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
  2. Choose the name of the Amazon EKS cluster to update and choose Update cluster version.
  3. For Kubernetes version, select the version to update your cluster to and choose Update.
  4. For Cluster name, enter the name of your cluster and choose Confirm.
    The update takes several minutes to complete.
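If you use the AWS CLI instead, the equivalent is roughly the following (a sketch, assuming a cluster named my-cluster):

aws eks update-cluster-version --name my-cluster --kubernetes-version 1.22

# poll the update until its status is Successful; the update-id comes from the previous output
aws eks describe-update --name my-cluster --update-id <update-id>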

After your cluster update is complete, update your nodes to the same Kubernetes minor version as your updated cluster. For more information, see Updating a self-managed node and Updating a managed node group. Any new pod launched on Fargate has a kubelet version that matches your cluster version. Existing Fargate pods are not replaced.

After upgrading EKS control-plane

Remember to upgrade the core deployments and daemon sets to the versions recommended for EKS 1.22:

  1. VPC CNI : v1.12.1
  2. Kube-proxy : v1.22.11-eksbuild.2
  3. CoreDNS : v1.8.7-eksbuild.1

The above are just the AWS recommendations. You should also look at upgrading all of your other components to versions compatible with Kubernetes 1.22. They could include:

  1. Cluster-autoscaler
  2. Kube-state-metrics
  3. Metrics-server
  4. etc …
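Cluster Autoscaler in particular tracks Kubernetes minor versions, so it should move to a 1.22-series release alongside the cluster. A minimal sketch, assuming it runs as a deployment named cluster-autoscaler in kube-system and that the v1.22.2 tag is the release you want:

kubectl set image deployment/cluster-autoscaler -n kube-system cluster-autoscaler=registry.k8s.io/autoscaling/cluster-autoscaler:v1.22.2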

To update the Amazon VPC CNI plugin

Update the Amazon VPC CNI plugin for Kubernetes, CoreDNS, and kube-proxy add-ons. If you’ve updated your cluster to version 1.21 or later, we recommend updating the add-ons to at least the minimum versions listed above.

  • If you are using Amazon EKS add-ons, select Clusters in the Amazon EKS console, then select the name of the cluster that you updated in the left navigation pane. Notifications appear in the console for each add-on that has a new version available. To update an add-on, select the Add-ons tab. In the box for an add-on that has an update available, select Update now, select an available version, and then select Update.
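The same update can be done from the AWS CLI; here is a sketch for the VPC CNI add-on, assuming a cluster named my-cluster:

# list the add-on versions available for your cluster version
aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.22 --query 'addons[].addonVersions[].addonVersion'

# update the managed add-on to one of those versions
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version <version-from-previous-output>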

For the Kubernetes self-managed add-on

  • Confirm that you have the self-managed type of the add-on installed on your cluster. Replace my-cluster with the name of your cluster.
aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni --query addon.addonVersion --output text
  • If an error message is returned, you have the self-managed type of the add-on installed on your cluster, and the remaining steps in this topic apply. If a version number is returned instead, you have the Amazon EKS add-on type installed; to update it, use the procedure for updating an Amazon EKS add-on rather than the procedure in this topic. If you’re not familiar with the differences between the add-on types, see Amazon EKS add-ons.
  • See which version of the container image is currently installed on your cluster.
kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
  • Back up your current settings so that you can configure the same settings after updating your version.
kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-backup.yaml
  • If you don’t have any custom settings, then run the command under the To apply this release: heading on GitHub for the release that you want to update to. If you have custom settings, download the manifest file with the following command instead of applying it. Replace url-of-manifest-from-github with the URL for the release on GitHub that you’re installing.
curl -O url-of-manifest-from-github/aws-k8s-cni.yaml
# example of applying directly (no custom settings):
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.1/config/master/aws-k8s-cni.yaml
  • Note: If necessary, modify the downloaded file with the custom settings from the backup you created, and then apply the modified file to your cluster. If your nodes don’t have access to the private Amazon EKS Amazon ECR repositories that the images are pulled from (see the lines starting with image: in the manifest), then you’ll have to pull the images, push them to your own repository, and modify the manifest to pull the images from your repository. For more information, see Copying a container image from one repository to another.
# after editing, apply the modified manifest
kubectl apply -f aws-k8s-cni.yaml
  • Confirm that the new version is now installed on your cluster.
kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
(Screenshot: CNI version output)

Updating the Kubernetes kube-proxy self-managed add-on

The kube-proxy add-on is deployed on each Amazon EC2 node in your Amazon EKS cluster. It maintains network rules on your nodes and enables network communication for your pods. The add-on is not deployed to the Fargate nodes in your cluster. See kube-proxy in the Kubernetes documentation for more details.

The following table lists the latest version of the kube-proxy container image available for each Amazon EKS cluster version.

(Table: kube-proxy version compatibility; see the Amazon EKS documentation)
  • Confirm that you have the self-managed type of add-on installed on your cluster. Replace my-cluster with the name of your cluster.
aws eks describe-addon --cluster-name my-cluster --addon-name kube-proxy --query addon.addonVersion --output text
  • If an error message is returned, you have the self-managed type of the add-on installed on your cluster, and the remaining steps in this topic apply. If a version number is returned instead, you have the Amazon EKS add-on type installed; to update it, use the procedure for updating an Amazon EKS add-on rather than the procedure in this topic. If you’re not familiar with the differences between the add-on types, see Amazon EKS add-ons.
  • See which version of the container image is currently installed on your cluster.
kubectl describe daemonset kube-proxy -n kube-system | grep Image
  • Back up your current settings so that you can configure the same settings after updating, then update the kube-proxy add-on by replacing 602401143452 and region-code with the values from your output in the previous step. Replace the image tag with the kube-proxy version listed in the kube-proxy version compatibility table for your cluster version; for EKS 1.22 that is v1.22.11-eksbuild.2. You can specify a version number for the default or minimal image type.
kubectl set image daemonset.apps/kube-proxy -n kube-system kube-proxy=602401143452.dkr.ecr.region-code.amazonaws.com/eks/kube-proxy:v1.22.11-eksbuild.2
  • The example output is as follows.
daemonset.apps/kube-proxy image updated
  • Confirm that the new version is now installed on your cluster.
kubectl describe daemonset kube-proxy -n kube-system | grep Image | cut -d ":" -f 3
  • The example output is as follows.
v1.22.11-eksbuild.2
(Screenshot: kube-proxy image version output)

Updating the CoreDNS self-managed add-on

CoreDNS is a flexible, extensible DNS server that can serve as a Kubernetes cluster DNS. When you launch an Amazon EKS cluster with at least one node, two replicas of the CoreDNS image are deployed by default, regardless of the number of nodes deployed in your cluster. CoreDNS Pods provide name resolution for all Pods in the cluster. CoreDNS pods can be deployed to Fargate nodes if your cluster includes an AWS Fargate profile with a namespace that matches the namespace for the CoreDNS deployment. For more information about CoreDNS, see Using CoreDNS for Service Discovery in the Kubernetes documentation.

The following table lists the latest version of the CoreDNS container image available for each Amazon EKS cluster version.

(Table: CoreDNS version compatibility; see the Amazon EKS documentation)
  • Confirm that you have the self-managed type of add-on installed on your cluster. Replace my-cluster with the name of your cluster.
aws eks describe-addon --cluster-name my-cluster --addon-name coredns --query addon.addonVersion --output text
  • If an error message is returned, you have the self-managed type of the add-on installed on your cluster, and the remaining steps in this topic apply. If a version number is returned instead, you have the Amazon EKS add-on type installed; to update it, use the procedure for updating an Amazon EKS add-on rather than the procedure in this topic. If you’re not familiar with the differences between the add-on types, see Amazon EKS add-ons.
  • See which version of the container image is currently installed on your cluster.
kubectl describe deployment coredns -n kube-system | grep Image | cut -d ":" -f 3
  • The example output is as follows.
v1.8.7-eksbuild.3
(Screenshot: CoreDNS image version output)
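To actually move CoreDNS to the recommended version, the update mirrors the kube-proxy step above (a sketch; replace 602401143452 and region-code with the values from your output, and the tag with the version for your cluster, v1.8.7-eksbuild.1 for EKS 1.22):

kubectl set image deployment.apps/coredns -n kube-system coredns=602401143452.dkr.ecr.region-code.amazonaws.com/eks/coredns:v1.8.7-eksbuild.1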

I hope this article nicely summarizes the important information about upgrading EKS to version 1.22 and helps you speed up your work.

Enjoy Kubernetes :)

Written by Vivek Raj

Self-motivated individual with 4+ years of IT experience in the e-Commerce/Payment and Telecom domains, managing advanced multi-platform IT infrastructure.