Kubernetes hands on series - Upgrading Production Kubernetes cluster created with kubeadm
This tutorial explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.17.x to version 1.18.x, or from version 1.18.x to 1.18.y (where y > x).
It does not cover managed Kubernetes environments (like our own, where upgrades are handled automatically by the platform) or Kubernetes services on public clouds (such as AWS EKS or Azure Kubernetes Service), which have their own upgrade processes.
For the purposes of this tutorial, we assume that a healthy 3-node Kubernetes cluster has been provisioned. Follow this tutorial to spin up a production-ready 3-node Kubernetes cluster.
Kubernetes Master Node ->
172.42.42.200 kmaster-rj.example.com/kmaster-rj
Kubernetes Worker Nodes ->
172.42.42.201 kworker-rj1.example.com/kworker-rj1
172.42.42.202 kworker-rj2.example.com/kworker-rj2
root@kmaster-rj:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster-rj Ready master 17d v1.18.6
kworker-rj1 Ready <none> 17d v1.18.6
kworker-rj2 Ready <none> 17d v1.18.6

root@kmaster-rj:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

root@kmaster-rj:~# kubelet --version
Kubernetes v1.18.6
Decide which version is available to upgrade to:
root@kmaster-rj:~# apt update
root@kmaster-rj:~# apt-cache madison kubeadm
kubeadm | 1.18.8-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.18.6-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.18.5-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.18.4-01 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
................
................
# find the latest 1.18 version in the list
# it should look like 1.18.x-00, where x is the latest patch
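On kubeadm-provisioned nodes the kubeadm, kubelet and kubectl packages are usually marked as held so that routine apt upgrades cannot bump them unexpectedly; that is why the install commands in this tutorial pass --allow-change-held-packages. If you prefer to manage the hold explicitly, here is a sketch using standard apt-mark commands (same effect, just more verbose):

# Show which packages are currently on hold (kubeadm, kubelet, kubectl are usually listed)
apt-mark showhold
# Alternative to --allow-change-held-packages: unhold, upgrade, then re-hold
apt-mark unhold kubeadm
apt-get update && apt-get install -y kubeadm=1.18.8-00
apt-mark hold kubeadm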
Upgrading control plane nodes (kmaster-rj):
First, upgrade kubeadm to the target version (1.18.8 in this example):
root@kmaster-rj:~# apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.18.8-00
Verify the expected version:
root@kmaster-rj:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:10:16Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Drain the control plane (kmaster-rj) node:
root@kmaster-rj:~# kubectl drain kmaster-rj --ignore-daemonsets
node/kmaster-rj cordoned
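Draining cordons the node (marks it unschedulable) and evicts the pods that can be evicted; DaemonSet-managed pods stay where they are because of --ignore-daemonsets. Before proceeding, you can confirm the node is cordoned; the output below is illustrative and your ages will differ:

root@kmaster-rj:~# kubectl get node kmaster-rj
NAME         STATUS                     ROLES    AGE   VERSION
kmaster-rj   Ready,SchedulingDisabled   master   17d   v1.18.6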
On the control plane node (kmaster-rj), run:
root@kmaster-rj:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.6
[upgrade/versions] kubeadm version: v1.18.8
[upgrade/versions] Latest stable version: v1.18.8
[upgrade/versions] Latest stable version: v1.18.8
[upgrade/versions] Latest version in the v1.18 series: v1.18.8
[upgrade/versions] Latest version in the v1.18 series: v1.18.8
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 2 x v1.18.6 v1.18.8
1 x v1.18.8 v1.18.8
Upgrade to the latest version in the v1.18 series:
COMPONENT CURRENT AVAILABLE
API Server v1.18.6 v1.18.8
Controller Manager v1.18.6 v1.18.8
Scheduler v1.18.6 v1.18.8
Kube Proxy v1.18.6 v1.18.8
CoreDNS 1.6.7 1.6.7
Etcd 3.4.3 3.4.3-0
You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.18.8
_________________________________________________________________
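If you want to preview what the apply step would change without touching the cluster, kubeadm also supports a dry run (a sketch; it only prints the actions that would be performed):

root@kmaster-rj:~# kubeadm upgrade apply v1.18.8 --dry-run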
Upgrade the control plane (kmaster-rj) node:
root@kmaster-rj:~# kubeadm upgrade apply v1.18.8
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.8"
[upgrade/versions] Cluster version: v1.18.6
[upgrade/versions] kubeadm version: v1.18.8
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.8"...
Static pod: kube-apiserver-kmaster-rj hash: 53cfb5b5d2a38751fa36374ef03632c1
Static pod: kube-controller-manager-kmaster-rj hash: 202c0f48b876c1870965f803fea482cf
Static pod: kube-scheduler-kmaster-rj hash: 3dd66788a2c7782d910d05ea37b91678
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.8" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests838540363"
W0820 06:10:03.500083 26075 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-20-06-09-59/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-kmaster-rj hash: 53cfb5b5d2a38751fa36374ef03632c1
Static pod: kube-apiserver-kmaster-rj hash: 53cfb5b5d2a38751fa36374ef03632c1
Static pod: kube-apiserver-kmaster-rj hash: 8c10c16fe470dd21582fb78821669002
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-20-06-09-59/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-kmaster-rj hash: 202c0f48b876c1870965f803fea482cf
Static pod: kube-controller-manager-kmaster-rj hash: ff5b33a064ebf390003a0a34068a7722
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-20-06-09-59/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-kmaster-rj hash: 3dd66788a2c7782d910d05ea37b91678
Static pod: kube-scheduler-kmaster-rj hash: c808ba8a724ff4e00643b5c4f7fc454b
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.8". Enjoy!
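Before uncordoning, it is worth checking that the control plane pods came back up cleanly on the new version (pod names and ages will differ in your cluster):

root@kmaster-rj:~# kubectl get pods -n kube-system
# Confirm kube-apiserver-kmaster-rj, kube-controller-manager-kmaster-rj and
# kube-scheduler-kmaster-rj are all Running and were recently restarted.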
Uncordon the control plane node:
root@kmaster-rj:~# kubectl uncordon kmaster-rj
node/kmaster-rj uncordoned
If you have more than one control plane node (master node), follow the same steps on each of them, one at a time, as sketched below.
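On the additional control plane nodes the apply step is replaced by kubeadm upgrade node, which picks up the configuration the first control plane node already uploaded. A rough sketch (node names are placeholders for your own additional masters):

# Run on each additional control plane node, one at a time
apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.18.8-00
kubectl drain <additional-master> --ignore-daemonsets   # from a machine with an admin kubeconfig
kubeadm upgrade node
kubectl uncordon <additional-master>
# Then upgrade kubelet and kubectl on that node, as shown in the next step.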
Upgrade kubelet and kubectl:
root@kmaster-rj:~# apt-get install -y --allow-change-held-packages kubelet=1.18.8-00 kubectl=1.18.8-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 34 not upgraded.
Need to get 28.3 MB of archives.
After this operation, 12.3 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.18.8-00 [8,827 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.18.8-00 [19.4 MB]
Fetched 28.3 MB in 5s (5,721 kB/s)
(Reading database ... 74726 files and directories currently installed.)
Preparing to unpack .../kubectl_1.18.8-00_amd64.deb ...
Unpacking kubectl (1.18.8-00) over (1.18.6-00) ...
Preparing to unpack .../kubelet_1.18.8-00_amd64.deb ...
Unpacking kubelet (1.18.8-00) over (1.18.6-00) ...
Setting up kubelet (1.18.8-00) ...
Setting up kubectl (1.18.8-00) ...
Reload the daemons, restart the kubelet, and verify the versions:
root@kmaster-rj:~# systemctl daemon-reload
root@kmaster-rj:~# systemctl restart kubelet
root@kmaster-rj:~# kubelet --version
Kubernetes v1.18.8

root@kmaster-rj:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
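The "connection to the server localhost:8080 was refused" message only concerns the server half of kubectl version: this shell has no kubeconfig exported, so kubectl falls back to localhost:8080. On the control plane node you can point it at the admin kubeconfig (assuming the default kubeadm path); on worker nodes, which have no admin.conf, the message is harmless and can be ignored:

root@kmaster-rj:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@kmaster-rj:~# kubectl version --short
Client Version: v1.18.8
Server Version: v1.18.8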
Upgrade the worker nodes -
First, drain the worker node (run this from the control plane node):
root@kmaster-rj:~# kubectl drain kworker-rj1 --ignore-daemonsets
node/kworker-rj1 cordoned
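If the drain hangs on pods that use emptyDir volumes or are not managed by a controller, kubectl has extra flags for that; use them with care, since data in emptyDir volumes is lost and unmanaged pods are not rescheduled (flag names as of v1.18):

root@kmaster-rj:~# kubectl drain kworker-rj1 --ignore-daemonsets --delete-local-data --force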
Upgrade kubelet and kubectl on the worker node:
root@kworker-rj1:~# apt-get install -y --allow-change-held-packages kubelet=1.18.8-00 kubectl=1.18.8-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 34 not upgraded.
Need to get 28.3 MB of archives.
After this operation, 12.3 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.18.8-00 [8,827 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.18.8-00 [19.4 MB]
Fetched 28.3 MB in 5s (5,721 kB/s)
(Reading database ... 74726 files and directories currently installed.)
Preparing to unpack .../kubectl_1.18.8-00_amd64.deb ...
Unpacking kubectl (1.18.8-00) over (1.18.6-00) ...
Preparing to unpack .../kubelet_1.18.8-00_amd64.deb ...
Unpacking kubelet (1.18.8-00) over (1.18.6-00) ...
Setting up kubelet (1.18.8-00) ...
Setting up kubectl (1.18.8-00) ...
Reload the daemons, restart the kubelet, and verify the versions:
root@kworker-rj1:~# systemctl daemon-reload
root@kworker-rj1:~# systemctl restart kubelet
root@kworker-rj1:~# kubelet --version
Kubernetes v1.18.8

root@kworker-rj1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Uncordon the worker node:
root@kmaster-rj:~# kubectl uncordon kworker-rj1
node/kworker-rj1 uncordoned
Follow the same process to upgrade the second worker node (kworker-rj2).
Verify the status of the cluster:
root@kmaster-rj:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster-rj Ready master 17d v1.18.8
kworker-rj1 Ready <none> 17d v1.18.8
kworker-rj2 Ready <none> 17d v1.18.8
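All nodes now report v1.18.8. As a final sanity check, it is worth confirming the workloads are healthy again (illustrative commands; kubectl get cs still works on this version, though componentstatuses was later deprecated):

root@kmaster-rj:~# kubectl get pods --all-namespaces
root@kmaster-rj:~# kubectl get cs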
What to do if the upgrade process fails
If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
During the upgrade, kubeadm writes the following backup folders under /etc/kubernetes/tmp:
kubeadm-backup-etcd-<date>-<time>
kubeadm-backup-manifests-<date>-<time>
kubeadm-backup-etcd contains a backup of the local etcd member data for this control plane node. In case of an etcd upgrade failure, the contents of this folder can be manually restored to /var/lib/etcd. If external etcd is used, this backup folder will be empty.
kubeadm-backup-manifests contains a backup of the static Pod manifest files for this control plane node. In case of an upgrade failure, the contents of this folder can be manually restored to /etc/kubernetes/manifests. If there is no difference between a pre-upgrade and post-upgrade manifest file for a certain component, a backup file for it is not written.
root@kmaster-rj:~# tree /etc/kubernetes/tmp/
/etc/kubernetes/tmp/
├── kubeadm-backup-etcd-2020-08-20-06-09-59
│ └── etcd
│ └── member
│ ├── snap
│ │ ├── 000000000000000c-00000000001cd78e.snap
│ │ ├── 000000000000000c-00000000001cfe9f.snap
│ │ ├── 000000000000000c-00000000001d25b0.snap
│ │ ├── 000000000000000c-00000000001d4cc1.snap
│ │ ├── 000000000000000c-00000000001d73d2.snap
│ │ └── db
│ └── wal
│ ├── 0000000000000011-000000000017711a.wal
│ ├── 0000000000000012-000000000018d527.wal
│ ├── 0000000000000013-00000000001a2dc6.wal
│ ├── 0000000000000014-00000000001b90c2.wal
│ ├── 0000000000000015-00000000001cf3f1.wal
│ └── 1.tmp
├── kubeadm-backup-etcd-2020-08-20-06-15-13
│ └── etcd
│ └── member
│ ├── snap
│ │ ├── 000000000000000c-00000000001cd78e.snap
│ │ ├── 000000000000000c-00000000001cfe9f.snap
│ │ ├── 000000000000000c-00000000001d25b0.snap
│ │ ├── 000000000000000c-00000000001d4cc1.snap
│ │ ├── 000000000000000c-00000000001d73d2.snap
│ │ └── db
│ └── wal
│ ├── 0000000000000011-000000000017711a.wal
│ ├── 0000000000000012-000000000018d527.wal
│ ├── 0000000000000013-00000000001a2dc6.wal
│ ├── 0000000000000014-00000000001b90c2.wal
│ ├── 0000000000000015-00000000001cf3f1.wal
│ └── 1.tmp
├── kubeadm-backup-manifests-2020-08-20-06-09-59
│ ├── kube-apiserver.yaml
│ ├── kube-controller-manager.yaml
│ └── kube-scheduler.yaml
└── kubeadm-backup-manifests-2020-08-20-06-15-13
12 directories, 27 files
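If a control plane component ends up broken after a failed upgrade, the backed-up static Pod manifests can simply be copied back; the kubelet watches /etc/kubernetes/manifests and recreates the pods on its own. A minimal sketch using the timestamped folder from this run (adjust the timestamp to match your own backup):

cp /etc/kubernetes/tmp/kubeadm-backup-manifests-2020-08-20-06-09-59/*.yaml /etc/kubernetes/manifests/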
Back up the Kubernetes master node configuration
The etcd component is used as Kubernetes’ backing store. All cluster data is stored here.
root@kmaster-rj:/# cp -r /etc/kubernetes/pki backup/
root@kmaster-rj:/# docker run --rm -v $(pwd)/backup:/backup \
    --network host \
    -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd \
    --env ETCDCTL_API=3 \
    k8s.gcr.io/etcd-amd64:3.2.18 \
    etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    snapshot save /backup/etcd-snapshot-latest.db
Unable to find image 'k8s.gcr.io/etcd-amd64:3.2.18' locally
3.2.18: Pulling from etcd-amd64
f70adabe43c0: Pull complete
7c7edbd93e22: Pull complete
e0de1b76f800: Pull complete
Digest: sha256:b960569ade5f37205a033dcdc3191fe99dc95b15c6795a6282859070ec2c6124
Status: Downloaded newer image for k8s.gcr.io/etcd-amd64:3.2.18
Snapshot saved at /backup/etcd-snapshot-latest.db
The script above does two things:
- It copies all the certificates
- It creates a snapshot of the etcd keystore.
These are all saved in a directory called backup.
After running the script, we have several files in the backup directory. These include certificates, snapshots and keys required for Kubernetes to run.
root@kmaster-rj:/backup# pwd
/backup
root@kmaster-rj:/backup# tree
.
├── etcd-snapshot-latest.db
└── pki
├── apiserver.crt
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
├── apiserver.key
├── apiserver-kubelet-client.crt
├── apiserver-kubelet-client.key
├── ca.crt
├── ca.key
├── etcd
│ ├── ca.crt
│ ├── ca.key
│ ├── healthcheck-client.crt
│ ├── healthcheck-client.key
│ ├── peer.crt
│ ├── peer.key
│ ├── server.crt
│ └── server.key
├── front-proxy-ca.crt
├── front-proxy-ca.key
├── front-proxy-client.crt
├── front-proxy-client.key
├── sa.key
└── sa.pub
2 directories, 23 files
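Note that the restore script below also expects a backup/kubeadm-config.yaml, which the snapshot step above does not produce. One way to capture it, if you want the backup to be self-contained (kubeadm config view is available in v1.18; treat this as a sketch):

root@kmaster-rj:/# kubeadm config view > backup/kubeadm-config.yaml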
Restoration -
The restoration may look something like this:
# Restore certificates
sudo cp -r backup/pki /etc/kubernetes/
# Restore etcd backup
sudo mkdir -p /var/lib/etcd
sudo docker run --rm \
-v $(pwd)/backup:/backup \
-v /var/lib/etcd:/var/lib/etcd \
--env ETCDCTL_API=3 \
k8s.gcr.io/etcd-amd64:3.2.18 \
/bin/sh -c "etcdctl snapshot restore '/backup/etcd-snapshot-latest.db' ; mv /default.etcd/member/ /var/lib/etcd/"
# Restore kubeadm-config
sudo mkdir /etc/kubeadm
sudo cp backup/kubeadm-config.yaml /etc/kubeadm/
# Initialize the master with backup
sudo kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd \
--config /etc/kubeadm/kubeadm-config.yaml
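After the init completes, a quick sanity check that the restored control plane is healthy (not exhaustive; worker nodes may need to be rejoined or have their kubelet restarted depending on how much was lost):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
kubectl get pods -n kube-system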
I hope you liked the tutorial. Please let me know your feedback in the responses section.
Happy Learning!