Before we begin
- I came across this topic while learning K8s, so I am writing it up here.
- This post covers upgrading the K8s version with kubeadm upgrade. The environment was installed through kubeadm, so whether the procedure applies to a production environment is unknown.
- The official website has more detailed and authoritative upgrade documentation; readers who want the full picture should head there.
- Because this is a multi-machine operation, ansible is used; you need to know a little ansible to read this article (a sample inventory is sketched after the overview table below).
- One thing to note is that you cannot skip minor versions when upgrading: for example, going from 1.21 to 1.23 means upgrading to 1.22 first.
The meaning of life is to learn to truly live, and the meaning of living is to find the meaning of life. -- the mountains and rivers are safe and sound
Upgrade K8S
You cannot upgrade across minor versions.
| The basic process for upgrading is as follows |
| --- |
| Upgrade the master (control plane) node |
| Upgrade the worker nodes |
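Since several machines are involved, the ad-hoc ansible commands later in the post (`ansible node ...`, `ansible 192.168.26.82 ...`) assume an inventory along these lines. This is a guessed reconstruction from the host names and IPs that appear in the session output, not a file shown in the original; adjust it for your own machines:

```bash
# /etc/ansible/hosts -- guessed minimal inventory; the "node" group name comes
# from the ad-hoc commands used later, the IPs from the session output
cat /etc/ansible/hosts
[master]
192.168.26.81

[node]
192.168.26.82
192.168.26.83
```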
1. Determine which version to upgrade to
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum list --showduplicates kubeadm --disableexcludes=kubernetes
# Find the latest 1.22 version in the list
# It should look like 1.22.x-0, where x is the latest patch version
```
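As an aside, `--disableexcludes=kubernetes` is needed because the usual kubernetes yum repo definition pins the K8s packages with an `exclude=` line so they cannot be upgraded accidentally. The file below shows the typical layout from the official install docs, for illustration only; your repo file may differ:

```bash
# Typical /etc/yum.repos.d/kubernetes.repo (illustrative, not captured from
# this environment); note the exclude= line that the flag bypasses
cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
```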
Existing environment
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
vms81.liruilongs.github.io   NotReady   control-plane,master   11m   v1.21.1
vms82.liruilongs.github.io   NotReady   <none>                 12s   v1.21.1
vms83.liruilongs.github.io   NotReady   <none>                 11s   v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
```
2. Upgrade master
The control plane should be upgraded one node at a time. Pick a control plane node to upgrade first; the /etc/kubernetes/admin.conf file must be present on that node.
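A quick sanity check along these lines (my own illustration, not from the original session) confirms the file is present and usable:

```bash
# Confirm admin.conf exists on the node chosen for the first upgrade,
# and that the cluster is reachable through it
ls -l /etc/kubernetes/admin.conf
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
```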
1. Execute "kubeadm upgrade"
Upgrade kubeadm:
```bash
# Replace x in 1.22.x-0 with the latest patch version number
┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum install -y kubeadm-1.22.2-0 --disableexcludes=kubernetes
```
Verify that the download worked and that kubeadm is now the expected version:
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
```
Verify upgrade plan:
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.1
[upgrade/versions] kubeadm version: v1.22.2
[upgrade/versions] Target version: v1.22.2
[upgrade/versions] Latest version in the v1.21 series: v1.21.5
................
```
Select the target version to upgrade to and run the corresponding command:
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$sudo kubeadm upgrade apply v1.22.2
............
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.2". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
```
Set the Node to Maintenance Mode
Prepare the node for the upgrade by marking it unschedulable and draining its workloads:
```bash
# Replace <node-to-drain> with the name of the control plane node you are draining
# kubectl drain <node-to-drain> --ignore-daemonsets
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl drain vms81.liruilongs.github.io --ignore-daemonsets
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
```
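To confirm the drain did what it should, something like the following (illustrative) shows the node marked SchedulingDisabled with only DaemonSet-managed Pods left on it:

```bash
# The node should report SchedulingDisabled, and only DaemonSet Pods
# (calico-node, kube-proxy, ...) should still be scheduled on it
kubectl get node vms81.liruilongs.github.io
kubectl get pods -A --field-selector spec.nodeName=vms81.liruilongs.github.io
```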
2. Upgrade kubelet and kubectl
```bash
# Replace x in 1.22.x-0 with the latest patch version number
# yum install -y kubelet-1.22.x-0 kubectl-1.22.x-0 --disableexcludes=kubernetes
┌──[root@vms81.liruilongs.github.io]-[~]
└─$yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0 --disableexcludes=kubernetes
```
Restart kubelet
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$sudo systemctl daemon-reload
┌──[root@vms81.liruilongs.github.io]-[~]
└─$sudo systemctl restart kubelet
```
Uncordon the Node
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl uncordon vms81.liruilongs.github.io
node/vms81.liruilongs.github.io uncordoned
```
The master node is now on the new version:
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready      control-plane,master   11d   v1.22.2
vms82.liruilongs.github.io   NotReady   <none>                 11d   v1.21.1
vms83.liruilongs.github.io   Ready      <none>                 11d   v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
```
3. Upgrade the worker nodes
Worker nodes should be upgraded one at a time, or a few at a time, without taking away the minimum capacity required to run your workloads.
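Compressed into one place, the per-node sequence that the subsections below walk through looks roughly like this sketch (my own illustration; it assumes kubeadm has already been upgraded on the workers and that the node FQDNs are valid ansible inventory targets, whereas the post itself targets workers by IP):

```bash
# Drain, upgrade, and uncordon workers strictly one at a time, so cluster
# capacity never drops by more than one node
for node in vms82.liruilongs.github.io vms83.liruilongs.github.io; do
  kubectl drain "$node" --ignore-daemonsets
  ansible "$node" -a "yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0 --disableexcludes=kubernetes"
  ansible "$node" -a "systemctl daemon-reload"
  ansible "$node" -a "systemctl restart kubelet"
  kubectl uncordon "$node"
done
```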
1. Upgrade kubeadm
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -a "yum install -y kubeadm-1.22.2-0 --disableexcludes=kubernetes"
# Execute "kubeadm upgrade"; for worker nodes the following command upgrades the local kubelet configuration
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -a "sudo kubeadm upgrade node"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME                         STATUS                     ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready                      control-plane,master   12d   v1.22.2
vms82.liruilongs.github.io   Ready                      <none>                 12d   v1.21.1
vms83.liruilongs.github.io   Ready,SchedulingDisabled   <none>                 12d   v1.22.2
```
Drain the Node, Set Maintenance Status
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl drain vms82.liruilongs.github.io --ignore-daemonsets
node/vms82.liruilongs.github.io cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-ntm7v, kube-system/kube-proxy-nzm24
node/vms82.liruilongs.github.io drained
```
2. Upgrade kubelet and kubectl
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.82 -a "yum install -y kubelet-1.22.2-0 kubectl-1.22.2-0 --disableexcludes=kubernetes"
```
Restart kubelet
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.82 -a "systemctl daemon-reload"
192.168.26.82 | CHANGED | rc=0 >>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.82 -a "systemctl restart kubelet"
192.168.26.82 | CHANGED | rc=0 >>
```
```bash
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get nodes
NAME                         STATUS                     ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready                      control-plane,master   13d   v1.22.2
vms82.liruilongs.github.io   Ready,SchedulingDisabled   <none>                 13d   v1.22.2
vms83.liruilongs.github.io   Ready,SchedulingDisabled   <none>                 13d   v1.22.2
```
Uncordon the Nodes
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl uncordon vms82.liruilongs.github.io
node/vms82.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl uncordon vms83.liruilongs.github.io
node/vms83.liruilongs.github.io uncordoned
```
The upgrade is complete; check the nodes:
```bash
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   13d   v1.22.2
vms82.liruilongs.github.io   Ready    <none>                 13d   v1.22.2
vms83.liruilongs.github.io   Ready    <none>                 13d   v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$
```
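A couple of optional closing checks (illustrative, not from the original session) round things off, confirming the client and server versions and that the control plane Pods are healthy:

```bash
# Both client and server should now report v1.22.2,
# and the kube-system Pods should all be Running
kubectl version --short
kubectl -n kube-system get pods
```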
kubeadm upgrade apply does the following:
- Check whether your cluster is upgradable:
  - the API server is reachable
  - all nodes are in the Ready state
  - the control plane is healthy
- Enforce the version skew policies.
- Make sure the control plane images are available or can be pulled to the machine.
- Generate replacement component configurations and/or use user-supplied overrides where a component configuration requires a version upgrade.
- Upgrade the control plane components, rolling back if any of them fails to come up.
- Apply the new CoreDNS and kube-proxy manifests and make sure all required RBAC rules are created.
- Create new certificate and key files for the API server, backing up the old files if they are due to expire within 180 days (a quick check is sketched after this list).
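For the last point, kubeadm (v1.20 and later) can report the certificate lifetimes, which is a handy way to confirm whether the upgrade renewed them:

```bash
# Show expiry dates of all kubeadm-managed certificates
kubeadm certs check-expiration
```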
kubeadm upgrade node performs the following on additional control plane nodes:
- Get the kubeadm ClusterConfiguration from the cluster.
- (Optional) Back up the kube-apiserver certificate.
- Upgrade the static Pod manifests of the control plane components (their standard location on disk is shown below).
- Upgrade the kubelet configuration for this node.
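Those static Pod manifests live in the standard kubeadm directory, so you can watch them being rewritten during the upgrade (illustrative):

```bash
# Standard kubeadm location of the control plane static Pod manifests
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```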
kubeadm upgrade node does the following on worker nodes:
- Retrieve the kubeadm ClusterConfiguration from the cluster.
- Upgrade the kubelet configuration for this node (see the sketch below).
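The kubelet configuration in question is the standard kubeadm-managed file, which you can inspect after the upgrade (illustrative):

```bash
# Standard kubeadm location of the per-node kubelet configuration
grep -m1 kind /var/lib/kubelet/config.yaml
# kind: KubeletConfiguration
```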