ECK has only been running on this cluster for a short time and has not yet gone into official production use; quite a few problems surfaced during the trial phase. Now that it is about to be formally deployed, I want to upgrade the Kubernetes cluster first. After reading some material: my cluster has only a single master node. It is a powerful machine, a HUAWEI TaiShan 2280 V2 server with dual Kunpeng 920 CPUs (48 cores per socket, so there is plenty of headroom; the pity is that Traefik, Kubernetes Dashboard and Nexus have no arm64 images), but it is still a single point of failure, so the upgrade was carried out while the load was light.
The cluster upgrade consists of two parts: upgrading the Kubernetes orchestration engine and upgrading the Docker container runtime.
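Before starting, it is worth recording what is currently running. A quick read-only check, assuming kubectl is configured on the machine you run it from:

# kubelet version and container runtime of every node
kubectl get nodes -o wide
# client and server versions of the control plane
kubectl version --short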
Kubernetes upgrade
For upgrading Kubernetes I strongly recommend following the official documentation; if English is no obstacle for you, everything below can be skipped and treated as my personal translation notes. My environment is an offline upgrade: the required rpm packages and Docker image archives were prepared in advance. For details, see the previous three chapters of this series.
Official documentation: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
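As a reminder of what "prepared in advance" means here, the offline preparation boils down to copying the rpm packages and image archives to every node and loading them locally. A minimal sketch; the file names are placeholders from my setup, not canonical names:

# install the pre-downloaded rpm package from the local directory
sudo yum localinstall -y ./kubeadm-1.17.x-0.aarch64.rpm
# load the pre-pulled control plane images into the local Docker daemon
sudo docker load -i ./k8s-images-v1.17.x.tar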
Master upgrade
Upgrade the first control plane node (the primary master)
1. Upgrade kubeadm
# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes
Check the version of kubeadm
sudo kubeadm version
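If you only need the bare version string, for scripting for example, kubeadm also supports a short output format:

# prints just the version, e.g. v1.17.0
sudo kubeadm version -o short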
2. Drain the node
# replace <cp-node-name> with the name of your control plane node
sudo kubectl drain <cp-node-name> --ignore-daemonsets
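Draining also cordons the node, which you can confirm before proceeding:

# the node should report Ready,SchedulingDisabled
sudo kubectl get node <cp-node-name>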
3. View the upgrade plan
sudo kubeadm upgrade plan
After it runs you will see output like the following (version numbers depend on your actual environment):
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.0
[upgrade/versions] kubeadm version: v1.17.0

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.16.0   v1.17.0

Upgrade to the latest version in the v1.17 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.16.0   v1.17.0
Controller Manager   v1.16.0   v1.17.0
Scheduler            v1.16.0   v1.17.0
Kube Proxy           v1.16.0   v1.17.0
CoreDNS              1.6.2     1.6.5
Etcd                 3.3.15    3.4.3-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.17.0
This command only checks whether the cluster can be upgraded and which version it can be upgraded to; it does not change anything yet.
4. Execute the upgrade plan with the actual target version
# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.17.x
At the end of the output you should see:
......
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
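To double-check that the control plane static pods were actually replaced, you can list their images. A sketch that relies on the tier=control-plane label kubeadm puts on its static pods:

# print name and image of each kubeadm-managed control plane pod
sudo kubectl get pods -n kube-system -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'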
5. Manually upgrade the CNI provider plugin
In my case this step was skipped because the network plugin (flannel) did not change in this upgrade; had it changed, re-applying its manifest would have been enough, as sketched below.
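For flannel, re-applying the manifest of the new release would look like this (the upstream URL of that era is shown; in an offline environment point it at your locally prepared copy instead):

# re-apply the flannel manifest for the target version
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml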
6. Uncordon the node
# replace <cp-node-name> with the name of your control plane node
sudo kubectl uncordon <cp-node-name>
Upgrade the other control plane nodes (masters)
Execute the upgrade command directly:
sudo kubeadm upgrade node
Upgrade kubelet and kubectl on each control plane node
# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes
Reload the systemd configuration and restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
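A quick sanity check that kubelet came back up with the new version:

# should print "active"
sudo systemctl is-active kubelet
# should print the new version, e.g. Kubernetes v1.17.x
kubelet --version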
Worker node upgrade
1. Upgrade kubeadm
# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubeadm-1.17.x-0 --disableexcludes=kubernetes
2. Drain the node
# replace <node-to-drain> with the name of the node you are draining
sudo kubectl drain <node-to-drain> --ignore-daemonsets
Output similar to the following may appear after the command runs:
node/ip-172-31-85-18 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-dj7d7, kube-system/weave-net-z65qx
node/ip-172-31-85-18 drained
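If the drain aborts because some pods use emptyDir volumes or are not managed by a controller, the kubectl of this era offered flags to force the eviction; use them with care, since local data is lost:

# force eviction of standalone pods and discard emptyDir data
sudo kubectl drain <node-to-drain> --ignore-daemonsets --delete-local-data --force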
3. Upgrade the kubelet configuration
sudo kubeadm upgrade node
4. Upgrade kubelet and kubectl
# replace x in 1.17.x-0 with the latest patch version
sudo yum install -y kubelet-1.17.x-0 kubectl-1.17.x-0 --disableexcludes=kubernetes
Reload the systemd configuration and restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
5. Uncordon the node
# replace <node-to-drain> with the name of your node
sudo kubectl uncordon <node-to-drain>
Verify upgrade results
sudo kubectl get nodes
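If everything went well, every node shows Ready in STATUS and the new release in VERSION; the output looks roughly like this (node names and ages are placeholders, not from my cluster):

NAME      STATUS   ROLES    AGE    VERSION
master1   Ready    master   200d   v1.17.x
worker1   Ready    <none>   200d   v1.17.x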