Deploying Kubernetes 1.22.3 with kubeadm: pitfalls and fixes

Keywords: Operations & Maintenance, Docker, Kubernetes, Containers

Server nodes:

k8s-master01   192.168.1.50
k8s-node01     192.168.1.51
k8s-node02     192.168.1.52

1. Install Docker (required on all servers)

Install the dependent libraries of Docker.

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Alibaba Cloud Docker CE repository.

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker CE.

yum makecache fast
yum -y install docker-ce

Start Docker service.

systemctl start docker

Set Docker to start automatically at boot.

systemctl enable docker
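
As an optional sanity check (output will vary by environment), confirm that Docker installed correctly, is running, and is enabled:

#Verify the Docker installation
docker --version
systemctl is-active docker
systemctl is-enabled docker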

2. Bootstrap the Kubernetes cluster with kubeadm

Reference link: Bootstrapping clusters with kubeadm | Kubernetes

Initial server configuration (run on all servers)

#Turn off the firewall, selinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

#Close swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

#Set the hostname (run the matching command on each node)
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

#Add hosts in master
cat >> /etc/hosts << EOF
192.168.1.50 k8s-master01
192.168.1.51 k8s-node01
192.168.1.52 k8s-node02
EOF

#Make bridged traffic visible to iptables (required by Kubernetes networking)
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
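
On some minimal CentOS installs these bridge sysctls only take effect once the br_netfilter kernel module is loaded. A hedged extra step, harmless if the module is already loaded, is to load it now and persist it across reboots:

#Load br_netfilter now and on every boot
modprobe br_netfilter
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
sysctl --system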

Configure the Alibaba Cloud Kubernetes yum source (mirror for mainland China)
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl

You need to install the following packages on each machine:

  • kubeadm: the command used to initialize the cluster.

  • kubelet: used to start pods, containers, etc. on each node in the cluster.

  • kubectl: command line tool used to communicate with the cluster.

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
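
To confirm all three packages were installed and see which 1.22.x build was pulled in (the exact output is only indicative):

#Check the installed versions
kubeadm version
kubelet --version
kubectl version --client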

Because kubeadm manages the kubelet as a systemd service, the official recommendation for kubeadm-based installations is the systemd cgroup driver rather than cgroupfs. You therefore need to align the cgroup drivers of the kubelet and Docker; otherwise the following error occurs because the two are inconsistent:

"Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""

Official reference link: Configure cgroup driver | Kubernetes

Method 1: modify Docker's cgroup driver.
Edit /etc/docker/daemon.json (create the file if it does not exist):

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart docker service

systemctl daemon-reload
systemctl restart docker
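
After the restart, Docker should report systemd as its cgroup driver. A quick check:

#Confirm the cgroup driver; expected output: Cgroup Driver: systemd
docker info | grep -i 'cgroup driver'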

Initialize the cluster with kubeadm on the master node

Since the default image registry k8s.gcr.io cannot be reached from mainland China, point kubeadm at the Alibaba Cloud image registry instead:
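
Optionally, and assuming network access to the Alibaba Cloud registry, the control-plane images can be pre-pulled with the same registry override; this makes the actual init faster and surfaces registry problems early (an extra step, not required):

#Pre-pull the control-plane images (optional)
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.22.3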

kubeadm init --apiserver-advertise-address=192.168.1.50 \
  --apiserver-bind-port=6443 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --image-repository registry.aliyuncs.com/google_containers

After initialization completes, the output includes the command for joining worker nodes:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.50:6443 --token 93erio.hbn2ti6z50he0lqs \
    --discovery-token-ca-cert-hash sha256:3bc60f06a19bd09f38f3e05e5cff4299011b7110ca3281796668f4edb29a56d9  #Remember this join command

Execute on the master node

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
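
With the kubeconfig in place, kubectl on the master should be able to reach the API server (the nodes will still show NotReady at this point, since no network plugin is installed yet):

#Verify API server access from the master
kubectl cluster-info
kubectl get nodes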

Execute on node01 and node02 respectively

kubeadm join 192.168.1.50:6443 --token 0o6bsj.upbk5c0v6vlytltk \
    --discovery-token-ca-cert-hash sha256:7be9e4de61b64a38b4a3579b6f5eefbcd7b32c703a788a8d4d2ffc73a3bc53c

If the node cannot join and the following error is reported:

error execution phase preflight: couldn't validate the identity of the API Server: invalid discovery token CA certificate hash: invalid hash "sha256:7be9e4de61b64a38b4a3579b6f5eefbcd7b32c703a788a8d4d2ffc73a3bc53c", expected a 32 byte SHA-256 hash, found 31 bytes

1. Regenerate a token on the master:
#kubeadm token create
1p40ao.o433wcrxg6lnaa05

2. Recompute the CA certificate hash:
#openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
8be9e4de61b64a38b4a3579b6f5eefbcd7b32c703a788a8d4d2ffc73a3bc53c8

3. Run kubeadm join on the node again with the new token and hash; this time it succeeds:
#kubeadm join 192.168.1.50:6443 --token 1p40ao.o433wcrxg6lnaa05 \
    --discovery-token-ca-cert-hash sha256:8be9e4de61b64a38b4a3579b6f5eefbcd7b32c703a788a8d4d2ffc73a3bc53c8
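
Instead of regenerating the token and hash by hand, kubeadm (including 1.22) can print a complete, ready-to-copy join command, which is a simpler alternative worth trying first:

#Print a fresh, complete join command on the master
kubeadm token create --print-join-command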

After joining, the node status is NotReady because no network plugin (CNI) has been installed yet.

[root@master01 manifests]# kubectl get nodes
NAME       STATUS     ROLES                  AGE    VERSION
master01   NotReady   control-plane,master   162m   v1.22.3
node01     NotReady   <none>                 63m    v1.22.3
node02     NotReady   <none>                 63m    v1.22.3

Check component status:

[root@master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok   

When checking the status, the following error may appear because kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/ set the port to 0 by default:

Unhealthy  Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused 

The solution is to comment out the corresponding port setting in the files under /etc/kubernetes/manifests/:

kube-controller-manager.yaml: comment out line 27, i.e. the "- --port=0" flag

kube-scheduler.yaml: comment out line 19, i.e. the "- --port=0" flag
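
The kubelet watches /etc/kubernetes/manifests/ and recreates the static pods automatically after the edit. If the components do not come back on their own, restarting the kubelet and re-checking is a reasonable fallback:

#Restart the kubelet and re-check component status
systemctl restart kubelet
kubectl get cs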

View the status of each node:

Since no network plugin is installed yet, every node is still NotReady:

[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES                  AGE   VERSION
master01   NotReady   control-plane,master   23h   v1.22.3
node01     NotReady   <none>                 22h   v1.22.3
node02     NotReady   <none>                 22h   v1.22.3

Reference link for the network plugin configuration: Adding Windows nodes | Kubernetes

First configure Flannel for the Linux nodes by downloading the latest Flannel manifest:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Deploy the Flannel pods

kubectl apply -f kube-flannel.yml

Check whether all pods are running normally:

kubectl get pods -n kube-system
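
Once the Flannel pods reach Running, the nodes should flip from NotReady to Ready within a minute or so; re-check with:

#All three nodes should now report STATUS Ready
kubectl get nodes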

If a pod's status is abnormal, view its logs with:

kubectl logs <pod-name> -n kube-system
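
If the logs are not enough, kubectl describe usually reveals scheduling or image-pull problems; the pod name below is a placeholder to replace with the failing pod:

#Inspect the failing pod and recent events in kube-system
kubectl describe pod <pod-name> -n kube-system
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp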

After the Kubernetes cluster is up, deploy the Kuboard web management UI:

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
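
Per the Kuboard documentation, kuboard-v3.yaml installs into its own kuboard namespace and exposes the UI through a NodePort service (commonly 30080). A quick way to confirm the pods are up and find the actual port before opening the UI in a browser:

#Check Kuboard pods and the exposed NodePort
kubectl get pods -n kuboard
kubectl get svc -n kuboard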

Reference link: Installing Kuboard v3 | Kuboard in K8S
