CentOS 7 Kubernetes cluster deployment
Host (virtual machine) information:
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
| Node name | IP |
|---|---|
| k8s-master | 192.168.1.86 |
| k8s-node1 | 192.168.1.87 |
Note:
1. You can choose the Kubernetes version yourself; 1.16.2 is used as the example here.
2. Apart from cluster initialization, which is performed only on the master, every other step is performed on all nodes.
1. CentOS 7 configuration
Turn off the firewall and SELinux, and update the yum source.
# Firewall
systemctl stop firewalld.service
systemctl disable firewalld.service

# Turn off SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Or edit /etc/selinux/config and change the SELINUX= line to:
SELINUX=disabled
# Restart the server, then run getenforce to make sure SELinux is disabled

# Install wget and switch to the Aliyun mirrors
yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

# Update packages
yum upgrade
2. Host configuration
# /etc/hosts — k8s nodes
cat /etc/hosts
192.168.1.86 k8s-master
192.168.1.87 k8s-node1

# /etc/hostname — node name, k8s-master or k8s-node1
cat /etc/hostname

# Restart
reboot
3. Create the /etc/sysctl.d/k8s.conf file
# Modify kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the settings (alternatively: sysctl --system)
sysctl -p /etc/sysctl.d/k8s.conf

# If an error like the following is reported:
# sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
# sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# Solution: install bridge-utils and load the bridge and br_netfilter modules
yum install -y bridge-utils.x86_64
modprobe bridge
modprobe br_netfilter

# Disable swap
swapoff -a
echo "vm.swappiness=0" >> /etc/sysctl.d/k8s.conf
# Apply again
sysctl -p /etc/sysctl.d/k8s.conf
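Neither swapoff -a nor modprobe survives a reboot. A minimal sketch that makes both persistent, assuming the standard systemd mechanisms on CentOS 7 (the file name k8s.conf is arbitrary, not from the original steps):

# Load br_netfilter automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
bridge
br_netfilter
EOF

# Keep swap off after reboot by commenting out swap entries in /etc/fstab
sed -i '/ swap / s/^/#/' /etc/fstab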
4. Configure the Kubernetes package source
# Configure the Kubernetes yum repository (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
5. Install docker
# Correct the system time first, or Docker may fail to run!
# 1. Install the ntpdate tool
sudo yum -y install ntp ntpdate
# 2. Synchronize the system time with a network time server
sudo ntpdate cn.pool.ntp.org
# 3. Write the system time to the hardware clock
sudo hwclock --systohc
# 4. Check the system time
timedatectl

# Install Docker
yum install -y docker-io
# Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
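kubeadm expects the kubelet's cgroup driver to match Docker's, so it is worth checking which driver Docker ended up with. A quick verification, not part of the original steps:

# Confirm Docker is running and see which cgroup driver it uses
systemctl status docker --no-pager
docker info | grep -i 'cgroup driver'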
6. Install Kubernetes (choose a version)
Version 1.16.2 is specified here.
# View available package versions
yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'
# Install the specified version
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2
# Start the kubelet service and enable it at boot
systemctl start kubelet && systemctl enable kubelet
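A quick check, not in the original, to confirm that the pinned 1.16.2 packages were actually installed:

kubeadm version -o short     # expect v1.16.2
kubelet --version            # expect Kubernetes v1.16.2
kubectl version --client --short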
7. Modify configuration
# Kubernetes configuration
# Run the following in /usr/bin
# Make kubelet, kubeadm and kubectl executable
cd /usr/bin && chmod a+x kubelet kubeadm kubectl
export KUBECONFIG=/etc/kubernetes/admin.conf
iptables -P FORWARD ACCEPT

# Docker configuration
# Edit /lib/systemd/system/docker.service and add the following line under [Service]
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
# Restart Docker
systemctl daemon-reload
systemctl restart docker
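To verify that the FORWARD policy really is ACCEPT after the Docker restart (a quick check, not in the original):

iptables -S FORWARD | head -n 1   # expect: -P FORWARD ACCEPT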
8. Pull and tag the images
Because the images are pulled from k8s.gcr.io by default, which is not reachable from mainland China, pull them from a domestic mirror instead.
Run kubeadm config images list to see the required images and version numbers, then pull them from the Alibaba Cloud mirror.
[root@k8s-master bin]# kubeadm config images list
W0108 19:53:17.464386   10103 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0108 19:53:17.464460   10103 version.go:102] falling back to the local client version: v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
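The two warnings above come from kubeadm trying to look up the latest stable release online. Pinning the version skips that lookup (assuming v1.16.2 as used throughout this guide):

kubeadm config images list --kubernetes-version v1.16.2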
Pull the corresponding images with docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/<image-name>:<version>
## Corresponding to the version number above
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2
Tag the images:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.2 k8s.gcr.io/kube-proxy:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
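The pull-and-tag steps can also be done in a single loop. A minimal sketch assuming the same image list and versions shown above:

images="kube-apiserver:v1.16.2 kube-controller-manager:v1.16.2 kube-scheduler:v1.16.2 kube-proxy:v1.16.2 pause:3.1 etcd:3.3.15-0 coredns:1.6.2"
for img in $images; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
  docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done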
9. Use kubeadm init to initialize the cluster (master only)
Detailed parameter reference: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/
--apiserver-advertise-address string    The IP address the API server advertises it is listening on. If not set, the default network interface is used.
--image-repository string               Default: "k8s.gcr.io". The container registry to pull control-plane images from.
--kubernetes-version string             Default: "stable-1". The specific Kubernetes version for the control plane.
--service-cidr string                   Default: "10.96.0.0/12". An alternative IP address range for service virtual IPs.
--pod-network-cidr string               The IP address range for the pod network. If set, the control plane automatically allocates CIDRs to each node.
# Deploy the Kubernetes master
# Run on 192.168.1.86 (master)
# The default registry k8s.gcr.io is not reachable from mainland China, so the Alibaba Cloud registry is specified here
kubeadm init \
  --apiserver-advertise-address=192.168.1.86 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.16.2 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
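The same settings can also be kept in a kubeadm configuration file and passed with kubeadm init --config. A minimal sketch (the file name kubeadm-config.yaml is arbitrary; the values mirror the flags above):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.86
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.16.2
networking:
  serviceSubnet: 10.1.0.0/16
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml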
On successful initialization, the output shows:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.86:6443 --token pwwmps.9cds2s34wlpiyznv \
    --discovery-token-ca-cert-hash sha256:a3220f1d2384fe5230cad2302a4ac1f233b03ea24c19c165adb5824f9c358336
Then execute the following commands on the master:
# After kubeadm init completes, run the following on the master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the flannel network component (on the master)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If downloading the flannel manifest fails, see the "Initialization problems" section below

# View the nodes
kubectl get node
# View the cluster status
kubectl get cs
# NotReady nodes may appear; check the pods
kubectl get pod --all-namespaces -o wide
After the master node initializes successfully, its status may show NotReady for a while until the network plugin is running.
If initialization fails, refer to this blog post for troubleshooting: https://www.jianshu.com/p/f53650a85131
Initialization problems
- Pods stuck in Pending because the flannel component is not installed:
[root@k8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-58cc8c89f4-dwg8r   0/1     Pending   0          24m   <none>   <none>   <none>           <none>
kube-system   coredns-58cc8c89f4-jx7cw   0/1     Pending   0          24m   <none>   <none>   <none>           <none>
- Unable to download the flannel kube-flannel.yml file directly from the official site:
Reference: https://blog.csdn.net/fuck487/article/details/102783300
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?

# Solution: create kube-flannel.yml locally (or copy it over via ftp/scp)
vi $HOME/kube-flannel.yml
# Paste the contents of kube-flannel.yml, then install it
[root@k8s-master ~]# kubectl apply -f ./kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
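Once flannel is applied, the following standard checks (not part of the original output) confirm that the network pods come up and the master switches to Ready:

# flannel and coredns pods should reach Running
kubectl get pods -n kube-system -o wide
# the master should change from NotReady to Ready shortly afterwards
kubectl get nodes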
10. Supplementary commands
# Run this on both the master and the nodes
[root@k8s-master bin]# modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
modprobe: ERROR: could not insert 'ip_vs': Unknown symbol in module, or unknown parameter (see dmesg)

# Drop ip_vs and load the remaining modules first
[root@k8s-master bin]# modprobe ip_vs_rr ip_vs_wrr ip_vs_sh
modprobe: ERROR: could not insert 'ip_vs_rr': Unknown symbol in module, or unknown parameter (see dmesg)

# Then run the full command again
[root@k8s-master bin]# modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh

# Check that the kernel has the ipvs modules loaded
[root@k8s-master bin]# lsmod | grep ip_vs
ip_vs                 145497  0
nf_conntrack          139224  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
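These modprobe calls do not persist across a reboot. A minimal sketch using the systemd modules-load mechanism (the file name ipvs.conf is my own choice, not from the original):

# Load the ipvs modules automatically at boot
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF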
11. Add node
Get kubeadm join command
# On the master, get the command for adding nodes
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.86:6443 --token a1qmdh.d79exiuqbzdr616o --discovery-token-ca-cert-hash sha256:a3220f1d2384fe5230cad2302a4ac1f233b03ea24c19c165adb5824f9c358336
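Bootstrap tokens created by kubeadm init expire after 24 hours by default, which is why a fresh one may need to be created here. Existing tokens and their expiry can be listed with:

# List bootstrap tokens and their expiry
kubeadm token list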
Run the join command on the node to add it to the cluster:
[root@k8s-node1 bin]# kubeadm join 192.168.1.86:6443 --token otjfah.zta4yo0bexibbj52 --discovery-token-ca-cert-hash sha256:60535ebe96b6a4cceab70d551f2b2b507a3641c3dc421469320b915e01377e5c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
12. Delete node
On the master node:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   3h9m   v1.16.2
k8s-node1    Ready    <none>   116s   v1.16.2
[root@k8s-master ~]# kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
node/k8s-node1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-gmq2b, kube-system/kube-proxy-q9ppx
node/k8s-node1 drained
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS                     ROLES    AGE     VERSION
k8s-master   Ready                      master   3h10m   v1.16.2
k8s-node1    Ready,SchedulingDisabled   <none>   2m43s   v1.16.2
[root@k8s-master ~]# kubectl delete node k8s-node1
node "k8s-node1" deleted
[root@k8s-master ~]#
Then reset the removed node itself (on k8s-node1):
[root@k8s-node1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0109 13:39:15.848313   79539 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
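As the reset output itself notes, kubeadm reset does not clean up iptables rules, IPVS tables, or kubeconfig files. A minimal cleanup sketch following that advice (ipvsadm is only needed if kube-proxy ran in IPVS mode):

# Flush iptables rules left behind by kube-proxy and flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Clear IPVS tables (requires the ipvsadm package)
ipvsadm --clear
# Remove the leftover kubeconfig, if any
rm -rf $HOME/.kube/config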
Appendix: query commands
# View nodes (run on the master)
kubectl get nodes
# View cluster status (run on the master)
kubectl get cs
# NotReady nodes may appear; check the pods (run on the master)
kubectl get pod --all-namespaces -o wide