Foreword
First, a word about my setup: three bare-metal machines, configured as follows.
- 10.20.1.103 4C 8G disk 50G node4 master centos7
- 10.20.1.104 4C 8G disk 50G node5 node centos7
- 10.20.1.105 4C 8G disk 50G node6 node centos7
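If the hostnames above (node4, node5, node6) are not already resolvable between the machines, one way to wire them up is via /etc/hosts. This is a sketch: the file `hosts.cluster` is just a local scratch file for illustration, and the entries come from the list above.

```shell
# Sample /etc/hosts entries for the three machines above; written to a
# local file here so nothing on the system is modified.
cat <<'EOF' > hosts.cluster
10.20.1.103 node4
10.20.1.104 node5
10.20.1.105 node6
EOF
# On each real node, append them: cat hosts.cluster >> /etc/hosts
cat hosts.cluster
```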
My installation method is based on kubeadm, the officially recommended tool for k8s, and likewise the official getting-started path for Istio. If you want the original documentation, see the following links:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://istio.io/docs/setup/getting-started/
This guide assumes you already have Docker installed.
That's all. Let's start!
kubernetes
step1
First, install some commonly used Linux tools:

```shell
yum install -y vim wget
```
step2
Verify that the MAC address and product_uuid are unique on every node; machines cloned from the same image can share them, which breaks the cluster.
```shell
ip link
sudo cat /sys/class/dmi/id/product_uuid
```
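To compare the UUIDs across nodes, one approach (sketched here with sample values, since real UUIDs come from the command above) is to collect them into one file and look for duplicates:

```shell
# uuids.txt stands in for the collected product_uuid of each node;
# on real machines, gather the values with the command shown above.
printf '%s\n' \
  '4C4C4544-004D-3010-8056-B4C04F533532' \
  '4C4C4544-004D-3010-8056-B4C04F533533' \
  '4C4C4544-004D-3010-8056-B4C04F533534' > uuids.txt
dups=$(sort uuids.txt | uniq -d)
if [ -z "$dups" ]; then
  echo "all node UUIDs unique"
else
  echo "duplicate UUIDs found: $dups"
fi
```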
step3
Open the required ports on each machine:
```shell
# master execution
[root@localhost ~]# firewall-cmd --zone=public --add-port=6443/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=2379/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=2380/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10250/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10251/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10252/tcp --permanent
success
[root@localhost ~]# firewall-cmd --reload
success

# node execution
[root@localhost ~]# firewall-cmd --zone=public --add-port=10250/tcp --permanent
success
[root@localhost ~]# firewall-cmd --reload
success
```
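The master-side openings can also be generated with a loop. As a safe sketch, the snippet below only prints the firewall-cmd calls (review the output and pipe it to `sh` as root to apply), so it assumes nothing about firewalld running:

```shell
# Master ports: API server (6443), etcd (2379-2380), kubelet (10250),
# kube-scheduler (10251), kube-controller-manager (10252).
master_ports="6443 2379 2380 10250 10251 10252"
for port in $master_ports; do
  printf 'firewall-cmd --zone=public --add-port=%s/tcp --permanent\n' "$port"
done
echo 'firewall-cmd --reload'
```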
step4
Install kubeadm, kubelet and kubectl
```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```
```shell
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
```
Up to this point, unless you have a way to reach Google's servers, the install will fail as shown below. This is expected:
```shell
[root@localhost ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.huaweicloud.com
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2404:6800:4012::200e: Network is unreachable"
Trying other mirror.
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2404:6800:4012::200e: Network is unreachable"
Trying other mirror.
^Chttps://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#56 - "Callback aborted"
Trying other mirror.
```
Because we can't reach Google, we point the repo at the Aliyun mirror instead. After editing, the file looks like the code block below; only two lines are commented out and replaced.
```shell
vim /etc/yum.repos.d/kubernetes.repo
```

```
[kubernetes]
name=Kubernetes
# baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
# gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
```
Verify the installation; the versions below confirm it succeeded (the "connection refused" message from kubectl is expected before the cluster is initialized):
```shell
[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@localhost ~]# kubelet --version
Kubernetes v1.17.1
[root@localhost ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:02:14Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
```
Start kubelet, and set the sysctl parameters required for bridged traffic:

```shell
systemctl enable --now kubelet

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

systemctl daemon-reload
systemctl restart kubelet
```
We're halfway done now. The next step is to bring up the cluster.
step5
Start the cluster
```shell
# Start the master first
[root@localhost ~]# kubeadm init --kubernetes-version=v1.17.0
W0117 17:16:23.316968   11538 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0117 17:16:23.317084   11538 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```
After running the command, we see some warnings and errors. Run the following to stop the firewall and turn off swap:
```shell
systemctl stop firewalld
swapoff -a
```
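Note that `systemctl stop firewalld` and `swapoff -a` only last until the next reboot. To make them stick, disable the firewalld unit and comment out the swap entry in /etc/fstab. The sed edit is demonstrated below against a sample file (`fstab.sample`, made up for illustration) so you can see exactly what it changes:

```shell
# On a real node (as root):
#   systemctl disable firewalld
#   sed -i '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
# Demonstration of the sed edit on a sample fstab:
printf '%s\n' \
  'UUID=1234-abcd /               xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap   swap defaults 0 0' > fstab.sample
sed -i '/\sswap\s/ s/^[^#]/#&/' fstab.sample
cat fstab.sample
```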
Run init again and only one warning remains: the Docker cgroup driver. Change it as shown below, then restart Docker:
```shell
[root@node1 ~]# vim /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
```
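The same edit can be made non-interactively. As a safe sketch, the file is written as `daemon.json` in the current directory; on a real node the path is /etc/docker/daemon.json, followed by a Docker restart:

```shell
cat <<'EOF' > daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# On a real node:
#   cp daemon.json /etc/docker/daemon.json
#   systemctl restart docker
#   docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: systemd
cat daemon.json
```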
After the restart, running init yet again will still fail, and consistently so: the control-plane images are hosted on k8s.gcr.io, which we can't reach, so kubeadm can't pull them.
```shell
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
```
So our task now is to get the following images onto each node by other means.
```shell
[root@node1 ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                v1.17.0   7d54289267dc   5 weeks ago    116MB
k8s.gcr.io/kube-scheduler            v1.17.0   78c190f736b1   5 weeks ago    94.4MB
k8s.gcr.io/kube-apiserver            v1.17.0   0cae8d5cc64c   5 weeks ago    171MB
k8s.gcr.io/kube-controller-manager   v1.17.0   5eb3b7486872   5 weeks ago    161MB
k8s.gcr.io/coredns                   1.6.5     70f311871ae1   2 months ago   41.6MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   2 months ago   288MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   2 years ago    742kB
```
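A common workaround is to pull each image from a reachable mirror and retag it as k8s.gcr.io/... so that kubeadm finds it locally and skips the pull. This sketch assumes the images are mirrored at registry.aliyuncs.com/google_containers (a widely used mirror; adjust if yours differs). The loop only prints the docker commands; review the output, then pipe it to `sh` on each node to execute:

```shell
# Image list taken from the kubeadm error messages above.
images="kube-apiserver:v1.17.0 kube-controller-manager:v1.17.0 \
kube-scheduler:v1.17.0 kube-proxy:v1.17.0 pause:3.1 etcd:3.4.3-0 coredns:1.6.5"
mirror=registry.aliyuncs.com/google_containers
for img in $images; do
  echo "docker pull $mirror/$img"
  echo "docker tag  $mirror/$img k8s.gcr.io/$img"
  echo "docker rmi  $mirror/$img"
done
```

After the printed commands have been run, `docker images` should show the k8s.gcr.io tags as in the listing above, and `kubeadm init` can proceed without pulling.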