kubeadm production environment deployment steps
1. Environment description:
1. CentOS 7
2. NTP clock synchronization is required across the cluster
3. SELinux disabled (setenforce 0)
4. Firewall disabled
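A minimal prep sketch covering these prerequisites, to run on all three hosts (chrony is an assumed NTP client choice; any NTP daemon works):

```bash
setenforce 0                                             # SELinux to permissive immediately
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config  # persist across reboots
systemctl stop firewalld && systemctl disable firewalld  # disable the firewall
yum install -y chrony && systemctl start chronyd && systemctl enable chronyd  # clock sync
```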
2. Hosts:
| k8s-master | k8s-node1 | k8s-node2 |
| -------- | -------- | -------- |
| 172.16.0.160 | 172.16.0.161 | 172.16.0.162 |
Add these entries to /etc/hosts on all three hosts:
172.16.0.160 k8s-master
172.16.0.161 k8s-node1
172.16.0.162 k8s-node2
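The node names shown by kubectl later assume each host's hostname matches its /etc/hosts entry; a one-line sketch per host:

```bash
hostnamectl set-hostname k8s-master   # on 172.16.0.160; k8s-node1 / k8s-node2 on the other two
```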
3. Environment preparation
1. yum repository configuration
The yum repositories below are configured on all hosts: master, node1, and node2.
kubernetes.repo:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
docker-ce.repo:
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
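Both repo files belong in /etc/yum.repos.d/ on every host; a sketch for copying them from the master to the workers (assuming SSH access between the hosts) and refreshing the cache:

```bash
for host in k8s-node1 k8s-node2; do
  scp /etc/yum.repos.d/{kubernetes.repo,docker-ce.repo} ${host}:/etc/yum.repos.d/
done
yum makecache fast   # run on each host afterwards
```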
4. Start the installation
1. Execute the following commands on the master
[root@k8s-master ~]# yum install docker-ce kubectl kubeadm kubelet -y
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl start docker
2. Make sure that the following values are both 1
[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
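If either value is 0, or the files do not exist, load the br_netfilter module and persist the settings; a minimal sketch:

```bash
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # reload every sysctl configuration file
```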
3. Enable the services at boot
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl enable kubelet
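kubeadm's preflight checks also require swap to be disabled; a sketch:

```bash
swapoff -a                             # disable swap immediately
sed -i '/\sswap\s/s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot
```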
4. Initialize the master node
kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
Because k8s.gcr.io is blocked by the Great Firewall, most users in mainland China will see an error like this:
W0422 16:22:43.268820   11829 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.18.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.18.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.18.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.7: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Solution:
Pull the images from Alibaba Cloud's mirror registry first, then retag them to the names kubeadm expects (reference: https://blog.csdn.net/networken/article/details/84571373).
[root@k8s-master ~]# vim k8s.sh
#!/bin/bash
# Pull the images from the Aliyun mirror (CoreDNS comes straight from Docker Hub)
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker pull coredns/coredns:1.6.7
# Retag everything to the k8s.gcr.io names kubeadm expects
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2 k8s.gcr.io/kube-proxy:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2 k8s.gcr.io/kube-scheduler:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2 k8s.gcr.io/kube-apiserver:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2 k8s.gcr.io/kube-controller-manager:v1.18.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag coredns/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
# Drop the mirror tags, keeping only the k8s.gcr.io ones
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
docker rmi coredns/coredns:1.6.7
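Then run the script and confirm the retagged images are present:

```bash
bash k8s.sh
```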
[root@k8s-master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy                v1.18.2   0d40868643c6   5 days ago     117MB
k8s.gcr.io/kube-apiserver            v1.18.2   6ed75ad404bd   5 days ago     173MB
k8s.gcr.io/kube-controller-manager   v1.18.2   ace0a8c17ba9   5 days ago     162MB
k8s.gcr.io/kube-scheduler            v1.18.2   a3099161e137   5 days ago     95.3MB
k8s.gcr.io/pause                     3.2       80d28bedfe5d   2 months ago   683kB
k8s.gcr.io/coredns                   1.6.7     67da37a9a360   2 months ago   43.8MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   5 months ago   288MB
Reinitialize
[root@k8s-master ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
If no errors are reported, initialization succeeded.
Save the following lines from the initialization output; the worker nodes need them later to join the master:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.0.160:6443 --token xxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxx
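The same kubeadm init output also prints the standard kubeconfig setup; kubectl cannot talk to the cluster until this is done:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```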
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
This shows that the master components started successfully.
Next, install flannel on the master; flannel's default Pod network (10.244.0.0/16) matches the --pod-network-cidr passed to kubeadm init above.
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 16h v1.18.2
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-fn8sx             1/1     Running   0          16h
coredns-66bff467f8-k74fn             1/1     Running   0          16h
etcd-k8s-master                      1/1     Running   0          16h
kube-apiserver-k8s-master            1/1     Running   0          16h
kube-controller-manager-k8s-master   1/1     Running   0          16h
kube-flannel-ds-amd64-jg7zw          1/1     Running   0          65s
kube-proxy-n5rfj                     1/1     Running   0          16h
kube-scheduler-k8s-master            1/1     Running   0          16h
5. Prepare node1 and node2
First, make sure node1 and node2 have the following installed, with docker running:
yum install docker-ce kubeadm kubelet -y
systemctl start docker
systemctl enable docker
systemctl enable kubelet
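The workers sit behind the same wall, so pulls of k8s.gcr.io/kube-proxy and k8s.gcr.io/pause will time out there too. Reusing the k8s.sh mirror script on each node is an assumption, but it is consistent with the node image list shown further down:

```bash
scp k8s.sh k8s-node1:~/ && ssh k8s-node1 'bash k8s.sh'
scp k8s.sh k8s-node2:~/ && ssh k8s-node2 'bash k8s.sh'
```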
6. Join node1 and node2 to the cluster
[root@k8s-node1 ~]# kubeadm join 172.16.0.160:6443 --token xxxx --discovery-token-ca-cert-hash sha256:xxxx
W0423 10:23:35.439522   22002 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
This output confirms the join succeeded; run the same command on node2.
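If the saved join command is lost, or the token (valid for 24 hours by default) has expired, a fresh one can be printed on the master:

```bash
kubeadm token create --print-join-command
```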
Then verify from the master:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   Ready      master   17h     v1.18.2
k8s-node1    NotReady   <none>   5m44s   v1.18.2
k8s-node2    NotReady   <none>   25s     v1.18.2
NotReady here still means the nodes joined successfully; they are most likely still pulling the flannel and kube-proxy images. Wait a few minutes and check again:
[root@k8s-node1 ~]# docker images
REPOSITORY               TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy    v1.18.2         0d40868643c6   6 days ago     117MB
quay.io/coreos/flannel   v0.12.0-amd64   4e9f801d2217   5 weeks ago    52.8MB
k8s.gcr.io/pause         3.2             80d28bedfe5d   2 months ago
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   17h     v1.18.2
k8s-node1    Ready    <none>   11m     v1.18.2
k8s-node2    Ready    <none>   5m56s   v1.18.2
All nodes have now joined the cluster successfully.
Environment installation completed.
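As an optional smoke test, a minimal sketch that schedules a pod onto the workers (the nginx image and deployment name are illustrative):

```bash
kubectl create deployment nginx --image=nginx               # test deployment
kubectl expose deployment nginx --port=80 --type=NodePort   # reachable on a node port
kubectl get pods -o wide                                    # should land on a worker node
```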