This article describes how to deploy a Kubernetes 1.17.4 cluster with kubeadm on an Ubuntu 18.04 64-bit dual-core virtual machine. The network plugin is flannel v0.11.0, and the images are pulled from Alibaba Cloud mirrors.
1, Install docker
apt-get install docker.io
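Optionally, verify the installation and make Docker start on boot; this is a quick sanity check, not part of the original steps:
systemctl enable --now docker   # start docker now and enable it on every boot
docker version                  # confirm client and server versions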
Execute the following command to create a new /etc/docker/daemon.json file:
cat > /etc/docker/daemon.json <<-EOF
{
"registry-mirrors": [
"https://a8qh6yqv.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com"
],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Note:
registry-mirrors specifies the image mirror (accelerator) addresses.
native.cgroupdriver=systemd sets the cgroup driver to systemd (the driver k8s uses); the default is cgroupfs. It is changed here because changing the driver on the k8s side via kubeadm.conf did not work.
Restart docker and view cgroup:
# systemctl restart docker
# docker info | grep -i cgroup
Cgroup Driver: systemd
If systemd appears, the modification is successful.
2, Deploy k8s master host
A k8s cluster consists of master and node machines. This section covers the master host.
2.1 turn off swap
Edit the /etc/fstab file and comment out the line that mounts the swap partition. For example:
# swap was on /dev/sda5 during installation
#UUID=aaa38da3-6e60-4e9d-bfc6-7128fd05f1c7 none            swap    sw              0       0
Then execute: # swapoff -a
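To confirm that swap is fully disabled (a quick check, not in the original article):
free -h          # the Swap line should show 0B
swapon --show    # should print nothing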
2.2 Add a domestic k8s apt source. Alibaba Cloud's is used here:
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
Add the key: # curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
If that fails, download https://packages.cloud.google.com/apt/doc/apt-key.gpg by other means, put it in the working directory, and run:
# cat apt-key.gpg | sudo apt-key add -
2.3 update the source
# apt-get update
Install kubeadm, kubectl, kubelet, kubernetes-cni and related tools.
# apt-get install -y kubeadm kubectl kubelet kubernetes-cni
Note 1: installing kubeadm automatically installs kubectl, kubelet and kubernetes-cni, so specifying only kubeadm is enough.
Note 2: at the time of writing, the installed version is 1.17.4 and kubernetes-cni is 0.7.5. The downloaded packages are located in the /var/cache/apt/archives/ directory.
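If a specific version is wanted rather than the latest, the packages can be pinned; the exact version string below (1.17.4-00) is an assumption for this repository, so verify it first:
apt-cache madison kubeadm | head                                           # list available package versions
apt-get install -y kubeadm=1.17.4-00 kubelet=1.17.4-00 kubectl=1.17.4-00   # assumed version string
apt-mark hold kubeadm kubelet kubectl                                      # keep apt from upgrading them later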
2.4 get the image versions required for deployment
# kubeadm config images list
The output is as follows:
W1214 08:46:14.303772    8461 version.go:102] falling back to the local client version: v1.17.4
W1214 08:46:14.304223    8461 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1214 08:46:14.304609    8461 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
The warnings above can be ignored. The point is to confirm the image versions that match this version of kubeadm, since mismatched component versions can cause compatibility problems.
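The "falling back to the local client version" warning can also be avoided by naming the version explicitly; this uses a standard kubeadm flag, though it is not shown in the original:
kubeadm config images list --kubernetes-version v1.17.4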
2.5 pull the image files
The k8s.gcr.io images generally cannot be downloaded directly from within China. There are two ways around this:
1. When initializing k8s, specify the Alibaba Cloud image repository, from which the images can be downloaded. See the initialization section below.
2. Pull the images in advance using the following script, pullk8s.sh (note that the script must be made executable):
#!/bin/bash
# The images below have the "k8s.gcr.io/" prefix removed; change the versions
# to those obtained from the "kubeadm config images list" command.
images=(
    kube-apiserver:v1.17.0
    kube-controller-manager:v1.17.0
    kube-scheduler:v1.17.0
    kube-proxy:v1.17.0
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
Pull:
chmod +x pullk8s.sh
bash pullk8s.sh    (or ./pullk8s.sh)
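As an alternative to the script, kubeadm itself can pre-pull the images from the Alibaba Cloud repository (a standard kubeadm subcommand, not used in the original article):
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.17.4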
2.6 network
Set network configuration:
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-mynet.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [
            {"dst": "0.0.0.0/0"}
        ]
    }
}
EOF
cat >/etc/cni/net.d/99-loopback.conf <<-EOF
{
    "cniVersion": "0.3.0",
    "type": "loopback"
}
EOF
In practice, this step did not work as intended and can be skipped.
2.7 download the flannel image
docker pull quay.io/coreos/flannel:v0.11.0-amd64
Note: if the image cannot be downloaded directly, use other means to obtain it.
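One such method is to pull the image on a machine that can reach quay.io, export it with docker save, copy the tar file over, and import it on the target host; a minimal sketch (the file name is arbitrary):
# on a machine with access to quay.io
docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker save quay.io/coreos/flannel:v0.11.0-amd64 -o flannel-v0.11.0-amd64.tar
# copy flannel-v0.11.0-amd64.tar to the target host, then:
docker load -i flannel-v0.11.0-amd64.tar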
flannel image information:
# docker images | grep flannel
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 11 months ago 52.6MB
2.8 initialization
Method 1:
kubeadm init --pod-network-cidr=10.244.0.0/16 \
    --image-repository registry.aliyuncs.com/google_containers
Note:
--pod-network-cidr specifies the pod network segment, which will be used by the network plugin deployed later (flannel in this article).
--image-repository specifies the image repository; the default is k8s.gcr.io, and here it is set to the Alibaba Cloud address registry.aliyuncs.com/google_containers.
All other parameters keep their defaults.
The above command is equivalent to the following command:
kubeadm init \
    --apiserver-advertise-address=192.168.0.102 \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.17.0 \
    --service-cidr=10.1.0.0/16 \
    --pod-network-cidr=10.244.0.0/16
Output:
W1221 17:44:19.880281    2865 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1221 17:44:19.880405    2865 version.go:102] falling back to the local client version: v1.17.0
W1221 17:44:19.880539    2865 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1221 17:44:19.880546    2865 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.102]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.0.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.0.102 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1221 17:50:12.262505    2865 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1221 17:50:12.268198    2865 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.504683 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1rpp8b.axfud1xrsvx4q8nw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.128:6443 --token 1rpp8b.axfud1xrsvx4q8nw \
    --discovery-token-ca-cert-hash sha256:6bf952d45bbdc121fa90583eac33f11f0a3f4b491f29996a56fc289363843e3c
Method 2: if the images were already pulled with the earlier script, initialize without specifying a repository:
kubeadm init --pod-network-cidr=10.244.0.0/16
Note: the following problem may be encountered during initialization:
1. [ERROR Port-10250]: Port 10250 is in use
Solution:
Reset kubeadm: [root@k8s-master ~]# kubeadm reset
If the port is still occupied after the reset, find the process using it: netstat -tunlp | grep 10250
Kill the occupying process forcibly: sudo fuser -k -n tcp 10250
After deployment succeeds, copy the admin.conf file to the current user's directory as the output instructs. The admin.conf file will be used again later (it also needs to be copied to the node).
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
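To confirm that kubectl can now talk to the new cluster (a quick check, not part of the original steps); note that the master may report NotReady until the network plugin is deployed:
kubectl cluster-info
kubectl get nodes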
During initialization, any missing images are downloaded automatically. After initialization, the images are as follows:
# docker images
REPOSITORY                                                         TAG       IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                 v1.17.4   7d54289267dc   1 days ago    116MB
registry.aliyuncs.com/google_containers/kube-apiserver             v1.17.4   0cae8d5cc64c   1 days ago    171MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.17.4   5eb3b7486872   1 days ago    161MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.17.4   78c190f736b1   1 days ago    94.4MB
registry.aliyuncs.com/google_containers/coredns                    1.6.5     70f311871ae1   6 weeks ago   41.6MB
registry.aliyuncs.com/google_containers/etcd                       3.4.3-0   303ce5db0e90   8 weeks ago   288MB
registry.aliyuncs.com/google_containers/pause                      3.1       da86e6ba6ca1   2 years ago   742kB
The pod status is as follows:
# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-9d85f5447-67qtv          0/1     Pending   0          3h26m
coredns-9d85f5447-cg87c          0/1     Pending   0          3h26m
etcd-ubuntu                      1/1     Running   0          3h27m
kube-apiserver-ubuntu            1/1     Running   0          3h27m
kube-controller-manager-ubuntu   1/1     Running   0          3h27m
kube-proxy-chqbq                 1/1     Running   0          3h26m
kube-scheduler-ubuntu            1/1     Running   0          3h27m
All pods are Running except coredns, which is Pending because no network plugin has been deployed yet. This article uses flannel.
Execute the following command to deploy flannel:
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note:
Deployment uses the kube-flannel.yml file from the flannel repository; refer to that document for details.
If the URL cannot be accessed, manually download https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml to the current directory and then run kubectl apply -f kube-flannel.yml.
# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
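Once the flannel DaemonSet is up, the coredns pods should move from Pending to Running; this can be watched with the following command (not shown in the original):
kubectl get pods -n kube-system -w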
3, Node node
A k8s cluster consists of master and node machines. This section covers the node.
3.1 prerequisites
Perform the following on the node machine.
1. Install kubeadm, see above.
2. Download the flannel image, as mentioned above (if it is not downloaded in advance, it will be downloaded automatically when joining the cluster).
3. Copy the /etc/kubernetes/admin.conf file from the master to the /etc/kubernetes/ directory on the node. (Note: run the scp command from the master; if the /etc/kubernetes/ directory does not exist on the node, create it first. See the example below.)
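A possible scp invocation, run on the master (the node user and address are placeholders):
scp /etc/kubernetes/admin.conf root@<node-ip>:/etc/kubernetes/   # replace <node-ip> with the node's address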
3.2 join the cluster
At this point the k8s services are not yet running on the node. Execute the following command to join it to the cluster:
kubeadm join 192.168.50.128:6443 --token 1rpp8b.axfud1xrsvx4q8nw \
--discovery-token-ca-cert-hash sha256:6bf952d45bbdc121fa90583eac33f11f0a3f4b491f29996a56fc289363843e3c
The output is as follows:
[preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
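If the join is attempted later and the original token has expired (kubeadm tokens are valid for 24 hours by default), a new join command can be generated on the master; this is standard kubeadm usage, not part of the original walkthrough:
kubeadm token create --print-join-command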
While joining the cluster, the necessary k8s images are downloaded. Note that because the master was initialized with the Alibaba Cloud repository, the node pulls from the same repository.
REPOSITORY                                            TAG             IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy    v1.17.0         7d54289267dc   2 weeks ago     116MB
registry.aliyuncs.com/google_containers/coredns       1.6.5           70f311871ae1   7 weeks ago     41.6MB
quay.io/coreos/flannel                                v0.11.0-amd64   ff281650a721   11 months ago   52.6MB
registry.aliyuncs.com/google_containers/pause         3.1             da86e6ba6ca1   2 years ago     742kB
After joining successfully, the following services are running on the node:
# ps aux | grep kube
root      3269  1.6  4.2 754668 86784 ?     Ssl  Dec20  18:34 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1
root      3632  0.1  1.1 140104 22412 ?     Ssl  Dec20   2:14 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=node
root      4385  0.0  1.6 407356 33704 ?     Ssl  Dec20   0:51 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root    121292  0.0  0.0  14228  1032 pts/0 S+   00:33   0:00 grep --color=auto kube
These mainly include kubelet, kube-proxy and flanneld.
The docker container list is as follows:
# docker ps
CONTAINER ID   IMAGE                                                COMMAND                    CREATED         STATUS         PORTS   NAMES
2fde9bb78fd7   ff281650a721                                         "/opt/bin/flanneld -..."   7 minutes ago   Up 7 minutes           k8s_kube-flannel_kube-flannel-ds-amd64-28p6z_kube-system_f40a2875-70eb-468b-827d-fcb59be3416b_1
aa7ca3d5825e   registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube..."   8 minutes ago   Up 8 minutes           k8s_kube-proxy_kube-proxy-n6xv5_kube-system_3df8b7ae-e5b8-4256-9857-35bd24f7e025_0
ac61ed8d7295   registry.aliyuncs.com/google_containers/pause:3.1    "/pause"                   8 minutes ago   Up 8 minutes           k8s_POD_kube-flannel-ds-amd64-28p6z_kube-system_f40a2875-70eb-468b-827d-fcb59be3416b_0
423f9e42c082   registry.aliyuncs.com/google_containers/pause:3.1    "/pause"                   8 minutes ago   Up 8 minutes           k8s_POD_kube-proxy-n6xv5_kube-system_3df8b7ae-e5b8-4256-9857-35bd24f7e025_0
To view the flannel network information:
# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
To view local IP information:
# ifconfig
4, Validation
Execute on the master node:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node Ready <none> 17m v1.17.0
ubuntu Ready master 5h11m v1.17.0
Both machines are Ready. The node transitions from NotReady to Ready in about 10 seconds.
Run a simple pod test with the busybox image. Execute on the master node:
# kubectl run -i --tty busybox --image=latelee/busybox --restart=Never -- sh
After a moment the busybox command line appears:
# uname -a
Linux busybox 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 GNU/Linux
From another command line, check the pod running status:
# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          74s   10.244.1.4   node   <none>           <none>
The pod is in Running status and is running on the node.
Check on the node:
# docker ps | grep busybox
ba5d1a480294   latelee/busybox                                     "sh"       2 minutes ago   Up 2 minutes   k8s_busybox_busybox_default_20d757f7-8ea7-4e51-93fc-514029065a59_0
8c643171ac09   registry.aliyuncs.com/google_containers/pause:3.1   "/pause"   2 minutes ago   Up 2 minutes   k8s_POD_busybox_default_20d757f7-8ea7-4e51-93fc-514029065a59_0
After exiting the busybox shell on the master, the pod still exists but is no longer READY, and the busybox container is no longer running on the node host.
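To clean up the test pod afterwards (not in the original article):
kubectl delete pod busybox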
This verifies that the k8s deployment succeeded.
Reference documents: https://blog.csdn.net/subfate/article/details/103774072