Foreword
This chapter is the second article in the kubernetes series tutorials. It introduces deploying a kubernetes cluster with the kubeadm installation tool. Considering the limitations of the domestic (Chinese) network, the required images have been downloaded to a network disk in advance for offline deployment.
1. Overview of the environment
1.1 Installation Overview
There are two general ways to install kubernetes: manual installation from binaries and automated installation with kubeadm. Recent versions of kubeadm deploy the kubernetes management components in the cluster as pods, and the community currently recommends the automated kubeadm deployment; deploying a kubernetes cluster step by step from binaries is also instructive. Either way, because of the GFW, most of the images need a proxy to download, which you will have to work around yourself. This article performs an offline installation, loading the previously downloaded images onto each node.
1.2 Introduction to the Environment
Software Version
Software Name | Software Version |
---|---|
OS | CentOS Linux release 7.6.1810 (Core) |
Docker | docker-ce-18.03.1.ce-1.el7 |
Kubernetes | 1.14.1 |
Kubeadm | kubeadm-1.14.1-0.x86_64 |
etcd | 3.3.10 |
flannel | v0.11.0 |
Environment description: all three machines are Cloud Virtual Machines purchased on Tencent Cloud, each configured with 2 vCPU + 4G memory + 50G disk.
host name | role | IP Address | Software |
---|---|---|---|
node-1 | master | 10.254.100.101 | docker,kubelet,etcd,kube-apiserver,kube-controller-manager,kube-scheduler |
node-2 | worker | 10.254.100.102 | docker,kubelet,kube-proxy,flannel |
node-3 | worker | 10.254.100.103 | docker,kubelet,kube-proxy,flannel |
1.3 Environmental Preparation
1. Set the hostname; the other two nodes are set in the same way (to node-2 and node-3 respectively). [root@VM_100_101_centos ~]# hostnamectl set-hostname node-1 [root@VM_100_101_centos ~]# hostname node-1
2. Set up the hosts file; the other two nodes use the same content. [root@node-1 ~]# vim /etc/hosts 127.0.0.1 localhost localhost.localdomain 10.254.100.101 node-1 10.254.100.102 node-2 10.254.100.103 node-3
3.Set up passwordless login //Generate key pair root@node-1 .ssh# ssh-keygen -P '' Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: SHA256:zultDMEL8bZmpbUjQahVjthVAcEkN929w5EkUmPkOrU root@node-1 The key's randomart image is: +---RSA 2048----+ | .=O=+=o.. | | o+o..+.o+ | | .oo=. o. o | | . . * oo .+ | | oSOo.E . | | oO.o. | | o++ . | | . .o | | ... | +----SHA256-----+
Copy public key to node-2 and node-3 node root@node-1 .ssh# ssh-copy-id -i /root/.ssh/id_rsa.pub node-2: /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host 'node-1 (10.254.100.101)' can't be established. ECDSA key fingerprint is SHA256:jLUH0exgyJdsy0frw9R+FiWy+0o54LgB6dgVdfc6SEE. ECDSA key fingerprint is MD5:f4:86:a8:0e:a6:03:fc:a6:04:df:91:d8:7a:a7:0d:9e. Are you sure you want to continue connecting (yes/no)? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@node-1's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'node-2'" and check to make sure that only the key(s) you wanted were added.
Copy public key to node-3 node root@node-1 .ssh# ssh-copy-id -i /root/.ssh/id_rsa.pub node-3: /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@node-3's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'node-3'" and check to make sure that only the key(s) you wanted were added.
4.Test passwordless login root@node-1 ~# ssh node-2 hostname node-2 root@node-1 ~# ssh node-3 hostname node-3
5. Disable the firewall and SELinux (the sketch below shows pushing the same hosts file and firewall/SELinux changes to node-2 and node-3 over SSH). [root@node-1 ~]# systemctl stop firewalld [root@node-1 ~]# systemctl disable firewalld [root@node-1 ~]# sed -i '/^SELINUX=/ s/enforcing/disabled/g' /etc/selinux/config [root@node-1 ~]# setenforce 0
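The hosts file and the firewall/SELinux changes also have to be applied on node-2 and node-3. Since passwordless SSH is already set up, a minimal sketch like the following can push the same settings from node-1 to the other nodes (the loop below is only an illustration of one way to do it):

```bash
# Illustrative helper: reuse the passwordless SSH configured above to apply the same
# settings on the worker nodes (hostnames assumed to be node-2 and node-3).
for node in node-2 node-3; do
    scp /etc/hosts ${node}:/etc/hosts                                            # distribute the hosts file
    ssh ${node} "systemctl stop firewalld && systemctl disable firewalld"        # disable the firewall
    ssh ${node} "sed -i '/^SELINUX=/ s/enforcing/disabled/g' /etc/selinux/config && setenforce 0"
done
```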
1.4 Install Docker
1. Download the docker repo file [root@node-1 ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
2. Install docker-ce [root@node-1 ~]# yum install docker-ce-18.03.1.ce-1.el7.centos
3. Set the cgroup driver type to systemd [root@node-1 ~]# cat > /etc/docker/daemon.json <<EOF > { > "exec-opts": ["native.cgroupdriver=systemd"], > "log-driver": "json-file", > "log-opts": { > "max-size": "100m" > }, > "storage-driver": "overlay2", > "storage-opts": [ > "overlay2.override_kernel_check=true" > ] > } > EOF
4. start-up docker Service and Validation [root@node-1 ~]# systemctl restart docker [root@node-1 ~]# systemctl enable docker Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service. [root@node-1 ~]# docker version Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 8 Server Version: 18.03.1-ce Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Native Overlay Diff: true Logging Driver: json-file Cgroup Driver: systemd Plugins: Volume: local Network: bridge host macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88 runc version: 4fc53a81fb7c994640722ac585fa9ca548971871 init version: 949e6fa Security Options: seccomp Profile: default Kernel Version: 3.10.0-957.el7.x86_64 Operating System: CentOS Linux 7 (Core) OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 3.701GiB Name: node-1 ID: WCUZ:IJ3M:XGX4:S77A:3UG5:PTL4:MFJE:NNUT:IP4J:4PFU:OYMQ:X4LG Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false
5. Verify Installation [root@node-1 ~]# docker info Client: Debug Mode: false Server: Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 0 Server Version: 19.03.1 Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Native Overlay Diff: true Logging Driver: json-file Cgroup Driver: systemd. #cgroup driver type Plugins: Volume: local Network: bridge host ipvlan macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f init version: fec3683 Security Options: seccomp Profile: default Kernel Version: 3.10.0-957.el7.x86_64 Operating System: CentOS Linux 7 (Core) OSType: linux Architecture: x86_64 CPUs: 2 Total Memory: 3.701GiB Name: node-1 ID: WCUZ:IJ3M:XGX4:S77A:3UG5:PTL4:MFJE:NNUT:IP4J:4PFU:OYMQ:X4LG Docker Root Dir: /var/lib/docker Debug Mode: false Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled
2. Install the kubernetes cluster
2.1 Install components such as kubeadm
1. Install the kubernetes source, the kubernetes source of Ali can be used in China, the speed will be faster cat <<EOF > /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg EOF 2. Install kubeadm, kubelet, kubectl [root@node-1 ~]# yum install kubeadm-1.14.1-0 kubectl-1.14.1-0 kubelet-1.14.1-0 --disableexcludes=kubernetes -y Plugins loaded: fastestmirror, langpacks Loading mirror speeds from cached hostfile Resolving dependencies -->Checking transactions --->Package kubeadm.x86_64.0.1.14.1-0 will be installed -->Processing dependency kubernetes-cni >= 0.7.5, required by package kubeadm-1.14.1-0.x86_64 -->Processing dependency cri-tools >= 1.11.0, required by package kubeadm-1.14.1-0.x86_64 --->Package kubectl.x86_64.0.1.14.1-0 will be installed --->Package kubelet.x86_64.0.1.14.1-0 will be installed -->Processing dependency socat required by package kubelet-1.14.1-0.x86_64 -->Processing dependency conntrack required by package kubelet-1.14.1-0.x86_64 -->Checking transactions --->Package conntrack-tools.x86_64.0.1.4.4-4.el7 will be installed -->Processing dependency libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) required by package conntrack-tools-1.4.4-4.el7.x86_64 -->Processing dependency libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) required by package conntrack-tools-1.4.4-4.el7.x86_64 -->Processing dependency libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) required by package conntrack-tools-1.4.4-4.el7.x86_64 -->Processing dependency libnetfilter_queue.so.1()(64bit), which is required by package conntrack-tools-1.4.4-4.el7.x86_64 -->Processing dependency libnetfilter_cttimeout.so.1()(64bit), which is required by package conntrack-tools-1.4.4-4.el7.x86_64 -->Processing dependency libnetfilter_cthelper.so.0()(64bit), which is required by package conntrack-tools-1.4.4-4.el7.x86_64 --->Package cri-tools.x86_64.0.1.13.0-0 will be installed --->Package kubernetes-cni.x86_64.0.0.7.5-0 will be installed --->Package socat.x86_64.0.1.7.3.2-2.el7 will be installed -->Checking transactions --->Package libnetfilter_cthelper.x86_64.0.1.0.0-9.el7 will be installed --->Package libnetfilter_cttimeout.x86_64.0.1.0.0-6.el7 will be installed --->Package libnetfilter_queue.x86_64.0.1.0.2-2.el7_2 will be installed -->Resolve dependencies complete Dependency Resolution ========================================================================================================================================================== Package Schema Version Source Size ========================================================================================================================================================== Installing: kubeadm x86_64 1.14.1-0 kubernetes 8.7 M kubectl x86_64 1.14.1-0 kubernetes 9.5 M kubelet x86_64 1.14.1-0 kubernetes 23 M Install for dependency: conntrack-tools x86_64 1.4.4-4.el7 os 186 k cri-tools x86_64 1.13.0-0 kubernetes 5.1 M kubernetes-cni x86_64 0.7.5-0 kubernetes 10 M libnetfilter_cthelper x86_64 1.0.0-9.el7 os 18 k libnetfilter_cttimeout x86_64 1.0.0-6.el7 os 18 k libnetfilter_queue x86_64 1.0.2-2.el7_2 os 23 k socat x86_64 1.7.3.2-2.el7 os 290 k Transaction Summary 
========================================================================================================================================================== Several important dependency packages are installed as well: socat, cri-tools, kubernetes-cni, and so on. 3. Set the iptables bridge parameters [root@node-1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf > net.bridge.bridge-nf-call-ip6tables = 1 > net.bridge.bridge-nf-call-iptables = 1 > EOF [root@node-1 ~]# sysctl --system (then use sysctl -a | grep to verify the parameters took effect; see the sketch below) 4. Start the kubelet service [root@node-1 ~]# systemctl restart kubelet [root@node-1 ~]# systemctl enable kubelet
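The two bridge-nf-call parameters only exist once the br_netfilter kernel module is loaded (the docker info output above also warns that they are disabled). A minimal verification sketch, assuming a stock CentOS 7 kernel:

```bash
# Load br_netfilter now and on every boot, otherwise the bridge sysctls are missing.
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf

# Re-apply all sysctl configuration files and confirm the values took effect.
sysctl --system
sysctl -a 2>/dev/null | grep bridge-nf-call
# Expected output:
# net.bridge.bridge-nf-call-ip6tables = 1
# net.bridge.bridge-nf-call-iptables = 1
```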
2.2 Load the Installation Images
Download and extract the images required by kubernetes from the network disk (address: https://pan.baidu.com/s/1hw8Q0Vf3xvhKoEiVtMi6SA) into the kubernetes/v1.14.1 directory, then follow these steps:
1. Download the images from the cloud disk, upload them to each node, log in to each node, and load the images into docker (a sketch for scripting this across all nodes follows this block). [root@node-1 v1.14.1]# docker image load -i etcd:3.3.10.tar [root@node-1 v1.14.1]# docker image load -i pause:3.1.tar [root@node-1 v1.14.1]# docker image load -i coredns:1.3.1.tar [root@node-1 v1.14.1]# docker image load -i flannel:v0.11.0-amd64.tar [root@node-1 v1.14.1]# docker image load -i kube-apiserver:v1.14.1.tar [root@node-1 v1.14.1]# docker image load -i kube-controller-manager:v1.14.1.tar [root@node-1 v1.14.1]# docker image load -i kube-scheduler:v1.14.1.tar [root@node-1 v1.14.1]# docker image load -i kube-proxy:v1.14.1.tar 2. Check the image list [root@node-1 v1.14.1]# docker image list REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/kube-proxy v1.14.1 20a2d7035165 3 months ago 82.1MB k8s.gcr.io/kube-apiserver v1.14.1 cfaa4ad74c37 3 months ago 210MB k8s.gcr.io/kube-scheduler v1.14.1 8931473d5bdb 3 months ago 81.6MB k8s.gcr.io/kube-controller-manager v1.14.1 efb3887b411d 3 months ago 158MB quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB k8s.gcr.io/coredns 1.3.1 eb516548c180 6 months ago 40.3MB k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 8 months ago 258MB k8s.gcr.io/pause 3.1 da86e6ba6ca1 19 months ago 742kB
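Loading each tarball by hand on every node is tedious. The following is a minimal sketch that loads all tarballs locally and then pushes them to the worker nodes; the directory path and hostnames are assumptions based on the layout used in this article:

```bash
#!/bin/bash
# Illustrative script: load every image tarball on node-1, then copy the tarballs to the
# worker nodes and load them there as well (assumes passwordless SSH and the paths below).
set -e
IMAGE_DIR=/root/kubernetes/v1.14.1   # assumed location of the unpacked tarballs

for tar in "${IMAGE_DIR}"/*.tar; do
    docker image load -i "${tar}"
done

for node in node-2 node-3; do
    ssh "${node}" "mkdir -p ${IMAGE_DIR}"
    scp "${IMAGE_DIR}"/*.tar "${node}:${IMAGE_DIR}/"
    ssh "${node}" "for tar in ${IMAGE_DIR}/*.tar; do docker image load -i \${tar}; done"
done
```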
2.3 Initialize the Cluster with kubeadm
1. Initialize the cluster with kubeadm. Use --pod-network-cidr to specify the pod network segment; the value depends on the network plugin you choose. This article uses flannel as an example and sets it to 10.244.0.0/16 (if you use a different value, it must stay consistent with the yaml file of the plugin when you later install the network plugin). If multiple container runtimes are installed, the socket file path can be specified with --cri-socket. If the machine has multiple network cards, the master address can be specified with --apiserver-advertise-address; by default the address on the cloud host's default-gateway interface is selected.
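For reference, the same options can also be written as a kubeadm configuration file instead of command-line flags. Below is a hedged sketch using the kubeadm.k8s.io/v1beta1 config API that ships with kubeadm 1.14; the file path is illustrative, and the flag form actually used in this article follows right after.

```bash
# Illustrative config-file equivalent of the kubeadm init flags used below.
cat <<EOF > /root/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.254.100.101
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock    # only relevant when several container runtimes exist
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
networking:
  podSubnet: 10.244.0.0/16               # must match the network plugin manifest (flannel here)
EOF

# kubeadm init --config /root/kubeadm-config.yaml   # alternative to the flag form below
```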
[root@node-1 ~]# kubeadm init --apiserver-advertise-address 10.254.100.101 --apiserver-bind-port 6443 --kubernetes-version 1.14.1 --pod-network-cidr 10.244.0.0/16 [init] Using Kubernetes version: v1.14.1 [preflight] Running pre-flight checks [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09 [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'#Download Mirror [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Activating the kubelet service [certs] Using certificateDir folder "/etc/kubernetes/pki"#Generate certificates such as CA [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [node-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.254.100.101] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [node-1 localhost] and IPs [10.254.100.101 127.0.0.1 ::1] [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [node-1 localhost] and IPs [10.254.100.101 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests"#Generate master node static pod profile [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [apiclient] All control plane components are healthy after 18.012370 seconds [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Skipping phase. 
Please see --experimental-upload-certs [mark-control-plane] Marking the node node-1 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node node-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: r8n5f2.9mic7opmrwjakled [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles#Configure RBAC Authorization [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube #Configure Environment Variable Profile sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: #Install Network Plugins https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: kubeadm join 10.254.100.101:6443 --token r8n5f2.9mic7opmrwjakled \ #Add node command, record first --discovery-token-ca-cert-hash sha256:16e383c8abff6233021331944080087f0514ddd15d96c65d19443b0af02d64ab
The output of the installation command kubeadm init --apiserver-advertise-address 10.254.100.101 --apiserver-bind-port 6443 --kubernetes-version 1.14.1 --pod-network-cidr 10.244.0.0/16 shows the important steps kubeadm performs: downloading images, generating certificates, generating configuration files, and configuring RBAC authorization, followed by guidance on configuring the kubectl environment, installing the network plugin, and joining additional nodes.
2. Generate the kubectl environment configuration file
[root@node-1 ~]# mkdir /root/.kube [root@node-1 ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config [root@node-1 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION node-1 NotReady master 6m29s v1.14.1
3. Add nodes: join the other two nodes to the cluster by copying the kubeadm join command printed above and running it on each node (see the note after the join output below if the token has expired).
[root@node-3 ~]# kubeadm join 10.254.100.101:6443 --token r8n5f2.9mic7opmrwjakled \ > --discovery-token-ca-cert-hash sha256:16e383c8abff6233021331944080087f0514ddd15d96c65d19443b0af02d64ab [preflight] Running pre-flight checks [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09 [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Activating the kubelet service [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details. Run 'kubectl get nodes' on the control-plane to see this node join the cluster. //Similarly, it can be added to the node-2 node and then verified by kubectl get nodes, since network plugin is not installed yet. //All node s show NotReady status: [root@node-1 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION node-1 NotReady master 16m v1.14.1 node-2 NotReady <none> 4m34s v1.14.1 node-3 NotReady <none> 2m10s v1.14.1
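Note that the bootstrap token printed by kubeadm init is only valid for 24 hours by default. If a node is joined later and the token has expired, a new join command can be generated on the master first, for example:

```bash
# Run on node-1: create a fresh bootstrap token and print the complete join command.
kubeadm token create --print-join-command

# List existing tokens and their expiry times.
kubeadm token list
```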
- Install the network plugin. kubernetes supports many kinds of network plugins through CNI (Container Network Interface). The pod network must provide these kinds of connectivity: node-to-node, pod-to-pod, and node-to-pod communication; different CNI plugins support different feature sets. kubernetes supports a variety of open source CNI plugins, such as flannel, calico, canal, and weave. Flannel uses an overlay network model and builds vxlan tunnels to connect the pod networks across nodes. The installation process is as follows:
[root@node-1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml podsecuritypolicy.extensions/psp.flannel.unprivileged created clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.extensions/kube-flannel-ds-amd64 created daemonset.extensions/kube-flannel-ds-arm64 created daemonset.extensions/kube-flannel-ds-arm created daemonset.extensions/kube-flannel-ds-ppc64le created daemonset.extensions/kube-flannel-ds-s390x created
From the above output, you can see that deploying flannel involves RBAC authorization, a configmap, and several daemonsets. The daemonsets cover the various CPU architectures and are all created by default; a typical x86 cluster only needs kube-flannel-ds-amd64. You can either download the yaml from the above url and edit it so that only the kube-flannel-ds-amd64 daemonset remains (a sketch follows below), or delete the unneeded daemonsets afterwards.
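A minimal sketch of the download-and-edit approach, using the same manifest URL as above:

```bash
# Download the flannel manifest so it can be inspected and trimmed before applying.
wget -O kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

# Confirm the pod network in net-conf.json matches the --pod-network-cidr given to kubeadm init.
grep -A 3 'net-conf.json' kube-flannel.yml | grep Network
#       "Network": "10.244.0.0/16",

# Remove the daemonsets for architectures you do not run (arm, arm64, ppc64le, s390x) in a
# text editor if desired, then apply the local copy instead of the remote URL.
kubectl apply -f kube-flannel.yml
```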
1. View the daemonsets installed by flannel [root@node-1 ~]# kubectl get daemonsets -n kube-system NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-flannel-ds-amd64 3 3 3 3 3 beta.kubernetes.io/arch=amd64 2m34s kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 2m34s kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 2m34s kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 2m34s kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 2m34s kube-proxy 3 3 3 3 3 <none> 30m //Delete the unwanted daemonsets [root@node-1 ~]# kubectl delete daemonsets kube-flannel-ds-arm kube-flannel-ds-arm64 kube-flannel-ds-ppc64le kube-flannel-ds-s390x -n kube-system daemonset.extensions "kube-flannel-ds-arm" deleted daemonset.extensions "kube-flannel-ds-arm64" deleted daemonset.extensions "kube-flannel-ds-ppc64le" deleted daemonset.extensions "kube-flannel-ds-s390x" deleted
Verify the nodes again at this point: all nodes now show Ready, and the installation is complete!
[root@node-1 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION node-1 Ready master 29m v1.14.1 node-2 Ready <none> 17m v1.14.1 node-3 Ready <none> 15m v1.14.1
2.4 Configure kubectl command completion
When interacting with kubernetes through kubectl, resource names can be abbreviated or written in full; for example, kubectl get nodes and kubectl get no have the same effect. To work more efficiently, you can enable command completion.
[root@node-1 ~]# kubectl completion bash >/etc/kubernetes/kubectl.sh [root@node-1 ~]# echo "source /etc/kubernetes/kubectl.sh" >>/root/.bashrc [root@node-1 ~]# cat /root/.bashrc # .bashrc # User specific aliases and functions alias rm='rm -i' alias cp='cp -i' alias mv='mv -i' # Source global definitions if [ -f /etc/bashrc ]; then . /etc/bashrc fi source /etc/kubernetes/kubectl.sh //Make the configuration take effect [root@node-1 ~]# source /etc/kubernetes/kubectl.sh //Type kubectl get co on the command line and press TAB to complete automatically [root@node-1 ~]# kubectl get co componentstatuses configmaps controllerrevisions.apps [root@node-1 ~]# kubectl get componentstatuses
3. Verify the installation and service status
- Verify node status
Obtain node You can see the status, role, launch market, version of the [root@node-1 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION node-1 Ready master 46m v1.14.1 node-2 Ready <none> 34m v1.14.1 node-3 Ready <none> 32m v1.14.1 //View node details to see labels, addresses, resource profiles, resource allocations, event log information, and more [root@node-1 ~]# kubectl describe node node-1 Name: node-1 Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=node-1 kubernetes.io/os=linux node-role.kubernetes.io/master= Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"3a:32:d1:a2:ac:e2"} flannel.alpha.coreos.com/backend-type: vxlan flannel.alpha.coreos.com/kube-subnet-manager: true flannel.alpha.coreos.com/public-ip: 10.254.100.101 kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Sat, 10 Aug 2019 17:35:45 +0800 Taints: node-role.kubernetes.io/master:NoSchedule Unschedulable: false Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Sat, 10 Aug 2019 18:22:26 +0800 Sat, 10 Aug 2019 17:35:42 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 10 Aug 2019 18:22:26 +0800 Sat, 10 Aug 2019 17:35:42 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 10 Aug 2019 18:22:26 +0800 Sat, 10 Aug 2019 17:35:42 +0800 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 10 Aug 2019 18:22:26 +0800 Sat, 10 Aug 2019 18:04:26 +0800 KubeletReady kubelet is posting ready status Addresses: InternalIP: 10.254.100.101 Hostname: node-1 Capacity: cpu: 2 ephemeral-storage: 51473868Ki hugepages-2Mi: 0 memory: 3880524Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 47438316671 hugepages-2Mi: 0 memory: 3778124Ki pods: 110 System Info: Machine ID: 0ea734564f9a4e2881b866b82d679dfc System UUID: DA7F90FC-7E95-4570-A0E9-317270B8EE3C Boot ID: 84b9bebb-598b-48ab-b0c4-bbd19d8d566e Kernel Version: 3.10.0-957.el7.x86_64 OS Image: CentOS Linux 7 (Core) Operating System: linux Architecture: amd64 Container Runtime Version: docker://18.3.1 Kubelet Version: v1.14.1 Kube-Proxy Version: v1.14.1 PodCIDR: 10.244.0.0/24 Non-terminated Pods: (5 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system etcd-node-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46m kube-system kube-apiserver-node-1 250m (12%) 0 (0%) 0 (0%) 0 (0%) 46m kube-system kube-controller-manager-node-1 200m (10%) 0 (0%) 0 (0%) 0 (0%) 46m kube-system kube-proxy-x5t6r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47m kube-system kube-scheduler-node-1 100m (5%) 0 (0%) 0 (0%) 0 (0%) 46m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 550m (27%) 0 (0%) memory 0 (0%) 0 (0%) ephemeral-storage 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 47m kubelet, node-1 Starting kubelet. 
Normal NodeHasSufficientMemory 47m (x8 over 47m) kubelet, node-1 Node node-1 status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 47m (x8 over 47m) kubelet, node-1 Node node-1 status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 47m (x7 over 47m) kubelet, node-1 Node node-1 status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 47m kubelet, node-1 Updated Node Allocatable limit across pods Normal Starting 47m kube-proxy, node-1 Starting kube-proxy. Normal NodeReady 18m kubelet, node-1 Node node-1 status is now: NodeReady
- View the component status; the core components of kubernetes include the scheduler, controller-manager, and etcd
[root@node-1 ~]# kubectl get componentstatuses NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health":"true"}
- View the pods. The master components kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and coredns are deployed in the cluster as pods, and kube-proxy on the worker nodes also runs as a pod. These pods are actually managed by controllers such as daemonsets and deployments.
View the status of all pods currently running in the system. [root@node-1 ~]# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-fb8b8dccf-hrqm8 1/1 Running 0 50m coredns-fb8b8dccf-qwwks 1/1 Running 0 50m etcd-node-1 1/1 Running 0 48m kube-apiserver-node-1 1/1 Running 0 49m kube-controller-manager-node-1 1/1 Running 0 49m kube-proxy-lfckv 1/1 Running 0 38m kube-proxy-x5t6r 1/1 Running 0 50m kube-proxy-x8zqh 1/1 Running 0 36m kube-scheduler-node-1 1/1 Running 0 49m //View the list of daemonsets and deployments [root@node-1 ~]# kubectl get ds -n kube-system NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-flannel-ds-amd64 3 3 3 3 3 beta.kubernetes.io/arch=amd64 3m21s kube-proxy 3 3 3 3 3 <none> 58m [root@node-1 ~]# kubectl get deployments -n kube-system NAME READY UP-TO-DATE AVAILABLE AGE coredns 2/2 2 2 58m //From the above output, you can see that flannel and kube-proxy are deployed in the cluster as daemonsets and coredns as a deployment. //However, no controller owns kube-apiserver and the other master components; they are in fact deployed in the cluster as static pods (the sketch below shows how to confirm this on node-1).
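The master components are static pods: the kubelet runs them directly from manifest files on disk, which is why no deployment or daemonset owns them. A quick sketch for confirming this on node-1, using the paths kubeadm creates by default:

```bash
# kubeadm writes the control-plane manifests here; the kubelet watches this directory
# (staticPodPath in its config) and runs each file as a static pod.
ls /etc/kubernetes/manifests/
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests
```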
4. Reference Documents
- Container runtime installation documentation: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
- Kubeadm installation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
- Initialize a kubeadm cluster: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
When your talent cannot yet sustain your ambition, settle down and study.
Back to the kubernetes series tutorial catalog