Start a secure kubernetes cluster using kubeadm
Official Document Reference Links: https://kubernetes.io/docs/setup/independent/install-kubeadm/
Environment preparation
CentOS 7
At least 2 GB of memory
At least two CPUs
Network connectivity for each machine in the cluster (firewall, selinux, NetworkManager turned off)
Each node has a unique hostname, MAC address, and product UUID
The required ports are not occupied
Disable swap or kubelet will not work properly
How to verify that each node's MAC address and product_uuid are unique
Use ip link or ifconfig -a to get the MAC address of the network interfaces
Use the sudo cat /sys/class/dmi/id/product_uuid command to check the product UUID
(A UUID is a number generated on one machine that is guaranteed to be unique across all machines in the same space and time; on Linux it is stored in the file /sys/class/dmi/id/product_uuid)
Kubernetes uses these values to uniquely identify the nodes in the cluster. If these values are not unique on every node, the installation process may fail.
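As a quick check, the following can be run on every node and the results compared across nodes (a minimal sketch; the awk filter is just one convenient way to extract the MAC addresses):

# Print the MAC address of each network interface
ip link show | awk '/link\/ether/ {print $2}'
# Print the product UUID
sudo cat /sys/class/dmi/id/product_uuid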
Dependent Ports

Master node:

Protocol | Direction | Port Range | Purpose
--- | --- | --- | ---
TCP | Inbound | 6443* | Kubernetes API server
TCP | Inbound | 2379-2380 | etcd (can be an external etcd cluster or a custom port)
TCP | Inbound | 10250 | kubelet API
TCP | Inbound | 10251 | kube-scheduler
TCP | Inbound | 10252 | kube-controller-manager
TCP | Inbound | 10255 | Read-only kubelet API

Node (worker) node:

Protocol | Direction | Port Range | Purpose
--- | --- | --- | ---
TCP | Inbound | 10250 | kubelet API
TCP | Inbound | 10255 | Read-only kubelet API
TCP | Inbound | 30000-32767 | NodePort Services**
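Before starting, it is worth confirming that none of these ports is already in use. A minimal check using ss (the port list matches the Master table above; no output means the ports are free, and on worker nodes the 30000-32767 NodePort range should be checked the same way):

# Look for listeners on the ports kubeadm and the kubelet need
ss -lnt | grep -E ':(6443|2379|2380|10250|10251|10252|10255)\s'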
Start the actual deployment
Master node IP: 192.168.214.166
Node1 node IP: 192.168.214.167
Node2 node IP: 192.168.214.168
All nodes set the hostname and add host resolution entries
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# bash
[root@master ~]# vim /etc/hosts
192.168.214.166 master
192.168.214.167 node1
192.168.214.168 node2
All nodes disable the firewall, SELinux, and NetworkManager
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# sed -i "s/SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config
[root@master ~]# setenforce 0
[root@master ~]# systemctl stop NetworkManager
[root@master ~]# systemctl disable NetworkManager
All nodes disable swap
[root@master ~]# swapoff -a
[root@master ~]# sed -i '/^.*swap.*/d' /etc/fstab
All nodes install docker
# Remove any existing Docker packages from the machine
[root@master ~]# yum remove docker docker-common container-selinux docker-selinux docker-engine-selinux docker-ce docker-ee docker-engine
# Install yum-utils (which provides yum-config-manager), plus device-mapper-persistent-data and lvm2, the two packages required for device-mapper storage
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum makecache
[root@master ~]# yum install docker-ce -y
Note: To keep Docker's cgroup driver consistent with the kubelet's cgroup driver, we specify the driver as systemd here.
Modify Docker startup parameters
sed -i 's#ExecStart=.*#ExecStart=/usr/bin/dockerd -s overlay2 --storage-opt overlay2.override_kernel_check=true --exec-opt native.cgroupdriver=systemd --log-driver=json-file --log-opt max-size=100m --log-opt max-file=10#g' /usr/lib/systemd/system/docker.service
sed -i '/ExecStartPost=.*/d' /usr/lib/systemd/system/docker.service
sed -i '/ExecStart=.*/aExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
sed -i '/Environment=.*/d' /usr/lib/systemd/system/docker.service
# Set the log driver to json-file, with a maximum log size of 100m and at most 10 rotated log files retained
# Set the storage driver to overlay2
# Set the cgroup driver to systemd
All nodes start Docker and enable it to start at boot
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl start docker.service
[root@node2 ~]# systemctl enable docker.service
Verify that Docker has started and the parameters have taken effect
[root@master ~]# ps -ef | grep docker
root  3458     1  0 06:56 ?      00:00:00 /usr/bin/dockerd -s overlay2 --storage-opt overlay2.override_kernel_check=true --exec-opt native.cgroupdriver=systemd --log-driver=json-file --log-opt max-size=100m --log-opt max-file=10
root  3477  3458  0 06:56 ?      00:00:01 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
root  3615  1139  0 07:00 pts/0  00:00:00 grep --color=auto docker
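Besides reading the process arguments, the effective storage and cgroup drivers can also be confirmed through docker info (a quick extra check, not part of the original steps; given the flags above it should report overlay2 and systemd):

[root@master ~]# docker info 2>/dev/null | grep -iE 'storage driver|cgroup driver'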
All nodes install kubelet, kubeadm, kubectl
kubelet runs on every cluster node and is responsible for starting Pods and containers.
kubeadm is used to initialize Cluster.
kubectl is the Kubernetes command line tool. With kubectl you can deploy and manage applications, view resources, and create, delete, and update components.
kubeadm does not install kubelet or kubectl for us, so we install the matching versions of these packages ourselves.
The roles of concepts, components, and term meanings in the kubernetes cluster can be found in my previous articles. https://blog.51cto.com/13210651/2354770
Note: In a cluster started with kubeadm, the Master node components are managed by the kubelet; that is, kube-scheduler, kube-apiserver, kube-controller-manager, kube-proxy, and flannel all run as containers.
[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
[root@master ~]# yum repolist          # check that the repository configuration is correct
[root@master ~]# yum install kubelet kubeadm kubectl -y
[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests              # manifest directory
/etc/sysconfig/kubelet                 # configuration file
/etc/systemd/system/kubelet.service    # systemd unit file
/usr/bin/kubelet                       # main program
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
All Nodes Modify Kernel Parameters
[root@master ~]# sed -i '/net.bridge.bridge-nf-call-iptables/d' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '/net.bridge.bridge-nf-call-ip6tables/d' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '$a net.bridge.bridge-nf-call-iptables = 1' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sed -i '$a net.bridge.bridge-nf-call-ip6tables = 1' /usr/lib/sysctl.d/00-system.conf
[root@master ~]# sysctl --system
[root@master ~]# [ -f /proc/sys/fs/may_detach_mounts ] && sed -i "/fs.may_detach_mounts/ d" /etc/sysctl.conf
[root@master ~]# [ -f /proc/sys/fs/may_detach_mounts ] && echo "fs.may_detach_mounts=1" >> /etc/sysctl.conf
[root@master ~]# sysctl -p
fs.may_detach_mounts = 1
Note: /proc/sys/fs/may_detach_mounts is a parameter introduced in CentOS 7.4 to control kernel behavior. It defaults to 0 and needs to be set to 1 when containers are running on the system. https://bugzilla.redhat.com/show_bug.cgi?id=1441737
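To confirm that the bridge parameters actually took effect, they can simply be read back (a quick verification; if the keys do not exist yet, load the br_netfilter module first with modprobe br_netfilter):

[root@master ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# Both values should be 1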
Configure the kubelet startup parameters (note distinguishing node names)
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cluster-dns=172.17.0.10 --cluster-domain=cluster.local --hostname-override=master --provider-id=master --pod_infra_container_image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 --max-pods=40 --cert-dir=/var/lib/kubelet/pki --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --root-dir=/var/lib/kubelet --authentication-token-webhook --resolv-conf=/etc/resolv.conf --rotate-certificates --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,CustomPodDNS=true --pod-manifest-path=/etc/kubernetes/manifests"
[root@master ~]# systemctl daemon-reload
--cluster-dns=172.17.0.10 # List of DNS server IP addresses. For Pods with "dnsPolicy=ClusterFirst", this value is used as the container's DNS server.
Note: all DNS servers in the list must serve the same set of records, otherwise name resolution in the cluster may not work properly. There is no guarantee which DNS server will be contacted for name resolution.
--cluster-domain=cluster.local # Domain name of the cluster. If set, the kubelet configures all containers to search this domain in addition to the host's search domains.
--hostname-override=master # If not empty, use this string as the node identity instead of the actual hostname. If --cloud-provider is set, the cloud provider determines the node name.
--provider-id=master # Unique identifier used to identify the node in the machine database, i.e. the cloud provider.
--pod_infra_container_image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 # Specifies the image of the pause container used by Kubernetes.
Reference link: http://www.itboth.com/d/MrQjym/nginx-kubernetes
--max-pods=40 # Maximum number of Pods.
--cert-dir=/var/lib/kubelet/pki # Directory where the TLS certificates are located. This parameter is ignored if --tls-cert-file and --tls-private-key-file are provided.
--network-plugin=cni # <Warning: Alpha feature> Network plugin invoked by the kubelet at various points in a Pod's life cycle. Only valid when container-runtime is set to docker.
--cni-conf-dir=/etc/cni/net.d # <Warning: Alpha feature> Directory to search for the CNI configuration file. Only valid when container-runtime is set to docker.
--cni-bin-dir=/opt/cni/bin # <Warning: Alpha feature> List of directories to search for CNI plugin binaries. Only valid when container-runtime is set to docker.
--root-dir=/var/lib/kubelet # Path to the kubelet's working directory. Some volumes will live under this directory, so it should be placed on a large disk.
--authentication-token-webhook # Use the TokenReview API to authenticate bearer tokens.
--resolv-conf=/etc/resolv.conf # If a Pod's dnsPolicy is set to Default, it inherits the name resolution configuration from the node the Pod runs on.
--feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,CustomPodDNS=true --rotate-certificates
# <Warning: Beta feature> Enable client and server certificate rotation: when a certificate is about to expire, request a new certificate from the kube-apiserver and automatically update the kubelet client certificate.
--pod-manifest-path=/etc/kubernetes/manifests # Path to the static Pod manifests.
Up to this point all nodes are configured the same: SELinux, firewall, and swap disabled; Docker installed and configured; kubelet installed and configured; kernel parameters modified.
Next, deploy the Master
To avoid problems caused by blocked network access, we pull the required images manually in advance and re-tag them:
Prepare on the Master:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

[root@master ~]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.13.1         fdb321fd30a0   2 months ago    80.2MB
k8s.gcr.io/kube-apiserver            v1.13.1         40a63db91ef8   2 months ago    181MB
k8s.gcr.io/kube-scheduler            v1.13.1         ab81d7360408   2 months ago    79.6MB
k8s.gcr.io/kube-controller-manager   v1.13.1         26e6f1db2a52   2 months ago    146MB
k8s.gcr.io/coredns                   1.2.6           f59dcacceff4   3 months ago    40MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   5 months ago    220MB
quay.io/coreos/flannel               v0.10.0-amd64   f0fad859c909   13 months ago   44.6MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   14 months ago   742kB

Prepare on the Nodes:

docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
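Since the pull/tag/rmi sequence is repetitive, it can optionally be scripted. A minimal sketch of a loop covering the Master image list above (same image names and versions as above; nothing new is pulled):

#!/bin/bash
# Pull each image from the mirror registry, re-tag it under k8s.gcr.io, then drop the mirror tag
images="kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24"
for img in $images; do
    docker pull mirrorgooglecontainers/$img
    docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img
    docker rmi mirrorgooglecontainers/$img
done
# coredns and flannel come from different repositories, so handle them separately
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker rmi coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64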
Initialize Master with kubeadm
--apiserver-advertise-address specifies which IP the Master node uses to communicate with the other nodes in the cluster. If not specified, the IP address of the interface holding the default gateway is used.
--apiserver-bind-port specifies the port the Master node's API server listens on.
--pod-network-cidr specifies the range of the Pod network. How this parameter is used depends on the network plugin; this article uses the classic flannel network plugin.
--service-cidr specifies the range of the Service network. Note that it must contain the cluster DNS address specified in the kubelet configuration.
--service-dns-domain specifies the DNS domain used by the internal Kubernetes Service network. It must match the domain specified in the kubelet configuration.
--token-ttl specifies the token (certificate) expiration time.
[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.214.166 --apiserver-bind-port=6443 --pod-network-cidr=172.16.0.0/16 --service-cidr=172.17.0.0/16 --service-dns-domain=cluster.local --token-ttl=2400h0m0s --kubernetes-version=v1.13.1
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.214.166 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.214.166 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.17.0.1 192.168.214.166]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.508526 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: n6312v.ewq7swb59ceu2fce
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.214.166:6443 --token n6312v.ewq7swb59ceu2fce --discovery-token-ca-cert-hash sha256:25369a6cbe5abc31a3177d28e302c9ea9236766e4447051ad259bef6c209df67
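For reference, the same options can also be expressed as a kubeadm configuration file instead of command-line flags. A minimal sketch, assuming the v1beta1 kubeadm API shipped with kubeadm 1.13 (not used in this deployment, shown only as an alternative):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.214.166
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 172.16.0.0/16
  serviceSubnet: 172.17.0.0/16
  dnsDomain: cluster.local
EOF
# kubeadm init --config kubeadm-config.yaml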
Note:
kubeadm performs pre-flight checks: Docker version, kubelet status, swap, and so on. [preflight]
Generate the CA certificate and key. [certs]
Generate the apiserver certificate and key.
Generate the other certificates and keys, stored in /etc/kubernetes/pki.
Generate kubeconfig files in /etc/kubernetes/ for the components that need to communicate with the Master, e.g. kubelet (kubelet.conf) and kubectl (admin.conf). [kubeconfig]
Generate the manifests in /etc/kubernetes/manifests/; the kubelet uses these yaml files to start the Master node components.
Add the label node-role.kubernetes.io/master="" and the corresponding taint to the Master node so that it does not take part in ordinary Pod scheduling.
Configure the RBAC-related rules.
Install the essential add-ons CoreDNS and kube-proxy.
Initialization succeeded, with some usage hints.
Also note that the token [bootstraptoken] needs to be saved for when nodes join the cluster; it is not displayed again later. If you forget it, view the existing tokens with kubeadm token list or create a new one with kubeadm token create on the Master node.
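If the whole join command is lost, it can be regenerated on the Master. A short sketch (the token subcommand prints a ready-to-use join command, and the openssl pipeline recomputes the CA certificate hash):

# List existing bootstrap tokens
kubeadm token list
# Create a new token and print a complete join command
kubeadm token create --print-join-command
# Recompute the --discovery-token-ca-cert-hash value manually if needed
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'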
If a swap-related error occurs during execution, add the parameter --ignore-preflight-errors=Swap.
If you need to re-run kubeadm init, it is best to clean up both the previous initialization and the container services first, as follows:
(1) kubeadm reset
(2) systemctl stop kubelet
(3) docker stop $(docker ps -qa) && docker rm $(docker ps -qa)
    # Do not use this command if other services run on this Docker host; in that case locate the Kubernetes-related containers manually and delete only those
(4) systemctl start kubelet
(5) kubeadm init
Configure kubectl
kubectl is a command line tool for managing Kubernetes Cluster.
Configure kubectl on Master using the root user by executing the following commands:
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile [root@master ~]# source /etc/profile [root@master ~]# echo $KUBECONFIG /etc/kubernetes/admin.conf ##This error occurred during installation because the above parameters were not configured, /etc/kubernetes/admin.conf is the file used primarily to pass parameters during cluster initialization The connection to the server localhost:8080 was refused - did you specify the right host or port?
Configure kubectl autocompletion:
Since version 1.3, kubectl has provided a completion subcommand that can be used for autocompletion.
# yum install -y bash-completion
# locate bash_completion      # if the locate command is missing, install it with: yum -y install mlocate
/usr/share/bash-completion/bash_completion
# source /usr/share/bash-completion/bash_completion
# source <(kubectl completion bash)
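The two source commands only affect the current shell; to make completion persistent across logins, they can be appended to the shell profile (a small convenience, assuming bash):

echo 'source /usr/share/bash-completion/bash_completion' >> ~/.bashrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc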
View component status using kubectl
[root@master ~]# kubectl get componentstatuses    # abbreviated: kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
Look at the Pods in the cluster: the CoreDNS Pods are Pending while the Master components are Running
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8r2tf          0/1     Pending   0          5h57m
kube-system   coredns-86c58d9df4-nltw6          0/1     Pending   0          5h57m
kube-system   etcd-master                       1/1     Running   0          5h57m
kube-system   kube-apiserver-master             1/1     Running   0          5h57m
kube-system   kube-controller-manager-master    1/1     Running   0          5h56m
kube-system   kube-proxy-xcpcg                  1/1     Running   0          5h57m
kube-system   kube-scheduler-master             1/1     Running   0          5h57m
For the Kubernetes cluster to work, a Pod network must be installed; otherwise Pods cannot communicate with each other.
Although the core components above are in the Running state, they are not running on the Pod network (the Pod network has not been created yet) but on the host network. Take kube-apiserver as an example and verify this:
[root@master ~]# kubectl get pods -n kube-system kube-apiserver-master
NAME                    READY   STATUS    RESTARTS   AGE
kube-apiserver-master   1/1     Running   3          1d
# Find the container id of kube-apiserver
[root@master ~]# docker ps | grep apiserver
c120c761b764   9df3c00f55e6   "kube-apiserver --..."   33 minutes ago
# View the network mode of the corresponding container
[root@master ~]# docker inspect c120c761b764
"NetworkMode": "host",
Next install the Pod network
[root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
[root@master ~]# sed -i 's#"Network": "10.244.0.0/16",#"Network": "172.16.0.0/16",#g' kube-flannel.yml
[root@master ~]# sed -i 's#quay.io/coreos/flannel:v0.9.1-amd64#quay.io/coreos/flannel:v0.10.0-amd64#g' kube-flannel.yml

## kube-flannel.yml file for reference
[root@master ~]# vim kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conf
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name

Note: the Network value in the flannel file must match the Pod network specified with kubeadm init; here we set 172.16.0.0/16 (the default is 10.244.0.0/16). The flannel version is changed to v0.10.0 to match the image downloaded earlier.
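The edited manifest then has to be applied and the flannel Pods watched until they are Running; this apply step is not shown in the original capture, so the command below is the presumed next step:

[root@master ~]# kubectl apply -f kube-flannel.yml
[root@master ~]# kubectl get pods -n kube-system -w    # wait until the kube-flannel and coredns Pods are Running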
Check whether the CoreDNS Pods are now running properly; if so, the Master installation is successful
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8r2tf          1/1     Running   0          6h16m
kube-system   coredns-86c58d9df4-nltw6          1/1     Running   0          6h16m
kube-system   etcd-master                       1/1     Running   0          6h15m
kube-system   kube-apiserver-master             1/1     Running   0          6h15m
kube-system   kube-controller-manager-master    1/1     Running   0          6h15m
kube-system   kube-flannel-ds-amd64-b9kfs       1/1     Running   0          8m49s
kube-system   kube-proxy-xcpcg                  1/1     Running   0          6h16m
kube-system   kube-scheduler-master             1/1     Running   0          6h15m
Check the status of the Master node; we can see that it is Ready
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   6h23m   v1.13.3

# Validate the API server from the command line
[root@master ~]# curl --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key https://192.168.214.166:6443
Add the Nodes to the cluster
On each node, execute the last command from the output returned when the Master was initialized
[root@node1 ~]# kubeadm join 192.168.214.166:6443 --token n6312v.ewq7swb59ceu2fce --discovery-token-ca-cert-hash sha256:25369a6cbe5abc31a3177d28e302c9ea9236766e4447051ad259bef6c209df67
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.214.166:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.214.166:6443"
[discovery] Requesting info from "https://192.168.214.166:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.214.166:6443"
[discovery] Successfully established connection with API Server "192.168.214.166:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
View the cluster status; the cluster has been deployed successfully
[root@master my.conf]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   12h    v1.13.3
node1    Ready    <none>   28m    v1.13.3
node2    Ready    <none>   153m   v1.13.3

[root@master my.conf]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE    IP                NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-8r2tf          1/1     Running   0          12h    172.16.0.3        master   <none>           <none>
kube-system   coredns-86c58d9df4-nltw6          1/1     Running   0          12h    172.16.0.2        master   <none>           <none>
kube-system   etcd-master                       1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-apiserver-master             1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-controller-manager-master    1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-flannel-ds-dms4z             1/1     Running   0          136m   192.168.214.166   master   <none>           <none>
kube-system   kube-flannel-ds-gf4zk             1/1     Running   6          28m    192.168.214.167   node1    <none>           <none>
kube-system   kube-flannel-ds-wfbh5             1/1     Running   2          136m   192.168.214.168   node2    <none>           <none>
kube-system   kube-proxy-d486m                  1/1     Running   0          28m    192.168.214.167   node1    <none>           <none>
kube-system   kube-proxy-qpntl                  1/1     Running   0          154m   192.168.214.168   node2    <none>           <none>
kube-system   kube-proxy-xcpcg                  1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
kube-system   kube-scheduler-master             1/1     Running   0          12h    192.168.214.166   master   <none>           <none>
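As an optional smoke test of the new cluster, a small Deployment can be created and exposed through a NodePort Service (a hedged sketch; the nginx image and the names are arbitrary examples, not part of the original deployment):

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods,svc -o wide        # note the NodePort assigned from the 30000-32767 range
curl http://192.168.214.167:<nodeport>    # replace <nodeport> with the port shown above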
Tear down the cluster
Delete node
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

[root@master ~]# kubectl drain node1 --delete-local-data --force --ignore-daemonsets
node/node1 cordoned
WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-qmdxs, kube-proxy-rzcpr
node/node1 drained
[root@master ~]# kubectl delete node node1
node "node1" deleted
After removing the nodes, we can reset the cluster by executing the following commands:
[root@master ~]# kubeadm reset
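On the removed node itself, the kubeadm state can also be cleaned up before the machine is reused or rejoined (a sketch; the paths follow the defaults used earlier in this article):

# Run on the node that was removed
kubeadm reset
systemctl stop kubelet
# Optionally clear the CNI configuration and the flannel interfaces left behind
rm -rf /etc/cni/net.d
ip link delete cni0 2>/dev/null
ip link delete flannel.1 2>/dev/null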