New features of Kubernetes 1.13
Using kubeadm (GA) to simplify Kubernetes cluster management
Most engineers working with Kubernetes should be able to use kubeadm. It is an important tool for managing the life cycle of a cluster, from creation through configuration to upgrade, and it has now officially reached GA. kubeadm handles the bootstrapping of production clusters on existing hardware and configures the core Kubernetes components in a best-practice manner, providing a secure and simple join process for new nodes and supporting easy upgrades. This GA release is notable for its graduated advanced features, especially pluggability and configurability. kubeadm is intended as a toolkit for administrators and for automated, higher-level systems, and this release is an important step in that direction.
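As a quick illustration of that life cycle (kubeadm is not used in the manual binary deployment below), the typical workflow looks roughly like this; the flag values are only illustrative assumptions:

kubeadm init --pod-network-cidr=10.244.0.0/16   # bootstrap the first control-plane node
kubeadm token create --print-join-command       # print the command new nodes use to join
kubeadm upgrade plan                            # list upgrades available to this cluster
kubeadm upgrade apply v1.13.1                   # upgrade the control plane in place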
Container Storage Interface (CSI) into GA
Container Storage Interface (CSI) is now GA; it was introduced as alpha in v1.9 and beta in v1.10. With CSI, the Kubernetes volume layer becomes truly extensible. It gives third-party storage providers the opportunity to write plug-ins that interoperate with Kubernetes without touching the core code. The specification itself has also reached 1.0 status.
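To show what that extensibility looks like in practice, a cluster administrator only needs to reference a CSI driver by name as the provisioner of a StorageClass; the driver name below is a hypothetical placeholder, not a real plug-in:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-example
provisioner: csi.example.vendor.com   # hypothetical CSI driver name
reclaimPolicy: Delete
volumeBindingMode: Immediate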
CoreDNS is now the default DNS server for Kubernetes
In 1.11 we announced that CoreDNS had reached general availability for DNS-based service discovery. In 1.13, CoreDNS replaces kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides backward-compatible yet extensible integration with Kubernetes. CoreDNS has fewer moving parts than the previous DNS server, because it is a single executable and a single process, and it supports flexible use cases by allowing custom DNS entries. It is also written in Go, which provides memory safety.
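A minimal Corefile sketch of what such a custom entry might look like, assuming the stock kubernetes and forward plugins from the default 1.13 ConfigMap; the hosts entry is a hypothetical example:

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    hosts {
        10.2.8.100 registry.internal   # hypothetical custom DNS entry
        fallthrough
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}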
I. Official Documents
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131
https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational
https://github.com/etcd-io/etcd
https://shengbao.org/348.html
https://github.com/coreos/flannel
http://www.cnblogs.com/blogscc/p/10105134.html
https://blog.csdn.net/xiegh2014/article/details/84830880
https://blog.csdn.net/tiger435/article/details/85002337
https://www.cnblogs.com/wjoyxt/p/9968491.html
https://blog.csdn.net/zhaihaifei/article/details/79098564
http://blog.51cto.com/jerrymin/1898243
http://www.cnblogs.com/xuxinkun/p/5696031.html
II. Download Links
Client Binaries: https://dl.k8s.io/v1.13.1/kubernetes-client-linux-amd64.tar.gz
Server Binaries: https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
Node Binaries: https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
etcd: https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
flannel: https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
III. Partition of Roles
k8s-master1  10.2.8.44  k8s-master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1    10.2.8.65  k8s-node    etcd, kubelet, docker, kube-proxy
k8s-node2    10.2.8.34  k8s-node    etcd, kubelet, docker, kube-proxy
IV. Master deployment
4.1 Download Software
wget https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.13.1/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
4.2 cfssl installation
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
4.3 Create etcd certificates
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
cd /k8s/etcd/ssl/
1) etcd ca configuration
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
2) etcd ca certificate
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
3) etcd server certificate
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "10.2.8.44",
    "10.2.8.65",
    "10.2.8.34"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
4) Generating etcd ca certificates and private keys
Initialize ca
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

[root@elasticsearch01 ssl]# ls
ca-config.json  ca-csr.json  server-csr.json
[root@elasticsearch01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2018/12/26 16:13:54 [INFO] generating a new CA key and certificate from CSR
2018/12/26 16:13:54 [INFO] generate received request
2018/12/26 16:13:54 [INFO] received CSR
2018/12/26 16:13:54 [INFO] generating key: rsa-2048
2018/12/26 16:13:54 [INFO] encoded CSR
2018/12/26 16:13:54 [INFO] signed certificate with serial number 144752911121073185391033754516204538929473929443
[root@elasticsearch01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json
Generate server certificates
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server

[root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2018/12/26 16:18:53 [INFO] generate received request
2018/12/26 16:18:53 [INFO] received CSR
2018/12/26 16:18:53 [INFO] generating key: rsa-2048
2018/12/26 16:18:54 [INFO] encoded CSR
2018/12/26 16:18:54 [INFO] signed certificate with serial number 388122587040599986639159163167557684970159030057
2018/12/26 16:18:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@elasticsearch01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
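Optionally, the generated certificate can be inspected with the cfssl-certinfo tool installed in 4.2 to confirm the SAN hosts and the expiry before it is distributed; a brief check:

cfssl-certinfo -cert server.pem   # shows the subject, the three etcd host IPs and not_after
cfssl-certinfo -cert ca.pem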
4.4 etcd installation
1) Decompression
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
2) Configure the etcd main configuration file
vim /k8s/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://10.2.8.44:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.2.8.44:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.2.8.44:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.2.8.44:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.2.8.44:2380,etcd02=https://10.2.8.65:2380,etcd03=https://10.2.8.34:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
3) Configure etcd startup file
mkdir /data1/etcd
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4) Start up
Note that etcd02 and etcd03 must be configured in the same way before starting.
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
5) Service Inspection
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379" cluster-health

member c21df2258ce015e6 is healthy: got healthy result from https://10.2.8.34:2379
member d427109ed3caf9c3 is healthy: got healthy result from https://10.2.8.44:2379
member ec8c40660d3c1192 is healthy: got healthy result from https://10.2.8.65:2379
cluster is healthy
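Beyond cluster-health, an optional sanity check is to write a test key on one member and read it back from another over the v2 API (the default for etcdctl in this etcd release); the key name below is arbitrary:

cd /k8s/etcd/ssl
/k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.2.8.44:2379" set /sanity/check "ok"
/k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.2.8.65:2379" get /sanity/check
/k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://10.2.8.44:2379" rm /sanity/check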
4.5 Generating Kubernetes Certificates and Private Keys
1) Create the Kubernetes CA certificate
cd /k8s/kubernetes/ssl
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

[root@elasticsearch01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2018/12/27 09:47:08 [INFO] generating a new CA key and certificate from CSR
2018/12/27 09:47:08 [INFO] generate received request
2018/12/27 09:47:08 [INFO] received CSR
2018/12/27 09:47:08 [INFO] generating key: rsa-2048
2018/12/27 09:47:08 [INFO] encoded CSR
2018/12/27 09:47:08 [INFO] signed certificate with serial number 156611735285008649323551446985295933852737436614
[root@elasticsearch01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
2) Create the apiserver certificate
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "10.2.8.44",
    "10.2.8.65",
    "10.2.8.34",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

[root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2018/12/27 09:51:56 [INFO] generate received request
2018/12/27 09:51:56 [INFO] received CSR
2018/12/27 09:51:56 [INFO] generating key: rsa-2048
2018/12/27 09:51:56 [INFO] encoded CSR
2018/12/27 09:51:56 [INFO] signed certificate with serial number 399376216731194654868387199081648887334508501005
2018/12/27 09:51:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@elasticsearch01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
3) Create the kube-proxy certificate
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

[root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2018/12/27 09:52:40 [INFO] generate received request
2018/12/27 09:52:40 [INFO] received CSR
2018/12/27 09:52:40 [INFO] generating key: rsa-2048
2018/12/27 09:52:40 [INFO] encoded CSR
2018/12/27 09:52:40 [INFO] signed certificate with serial number 633932731787505365511506755558794469389165123417
2018/12/27 09:52:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@elasticsearch01 ssl]# ls
ca-config.json  ca-csr.json  ca.pem      kube-proxy-csr.json  kube-proxy.pem  server-csr.json  server.pem
ca.csr          ca-key.pem   kube-proxy.csr  kube-proxy-key.pem  server.csr    server-key.pem
4.6 Deploy the Kubernetes master components
The Kubernetes master node runs the following components:
kube-apiserver
kube-scheduler
kube-controller-manager
kube-scheduler and kube-controller-manager can run in cluster mode: leader election selects one working process while the other processes stand by. Running three master nodes in this way gives a highly available master.
1) Unzip files
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
2) Deployment of kube-apiserver components
Create TLS Bootstrapping Token
[root@elasticsearch01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
f2c50331f07be89278acdaf341ff1ecc

vim /k8s/kubernetes/cfg/token.csv
f2c50331f07be89278acdaf341ff1ecc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Create Apiserver configuration file
vim /k8s/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 \
--bind-address=10.2.8.44 \
--secure-port=6443 \
--advertise-address=10.2.8.44 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
Create the kube-apiserver systemd unit file
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@elasticsearch01 bin]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 14:41:22 CST; 20s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 22060 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─22060 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.2.8.44:2379,https://10.2....
[root@elasticsearch01 bin]# ps -ef |grep kube-apiserver
root     22060     1  5 14:41 ?        00:00:14 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 --bind-address=10.2.8.44 --secure-port=6443 --advertise-address=10.2.8.44 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem
[root@elasticsearch01 bin]# netstat -tulpn |grep kube-apiserve
tcp        0      0 10.2.8.44:6443          0.0.0.0:*               LISTEN      22060/kube-apiserve
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      22060/kube-apiserve
3) Deployment of kube-scheduler components
Create the kube-scheduler configuration file
vim /k8s/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
Remarks on parameters:
--address: listens on 127.0.0.1:10251 for http /metrics requests; kube-scheduler does not currently support serving https;
--kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: cluster mode with leader election enabled; the process elected as leader does the scheduling work while the others stand by.
Create the kube-scheduler systemd unit file
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

[root@elasticsearch01 bin]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 15:16:51 CST; 17s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 29026 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─29026 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
4) Deploy the kube-controller-manager component
Create the kube-controller-manager configuration file
vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
Create the kube-controller-manager systemd unit file
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

[root@elasticsearch01 bin]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 15:19:19 CST; 11s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 29510 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─29510 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=tru..
4.7 Verify the master services
Setting environment variables
vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH

source /etc/profile
View master service status
kubectl get cs,nodes

[root@elasticsearch01 bin]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
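To see the leader election described in 4.6 at work (mainly useful once more than one master is running), an optional check is to look at the leader annotation on the endpoints objects that kube-scheduler and kube-controller-manager use as their lock; this assumes the default endpoints-based election in this version:

kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader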
V. Node deployment
The Kubernetes worker nodes run the following components:
docker
kubelet
kube-proxy
flannel
5.1 Docker Environment Installation
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
5.2 Deployment of kubelet components
kubelet runs on each worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.
At startup, kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors node resource usage.
For security, kubelet opens only the secure port to receive https requests, authenticates and authorizes those requests, and denies unauthorized access (for example from apiserver or heapster).
1) Install binary files
wget https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
2) Copy the relevant certificates to the node
[root@elasticsearch01 ssl]# scp *.pem 10.2.8.65:$PWD
root@10.2.8.65's password:
ca-key.pem            100% 1679   914.6KB/s   00:00
ca.pem                100% 1359     1.0MB/s   00:00
kube-proxy-key.pem    100% 1675     1.2MB/s   00:00
kube-proxy.pem        100% 1403     1.1MB/s   00:00
server-key.pem        100% 1679   809.1KB/s   00:00
server.pem
3) Create kubelet bootstrap kubeconfig file
This is done with a script:
vim /k8s/kubernetes/cfg/environment.sh

#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=f2c50331f07be89278acdaf341ff1ecc
KUBE_APISERVER="https://10.2.8.44:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Execute the script
[root@elasticsearch02 cfg]# sh environment.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@elasticsearch02 cfg]# ls
bootstrap.kubeconfig  environment.sh  kube-proxy.kubeconfig
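Optionally, the generated kubeconfigs can be sanity-checked before use; kubectl config view prints the cluster, user, and context they contain (the embedded certificate data is redacted in the output):

kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig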
4) Create a kubelet parameter configuration template file
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.2.8.65
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
5) Create a kubelet configuration file
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.2.8.65 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
6) Create the kubelet systemd unit file
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
7) Bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
Note: this command connects to localhost:8080 by default, so it must be run on the master.
[root@elasticsearch01 ssl]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
8) Start up services
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
[root@elasticsearch02 cfg]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 17:34:30 CST; 18s ago
 Main PID: 24676 (kubelet)
   Memory: 88.6M
   CGroup: /system.slice/kubelet.service
           └─24676 /k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.2.8.44 --kubeconfig=/k8s/kubernetes...
9) Approve the kubelet CSR request on the master
CSR requests can be approved manually or automatically. The automated approach is recommended because, from v1.8 onwards, certificates issued after CSR approval can be rotated automatically. The manual approval procedure follows below, and a sketch of the automatic approach is given at the end of this step.
View the CSR list
[root@elasticsearch01 ssl]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc   102s   kubelet-bootstrap   Pending
Approve the node's request
[root@elasticsearch01 ssl]# kubectl certificate approve node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc
certificatesigningrequest.certificates.k8s.io/node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc approved
Look at CSR again.
[root@elasticsearch01 ssl]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc   5m13s   kubelet-bootstrap   Approved,Issued
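For the automatic approach mentioned above, a rough sketch: bind the bootstrap group from token.csv and the system:nodes group to Kubernetes' built-in CSR-approving cluster roles, so that initial kubelet CSRs and renewals are approved without manual intervention. Verify that these built-in cluster roles exist in your cluster with kubectl get clusterroles before relying on this.

# auto-approve initial kubelet client CSRs submitted with the bootstrap token group
kubectl create clusterrolebinding auto-approve-csrs-for-group \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:kubelet-bootstrap
# auto-approve client certificate renewals for existing nodes
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes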
5.3 Deployment of kube-proxy components
kube-proxy runs on all worker nodes. It watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance service traffic.
1) Create the kube-proxy configuration file
vim /k8s/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.2.8.65 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
2) Create the kube-proxy systemd unit file
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
3) Start up services
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
[root@elasticsearch02 cfg]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 18:31:42 CST; 11s ago
 Main PID: 5376 (kube-proxy)
   Memory: 40.9M
   CGroup: /system.slice/kube-proxy.service
           ‣ 5376 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.2.8.44 --cluster-cidr=10.254.0.0/...
4) View cluster status
[root@elasticsearch01 cfg]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
10.2.8.65   Ready    <none>   9m15s   v1.13.1
5) Deploy node 10.2.8.34 in the same way and approve its CSR. After approval, a kubelet-client certificate is generated.
Note: if a kubelet or kube-proxy misconfiguration (for example a wrong listen IP or hostname) causes a "node not found" error, delete the kubelet-client certificate, restart the kubelet service, and go through CSR approval again.
[root@elasticsearch03 kubernetes]# ls ssl
ca-key.pem  kubelet-client-2018-12-27-20-13-52.pem  kubelet.crt  kube-proxy-key.pem  server-key.pem
ca.pem      kubelet-client-current.pem              kubelet.key  kube-proxy.pem      server.pem
[root@elasticsearch01 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.2.8.34   Ready    <none>   13h   v1.13.1
10.2.8.65   Ready    <none>   14h   v1.13.1
VI. Flanneld Network Deployment
By default there is no flanneld network, so Pods on different nodes cannot communicate with each other; they can only communicate within the same node. To simplify the deployment steps, flanneld is installed at this later stage.
The flannel service must start before docker. On startup, flannel performs the following steps:
Get the network configuration from etcd
Allocate a subnet for the node and register it in etcd
Record the subnet information in /run/flannel/subnet.env
6.1 etcd registration segment
[root@elasticsearch02 cfg]# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379" set /k8s/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}
The flanneld version used here (v0.10.0) does not support the etcd v3 API, so the etcd v2 API is used to write the configuration key and subnet data.
The Pod network ${CLUSTER_CIDR} written here must be a /16 address range and must match the --cluster-cidr parameter of kube-controller-manager.
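A quick way to confirm the key was written (and, once flanneld is running on the nodes, to see the per-node subnet leases) is to read it back with the same etcdctl TLS flags:

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379" get /k8s/network/config
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379" ls /k8s/network/subnets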
6.2 flannel Installation
1) Unpack and install
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
2) Configure flanneld
vim /k8s/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"
Create the flanneld systemd unit file
vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
Notes:
The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env (as configured with -d above); docker reads this file at startup to configure the docker0 bridge.
flanneld communicates with other nodes over the interface that carries the system default route. For nodes with multiple network interfaces (such as an intranet and a public network), the communication interface can be specified with the -iface parameter.
flanneld must run with root privileges.
3) Configure Docker to start the specified subnet
Add EnvironmentFile=/run/flannel/subnet.env and change ExecStart to /usr/bin/dockerd $DOCKER_NETWORK_OPTIONS:
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
4) Start up services
Note: stop docker and the related kubelet before starting flannel so that flannel can take over the docker0 bridge.
systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl restart kubelet
systemctl restart kube-proxy
5) Verification Service
[root@elasticsearch02 bin]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=10.254.35.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.35.1/24 --ip-masq=false --mtu=1450"
[root@elasticsearch02 bin]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 52:54:00:a4:ca:ff brd ff:ff:ff:ff:ff:ff
    inet 10.2.8.65/24 brd 10.2.8.255 scope global eth0
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:06:0a:ab:32 brd ff:ff:ff:ff:ff:ff
    inet 10.254.35.1/24 brd 10.254.35.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether 72:59:dc:2b:0a:21 brd ff:ff:ff:ff:ff:ff
    inet 10.254.35.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
[root@elasticsearch01 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.2.8.34   Ready    <none>   16h   v1.13.1
10.2.8.65   Ready    <none>   18h   v1.13.1
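As an optional end-to-end check of the overlay network, start a couple of test pods and reach one pod's IP from the other node; the image name and replica count below are only illustrative:

kubectl run nginx --image=nginx --replicas=2 --port=80
kubectl get pods -o wide               # note each pod's IP and the node it landed on
# from 10.2.8.65, ping (or curl) the IP of a pod running on 10.2.8.34
ping -c 3 <pod-ip-on-the-other-node>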