K8S 1.14 High Availability Production Cluster Deployment Guide

Keywords: Kubernetes SSL kubelet JSON

System Description

System Component Version

  • Operating System: CentOS 7.6
  • Kernel: 4.4
  • Kubernetes: v1.14.1
  • Docker: 18.09 (supported versions: 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09)
  • Etcd: v3.3.12
  • Flannel: v0.11
  • cni-plugins: v0.7.5
  • CoreDNS: 1.4.0

Schematic diagram

Architecture description:

  1. Six hosts are used: three Master nodes and three Node nodes.
  2. The Kubernetes components deployed on the Master nodes are kube-apiserver, kube-scheduler, kube-controller-manager, and kube-proxy, together with the network component Flannel and the Etcd data storage cluster.
  3. Two of the Master nodes also act as HA nodes and run HAProxy and Keepalived.
  4. The Kubernetes components deployed on the Node nodes are kubelet and kube-proxy, together with the container runtime Docker and the network component Flannel.
  5. Cluster IP and hostname information:
Cluster Role    Hostname      IP
Master          master-1      192.168.20.44
Master          master-2      192.168.20.45
Master          master-3      192.168.20.46
Node            k8s-node-1    192.168.20.47
Node            k8s-node-2    192.168.20.48
Node            k8s-node-3    192.168.20.49
  6. An available Ceph cluster is required for storage.

System Initialization

1. Host Initialization

Install the CentOS7 system and do the following:

  • Turn off firewalld and SELinux.
  • Update system packages with yum update.
  • Install the ELRepo repository, update the kernel to version 4.4 or above, and reboot for the new kernel to take effect.
  • Set each host's hostname and add the name resolution to the local hosts file.
  • Install the NTP service.
  • Set kernel parameters.

When setting the kernel parameters, note that the HA Master nodes additionally need net.ipv4.ip_nonlocal_bind = 1:

# High Availability Master Node Setting Kernel Parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.ipv4.ip_nonlocal_bind = 1    
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_port_range = 10000 65000
fs.file-max = 2000000
vm.swappiness = 0
EOF

# Other Master and Compute Nodes Set Kernel Parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_port_range = 10000 65000
fs.file-max = 2000000
vm.swappiness = 0
EOF
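These settings do not take effect until they are loaded. A quick way to apply them immediately (assuming the file path above; the br_netfilter module is needed for the net.bridge.* keys):

# Load the bridge netfilter module required by the net.bridge.* settings
modprobe br_netfilter
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf   # also load it on boot
# Apply all sysctl configuration files, including /etc/sysctl.d/k8s.conf
sysctl --system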

2. Install Docker

Since the Docker versions supported by Kubernetes 1.14 are 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09, Docker 18.09 is used uniformly here.

Download the docker-ce-18.09 rpm package and the docker-ce.repo file from the Aliyun mirror, then install directly on all nodes:

mv docker-ce.repo /etc/yum.repos.d/
yum install docker-ce-18.09.5-3.el7.x86_64.rpm -y

Start Docker on all nodes and enable it to start on boot:

systemctl start docker
systemctl enable docker

3. Create a directory

Execute the following command on all hosts to create the required directories:

mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}

4. Prepare the Kubernetes package

Download the kubernetes version 1.14 binary package from github at: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.14.md#server-binaries

Download the following packages:

[root@master-1 tmp]# ll
total 537520
-rw-r--r-- 1 root root 113938518 Jul 24 19:15 kubernetes-node-linux-amd64.tar.gz
-rw-r--r-- 1 root root 433740362 Jul 24 19:09 kubernetes-server-linux-amd64.tar.gz

Extract the archive:

tar xf kubernetes-server-linux-amd64.tar.gz

5. Prepare etcd and flannel components

Download etcd v3.3.12 and flannel v0.11.0 from github:

wget https://github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

Create the CA Certificate

Kubernetes supports generating certificates in several ways, for example with easyrsa, openssl, or cfssl (see the Kubernetes certificates documentation for details).

Here cfssl is used to create the CA certificate.

1. Install CFSSL

Generating CA certificates using cfssl requires a separate installation of cfssl.

[root@master-1 ~]# cd /usr/local/src/

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /opt/kubernetes/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /opt/kubernetes/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /opt/kubernetes/bin/cfssl-certinfo
chmod +x /opt/kubernetes/bin/*

On all nodes, add the Kubernetes bin directory to the PATH environment variable:

echo 'PATH=$PATH:/opt/kubernetes/bin' >>/etc/profile
source /etc/profile
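To confirm that cfssl is installed and resolvable through the new PATH, a quick check:

[root@master-1 ~]# cfssl version     # prints the cfssl release information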

2. Generate certificates

  1. Create the required configuration files:
[root@master-1 ~]# cd /opt/kubernetes/ssl/
[root@master-1 ssl]# cfssl  print-defaults config > config.json
[root@master-1 ssl]# cfssl print-defaults csr > csr.json
[root@master-1 ssl]# ll
total 8
-rw-r--r-- 1 root root 567 Jul 26 00:05 config.json
-rw-r--r-- 1 root root 287 Jul 26 00:05 csr.json
[root@master-1 ssl]# mv config.json ca-config.json
[root@master-1 ssl]# mv csr.json  ca-csr.json
  2. Modify the generated files as follows:
    The ca-config.json file:
[root@master-1 ssl]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

The ca-csr.json file:

[root@master-1 ssl]# vim ca-csr.json 
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
    ]
}
  3. Generate the CA certificate (ca.pem) and private key (ca-key.pem):
[root@master-1 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2018/07/26 00:27:00 [INFO] generating a new CA key and certificate from CSR
2018/07/26 00:27:00 [INFO] generate received request
2018/07/26 00:27:00 [INFO] received CSR
2018/07/26 00:27:00 [INFO] generating key: rsa-2048
2018/07/26 00:27:01 [INFO] encoded CSR
2018/07/26 00:27:01 [INFO] signed certificate with serial number 479065525331838190845576195908271097044538206777
[root@master-1 ssl]# ll
total 20
-rw-r--r-- 1 root root  386 Jul 26 00:16 ca-config.json
-rw-r--r-- 1 root root 1001 Jul 26 00:27 ca.csr
-rw-r--r-- 1 root root  255 Jul 26 00:20 ca-csr.json
-rw------- 1 root root 1679 Jul 26 00:27 ca-key.pem
-rw-r--r-- 1 root root 1359 Jul 26 00:27 ca.pem
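Optionally, the generated CA certificate can be inspected with cfssl-certinfo (installed earlier) to confirm its subject and expiry before distributing it:

[root@master-1 ssl]# cfssl-certinfo -cert ca.pem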
  4. Distribute the certificates to the other nodes:
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.45:/opt/kubernetes/ssl
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.46:/opt/kubernetes/ssl
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.47:/opt/kubernetes/ssl
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.48:/opt/kubernetes/ssl
[root@master-1 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json 192.168.20.49:/opt/kubernetes/ssl

HA Node Deployment

Two of the Master nodes are selected to deploy HAProxy and Keepalived, and a script that monitors the HAProxy process is added to Keepalived.

keepalived configuration

  1. Install Keepalived on both HA nodes:
yum install keepalived -y
  2. Configure two virtual IPs, one for the kube-apiserver proxy of the Kubernetes cluster and the other for the Nginx ingress entry (these can also be configured separately), and add a health check for HAProxy so that the VIPs fail over to the other node when the HAProxy process dies. The configuration on the primary HA node is as follows:
# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 3
        weight -20
}

vrrp_instance K8S {
    state backup 
    interface eth0
    virtual_router_id 44
    priority 200
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.20.50
        192.168.20.60

    }
    track_script {
        check_haproxy
   }

}
  3. The configuration on the secondary HA node is as follows:
! Configuration File for keepalived

vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 3
        weight -20
}

vrrp_instance K8S {
    state backup
    interface eth0
    virtual_router_id 44
    priority 190
    advert_int 5
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.20.50
        192.168.20.60

    }
    track_script {
        check_haproxy
   }
}
  4. Create the corresponding health-check script on both nodes:
vim /etc/keepalived/check_haproxy.sh

#!/bin/bash
# keepalived health check: exit 0 while haproxy is listening, exit 1 otherwise
# (keepalived then lowers this node's priority so the VIPs move to the peer)
active_status=$(netstat -lntp | grep haproxy | wc -l)
if [ "$active_status" -gt 0 ]; then
    exit 0
else
    exit 1
fi
  5. Make the script executable:
chmod +x /etc/keepalived/check_haproxy.sh

Deploy HAProxy

For details, refer to the official HAProxy configuration manual.

  1. You need to confirm that the kernel parameters have been configured:
echo 'net.ipv4.ip_nonlocal_bind = 1'>>/etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1'>>/etc/sysctl.conf

sysctl -p
  2. Install HAProxy:
yum install haproxy -y
  3. Configure HAProxy. The VIP 192.168.20.50 is used for the Kubernetes cluster as a layer-4 (TCP) proxy; the configuration file is as follows:
# cat /etc/haproxy/haproxy.cfg |egrep -v "^#"

global

    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp           # Change the default mode to tcp (layer-4 proxy)
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  main 192.168.20.50:6443
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    default_backend             k8s-node

backend k8s-node
    mode        tcp             # Modify to tcp
    balance     roundrobin
    server  k8s-node-1  192.168.20.44:6443 check     # Three master hosts
    server  k8s-node-2  192.168.20.45:6443 check
    server  k8s-node-3  192.168.20.46:6443 check

After the configuration is complete, start both services and check that the VIPs switch over automatically, as shown below.
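A minimal failover check, assuming the eth0 interface and the two VIPs configured above:

# On both HA nodes
systemctl start haproxy keepalived
systemctl enable haproxy keepalived

# On the primary node the VIPs should be bound to eth0
ip addr show eth0 | grep -E '192.168.20.50|192.168.20.60'

# Stop haproxy on the primary; after a few seconds the VIPs should appear on the secondary node
systemctl stop haproxy
ip addr show eth0          # run this on the secondary node
systemctl start haproxy    # the VIPs move back because the primary has the higher priority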

Deploy ETCD Cluster

1. Install etcd

Execute the following command to complete the etcd installation:

[root@master-1 ~]# cd /tmp/
[root@master-1 tmp]# tar xf etcd-v3.3.12-linux-amd64.tar.gz 
[root@master-1 tmp]# cd etcd-v3.3.12-linux-amd64
[root@master-1 tmp]# cp etcd* /opt/kubernetes/bin/
[root@master-1 tmp]# scp etcd* 192.168.20.45:/opt/kubernetes/bin/
[root@master-1 tmp]# scp etcd* 192.168.20.46:/opt/kubernetes/bin/

2. Generate a dedicated certificate for etcd

1. Create the etcd certificate signing request:

[root@master-1 ~]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.20.44",
    "192.168.20.45",
    "192.168.20.46"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

2. Generate etcd certificates

[root@master-1 ~]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem  \
-ca-key=/opt/kubernetes/ssl/ca-key.pem  \
-config=/opt/kubernetes/ssl/ca-config.json  \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

The following files are generated:

[root@master-1 ~]# ll
total 16
-rw-r--r-- 1 root root 1062 Jul 26 01:18 etcd.csr
-rw-r--r-- 1 root root  287 Jul 26 00:50 etcd-csr.json
-rw------- 1 root root 1679 Jul 26 01:18 etcd-key.pem
-rw-r--r-- 1 root root 1436 Jul 26 01:18 etcd.pem
  3. Copy the certificates to the ssl directory:
[root@master-1 ~]#  cp etcd*.pem /opt/kubernetes/ssl
[root@master-1 ~]# scp etcd*.pem 192.168.20.45:/opt/kubernetes/ssl
[root@master-1 ~]# scp etcd*.pem 192.168.20.46:/opt/kubernetes/ssl

3. Configure etcd

  1. Create the etcd configuration file.

Configuration on master-1 is:

[root@master-1 ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.20.44:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.44:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.44:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=https://192.168.20.44:2380,etcd-node-2=https://192.168.20.45:2380,etcd-node-3=https://192.168.20.46:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.44:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

Configuration on master-2 is:

[root@master-2 tmp]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.20.45:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.45:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.45:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=https://192.168.20.44:2380,etcd-node-2=https://192.168.20.45:2380,etcd-node-3=https://192.168.20.46:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.45:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

Configuration on master-3 is:

[root@master-3 ~]# vim /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.20.46:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.20.46:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.20.46:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=https://192.168.20.44:2380,etcd-node-2=https://192.168.20.45:2380,etcd-node-3=https://192.168.20.46:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.20.46:2379"
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

  2. Create the etcd systemd unit file on all three nodes:

[root@master-1 ~]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd
Conflicts=etcd2.service

[Service]
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"

[Install]
WantedBy=multi-user.target
  3. Start the etcd service by executing the following commands on all three nodes:
mkdir /var/lib/etcd
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Confirm that the etcd service has started on each node.

4. Verify Cluster

[root@master-1 ~]# etcdctl --endpoints=https://192.168.20.44:2379 \
 --ca-file=/opt/kubernetes/ssl/ca.pem \
 --cert-file=/opt/kubernetes/ssl/etcd.pem \
 --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
member 32922a109cfe00b2 is healthy: got healthy result from https://192.168.20.46:2379
member 4fa519fdd3e64a84 is healthy: got healthy result from https://192.168.20.45:2379
member cab6e832332e8b2a is healthy: got healthy result from https://192.168.20.44:2379
cluster is healthy
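The member list can be checked in the same way; all three members should be listed (the same TLS flags apply):

[root@master-1 ~]# etcdctl --endpoints=https://192.168.20.44:2379 \
 --ca-file=/opt/kubernetes/ssl/ca.pem \
 --cert-file=/opt/kubernetes/ssl/etcd.pem \
 --key-file=/opt/kubernetes/ssl/etcd-key.pem member list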

Master Node Deployment

1. Deploy the Kubernetes package

[root@master-1 ~]# cd /tmp/kubernetes/server/bin/
[root@master-1 bin]# cp kube-apiserver /opt/kubernetes/bin/
[root@master-1 bin]# cp kube-controller-manager /opt/kubernetes/bin/
[root@master-1 bin]# cp kube-scheduler /opt/kubernetes/bin/

2. Generate authentication files for the API Server


1. Create the JSON file for the CSR; the hosts list must include the HA proxy VIP (192.168.20.50) and the cluster's first service IP (10.1.0.1):

[root@master-1 ~]# cd /opt/kubernetes/ssl
[root@master-1 ssl]# vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.20.50",
    "10.1.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

2. Generate certificates and private keys for Kubernetes

[root@master-1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
  3. Distribute the certificate and private key to the other nodes:
[root@master-1 ssl]# scp kubernetes*.pem 192.168.20.46:/opt/kubernetes/ssl/
...
  4. Create the token file used by the API Server:
[root@master-1 ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
197f33fcbbfab2d15603dcc4408358f5
[root@master-1 ~]# vim /opt/kubernetes/ssl/bootstrap-token.csv
197f33fcbbfab2d15603dcc4408358f5,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
  5. Create the basic username/password authentication file:
[root@k8s-node-1 ~]#  vim /opt/kubernetes/ssl/basic-auth.csv
admin,admin,1
readonly,readonly,2
  6. Copy the files in the ssl directory to the other master nodes:
scp -r -p /opt/kubernetes/ssl/*  k8s-node-1:/opt/kubernetes/ssl/
scp -r -p /opt/kubernetes/ssl/*  k8s-node-2:/opt/kubernetes/ssl/
scp -r -p /opt/kubernetes/ssl/*  k8s-node-3:/opt/kubernetes/ssl/

3. Deploy kube-apiserver

  1. Create a systemd file for kube-apiserver
[root@k8s-node-1 ~]#  vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --enable-admission-plugins=MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=192.168.20.44 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.20.44:2379,https://192.168.20.45:2379,https://192.168.20.46:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  2. Start the kube-apiserver service:
[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl start kube-apiserver
[root@k8s-node-1 ~]# systemctl enable kube-apiserver
  3. Check that the service is healthy:
[root@master-1 ~]# systemctl status kube-apiserver
[root@master-1 ~]# netstat -lntp|grep kube-apiserver
tcp        0      0 192.168.20.44:6443      0.0.0.0:*               LISTEN      4289/kube-apiserver 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      4289/kube-apiserver 
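As an additional sanity check, the health endpoint can be queried on the local insecure port (127.0.0.1:8080, as shown in the netstat output above); it should return ok:

[root@master-1 ~]# curl http://127.0.0.1:8080/healthz
ok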

4. Deploy controller-manager

  1. Generate systemd file for controller-manager
[root@master-1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --bind-address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.1.0.0/16 \
  --cluster-cidr=10.2.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  2. Start kube-controller-manager:
[root@master-1 ~]# systemctl daemon-reload
[root@master-1 ~]# systemctl start kube-controller-manager
[root@master-1 ~]# systemctl enable kube-controller-manager
  3. View the service status:
[root@master-1 ~]# systemctl status kube-controller-manager
[root@master-1 ~]# netstat -lntp|grep kube-con
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      4390/kube-controlle 

5. Deploy Kubernetes Scheduler

  1. Create a systemd file:
[root@master-1 ~]# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  2. Start the service:
[root@master-1 ~]# systemctl daemon-reload
[root@master-1 ~]# systemctl start kube-scheduler
[root@master-1 ~]# systemctl enable kube-scheduler
  3. View the service status:
[root@master-1 ~]# systemctl status kube-scheduler
[root@master-1 ~]# netstat -lntp|grep kube-scheduler
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      4445/kube-scheduler

6. Deploy kube-proxy on the master nodes (optional)

(See the Node deployment section; a corresponding kube-proxy working directory needs to be created.)

7. Configure master-2 and master-3 using the above approach

  1. Copy the ssl, cfg, and bin files from master-1 to the corresponding locations on the other master nodes.
  2. Configure the startup files for each service and start it.

8. Deploy the kubectl command line tool

  1. Install Binary Package
[root@master-1 ~]# cd /tmp/kubernetes/node/bin/
[root@master-1 bin]# cp kubectl /opt/kubernetes/bin/

2. Create the admin certificate signing request

[root@master-1 ~]# vim /opt/kubernetes/ssl/admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

3. Generate admin certificate and private key

[root@master-1 ~]# cd /opt/kubernetes/ssl/
[root@master-1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes admin-csr.json | cfssljson -bare admin
  4. Set the cluster parameters:
[root@master-1 ~]#  kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://192.168.20.50:6443
Cluster "kubernetes" set.

5. Set client authentication parameters:

[root@master-1 ~]# kubectl config set-credentials admin \
    --client-certificate=/opt/kubernetes/ssl/admin.pem \
    --embed-certs=true \
    --client-key=/opt/kubernetes/ssl/admin-key.pem
User "admin" set.

6. Set context parameters

[root@master-1 ~]# kubectl config set-context kubernetes \
    --cluster=kubernetes \
    --user=admin
Context "kubernetes" created.

7. Set the default context:

[root@master-1 ~]# kubectl config use-context kubernetes
Switched to context "kubernetes".

8. Use the Kubectl tool to view the current status:

[root@master-1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   

Node Deployment

1. Install required services

Extract the kubernetes-node-linux-amd64.tar.gz package on the node and copy the binaries as follows:

[root@k8s-node-1 ~]# cd /tmp/kubernetes/node/bin
[root@k8s-node-1 bin]# cp kubelet kube-proxy  /opt/kubernetes/bin/
[root@k8s-node-1 bin]# scp kubelet kube-proxy  192.168.20.48:/opt/kubernetes/bin/
[root@k8s-node-1 bin]# scp kubelet kube-proxy  192.168.20.49:/opt/kubernetes/bin/

2. Configure roles and authentication parameters

  1. Create role bindings on master-1
[root@master-1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created
  2. Create a kubelet bootstrapping kubeconfig file and set the cluster parameters:
[root@master-1 ~]# kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://192.168.20.50:6443 \
   --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.

3. Set client authentication parameters

[root@master-1 ~]# kubectl config set-credentials kubelet-bootstrap \
   --token=197f33fcbbfab2d15603dcc4408358f5 \
   --kubeconfig=bootstrap.kubeconfig   

User "kubelet-bootstrap" set.

4. Set context authentication parameters

[root@master-1 ~]# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=bootstrap.kubeconfig
Context "default" created.

5. Choose the default context

[root@master-1 ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default"

6. After completing the above steps, a bootstrap.kubeconfig file is generated in the current directory; distribute it to the nodes:

[root@k8s-node-1 ~]# cp bootstrap.kubeconfig /opt/kubernetes/cfg/
[root@k8s-node-1 ~]# scp bootstrap.kubeconfig 192.168.20.47:/opt/kubernetes/cfg/
[root@k8s-node-1 ~]# scp bootstrap.kubeconfig 192.168.20.48:/opt/kubernetes/cfg/
[root@k8s-node-1 ~]# scp bootstrap.kubeconfig 192.168.20.49:/opt/kubernetes/cfg/
  7. Copy the updated configuration from master-1 to the other master nodes.

3. Set up support for CNI

The following actions need to be performed on all nodes.

  1. Set up Kubernetes support for CNI:
[root@k8s-node-2 ~]# mkdir -p /etc/cni/net.d
[root@k8s-node-2 ~]# vim /etc/cni/net.d/10-default.conf
{
        "name": "flannel",
        "type": "flannel",
        "delegate": {
            "bridge": "docker0",
            "isDefaultGateway": true,
            "mtu": 1400
        }
}

4. Configure the Kubelet service

The following actions need to be performed on all nodes (replace the IP addresses in the examples with each node's own IP).

  1. Create a kubelet service configuration file
[root@k8s-node-2 ~]# mkdir /var/lib/kubelet
[root@k8s-node-2 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=192.168.20.48 \
  --hostname-override=192.168.20.48 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.1 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  2. Start the kubelet:
[root@k8s-node-2 ~]# systemctl daemon-reload
[root@k8s-node-2 ~]# systemctl start kubelet
[root@k8s-node-2 ~]# systemctl enable kubelet
[root@k8s-node-2 ~]# systemctl status kubelet
  3. On the master node, check whether a CSR request has been received from the node:
[root@master-1 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-FDH7Y3rghf1WPsEJH2EYnofvOSeyHn2f-l_-4rH-LEk   2m        kubelet-bootstrap   Pending
  4. Approve the kubelet's TLS certificate request:
[root@master-1 ~]# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io "node-csr-FDH7Y3rghf1WPsEJH2EYnofvOSeyHn2f-l_-4rH-LEk" approved

[root@master-1 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-FDH7Y3rghf1WPsEJH2EYnofvOSeyHn2f-l_-4rH-LEk   11m       kubelet-bootstrap   Approved,Issued
  5. Then check the node status:
[root@master-1 ~]# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
192.168.20.48   Ready     <none>    35s       v1.14.1

View the kubelet listening ports on the node:

[root@k8s-node-2 ~]# netstat -lntp|grep kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      7917/kubelet        
tcp        0      0 192.168.20.32:10250     0.0.0.0:*               LISTEN      7917/kubelet        
tcp        0      0 192.168.20.32:10255     0.0.0.0:*               LISTEN      7917/kubelet        
tcp        0      0 192.168.20.32:4194      0.0.0.0:*               LISTEN      7917/kubelet     
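The kubelet's local health endpoint (port 10248 in the output above) can also be queried directly; it should return ok:

[root@k8s-node-2 ~]# curl http://127.0.0.1:10248/healthz
ok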

5. Deploy kube-proxy

1. kube-proxy uses LVS (IPVS), so install the required packages on all nodes:

yum install -y ipvsadm ipset conntrack

2. Create Certificate Request

[root@master-1 ~]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

3. Generate certificates

[root@master-1 ~]#  cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
    -config=/opt/kubernetes/ssl/ca-config.json \
    -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

4. Distribute the certificates to all nodes

[root@master-1 ~]# cp kube-proxy*.pem /opt/kubernetes/ssl/
[root@master-1 ~]# scp kube-proxy*.pem 192.168.20.47:/opt/kubernetes/ssl/
[root@master-1 ~]# scp kube-proxy*.pem 192.168.20.48:/opt/kubernetes/ssl/
[root@master-1 ~]# scp kube-proxy*.pem 192.168.20.49:/opt/kubernetes/ssl/

5. Create a kube-proxy configuration file

[root@k8s-node-2 ~]# kubectl config set-cluster kubernetes  \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem   \
  --embed-certs=true   \
  --server=https://192.168.20.50:6443 \
  --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.

6. Create a kube-proxy user:

[root@k8s-node-2 ~]# kubectl config set-credentials kube-proxy \
    --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
    --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

7. Set the default context:

[root@k8s-node-2 ~]# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig
Context "default" created.

8. Switch context to default:

[root@k8s-node-2 ~]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".

9. Distribute the kube-proxy.kubeconfig configuration file to all nodes:

[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.44:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.45:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.46:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.47:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.48:/opt/kubernetes/cfg/
[root@k8s-node-2 ~]# scp kube-proxy.kubeconfig 192.168.20.49:/opt/kubernetes/cfg/

10. Create a kube-proxy service configuration file

Execute on all nodes; note that the IP addresses in the configuration file must be changed to each node's own IP.

[root@k8s-node-1 ~]# mkdir /var/lib/kube-proxy
[root@k8s-node-1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=192.168.20.47 \
  --hostname-override=192.168.20.47 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

11. Start Services

systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy

12. View the service and LVS status:

[root@k8s-node-1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr
  -> 192.168.20.44:6443           Masq    1      0          0         
  -> 192.168.20.45:6443           Masq    1      0          0         
  -> 192.168.20.46:6443           Masq    1      1          0  
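The 10.1.0.1:443 virtual server above is the built-in kubernetes service; it can be cross-checked from a master node (the ClusterIP is expected to be the first address of the 10.1.0.0/16 service range):

[root@master-1 ~]# kubectl get svc kubernetes     # CLUSTER-IP should be 10.1.0.1, port 443/TCP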

When all nodes are successfully configured, you can see the following results:

[root@master-1 ~]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.20.47   Ready    <none>   6d21h   v1.14.1
192.168.20.48   Ready    <none>   4d1h    v1.14.1
192.168.20.49   Ready    <none>   4d1h    v1.14.1

Flannel Network Deployment

Flannel needs to be deployed on all nodes.

1. Create Flannel Certificate

1. Create the certificate signing request

[root@master-1 ~]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

2. Generate certificates

[root@master-1 ~]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
    -config=/opt/kubernetes/ssl/ca-config.json \
    -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

3. Distribute the certificates

[root@master-1 ~]# cp flanneld*.pem /opt/kubernetes/ssl/
[root@master-1 ~]# scp flanneld*.pem {all-k8s-node}:/opt/kubernetes/ssl/

2. Deploy flannel

1. Extract the previously downloaded flannel package and distribute the binaries to the other nodes as follows:

cp mk-docker-opts.sh flanneld /opt/kubernetes/bin/
scp mk-docker-opts.sh flanneld {all-k8s-node}:/opt/kubernetes/bin/

2. Create the following file and distribute it to each node:

[root@k8s-node-1 tmp]# vim remove-docker0.sh
#!/bin/bash
# Delete default docker bridge, so that docker can start with flannel network.

# exit on any error
set -e

rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  ip link set dev docker0 down
  ip link delete docker0
fi
[root@k8s-node-1 tmp]# cp remove-docker0.sh /opt/kubernetes/bin/
[root@k8s-node-1 tmp]# scp remove-docker0.sh 192.168.20.48:/opt/kubernetes/bin/
[root@k8s-node-1 tmp]# scp remove-docker0.sh 192.168.20.49:/opt/kubernetes/bin/

3. Configure flannel

[root@k8s-node-1 ~]# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://192.168.20.44:2379,https://192.168.20.45:2379,https://192.168.20.46:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
  4. Create the flannel service file:
[root@k8s-node-1 ~]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker

Type=notify

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

5. Distribute the created configuration file to each node:

scp /opt/kubernetes/cfg/flannel {all-k8s-node}:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/flannel.service {all-k8s-node}:/usr/lib/systemd/system/

3. Flannel CNI integration

1. Download CNI Plugins

wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz
[root@k8s-node-1 tmp]# mkdir /opt/kubernetes/bin/cni
[root@k8s-node-1 tmp]# tar xf cni-plugins-amd64-v0.7.5.tgz -C /opt/kubernetes/bin/cni

2. Distribute software to each node:

[root@k8s-node-1 ~]# scp -r /opt/kubernetes/bin/cni/* {all-k8s-node}:/opt/kubernetes/bin/cni/

3. Create the network configuration key in etcd

[root@master-1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
     --no-sync -C https://192.168.20.44:2379,https://192.168.20.45:2379,https://192.168.20.46:2379 \
     mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
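Because the redirect above discards any error output, it is worth reading the key back to confirm it was written (same client flags):

[root@master-1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
     --no-sync -C https://192.168.20.44:2379,https://192.168.20.45:2379,https://192.168.20.46:2379 \
     get /kubernetes/network/config
{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}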

4. Start flannel on each node:

[root@k8s-node-1 ~]# chmod +x /opt/kubernetes/bin/*
[root@k8s-node-1 ~]# systemctl daemon-reload
[root@k8s-node-1 ~]# systemctl start flannel 
[root@k8s-node-1 ~]# systemctl enable flannel 
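Once flannel is running, each node should have a flannel.1 VXLAN interface with an address from the 10.2.0.0/16 range, and mk-docker-opts.sh should have written the Docker options file used in the next section. A quick check:

[root@k8s-node-1 ~]# ip -4 addr show flannel.1
[root@k8s-node-1 ~]# cat /run/flannel/docker      # options generated for dockerd by mk-docker-opts.sh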

Configure Docker to use Flannel

1. Modify docker's systemd file:

[Unit]    # Modify After= and add Requires= in the [Unit] section
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service

[Service] #Add EnvironmentFile=-/run/flannel/docker
Type=notify
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS

2. Make the same changes on the other Node hosts:

[root@k8s-node-2 ~]# scp /usr/lib/systemd/system/docker.service {k8s-node}:/usr/lib/systemd/system/

3. Restart Docker. The docker0 interface should come up with an address in the 10.2.0.0/16 segment, which indicates a successful configuration:

[root@k8s-node-3 ~]# systemctl daemon-reload
[root@k8s-node-3 ~]# systemctl restart docker
[root@k8s-node-3 ~]# ip a| grep -A 3 'docker0'
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:e9:2b:36:86 brd ff:ff:ff:ff:ff:ff
    inet 10.2.79.1/24 scope global docker0
       valid_lft forever preferred_lft forever

Plug-in Deployment

1. Create CoreDNS

  1. Create coredns.yaml as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.4.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.1.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  2. Apply the manifest:
[root@master-1 tmp]# kubectl create -f coredns.yaml
  3. Confirm that the DNS service is running:
[root@master-1 ~]# kubectl get pod  -n kube-system  -o wide
NAME                                    READY   STATUS    RESTARTS   AGE    IP          NODE            NOMINATED NODE   READINESS GATES
coredns-76fcfc9f65-9fkfh                1/1     Running   2          3d7h   10.2.45.3   192.168.20.49   <none>           <none>
coredns-76fcfc9f65-zfplt                1/1     Running   1          3d6h   10.2.24.2   192.168.20.48   <none>           <none>
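To verify cluster DNS end to end, a throwaway pod can resolve the kubernetes service (busybox:1.28 is assumed here because nslookup is broken in some newer busybox images); the answer should come from the DNS ClusterIP 10.1.0.2 and resolve kubernetes.default to 10.1.0.1:

[root@master-1 ~]# kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default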

2. Deploy Dashboard

1. Apply the YAML files in the directory to deploy the Dashboard:

[root@master-1 ~]# ll /tmp/dashboard/
total 20
-rw-r--r-- 1 root root  356 Jul 27 03:43 admin-user-sa-rbac.yaml
-rw-r--r-- 1 root root 4253 Jul 27 03:47 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root  458 Jul 27 03:49 ui-admin-rbac.yaml
-rw-r--r-- 1 root root  477 Jul 27 03:50 ui-read-rbac.yaml

[root@master-1 ~]# kubectl create -f /tmp/dashboard/

2. Verify that the service is functioning properly:

[root@master-1 ~]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-76fcfc9f65-9fkfh                1/1     Running   2          3d7h
coredns-76fcfc9f65-zfplt                1/1     Running   1          3d6h
kubernetes-dashboard-68ddcc97fc-w4bxf   1/1     Running   1          3d2h

[root@master-1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.20.50:6443
CoreDNS is running at https://192.168.20.50:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://192.168.20.50:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

3. Open the Dashboard URL shown above and log in with the basic-auth account admin/admin, or generate a login token with the following command:

[root@master-1 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

4. Copy the token and choose token-based login.

3. Heapster deployment (optional)

1. Deploy Heapster using the following files:

[root@master-1 ~]# ll heastper/
total 12
-rw-r--r-- 1 root root 2306 Jul 26 20:28 grafana.yaml
-rw-r--r-- 1 root root 1562 Jul 26 20:29 heapster.yaml
-rw-r--r-- 1 root root 1161 Jul 26 20:29 influxdb.yaml

[root@k8s-node-1 ~]# kubectl create -f heastper/
  2. Log in to the Dashboard to see charts of the resource utilization of the nodes.

  3. Use the kubectl cluster-info command to view the URLs of the current services.

Supplementary Instructions

etcd certificateless configuration instructions

In a real production environment on an internal network, the etcd cluster can be configured without certificates, which makes configuration and subsequent failover easier.
Certificate-less etcd is accessed over plain HTTP. Relative to the deployment above, the following configuration changes are needed:

  1. In etcd's configuration file, comment out the security certificate section and change all URLs to http:
# cat /opt/kubernetes/cfg/etcd.conf
#[member]
ETCD_NAME="etcd-node-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.20.31:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.20.31:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.20.31:2380"
# if you use different ETCD_NAME (e.g. test),
# set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd-node-1=http://192.168.20.31:2380,etcd-node-2=http://192.168.20.32:2380,etcd-node-3=http://192.168.20.33:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.20.31:2379"
#[security]
#CLIENT_CERT_AUTH="true"
#ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
#ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
#ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
#PEER_CLIENT_CERT_AUTH="true"
#ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
#ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
#ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
  2. In the flannel configuration, comment out etcd's certificate parameters and change the URLs to http:
# cat /opt/kubernetes/cfg/flannel 

FLANNEL_ETCD="-etcd-endpoints=http://192.168.20.31:2379,http://192.168.20.32:2379,http://192.168.20.33:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
#FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
#FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
#FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"

3. Remove etcd's certificate parameters from the kube-apiserver unit file and change the etcd URLs to http:

# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --bind-address=192.168.20.31 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=Node,RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1 \
  --kubelet-https=true \
  --anonymous-auth=false \
  --basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
  --service-cluster-ip-range=10.1.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-servers=http://192.168.20.31:2379,http://192.168.20.32:2379,http://192.168.20.33:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/api-audit.log \
  --event-ttl=1h \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4. Restart the affected services: flannel, kubelet, kube-apiserver, etc.
