Building the Foundation Environment for a Kubernetes v1.15.0 HA Cluster

Keywords: Operation & Maintenance, Docker, CentOS, Linux, yum

This is my first post in the Yunlu community, so I'm using it to walk through building the foundation of a highly available Kubernetes cluster.
A quick overview of the setup: kube-proxy runs in IPVS mode, Calico provides the pod network, etcd runs as a stacked cluster on the masters, and the apiservers are load-balanced with haproxy + keepalived.

My GitHub repo: https://github.com/JackWBC/k8s_v1.15.0_HA_cluster

Virtual machine environment preparation

CentOS 7 x86_64 Minimal (at least 2 CPU cores and 2 GB of RAM recommended)
Network interface ens33 (if your servers use a different interface name, substitute it wherever ens33 appears below)
Three masters and three worker nodes; hostnames and IPs are as follows:

Role     Hostname      IP
master   master1.k8s   192.168.250.141
master   master2.k8s   192.168.250.142
master   master3.k8s   192.168.250.143
node     node1.k8s     192.168.250.144
node     node2.k8s     192.168.250.145
node     node3.k8s     192.168.250.146
VIP      --            192.168.250.99
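
These hostnames are used throughout the guide, so every machine needs to resolve them. If they are not covered by DNS in your environment, one simple option (a sketch, assuming the IPs above) is to append them to /etc/hosts on every machine and set each machine's own hostname:

cat >>/etc/hosts<<EOF
192.168.250.141 master1.k8s
192.168.250.142 master2.k8s
192.168.250.143 master3.k8s
192.168.250.144 node1.k8s
192.168.250.145 node2.k8s
192.168.250.146 node3.k8s
EOF
hostnamectl set-hostname master1.k8s   # adjust per machine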

Virtual Machine Foundation Configuration

Run the following on all masters and nodes

Fix the setlocale warning

cat <<EOF >  /etc/environment
LANG=en_US.UTF-8
LC_ALL=C
EOF

Stop and disable firewalld

systemctl stop firewalld.service && systemctl disable firewalld.service

Disable SELinux (setenforce 0 switches to permissive immediately; the config change disables it permanently after reboot)

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Disable swap partitions

swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
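
To confirm swap is really off, the Swap line of free -h should show all zeros:

free -h | grep -i swap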

Configure sysctl parameters

cat <<EOF > /etc/sysctl.conf
fs.file-max=1000000
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.ip_local_port_range = 1024 65000
net.nf_conntrack_max = 6553500
net.netfilter.nf_conntrack_max = 6553500
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_established = 3600
EOF
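
The net.bridge.* settings above only exist once the br_netfilter kernel module is loaded, and the new file is not applied until reboot. To load the module and apply the values immediately:

modprobe br_netfilter
sysctl -p /etc/sysctl.conf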

Load the IPVS kernel modules

cat << EOF | tee /etc/sysconfig/modules/ipvs.modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Switch the yum repos to domestic mirrors for faster downloads

mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/

CentOS-Base.repo:
vi /etc/yum.repos.d/CentOS-Base.repo

# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.
#
#
[os]
name=Qcloud centos os - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/os/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[updates]
name=Qcloud centos updates - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/updates/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[centosplus]
name=Qcloud centosplus - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/centosplus/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[cloud]
name=Qcloud centos contrib - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/cloud/$basearch/openstack-kilo/
enabled=0
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[cr]
name=Qcloud centos cr - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/cr/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[extras]
name=Qcloud centos extras - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/extras/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

[fasttrack]
name=Qcloud centos fasttrack - $basearch
baseurl=http://mirrors.cloud.tencent.com/centos/$releasever/fasttrack/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://mirrors.cloud.tencent.com/centos/RPM-GPG-KEY-CentOS-7

kubernetes.repo:
vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

docker-ce.repo:
vi /etc/yum.repos.d/docker-ce.repo

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

Clear the yum cache and rebuild it:
yum clean all
yum makecache
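
To confirm the replacement repos are active:

yum repolist enabled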

Install the required packages

  • Run the following on all machines (masters and nodes)
yum install ipset -y

yum install ipvsadm -y

yum install -y docker-ce-18.09.7-3.el7
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl enable docker && systemctl start docker
systemctl daemon-reload && systemctl restart docker

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
# kubelet will restart in a crash loop until kubeadm init/join runs -- this is expected
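
Note that the yum command above installs the newest kubelet/kubeadm/kubectl available in the mirror. Since this guide targets v1.15.0, you may want to pin the versions explicitly instead (assuming the 1.15.0 el7 packages are present in the mirror):

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0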

Install haproxy + keepalived to implement HA

  • Configure haproxy and keepalived on all master nodes
mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
  log 127.0.0.1 local0 err
  maxconn 4096
  uid 99
  gid 99
  #daemon
  nbproc 1
  pidfile haproxy.pid

defaults
  mode http
  log 127.0.0.1 local0 err
  maxconn 4096
  retries 3
  timeout connect 5s
  timeout client 30s
  timeout server 30s
  timeout check 2s

listen admin_stats
  mode http
  bind 0.0.0.0:1080
  log 127.0.0.1 local0 err
  stats refresh 30s
  stats uri     /haproxy-status
  stats realm   Haproxy\ Statistics
  stats auth    baicheng:baicheng
  stats hide-version
  stats admin if TRUE

frontend k8s-https
  bind 0.0.0.0:8443
  mode tcp
  #maxconn 4096
  default_backend k8s-https

backend k8s-https
  mode tcp
  balance roundrobin
  server master1.k8s 192.168.250.141:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server master2.k8s 192.168.250.142:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server master3.k8s 192.168.250.143:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF
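
Optionally, you can syntax-check the generated file with haproxy's built-in -c check mode before starting the long-running container (reusing the same image as the run command below):

docker run --rm -v /etc/haproxy:/usr/local/etc/haproxy:ro \
registry.cn-shanghai.aliyuncs.com/baicheng_dev/haproxy:2.0.0 \
haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg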

docker run -d --name my-haproxy \
-v /etc/haproxy:/usr/local/etc/haproxy:ro \
-p 8443:8443 \
-p 1080:1080 \
--restart always \
registry.cn-shanghai.aliyuncs.com/baicheng_dev/haproxy:2.0.0

# Note: set KEEPALIVED_INTERFACE to match your actual network interface (ens33 here)
docker run --net=host --cap-add=NET_ADMIN -d \
-e KEEPALIVED_INTERFACE=ens33 \
-e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['192.168.250.99']" \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.250.141','192.168.250.142','192.168.250.143']" \
-e KEEPALIVED_PASSWORD=baicheng \
--name k8s-keepalived \
--restart always \
registry.cn-shanghai.aliyuncs.com/baicheng_dev/keepalived:2.0.16
  • Verify that haproxy and keepalived are working
# view log
docker logs my-haproxy
docker logs k8s-keepalived

# ping virtual IP
ping -c4 192.168.250.99

# Check haproxy status (username baicheng, password baicheng)
http://master1.k8s:1080/haproxy-status
http://master2.k8s:1080/haproxy-status
http://master3.k8s:1080/haproxy-status
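
# Check which master currently holds the VIP -- it should appear on exactly
# one node's ens33, and move to another master if that node goes down
ip addr show ens33 | grep 192.168.250.99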

Building the k8s Cluster Foundation Environment

  • Configure environment variables on all master nodes
vi .bash_profile  # append the following lines
export CP0_IP="192.168.250.99"
export CP1_IP="192.168.250.141"
export CP1_HOSTNAME="master1.k8s"
export CP2_IP="192.168.250.142"
export CP2_HOSTNAME="master2.k8s"
export CP3_IP="192.168.250.143"
export CP3_HOSTNAME="master3.k8s"

source .bash_profile

# Check to see if it works
echo $CP0_IP
  • Run on master1
cd /etc/kubernetes

cat >kubeadm-config.yaml<<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "$CP0_IP:8443"
controllerManager:
  extraArgs:
    node-monitor-grace-period: 10s
    pod-eviction-timeout: 10s
networking:
  podSubnet: 10.244.0.0/16
imageRepository: registry.cn-shanghai.aliyuncs.com/baicheng_dev
clusterName: baicheng-k8s-cluster
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF

# newer kubeadm versions renamed this flag to --upload-certs
sudo kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs
  • Using the join commands printed by kubeadm init, join the remaining masters (kubeadm join with the control-plane flags and certificate key from the output) and the worker nodes to the cluster
  • Configure kubectl as instructed by the init output (see the commands below)
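
For reference, the kubectl setup printed by kubeadm init looks like this (run as the user who will use kubectl):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config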

Installation checks

  • Run kubectl get nodes to confirm that all six nodes have joined the cluster; they will all show NotReady until the network plug-in is installed
  • Check the health of the etcd cluster
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes \
registry.cn-shanghai.aliyuncs.com/baicheng_dev/etcd:3.3.10 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${CP1_IP}:2379 cluster-health

Configuring the Network Plug-in Calico

  • Run on any master node
cd /etc/kubernetes
mkdir calico && cd calico

vi kube-calico.yaml
# The kube-calico.yaml file is in the GitHub repo linked above

kubectl apply -f kube-calico.yaml
  • Run kubectl get po --all-namespaces and wait until every pod is Running and Ready
  • Then run kubectl get nodes again; all nodes should now report Ready (concrete commands below)
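
The two checks above as concrete commands (-w watches the pod list so you can see everything come up):

kubectl get po --all-namespaces -w
kubectl get nodes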

That completes the foundation environment of the highly available k8s cluster.
Thanks for reading!
