Super simple Kubernetes high-availability cluster installation

Keywords: Linux Kubernetes Nginx Docker yum

Preconditions

  • System requirements: 64-bit CentOS 7.6
  • Turn off the firewall and SELinux
  • Turn off the operating system swap partition (running Kubernetes with swap enabled is not recommended)
  • Pre-configure a unique hostname on each node
  • Configure passwordless SSH login from the first master to all nodes (including itself); a command sketch follows this list
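The preconditions above can be applied with a short sketch like the following (the hostname is an example; adjust it per node):

# Turn off the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Turn off swap now and keep it off across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Set a unique hostname on each node (k8s-master1 is an example)
hostnamectl set-hostname k8s-master1

# On the first master only: passwordless SSH to every node, including itself
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@k8s-master1   # repeat for every node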

Environment description

This manual is suitable for small-scale use.

Multi-master mode (at least three masters), with keepalived installed on each master node

Preparations (to be performed on every node)

Docker and Kubernetes repository configuration
# Switch to configuration directory
cd /etc/yum.repos.d/
# Configure the Docker CE Aliyun mirror repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Configure the Kubernetes Aliyun mirror repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
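As an optional sanity check, you can refresh the yum cache and confirm both repositories are visible before installing anything:

# Rebuild the cache and list the newly added repos
yum clean all && yum makecache fast
yum repolist enabled | grep -E 'docker-ce|kubernetes'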
Configure kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
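Note that the net.bridge.* keys only exist once the br_netfilter kernel module is loaded; if sysctl --system complains about them, load the module first:

# Load br_netfilter now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl net.bridge.bridge-nf-call-iptables   # should print "... = 1"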
Install the required packages
# Install kubeadm kubelet kubectl
yum install kubeadm kubectl kubelet -y

# Enable docker and kubelet to start on boot
systemctl enable docker kubelet

# Start docker
systemctl start docker
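If you want every node on the same, known-good release rather than whatever is latest in the repo, you can pin the version at install time (1.14.2 matches the output shown later in this manual and is only an example):

# List the versions available in the repo, then install a specific one
yum list kubeadm --showduplicates | sort -r
yum install -y kubeadm-1.14.2 kubelet-1.14.2 kubectl-1.14.2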

Deploy

Install keepalived (on all masters)
# If you already have a load balancer, you can skip keepalived and use the LB address directly.
# Install and start keepalived on the initialization master first, so that the VIP lands on it; keep keepalived stopped on the other masters until then.

# After installation, extend the configuration with health checks to suit your own needs.
yum install keepalived -y

# Back up the original keepalived configuration file
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak

# Generate a new keepalived configuration file; adjust the commented values on each master.
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s-master1                # Hostname of this master node
   vrrp_mcast_group4 224.26.1.1         # VRRP multicast group; must match on all masters
}

vrrp_instance VI_1 {
    state BACKUP                        # All masters start as BACKUP; with nopreempt the VIP stays where it is
    interface eth0                      # Interface that carries the VIP; change to match your host
    virtual_router_id 66                # Must be identical on all masters
    nopreempt                           # Do not take the VIP back when a failed master recovers
    priority 90                         # Use a different priority on each master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456                # Same password on all masters
    }
    virtual_ipaddress {
        10.20.1.8                       # VIP address; change to your own
    }
}
EOF

# Enable keepalived on boot and start it
systemctl enable keepalived
systemctl start keepalived
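A quick way to confirm keepalived is working: the VIP should be attached to the interface of exactly one master (the interface and address below match this manual's example configuration):

# Run on each master; only the current VIP holder shows a match
ip addr show eth0 | grep 10.20.1.8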
Generate the kubeadm master configuration file
cd && cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "172.29.2.188"  #Request to change to your vip address
controlPlaneEndpoint: "172.29.2.188:6443"  #Request to change to your vip address
imageRepository: registry.cn-hangzhou.aliyuncs.com/peter1009
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF
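Optionally, you can pre-pull the control-plane images before initializing; this makes kubeadm init faster and surfaces registry problems early:

# Pull all images referenced by kubeadm.yaml
kubeadm config images pull --config kubeadm.yaml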
Initialize the first master
# Use the kubeadm.yaml generated in the previous step
kubeadm init --config kubeadm.yaml
# The output of the previous step looks like the following
root@k8s4:~# kubeadm  init --config kubeadm.yaml
I0522 06:20:13.352644    2622 version.go:96] could not fetch a Kubernetes version from 
......... output omitted
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72 \
    --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
Install the rest of the cluster
# Note the quoted 'EOF': it prevents $host, ${USER} and ${CONTROL_PLANE_IPS} from being expanded while the script is written out
cat <<'EOF' > copy.sh
CONTROL_PLANE_IPS="172.16.10.101 172.16.10.102"  # Change these two IPs to your second/third master addresses
for host in ${CONTROL_PLANE_IPS}; do
    ssh $host mkdir -p /etc/kubernetes/pki/etcd
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
EOF

# This step fails if passwordless SSH login has not been configured
bash -x copy.sh
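If you want to verify the copy before joining, a quick spot check on one of the other masters (the IP is the same example as above) should list the certificates just transferred:

ssh 172.16.10.101 ls /etc/kubernetes/pki /etc/kubernetes/pki/etcd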
# On the current node, run the commands printed by kubeadm init so that kubectl can access the cluster
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the other master nodes, run the control-plane join command printed by kubeadm init (only after copy.sh has completed successfully)
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72 \
    --experimental-control-plane
# On the worker (non-master) nodes, run the worker join command printed by kubeadm init
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
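The bootstrap token printed by kubeadm init expires after 24 hours by default. If a node joins later than that, you can print a fresh worker join command on the first master (for a master join, append --experimental-control-plane yourself):

kubeadm token create --print-join-command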
Install flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
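You can wait for the flannel DaemonSet to finish rolling out before checking the nodes (the DaemonSet name below matches the pod list in the next step):

kubectl -n kube-system rollout status ds/kube-flannel-ds-amd64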
Check whether the installation is complete
root@k8s4:~# kubectl  get nodes
NAME   STATUS   ROLES    AGE   VERSION
k8s4   Ready    master   20m   v1.14.2
root@k8s4:~# kubectl  get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-8cc96f57d-cfr4j        1/1     Running   0          20m
kube-system   coredns-8cc96f57d-stcz6        1/1     Running   0          20m
kube-system   etcd-k8s4                      1/1     Running   0          19m
kube-system   kube-apiserver-k8s4            1/1     Running   0          19m
kube-system   kube-controller-manager-k8s4   1/1     Running   0          19m
kube-system   kube-flannel-ds-amd64-k4q6q    1/1     Running   0          50s
kube-system   kube-proxy-lhjsf               1/1     Running   0          20m
kube-system   kube-scheduler-k8s4            1/1     Running   0          19m
Test whether the cluster can be used normally
# Remove the node taint so that pods can be scheduled on the master. Replace k8s4 with the node name from your own cluster.
kubectl  taint node k8s4 node-role.kubernetes.io/master:NoSchedule-
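To confirm the taint is gone (again, replace k8s4 with your node name):

kubectl describe node k8s4 | grep -i taints   # should show <none>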

# Create nginx deploy
root@k8s4:~# kubectl  create deploy nginx --image nginx
deployment.apps/nginx created

root@k8s4:~# kubectl  get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65f88748fd-9sk6z   1/1     Running   0          2m44s

# Expose nginx outside the cluster
root@k8s4:~# kubectl  expose deploy nginx --port=80 --type=NodePort
service/nginx exposed
root@k8s4:~# kubectl  get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        25m
nginx        NodePort    10.104.109.234   <none>        80:32129/TCP   5s
root@k8s4:~# curl 127.0.0.1:32129
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
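Finally, it is worth testing the high-availability part itself: stop keepalived on the master currently holding the VIP and confirm the API server is still reachable through the VIP (the addresses are this manual's examples):

# On the master currently holding the VIP
systemctl stop keepalived
# From any node: the VIP should fail over to another master and the API should still respond
curl -k https://172.29.2.188:6443/healthz
# Restore keepalived afterwards
systemctl start keepalived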
