Preface
The steps are largely the same as in the previous installation of version 1.13; the difference lies in the kubeadm init configuration file.
At present the kubeadm init configuration file format is still in beta, and it moved to v1beta2 in version 1.15.
Although it has not yet reached GA, deploying with kubeadm not only simplifies the steps compared with configuring a k8s cluster entirely by hand, it also reduces the chance of manual error. Why not use it?
Environment introduction:
System version: CentOS 7.6
Kernel: 4.18.7-1.el7.elrepo.x86_64
Kubernetes: v1.14.1
Docker-ce: 18.09
Keepalived ensures high availability of the apiserver VIP
Haproxy load-balances the apiservers
Master x3 and etcd x3 guarantee availability of the k8s cluster

192.168.1.1 master
192.168.1.2 master2
192.168.1.3 master3
192.168.1.4 Keepalived + Haproxy
192.168.1.5 Keepalived + Haproxy
192.168.1.6 etcd1
192.168.1.7 etcd2
192.168.1.8 etcd3
192.168.1.9 node1
192.168.1.10 node2
192.168.1.100 VIP, apiserver address
I. Preparations
For ease of operation, all steps are performed as the root user.
The following operations are performed only on the kubernetes cluster nodes.
- Turn off selinux and firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
- Close swap
swapoff -a
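Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, also comment out the swap entry in /etc/fstab; a minimal sketch, assuming the entry is not already commented:

#Comment out every active swap line in /etc/fstab
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab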
- Configure forwarding-related kernel parameters, otherwise errors may occur later
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#vm.swappiness=0
EOF
sysctl --system
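If sysctl --system complains that the net.bridge.bridge-nf-call-* keys do not exist, the br_netfilter module is not loaded yet; load it first and make the load persistent:

modprobe br_netfilter
#Load the module automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system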
- Load the ipvs kernel modules
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*).ko.*#\1#'\`; do
    /sbin/modinfo -F filename \$i &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
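kube-proxy in ipvs mode also relies on the ipset userspace tool (and ipvsadm is handy for inspecting the rules later), so it does no harm to install both and confirm the modules actually loaded:

yum install -y ipset ipvsadm
#ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack should appear
lsmod | grep -E 'ip_vs|nf_conntrack'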
- Install cfssl
#Install on the master node!!!
wget -O /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
for cfssl in `ls /bin/cfssl*`;do chmod +x $cfssl;done;
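A quick sanity check that the downloaded binaries run:

#Prints the cfssl version; if this fails, re-download the binaries
cfssl version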
- Install the kubernetes packages from the Alicloud mirror
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
- Install docker and disable the docker0 bridge
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
mkdir /etc/docker/
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "live-restore": true,
  "default-shm-size": "128M",
  "bridge": "none",
  "max-concurrent-downloads": 10,
  "oom-score-adjust": -1000,
  "debug": false
}
EOF
#Restart docker
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
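Since the kubelet configuration below sets cgroupDriver: systemd, it is worth confirming that docker actually picked up the systemd cgroup driver from daemon.json:

#Should print "Cgroup Driver: systemd"
docker info 2>/dev/null | grep -i 'cgroup driver'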
- Configure hosts file
#Configure the hosts file on all nodes
cat << EOF >> /etc/hosts
192.168.1.1 master
192.168.1.2 master2
192.168.1.3 master3
192.168.1.4 lb1
192.168.1.5 lb2
192.168.1.6 etcd1
192.168.1.7 etcd2
192.168.1.8 etcd3
192.168.1.9 node1
192.168.1.10 node2
EOF
II. Configure etcd
- Configure certificates for etcd
mkdir -pv $HOME/ssl && cd $HOME/ssl
cat << EOF > ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
cat << EOF > etcd-ca-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF
cat << EOF > etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.1.6",
    "192.168.1.7",
    "192.168.1.8"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF
#Generate and copy certificates to the other etcd nodes
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
mkdir -pv /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl
mkdir -pv /etc/kubernetes/pki/etcd
cp etcd*.pem /etc/kubernetes/pki/etcd
scp -r /etc/etcd 192.168.1.6:/etc/
scp -r /etc/etcd 192.168.1.7:/etc/
scp -r /etc/etcd 192.168.1.8:/etc/
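Optionally, before starting etcd, you can inspect the generated server certificate with cfssl-certinfo (installed earlier) to confirm the SANs include all three etcd IPs:

#The "sans" field should list 127.0.0.1, 192.168.1.6, 192.168.1.7 and 192.168.1.8
cfssl-certinfo -cert /etc/etcd/ssl/etcd.pem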
- Start etcd on the etcd1 host
yum install -y etcd
cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF
chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
- Start etcd on the etcd2 host
yum install -y etcd
cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.7:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.7:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.7:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.7:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF
chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
- Start etcd on the etcd3 host
yum install -y etcd
cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.8:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.8:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.8:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.8:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.6:2380,etcd2=https://192.168.1.7:2380,etcd3=https://192.168.1.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF
chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
- Check the etcd cluster health
etcdctl --endpoints "https://192.168.1.6:2379,https://192.168.1.7:2379,https://192.168.1.8:2379" \
  --ca-file=/etc/etcd/ssl/etcd-ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health

[root@node3 ~]# etcdctl --endpoints "https://192.168.1.6:2379,https://192.168.1.7:2379,https://192.168.1.8:2379" --ca-file=/etc/etcd/ssl/etcd-ca.pem \
> --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
member 3639deb1869a1bda is healthy: got healthy result from https://127.0.0.1:2379
member b75e13f1faa57bd8 is healthy: got healthy result from https://127.0.0.1:2379
member e31fec5bb4c882f2 is healthy: got healthy result from https://127.0.0.1:2379
III. Configure keepalived
- Configure the lb1 host
yum install -y keepalived
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost                        #Notification recipient
   }
   notification_email_from keepalived@localhost   #Sender address
   smtp_server 127.0.0.1                   #Mail server address
   smtp_connect_timeout 30
   router_id node1                         #Usually the host name; must differ on each node
   vrrp_mcast_group4 224.0.100.100         #Multicast address
}

vrrp_instance VI_1 {
    state MASTER                           #BACKUP on the other node
    interface eth0                         #NIC the VIP floats on
    virtual_router_id 6                    #Must be identical on all nodes
    priority 100                           #Priority; the backup node must use a lower value than the master
    advert_int 1                           #Advertisement interval, 1 second
    authentication {
        auth_type PASS                     #Pre-shared key authentication
        auth_pass 571f97b2                 #Key
    }
    virtual_ipaddress {
        192.168.1.100/24                   #VIP address
    }
}
EOF
systemctl enable keepalived
systemctl start keepalived
- Configure the lb2 host
yum install -y keepalived
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost                        #Notification recipient
   }
   notification_email_from keepalived@localhost   #Sender address
   smtp_server 127.0.0.1                   #Mail server address
   smtp_connect_timeout 30
   router_id node2                         #Usually the host name; must differ on each node
   vrrp_mcast_group4 224.0.100.100         #Multicast address
}

vrrp_instance VI_1 {
    state BACKUP                           #MASTER on the other node
    interface eth0                         #NIC the VIP floats on
    virtual_router_id 6                    #Must be identical on all nodes
    priority 80                            #Priority; the backup node must use a lower value than the master
    advert_int 1                           #Advertisement interval, 1 second
    authentication {
        auth_type PASS                     #Pre-shared key authentication
        auth_pass 571f97b2                 #Key
    }
    virtual_ipaddress {
        192.168.1.100/24                   #VIP address
    }
}
EOF
systemctl enable keepalived
systemctl start keepalived
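With keepalived running on both lb hosts, the VIP should be bound on lb1. A quick failover test (a sketch; assumes the interface name eth0 from the config above):

#On lb1: the VIP should be present
ip addr show eth0 | grep 192.168.1.100
#Stop keepalived on lb1 to simulate a failure...
systemctl stop keepalived
#...then on lb2 the VIP should appear; restart keepalived on lb1 afterwards
ip addr show eth0 | grep 192.168.1.100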
IV. Configure Haproxy
- On the lb1 host
yum install -y haproxy
cat << EOF > /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

defaults
    mode tcp
    log global
    retries 3
    timeout connect 10s
    timeout client 1m
    timeout server 1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master 192.168.1.1:6443 check maxconn 2000
    server master2 192.168.1.2:6443 check maxconn 2000
    server master3 192.168.1.3:6443 check maxconn 2000
EOF
systemctl enable haproxy
systemctl start haproxy
- On the lb2 host
yum install -y haproxy
cat << EOF > /etc/haproxy/haproxy.cfg
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

defaults
    mode tcp
    log global
    retries 3
    timeout connect 10s
    timeout client 1m
    timeout server 1m

frontend kubernetes
    bind *:6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master
    balance roundrobin
    server master 192.168.1.1:6443 check maxconn 2000
    server master2 192.168.1.2:6443 check maxconn 2000
    server master3 192.168.1.3:6443 check maxconn 2000
EOF
systemctl enable haproxy
systemctl start haproxy
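haproxy forwards plain TCP here, so the backends will be reported down until the masters come up, but the frontend should already be listening on port 6443 on both lb hosts:

#Should show haproxy bound to *:6443
ss -tnlp | grep 6443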
V. Initialize the masters
- Initialize master1
#kubeadm init configuration file reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
# Local apiserver listening address and port
localAPIEndpoint:
  advertiseAddress: 192.168.1.1
  bindPort: 6443
# Registration information for this node joining the cluster, i.e. what kubectl get node shows
nodeRegistration:
  # If the name field is omitted, the hostname is used by default; the name should be unique within the cluster
  # name: master1
  criSocket: /var/run/dockershim.sock
  # Taint; NoSchedule means no ordinary Pod is scheduled onto this node
  # See: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
# Cluster name
clusterName: kubernetes
# Address the controllers use to reach the apiserver
# For a multi-master cluster, write the address of the front-end lb here
controlPlaneEndpoint: "192.168.1.100:6443"
apiServer:
  # List every master IP, the lb IPs, and any other address, domain name or hostname you may use to reach the apiserver
  certSANs:
  - "master"
  - "master2"
  - "master3"
  - "192.168.1.1"
  - "192.168.1.2"
  - "192.168.1.3"
  - "192.168.1.4"
  - "192.168.1.5"
  - "192.168.1.100"
  - "127.0.0.1"
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
dns:
  type: CoreDNS
etcd:
  # local lets k8s start etcd by itself; a multi-master cluster must use external
  # local:
  #   imageRepository: "k8s.gcr.io"
  #   dataDir: "/var/lib/etcd"
  # External etcd; every apiserver connects to it
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: "/etc/kubernetes/pki/etcd/etcd-ca.pem"
    certFile: "/etc/kubernetes/pki/etcd/etcd.pem"
    keyFile: "/etc/kubernetes/pki/etcd/etcd-key.pem"
imageRepository: k8s.gcr.io
kubernetesVersion: v1.14.1
networking:
  # service network segment
  serviceSubnet: "10.96.0.0/12"
  # pod network segment
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
# Whether kubelet refuses to start if swap is enabled
failSwapOn: false
EOF
systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
kubeadm init --config /root/kubeadm-init.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh
scp -r /etc/kubernetes/pki 192.168.1.2:/etc/kubernetes/
scp -r /etc/kubernetes/pki 192.168.1.3:/etc/kubernetes/
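If the init succeeds, kubeadm prints the join command for this cluster. A few sanity checks worth running before moving on (the node will show NotReady until a pod network add-on such as flannel or calico is deployed, which this section does not cover):

#The scheduler, controller-manager and all three etcd endpoints should be Healthy
kubectl get componentstatuses
#NotReady is expected until a network plugin is installed
kubectl get nodes
#Confirms the apiserver is reachable through the VIP 192.168.1.100:6443
kubectl cluster-info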
- Initialize master2
cd /etc/kubernetes/pki/
rm -fr apiserver.crt apiserver.key
cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.2
  bindPort: 6443
nodeRegistration:
  # name: master1
  criSocket: /var/run/dockershim.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.100:6443"
apiServer:
  certSANs:
  - "master"
  - "master2"
  - "master3"
  - "192.168.1.1"
  - "192.168.1.2"
  - "192.168.1.3"
  - "192.168.1.4"
  - "192.168.1.5"
  - "192.168.1.100"
  - "127.0.0.1"
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: "/etc/kubernetes/pki/etcd/etcd-ca.pem"
    certFile: "/etc/kubernetes/pki/etcd/etcd.pem"
    keyFile: "/etc/kubernetes/pki/etcd/etcd-key.pem"
imageRepository: k8s.gcr.io
kubernetesVersion: v1.14.1
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
failSwapOn: false
EOF
systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
kubeadm init --config /root/kubeadm-init.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh
- Initialize master3
cd /etc/kubernetes/pki/
rm -fr apiserver.crt apiserver.key
cd $HOME
cat << EOF > /root/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.3
  bindPort: 6443
nodeRegistration:
  # name: master1
  criSocket: /var/run/dockershim.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.100:6443"
apiServer:
  certSANs:
  - "master"
  - "master2"
  - "master3"
  - "192.168.1.1"
  - "192.168.1.2"
  - "192.168.1.3"
  - "192.168.1.4"
  - "192.168.1.5"
  - "192.168.1.100"
  - "127.0.0.1"
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - "https://192.168.1.6:2379"
    - "https://192.168.1.7:2379"
    - "https://192.168.1.8:2379"
    caFile: "/etc/kubernetes/pki/etcd/etcd-ca.pem"
    certFile: "/etc/kubernetes/pki/etcd/etcd.pem"
    keyFile: "/etc/kubernetes/pki/etcd/etcd-key.pem"
imageRepository: k8s.gcr.io
kubernetesVersion: v1.14.1
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.100.0.1/24"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
failSwapOn: false
EOF
systemctl enable kubelet
kubeadm config images pull --config kubeadm-init.yaml
kubeadm init --config /root/kubeadm-init.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Configure kubectl command completion
cat << EOF > /etc/profile.d/kubernetes.sh
source <(kubectl completion bash)
EOF
source /etc/profile.d/kubernetes.sh
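At this point all three masters should be registered with the cluster; a quick check from any of them:

#All three masters should be listed; NotReady is expected until the network add-on is deployed
kubectl get nodes -o wide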
VI. Join the worker nodes to the cluster
- Get the token to join the cluster
#Execute on a master to get the join command
kubeadm token create --print-join-command

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.100:6443 --token zpru0r.jkvrdyy2caexr8kk --discovery-token-ca-cert-hash sha256:a45c091dbd8a801152aacd877bcaaaaf152697bfa4536272c905a83612b3bf22
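Run the printed kubeadm join command as root on node1 and node2 (the token and hash above are examples; use whatever your own master prints), then confirm from a master that both nodes registered:

#On node1 and node2: paste the join command printed above
#Then on a master: node1 and node2 should appear in the list
kubectl get nodes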