Complete Kubernetes binary deployment step by step (3) -- component installation (single node)

Keywords: Kubernetes kubelet SSL JSON


Preface

In the previous two articles we completed the basic environment: the etcd cluster (including certificate creation), the flannel network, and the Docker engine. In this article we will complete a single-node Kubernetes cluster, deployed from binaries, across the three servers.

Configure on the master node

1. Create working directory

[root@master01 k8s]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
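The `{cfg,bin,ssl}` part is shell brace expansion, which creates all three subdirectories in one command. A quick self-contained demonstration (using a throwaway temporary directory rather than /opt):

```shell
# Brace expansion: mkdir -p dir/{a,b,c} expands to "mkdir -p dir/a dir/b dir/c".
base=$(mktemp -d)                 # throwaway directory for the demo
mkdir -p "$base"/{cfg,bin,ssl}    # same pattern as the working directory above
ls "$base"                        # prints: bin  cfg  ssl
```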

2. Deploying the apiserver component

2.1 Create the apiserver certificates

2.1.1 Create the certificate directory and write the certificate generation script

[root@master01 k8s]# mkdir k8s-cert
[root@master01 k8s]# cd k8s-cert/

[root@master01 k8s-cert]# cat k8s-cert.sh 
#The CA configuration below was introduced when the etcd cluster was set up, so it is not explained again here; note the address planning for server-csr.json further down
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

#Address plan for the hosts list: 192.168.0.128 master01, 192.168.0.131 master02,
#192.168.0.100 floating VIP, 192.168.0.132/133 load balancer servers.
#Note: JSON does not allow comments, so the addresses must be listed plainly below.
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.0.128",
      "192.168.0.131",
      "192.168.0.100",
      "192.168.0.132",
      "192.168.0.133",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

2.1.2 Execute the script and copy the certificates into the ssl directory of the working directory created earlier

[root@master01 k8s-cert]# bash k8s-cert.sh 
#View related files after script execution
[root@master01 k8s-cert]# ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr     
#Store the certificate required before installing the apiserver component in the working directory
[root@master01 k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master01 k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
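Before moving on, it is worth checking what actually went into a certificate. The sketch below generates a throwaway self-signed certificate so the inspection commands are runnable anywhere; point `-in` at /opt/kubernetes/ssl/server.pem to inspect the real one (the /tmp file names are arbitrary):

```shell
# Generate a throwaway key + self-signed cert purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=kubernetes/O=k8s" \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem

# Print the subject and expiry date; on the real server.pem, add -text as well
# to see the hosts list from server-csr.json as Subject Alternative Names.
openssl x509 -noout -subject -enddate -in /tmp/demo.pem
```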

2.2 Extract the Kubernetes package and copy the command-line tools into the bin directory of the working directory

Package link:
Link: https://pan.baidu.com/s/1COp94_Y47TU0G8-QSYb5Nw
Extraction code: ftzq

[root@master01 k8s]# ls
apiserver.sh  controller-manager.sh  etcd-v3.3.10-linux-amd64         k8s-cert                              master.zip
cfssl.sh      etcd-cert              etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz  scheduler.sh
[root@master01 k8s]# tar zxf kubernetes-server-linux-amd64.tar.gz 
[root@master01 k8s]# ls
apiserver.sh           etcd-cert                        k8s-cert                              master.zip
cfssl.sh               etcd-v3.3.10-linux-amd64         kubernetes                            scheduler.sh
controller-manager.sh  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master01 k8s]# ls kubernetes/ -R
kubernetes/:
addons  kubernetes-src.tar.gz  LICENSES  server

kubernetes/addons:

kubernetes/server:
bin

kubernetes/server/bin:
apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
kubeadm                              kubectl                             kube-scheduler.tar
kube-apiserver                       kubelet                             mounter

#Enter the binary directory and copy the required command-line tools into the bin directory of the working directory created earlier
[root@master01 k8s]# cd kubernetes/server/bin/
[root@master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/

2.3 Create the bootstrap token

#Generate a random token, then write it into token.csv
[root@master01 k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
7f42570ec314322c3d629868855d406f

[root@master01 k8s]# cat /opt/kubernetes/cfg/token.csv
7f42570ec314322c3d629868855d406f,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#Comma-separated fields: token, user name, UID, and group
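The two steps above (generate a token, write token.csv) can be combined into one small script. CFG_DIR defaults to the working directory created earlier and is parameterized only so the sketch runs anywhere:

```shell
# Generate a 32-hex-character bootstrap token and write token.csv in one step.
CFG_DIR=${CFG_DIR:-/opt/kubernetes/cfg}
mkdir -p "$CFG_DIR"

# 16 random bytes, hex-dumped by od, with the spaces stripped -> 32 hex chars
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > "$CFG_DIR/token.csv" <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
cat "$CFG_DIR/token.csv"
```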

2.4 Start the apiserver service

The apiserver.sh script:

[root@master01 k8s]# vim apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

#Generate Kube API server configuration file in k8s working directory
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

#Generate startup script
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

#Start the apiserver component
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
[root@master01 k8s]# bash apiserver.sh 192.168.0.128 https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379
#Check if the process started successfully
[root@master01 k8s]# ps aux | grep kube
root      56487 36.9 16.6 397952 311740 ?       Ssl  19:42   0:07 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379 --bind-address=192.168.0.128 --secure-port=6443 --advertise-address=192.168.0.128 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      56503  0.0  0.0 112676   984 pts/4    R+   19:43   0:00 grep --color=auto kube

View the generated configuration file

[root@master01 k8s]# cat /opt/kubernetes/cfg/kube-apiserver 

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379 \
--bind-address=192.168.0.128 \
--secure-port=6443 \
--advertise-address=192.168.0.128 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

#View the listening https port

[root@master01 k8s]# netstat -natp | grep 6443
tcp        0      0 192.168.0.128:6443      0.0.0.0:*               LISTEN      56487/kube-apiserve 
tcp        0      0 192.168.0.128:6443      192.168.0.128:45162     ESTABLISHED 56487/kube-apiserve 
tcp        0      0 192.168.0.128:45162     192.168.0.128:6443      ESTABLISHED 56487/kube-apiserve 
[root@master01 k8s]# netstat -natp | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      56487/kube-apiserve 
[root@master01 k8s]# 

3. Start the scheduler service

[root@master01 k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

The script of scheduler.sh is as follows:

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Check the process (note the spelling: grepping for the misspelled "kube-scheudler" matches nothing)

[root@master01 k8s]# ps aux | grep kube-scheduler

4. Start the controller manager service

Start it via the script

[root@master01 k8s]#  ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

The script is as follows

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

View the master node status

[root@master01 k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   

If every component reports Healthy, the current configuration is correct

Next comes the deployment on the worker nodes.

Node deployment

First, some files and command-line tools must be copied from the master node to the worker nodes.

1. Copy kubelet and kube-proxy from the master node to the worker nodes

[root@master01 bin]# pwd
/root/k8s/kubernetes/server/bin
[root@master01 bin]# scp kubelet kube-proxy root@192.168.0.129:/opt/kubernetes/bin/
root@192.168.0.129's password: 
kubelet                                                                                 100%  168MB  84.2MB/s   00:02    
kube-proxy                                                                              100%   48MB 104.6MB/s   00:00    
[root@master01 bin]# scp kubelet kube-proxy root@192.168.0.130:/opt/kubernetes/bin/
root@192.168.0.130's password: 
kubelet                                                                                 100%  168MB 123.6MB/s   00:01    
kube-proxy                                                                              100%   48MB 114.6MB/s   00:00    

2. Create the configuration directory on the master node and write the kubeconfig script

[root@master01 k8s]# mkdir kubeconfig
[root@master01 k8s]# cd kubeconfig/

[root@master01 kubeconfig]# cat kubeconfig 
APISERVER=$1
SSL_DIR=$2

# Create kubelet bootstrapping kubeconfig 
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=7f42570ec314322c3d629868855d406f \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the Kube proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
[root@master01 kubeconfig]# 
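For reference, the bootstrap.kubeconfig produced by this script has roughly the following shape (a sketch: because of --embed-certs=true the real file embeds base64-encoded CA data, abbreviated here):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 CA data>   # embedded by --embed-certs=true
    server: https://192.168.0.128:6443
  name: kubernetes
users:
- name: kubelet-bootstrap
  user:
    token: 7f42570ec314322c3d629868855d406f        # the token from token.csv
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
```

kube-proxy.kubeconfig follows the same layout, but authenticates with the kube-proxy client certificate and key instead of a token.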

Setting environment variables

[root@master01 kubeconfig]# vim /etc/profile
#Append this line to the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@master01 kubeconfig]# source /etc/profile
[root@master01 kubeconfig]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/kubernetes/bin/
#View cluster status
[root@master01 kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  

3. Generate the kubeconfig files

[root@master01 k8s-cert]# cd -
/root/k8s/kubeconfig
[root@master01 kubeconfig]#  bash kubeconfig 192.168.0.128 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
#View the two generated kubeconfig files
[root@master01 kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

4. Copy the two configuration files to the worker nodes

[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.0.129:/opt/kubernetes/cfg/
root@192.168.0.129's password: 
bootstrap.kubeconfig                                                    100% 2166     1.2MB/s   00:00    
kube-proxy.kubeconfig                                                   100% 6268     8.1MB/s   00:00    
[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.0.130:/opt/kubernetes/cfg/
root@192.168.0.130's password: 
bootstrap.kubeconfig                                                                    100% 2166     1.4MB/s   00:00    
kube-proxy.kubeconfig                                                                   100% 6268     7.4MB/s   00:00    
[root@master01 kubeconfig]# 

5. Create the bootstrap role binding, which grants permission to connect to the apiserver and request certificate signing; this is a critical step

[root@master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
#The results are as follows
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
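The imperative command above is equivalent to applying a manifest along these lines (a sketch of the object kubectl creates):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper   # built-in role that allows creating CSRs
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap          # the user named in token.csv
```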

Operations on the worker nodes

Start the kubelet service on both nodes

[root@node01 opt]# bash kubelet.sh 192.168.0.129 #on node02, use 192.168.0.130 instead
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

[root@node01 opt]# ps aux | grep kubelet
root      73575  1.0  1.0 535312 42456 ?        Ssl  20:14   0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.0.129 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      73651  0.0  0.0 112676   984 pts/3    R+   20:15   0:00 grep --color=auto kubelet

Verify on master node

[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   8s    kubelet-bootstrap   Pending
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   24s   kubelet-bootstrap   Pending
[root@master01 kubeconfig]# 

PS: Pending means the request is waiting for the cluster to issue a certificate to the node

[root@master01 kubeconfig]# kubectl certificate approve node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk
certificatesigningrequest.certificates.k8s.io/node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk approved
[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   3m46s   kubelet-bootstrap   Approved,Issued
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   4m2s    kubelet-bootstrap   Pending

PS: Approved,Issued indicates that it has been allowed to join the cluster

#View the cluster nodes; node02 (192.168.0.130) has joined successfully

[root@master01 kubeconfig]#  kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.0.130   Ready    <none>   69s   v1.12.3

Approve node01's request in the same way

[root@master01 kubeconfig]# kubectl certificate approve node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg
certificatesigningrequest.certificates.k8s.io/node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg approved
[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   6m20s   kubelet-bootstrap   Approved,Issued
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   6m36s   kubelet-bootstrap   Approved,Issued
[root@master01 kubeconfig]#  kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.0.129   Ready    <none>   7s      v1.12.3
192.168.0.130   Ready    <none>   2m55s   v1.12.3
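Approving CSRs one at a time gets tedious with more nodes. A small filter can pull out the names of the Pending requests; piped into kubectl certificate approve, it handles them all at once. The kubectl usage is shown as a comment since it needs a live cluster; the demo runs the filter on sample output shaped like the listings above:

```shell
# Select the names (column 1) of CSRs whose CONDITION column says Pending.
pending_csrs() { awk 'NR > 1 && /Pending/ {print $1}'; }

# Against a live cluster you would run:
#   kubectl get csr | pending_csrs | xargs -r -n1 kubectl certificate approve

# Demo on output shaped like the `kubectl get csr` listing:
printf '%s\n' \
  'NAME         AGE   REQUESTOR           CONDITION' \
  'node-csr-A   8s    kubelet-bootstrap   Pending' \
  'node-csr-B   24s   kubelet-bootstrap   Approved,Issued' \
  | pending_csrs
# prints: node-csr-A
```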

Start the kube-proxy service on both nodes

[root@node01 opt]# bash proxy.sh 192.168.0.129
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
#Check proxy service status
[root@node01 opt]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-05-04 20:45:26 CST; 1min 9s ago
 Main PID: 77325 (kube-proxy)
   Memory: 7.6M
   CGroup: /system.slice/kube-proxy.service
           ‣ 77325 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168....

At this point the single-node Kubernetes cluster is fully configured; the process was split across three articles, step by step.

Finally, here are the configuration files on the worker nodes.

node01 node
[root@node01 cfg]# cat kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.129 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@node01 cfg]# cat kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.129 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

node02 node
[root@node02 cfg]# cat kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.130 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@node02 cfg]# cat kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.130 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

Posted by davelr459 on Tue, 05 May 2020 06:42:52 -0700