Kubernetes 1.9 Production Environment High Availability Practice - 005: Installing kubelet and kube-proxy on the Node

Keywords: kubelet, kube-proxy, Kubernetes

This article follows Kubernetes 1.9 Production Environment High Availability Practice - 004: Installing the flannel Network Plug-in on the Node.

This article explains how to install kubelet and kube-proxy for Kubernetes 1.9 on the server yds-dev-svc02-node01.
Throughout the configuration I copy the full output of each command for your reference; this also makes clear which server each command is executed on.

01 Prepare the Files

01.01 Download the required files

We downloaded all of the binary files needed for the cluster installation in Kubernetes 1.9 Production Environment High Availability Practice - 002. The download address is: https://pan.baidu.com/s/1wyhV_kBpIqZ_MdS2Ghb8sg

In this section we use two of those files: kubelet and kube-proxy.

Next, we begin to configure.

02 Configure kubelet

02.01 Prepare kubelet

Put the kubelet binary into the /usr/bin/ directory.

[root@yds-dev-svc02-node01 ~]# cp kubelet /usr/bin/
[root@yds-dev-svc02-node01 ~]# chmod +x /usr/bin/kubelet 
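
The kube-proxy binary will also be needed on this node later (the service file in section 03 points at /usr/bin/kube-proxy). Assuming it sits in the current directory alongside kubelet, it can be put in place the same way:

[root@yds-dev-svc02-node01 ~]# cp kube-proxy /usr/bin/
[root@yds-dev-svc02-node01 ~]# chmod +x /usr/bin/kube-proxy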

02.02 Download the pod-infrastructure image

[root@yds-dev-svc02-node01 ssl]# yum install *rhsm*
[root@yds-dev-svc02-node01 ssl]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
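
If you want to confirm the image is available locally before kubelet needs it, listing it is a quick check:

[root@yds-dev-svc02-node01 ssl]# docker images registry.access.redhat.com/rhel7/pod-infrastructure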

02.03 Prepare the Certificate Files

Next we need to create the certificate files for kube-proxy.
As before, we go back to the server yds-dev-svc01-etcd01 and create them there.

Create kube-proxy-csr.json

[root@yds-dev-svc01-etcd01 key]# cat kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "chengdu",
      "L": "chengdu",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Create the certificate with the cfssl command:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

View the created certificate

[root@yds-dev-svc01-etcd01 key]# ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
[root@yds-dev-svc01-etcd01 key]# pwd
/tmp/key
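
As an optional sanity check, openssl can display the subject and validity of the newly issued certificate; the CN should be system:kube-proxy, matching the CSR above:

[root@yds-dev-svc01-etcd01 key]# openssl x509 -in kube-proxy.pem -noout -subject -dates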

02.04 Create the kube-proxy kubeconfig file

Configure the cluster:

kubectl config set-cluster kubernetes \
  --certificate-authority=/tmp/key/ca.pem \
  --embed-certs=true \
  --server=https://192.168.3.55:6443 \
  --kubeconfig=kube-proxy.kubeconfig

Configure client authentication:

kubectl config set-credentials kube-proxy \
  --client-certificate=/tmp/key/kube-proxy.pem \
  --client-key=/tmp/key/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

Configure the context:

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

Set the default context:

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
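
Optionally, the resulting kubeconfig can be checked without dumping the embedded certificate data:

kubectl config view --kubeconfig=kube-proxy.kubeconfig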

When the configuration is complete, the kube-proxy.kubeconfig file is generated. Next, we copy this file to the node's /etc/kubernetes directory.
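
The copy can be done with scp. Assuming the file was generated in /tmp/key on yds-dev-svc01-etcd01 and that the node's address is 192.168.3.56 (the address used for kubelet later in this article), something like the following works:

scp /tmp/key/kube-proxy.kubeconfig root@192.168.3.56:/etc/kubernetes/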

02.05 Create the bootstrap configuration

This step is executed on yds-dev-svc01-master01, where kubectl is installed.

When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user listed in the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests:

[root@yds-dev-svc01-master01 ~]# cd /etc/kubernetes/
[root@yds-dev-svc01-master01 kubernetes]# ls
apiserver  config  controller-manager  scheduler  ssl  token.csv
[root@yds-dev-svc01-master01 kubernetes]# kubectl create clusterrolebinding kubelet-bootstrap \
>   --clusterrole=system:node-bootstrapper \
>   --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created

View the creation results:

[root@yds-dev-svc01-master01 kubernetes]# kubectl get clusterrolebinding
NAME                                                   AGE
cluster-admin                                          8d
kubelet-bootstrap                                      3m
system:aws-cloud-provider                              8d
system:basic-user                                      8d
system:controller:attachdetach-controller              8d
system:controller:certificate-controller               8d
system:controller:clusterrole-aggregation-controller   8d
system:controller:cronjob-controller                   8d
system:controller:daemon-set-controller                8d
system:controller:deployment-controller                8d
system:controller:disruption-controller                8d
system:controller:endpoint-controller                  8d
system:controller:generic-garbage-collector            8d
system:controller:horizontal-pod-autoscaler            8d
system:controller:job-controller                       8d
system:controller:namespace-controller                 8d
system:controller:node-controller                      8d
system:controller:persistent-volume-binder             8d
system:controller:pod-garbage-collector                8d
system:controller:replicaset-controller                8d
system:controller:replication-controller               8d
system:controller:resourcequota-controller             8d
system:controller:route-controller                     8d
system:controller:service-account-controller           8d
system:controller:service-controller                   8d
system:controller:statefulset-controller               8d
system:controller:ttl-controller                       8d
system:discovery                                       8d
system:kube-controller-manager                         8d
system:kube-dns                                        8d
system:kube-scheduler                                  8d
system:node                                            8d
system:node-proxier                                    8d

View the details of the binding:

[root@yds-dev-svc01-master01 kubernetes]# kubectl describe clusterrolebinding kubelet-bootstrap
Name:         kubelet-bootstrap
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  system:node-bootstrapper
Subjects:
  Kind  Name               Namespace
  ----  ----               ---------
  User  kubelet-bootstrap  

View the content:

[root@yds-dev-svc01-master01 kubernetes]# kubectl edit clusterrolebinding kubelet-bootstrap

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-04-17T08:01:22Z
  name: kubelet-bootstrap
  resourceVersion: "528680"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubelet-bootstrap
  uid: 851e77fc-4215-11e8-b786-000c2948d8a8
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-bootstrap
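
The kubelet configuration in the next step also references /etc/kubernetes/bootstrap.kubeconfig. If that file has not been generated in an earlier article, it can be created in much the same way as the kube-proxy kubeconfig, using the bootstrap token from token.csv; the token value and the CA path below are placeholders for this environment:

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.3.55:6443 \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=<token from token.csv> \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

The finished bootstrap.kubeconfig is then copied to /etc/kubernetes/ on the node, next to kube-proxy.kubeconfig.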

02.06 Create the kubelet configuration file

The configuration file path is /etc/kubernetes/kubelet:

[root@yds-dev-svc02-node01 ~]# cat /etc/kubernetes/kubelet
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.3.56"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=yds-dev-svc02-node01"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
#
## Add your own!
KUBELET_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice --cgroup-driver=systemd --fail-swap-on=false --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig  --cluster-dns=10.254.0.2 --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"

KUBELET_ADDRESS: the IP address of this node.
KUBELET_HOSTNAME: the host name of this node. The most visible effect of this setting is the node name shown in the output of 'kubectl get nodes'.
KUBELET_API_SERVER: the apiserver address configured earlier. The service file below references this variable, but it is not set here because the kubelet reaches the apiserver through its kubeconfig files.
cert-dir: the directory in which automatically generated certificates are stored.
tls-cert-file: the kubelet's own TLS serving certificate; if it is not set, a self-signed certificate and key are generated and written to cert-dir.
tls-private-key-file: the private key matching tls-cert-file.

02.07 Create the config file

[root@yds-dev-svc02-node01 ~]# cat /etc/kubernetes/config 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

02.08 Create the kubelet service file

Create the unit file /usr/lib/systemd/system/kubelet.service:

[root@yds-dev-svc02-node01 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
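
The unit's WorkingDirectory must exist before the service will start, so create /var/lib/kubelet now if it is not already there:

[root@yds-dev-svc02-node01 ~]# mkdir -p /var/lib/kubelet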

03 Configure kube-proxy

03.01 Create the kube-proxy configuration file

[root@yds-dev-svc02-node01 kubernetes]# cat proxy 
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"
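
Depending on the environment, kube-proxy is often also given --bind-address (the node's IP) and --cluster-cidr (the pod network range used by flannel); the values below are placeholders, and the minimal line above is enough to get started:

KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --bind-address=192.168.3.56 --cluster-cidr=<pod network CIDR>"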

03.02 Create the kube-proxy service file

[root@yds-dev-svc02-node01 kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

04 Start the Services
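
Before enabling the services for the first time (or after editing the unit files), make systemd re-read the unit definitions:

systemctl daemon-reload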

systemctl enable kubelet kube-proxy ; systemctl restart kubelet kube-proxy; systemctl status kubelet kube-proxy

04.01 Send the Certificate Signing Request

When kubelet starts for the first time, it sends a certificate signing request (CSR) to the apiserver; once the request is approved, the apiserver admits the node into the cluster.
To view the signing requests sent by the node, run kubectl get certificatesigningrequests or its short form kubectl get csr; both commands return the same result.

[root@yds-dev-svc01-master01 ~]# kubectl get certificatesigningrequests
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo   11s       kubelet-bootstrap   Pending
[root@yds-dev-svc01-master01 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo   3m        kubelet-bootstrap   Pending

node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo is the name of the pending request.

04.02 Approve the Signing Request

The apiserver has to approve the signing request, which we do through kubectl. Here we run the command on the server yds-dev-svc01-master01.

[root@yds-dev-svc01-master01 ~]# kubectl certificate approve node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo
certificatesigningrequest "node-csr-KHdclgQlIa0kaTz-f5vjijMx_G2vzLUjuQZc8UIc7Oo" approved
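
After approval, re-running the earlier query should show the request's CONDITION change from Pending to Approved,Issued:

[root@yds-dev-svc01-master01 ~]# kubectl get csr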

04.03 Check Certificate Generation

After the signing request is approved, the node automatically obtains its certificate files, which are stored in /etc/kubernetes/ssl, the cert-dir set in our earlier configuration file. Let's look at the files generated in this directory.

[root@yds-dev-svc02-node01 ssl]# ls kubelet*
kubelet-client.crt  kubelet-client.key  kubelet.crt  kubelet.key
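
Optionally, the client certificate can be inspected to confirm it was issued to this node; the CN is expected to take the form system:node:<node name>:

[root@yds-dev-svc02-node01 ssl]# openssl x509 -in kubelet-client.crt -noout -subject -issuer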

04.04 Check Node Information

Remember that we configured kubectl on the server yds-dev-svc01-master01? We run the following command there.

[root@yds-dev-svc01-master01 ~]# kubectl get nodes
NAME                   STATUS    ROLES     AGE       VERSION
yds-dev-svc02-node01   Ready     <none>    5d        v1.9.0

As you can see, the node we just configured is now listed.

With that, the node configuration is complete. If you want to add more nodes, simply repeat the same steps on each one.
