Deployment and brief introduction of Kubernetes + Docker

Keywords: Kubernetes, Docker, kubelet, yum

Related links:

Introduction: https://www.cnblogs.com/xkops/p/6165565.html 

Construction: https://www.cnblogs.com/xkops/p/6169034.html

What is k8s?

It can be understood as follows: k8s is short for Kubernetes, and it can be seen as a support platform for distributed systems.

Why do we use k8s cluster?

Fault self-healing:

k8s monitors the operation of containers; we put our projects inside containers. If, because of some external or internal reason, a server can no longer bear the pressure and a container on one node suddenly dies, k8s will immediately reschedule that service onto another node machine.

Application updates:

Update a project online without interrupting its current operation.

There are also features for automatic scale-out and scale-in (autoscaling). I haven't used them myself, so it's hard for me to say much; the relevant commands are sketched below for reference.
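A minimal sketch of the update and scaling commands just mentioned (the deployment name myapp is hypothetical, not part of this article's cluster):

kubectl set image deployment/myapp myapp=myapp:v2   # rolling update, no downtime
kubectl rollout status deployment/myapp             # watch the rollout progress
kubectl scale deployment myapp --replicas=5         # manual scale-out
kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80   # automatic scaling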

Life cycle management of k8s:

When k8s is used to manage applications, the basic steps are: creating a cluster, deploying the application, publishing the application, scaling the application, and updating the application.
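Continuing with the hypothetical myapp from above, the first of these steps also map onto single commands (cluster creation with kubeadm, and updating/scaling, are covered elsewhere in this article):

kubectl create deployment myapp --image=myapp:v1            # deploy the application
kubectl expose deployment myapp --port=80 --type=NodePort   # publish it as a service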

 

The main components of k8s and what they are mainly used for:

etcd: an open-source distributed key-value store. It provides reliable distributed data storage for persisting the configuration and state of the k8s cluster.

apiserver: the interface through which user programs (such as kubectl) and the other components of k8s communicate. The k8s components do not talk to each other directly, but through the API server. For example, only the API server is connected to etcd, which means that when other components read or update the state of the k8s cluster, they can only read and write the data in etcd through the API server (see the sketch after this list).

Scheduler: a scheduling component that assigns work nodes to each deployable component of a user application.

Controller Manager: performs cluster-level functions, such as replicating components, tracking work-node status, and handling node failures. The controller manager is composed of multiple controllers, many of which correspond to k8s resource types, such as the Replication Manager (which manages ReplicationController resources), the ReplicaSet controller, and the PersistentVolume controller.

kube-proxy: load-balances network traffic between application components.

kubelet: manages the containers on a work node.

Container runtime: the component that actually runs the containers, such as Docker, rkt, etc.
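As a concrete illustration that the API server is the single entry point: kubectl itself is only a REST client of it. A minimal sketch:

# these two are equivalent in spirit; kubectl translates the first into REST calls like the second
kubectl get nodes
kubectl get --raw /api/v1/nodes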

Those are all the components used in a k8s cluster. What is each of them for on each machine? Let's look at them in more detail.

The components required on the master (192.168.26.277):

etcd: a distributed data store used to persist the configuration and state of the k8s cluster

kube-apiserver: the API server exposes an HTTP REST interface and is the entry point for the whole cluster. The other k8s components do not communicate with each other directly, but through the API server. (Only the API server is connected to etcd, so when other components read or update cluster state, they can only read and write the data in etcd through the API server.)

kube-scheduler: the scheduler is responsible for resource scheduling

kube-controller-manager: the management and control center of the whole cluster. This component consists of multiple controllers, such as the Replication Manager, the ReplicaSet controller, and the PersistentVolume controller. It is mainly used to replicate components, track work-node status, and handle failed nodes (a quick health check is sketched below).
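Once the cluster is running, a quick way to check the health of these master components (scheduler, controller manager, etcd) on this k8s version is:

kubectl get componentstatuses   # shorthand: kubectl get cs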

 

The components required on the node machine (192.168.26.228):

flannel: as far as I understand, it provides the network that supports pod communication across nodes

kube-proxy: load-balances network traffic

kubelet: manages the containers on the node machine

docker: the runtime that actually runs the project's image containers
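On the node machine itself, a quick sanity check that these components are alive might look like this (a sketch; flannel runs as a pod once the network add-on is installed):

systemctl status kubelet docker   # both should be active (running)
docker ps                         # the containers kubelet has started on this node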

How the whole k8s cluster works [key core knowledge, very important]:

The kube-controller-manager on the master is the control and management center of the whole cluster. Its node controller module monitors the status of the node machines in real time through the watch interface provided by the apiserver.

When a node machine goes down, the controller manager detects the failure in time and handles it automatically: the pods that ran on the failed node are rescheduled elsewhere.

 

The kubelet process on the node machine calls the apiserver interface at regular intervals to report its own status, and the apiserver updates the node status into etcd after receiving this information. The kubelet also listens for pod information through the apiserver's watch interface: if it sees that a new pod copy has been scheduled and bound to its node, it creates and starts the corresponding pod containers; if it sees that a pod object has been deleted, it deletes the corresponding pod containers on its node.
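The heartbeats the kubelet reports show up as node conditions, which can be inspected from the master; for example, with the node from this article's cluster:

kubectl describe node k8s-node1   # the Conditions section shows a LastHeartbeatTime per condition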

kubernetes basic environment configuration

Docker itself does not place many requirements on the environment, so the following settings are all for k8s.

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Turn off swap
swapoff -a
cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
systemctl daemon-reload

# Turn off the firewall; this is the simplest approach. When running inside a VPC, the internal network is actually safe
systemctl stop firewalld 
systemctl disable firewalld
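If you would rather not disable the firewall entirely, an alternative is to open only the ports k8s uses. A sketch for the master (worker nodes need 10250/tcp and 30000-32767/tcp instead):

firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp
firewall-cmd --reload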

# Set the kernel network forwarding parameters
# br_netfilter must be loaded before the bridge-nf sysctls can take effect
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/rc.local
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p

# Configure the kubernetes yum repository (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install docker and kubernetes

yum install docker-ce kubelet kubeadm kubectl
systemctl enable docker && systemctl restart docker 
systemctl enable kubelet && systemctl start kubelet  

Note: the Docker version installed from the default CentOS repositories is old (for example, the version I got by default was 1.13), so when you need the latest version you have to uninstall it and install the latest Docker instead.
After March 1, 2017, Docker's version naming changed and the CE and EE editions were split.
The differences are as follows:
Docker Community Edition (CE): for developers or small teams that create container-based applications and share and automate development pipelines with team members. Docker CE provides a simple, fast installation so that development can start immediately. (Free)
Docker Enterprise Edition (EE): built specifically for enterprise development and IT teams. Docker EE provides enterprises with the most secure container platform and an application-centric platform. (Paid)
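To check which version is currently installed before deciding whether to reinstall, for instance:

docker version   # a server version of 1.13.x means the old CentOS-repo docker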

# Stop the service
systemctl stop docker
# Remove all docker-related packages
yum erase docker \
          docker-*

# Add Alibaba Cloud's repository mirror; pulling from docker.io is very slow
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install and start docker
yum install docker-ce -y
systemctl start docker
systemctl enable docker

Download the kubernetes images

kubernetes is developed by Google in the Go language, so by default the images are pulled from Google's registry (k8s.gcr.io), which is not accessible from our network environment, so we need another way to pull them.
We wrote a script, pull_images.sh, that pulls the images from hub.docker.com instead.

#!/bin/bash

gcr_name=k8s.gcr.io
hub_name=mirrorgooglecontainers

# images required by kubeadm for v1.15.0, plus the dashboard
images=(
kubernetes-dashboard-amd64:v1.10.1
kube-apiserver:v1.15.0
kube-controller-manager:v1.15.0
kube-scheduler:v1.15.0
kube-proxy:v1.15.0
pause:3.1
etcd:3.3.10
)

# pull each image from the Docker Hub mirror, retag it under k8s.gcr.io,
# then remove the mirror tag
for image in "${images[@]}"; do
    docker pull "$hub_name/$image"
    docker tag "$hub_name/$image" "$gcr_name/$image"
    docker rmi "$hub_name/$image"
done

# coredns lives under its own organization on Docker Hub
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1

The principle is in fact very simple: after pulling each image down, tag it again under the k8s.gcr.io domain so that kubeadm finds it locally.
After completion:

[root@k8s-master ~]# docker images
REPOSITORY                              TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                   v1.15.0   d235b23c3570   4 weeks ago     82.4MB
k8s.gcr.io/kube-apiserver               v1.15.0   201c7a840312   4 weeks ago     207MB
k8s.gcr.io/kube-controller-manager      v1.15.0   8328bb49b652   4 weeks ago     159MB
k8s.gcr.io/kube-scheduler               v1.15.0   2d3813851e87   4 weeks ago     81.1MB
k8s.gcr.io/coredns                      1.3.1     eb516548c180   6 months ago    40.3MB
k8s.gcr.io/etcd                         3.3.10    2c4adeb21b4f   7 months ago    258MB
k8s.gcr.io/kubernetes-dashboard-amd64   v1.10.0   0dab2435c100   10 months ago   122MB
k8s.gcr.io/pause                        3.1       da86e6ba6ca1   19 months ago   742kB

Initialize k8s master

Now the k8s installation itself is actually complete. Next we need to initialize the master node; execute the command directly:

# Since we are using a virtual machine with only one CPU, and k8s recommends at least two, we ignore this preflight error
kubeadm init --kubernetes-version v1.15.0 --pod-network-cidr 10.244.0.0/16 --ignore-preflight-errors=NumCPU


Pay attention to the following output after execution; it tells us the follow-up steps for setting up the cluster:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.91.132:6443 --token zf26v4.3u5z3g09ekm4owt3 \
    --discovery-token-ca-cert-hash sha256:fce98cb6779dbcc73408d1faad50c9d8f86f154ed88a5380c08cece5e08aba58
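Note the reminder in the output to deploy a pod network. Since we passed --pod-network-cidr 10.244.0.0/16, which is flannel's default network, flannel is the natural choice. A sketch using the manifest URL that was commonly used at the time (check the flannel project for the current one):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This is what produces the kube-flannel-ds pods that appear in the pod listings later in this article.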

Add a node

Just execute the corresponding join command on your node machine:

kubeadm join 192.168.91.132:6443 --token zf26v4.3u5z3g09ekm4owt3 \
    --discovery-token-ca-cert-hash sha256:fce98cb6779dbcc73408d1faad50c9d8f86f154ed88a5380c08cece5e08aba58

After execution, run kubectl get nodes on the master node. You should see:

NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   21h    v1.15.0
k8s-node1    Ready    <none>   127m   v1.15.0

Launch Kubernetes dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy

Then open the following address in a browser:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Done.

Exception summary

k8s is unable to start after a reboot.

Use journalctl -f to view the logs:

-- The start-up result is done.
Jul 19 10:27:34 k8s-node1 kubelet[9831]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 19 10:27:34 k8s-node1 kubelet[9831]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 19 10:27:34 k8s-node1 kubelet[9831]: I0130 10:27:34.877299 9831 server.go:407] Version: v1.15.0
Jul 19 10:27:34 k8s-node1 kubelet[9831]: I0130 10:27:34.877538 9831 plugins.go:103] No cloud provider specified.
Jul 19 10:27:34 k8s-node1 kubelet[9831]: I0130 10:27:34.892361 9831 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 19 10:27:34 k8s-node1 kubelet[9831]: I0130 10:27:34.926248 9831 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 19 10:27:34 k8s-node1 kubelet[9831]: F0130 10:27:34.926665 9831 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority /swapfile file 2097148 0 -2]
Jul 19 10:27:34 k8s-node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 19 10:27:34 k8s-node1 systemd[1]: Unit kubelet.service entered failed state.
Jul 19 10:27:34 k8s-node1 systemd[1]: kubelet.service failed.
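(Incidentally, to follow only the kubelet's log rather than the whole journal, the systemd unit can be specified:

journalctl -u kubelet -f
)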

At first I thought the problem was the "Flag --cgroup-driver has been deprecated" lines. Later I found that the real cause was "failed to run Kubelet: Running with swap on is not supported, please disable swap!". This log line is very explicit.
The solution is to turn swap off and restart kubelet:

swapoff -a
cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab
systemctl daemon-reload
systemctl restart kubelet

So I also added these commands to the basic environment configuration section above.

Some kubernetes pods do not run normally.

For example:

[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                    READY   STATUS         RESTARTS   AGE
kube-system   coredns-5c98db65d4-b2rgr                1/1     Running        0          20h
kube-system   coredns-5c98db65d4-l6x97                1/1     Running        0          20h
kube-system   etcd-k8s-master                         1/1     Running        4          20h
kube-system   kube-apiserver-k8s-master               1/1     Running        15         20h
kube-system   kube-controller-manager-k8s-master      1/1     Running        27         20h
kube-system   kube-flannel-ds-amd64-k5kjg             1/1     Running        2          110m
kube-system   kube-flannel-ds-amd64-z7lcn             1/1     Running        20         88m
kube-system   kube-proxy-992ql                        1/1     Running        4          20h
kube-system   kube-proxy-ss9r6                        1/1     Running        0          27m
kube-system   kube-scheduler-k8s-master               1/1     Running        29         20h
kube-system   kubernetes-dashboard-7d75c474bb-s7fwq   0/1     ErrImagePull   0          102s

Here kubernetes-dashboard-7d75c474bb-s7fwq shows ErrImagePull.
We execute the command kubectl describe pod kubernetes-dashboard-7d75c474bb-s7fwq -n kube-system
(note that -n must be added to specify the namespace; otherwise the pod is looked up in the default namespace and you get: Error from server (NotFound): pods "XXXXXXX" not found)
Finally, the Events section is as follows:

Events:
  Type     Reason     Age                 From                Message
  ----     ------     ----                ----                -------
  Normal   Scheduled  119s                default-scheduler   Successfully assigned kube-system/kubernetes-dashboard-7d75c474bb-s7fwq to k8s-node1
  Normal   Pulling    50s (x3 over 118s)  kubelet, k8s-node1  Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Warning  Failed     33s (x3 over 103s)  kubelet, k8s-node1  Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     33s (x3 over 103s)  kubelet, k8s-node1  Error: ErrImagePull
  Normal   BackOff    6s (x4 over 103s)   kubelet, k8s-node1  Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Warning  Failed     6s (x4 over 103s)   kubelet, k8s-node1  Error: ImagePullBackOff

The problem shown by this log is that the pod needs kubernetes-dashboard-amd64:v1.10.1, while the image currently present in Docker on the node is older (v1.10.0), so we just need to pull the newer image.
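Following the same retagging trick as the pull script above, pulling the missing version on the node might look like this:

docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1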
