Installation and use of kubernetes

Keywords: Docker Kubernetes Microservices Container

1 Introduction to kubernetes

1.1 kubernetes function

Automatic bin packing

  • Automatically deploys application containers based on the resource requirements of the application's runtime environment

Self-healing

  • Restarts a container when it fails
  • Redeploys and reschedules containers when their Node has a problem
  • Kills a container when it fails a health check
  • Does not expose a container to traffic until it is up and running properly
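These self-healing behaviors rest on health probes. A hedged sketch (the image, paths, and timings are illustrative, not from this document) of how liveness and readiness probes are declared:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx:1.10            # placeholder image
    livenessProbe:               # restart the container if this check fails
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
    readinessProbe:              # withhold traffic until this check passes
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 5
```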

Horizontal scaling

  • Application containers can be scaled up or down with a simple command, through the UI, or automatically based on CPU usage

Service discovery

  • Kubernetes provides service discovery and load balancing out of the box, so no additional service discovery mechanism is needed

Rolling update

  • As the application changes, the containers running it can be updated all at once or in batches

Version rollback

  • A deployment can be rolled back immediately to a historical version of the application

Secret and configuration management

  • Secrets and application configuration can be deployed and updated without rebuilding the image, similar to hot deployment

Storage orchestration

  • Automatically mounts storage systems for applications, which is especially important for data persistence in stateful applications
  • Storage can come from local directories, network storage (NFS, Gluster, Ceph, Cinder, etc.), public cloud storage services, and so on
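As a hedged sketch of storage orchestration (the NFS server address and export path are placeholders), a PersistentVolume and a matching PersistentVolumeClaim might look like:

```yaml
# PersistentVolume backed by NFS (server/path are illustrative)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 192.168.0.100
    path: /exports/data
---
# Claim that binds to a matching PV; Pods reference the claim, not the PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
```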

1.2 kubernetes architecture

Architecture classification

Decentralized architecture (no central node)

  • GlusterFS

Centralized architecture (with a central management node)

  • HDFS
  • K8S

Node role function

Master Node

  • The control node of a k8s cluster: it schedules and manages the cluster and accepts cluster operation requests from users outside the cluster
  • The Master Node consists of the API Server, Scheduler, Cluster State Store (etcd database), and Controller Manager Server

Worker Node

  • A cluster worker node that runs the user's business application containers
  • A Worker Node runs the kubelet, kube-proxy, and a Container Runtime

1.3 kubernetes components

1.3.1 Namespace

  • Namespace
  • Function: provides resource isolation for multi-tenancy
  • It is logical isolation
  • It is a management boundary
  • It is not a network boundary
  • A resource quota can be set for each namespace
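A hedged example of such a per-namespace quota (the namespace name and the limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-demo
  namespace: my-namespace      # illustrative namespace
spec:
  hard:
    pods: "10"                 # at most 10 Pods in this namespace
    requests.cpu: "4"          # total CPU requests capped at 4 cores
    requests.memory: 8Gi       # total memory requests capped at 8 GiB
```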

1.3.2 pod

  • A Pod is the smallest unit that a Kubernetes cluster can schedule
  • A Pod is an encapsulation of one or more containers
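A minimal Pod manifest illustrating this encapsulation (the image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.10        # placeholder image
    ports:
    - containerPort: 80
```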

1.3.3 controller

  • Controller
  • Monitors the resource objects on which an application runs
  • When a Pod has a problem, it is pulled up again to restore the state the user expects

Common controllers

| Controller name | Effect |
| --- | --- |
| Deployment | Declarative update controller, used for publishing stateless applications |
| ReplicaSet | Replica set controller, used to expand or trim the replica count of Pods |
| StatefulSet | Stateful replica set, used for publishing stateful applications |
| DaemonSet | Runs one replica on each Node of the k8s cluster, used for applications such as monitoring agents or log collectors |
| Job | Runs a one-off job task |
| CronJob | Runs recurring job tasks |
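As an illustration of the Job/CronJob controllers above, a hedged CronJob sketch (the schedule and command are placeholders) that runs a task every five minutes:

```yaml
apiVersion: batch/v1beta1     # CronJob API group at k8s 1.17
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"     # cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo hello"]
          restartPolicy: OnFailure
```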

Introduction to the Deployment controller

  • Provides online deployment, rolling upgrades, replica creation, rollback to a previous (successful/stable) version, and so on
  • A Deployment manages a ReplicaSet; it is recommended to use a Deployment rather than a ReplicaSet directly, unless you need a custom upgrade strategy or do not want Pod upgrades at all

1.3.4 Service

Service Introduction

  • Not a real (entity) service
  • A set of iptables or ipvs forwarding rules

Service role

  • A Service provides clients with a way to access Pods, i.e. it is the entry point through which clients reach the Pods
  • A Service is associated with Pods through Pod labels

Service type

  • ClusterIP: the default; assigns a virtual IP that is reachable only within the cluster
  • NodePort: allocates a port on every Node as an external access entry
  • LoadBalancer: works with specific cloud providers, such as Google Cloud, AWS, and OpenStack
  • ExternalName: brings a service outside the cluster into the cluster, so that Pods inside the cluster can communicate with external services

Service parameters

  • port: the port used to access the Service
  • targetPort: the container port inside the Pod
  • nodePort: the port (30000–32767) through which external users access the k8s cluster service via a Node
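How the three port parameters relate can be sketched in one hedged Service manifest (all values illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 8080        # port the Service exposes inside the cluster
    targetPort: 80    # container port in the Pod
    nodePort: 30080   # port opened on every Node (30000-32767)
```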

Service creation

  • There are two ways to create a Service: command line creation and YAML file creation

2 kubernetes installation

2.1 system description

| Host name | IP address | Installed software |
| --- | --- | --- |
| k8s-master | 192.168.0.1 | kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd, calico |
| k8s-node1 | 192.168.0.2 | kubelet, kube-proxy, docker |
| k8s-node2 | 192.168.0.3 | kubelet, kube-proxy, docker |

2.2 cluster installation

2.2.1 Steps required on all three machines

1. Modify the hostname and hosts files

# Set the hostname (run the matching command on each machine)
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1 
hostnamectl set-hostname k8s-node2

#Modify the hosts file
vim /etc/hosts
#Append the following content
192.168.0.1 k8s-master 
192.168.0.2 k8s-node1
192.168.0.3 k8s-node2

**2.** Install dependency packages

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

3. Set the firewall to Iptables and set empty rules

#Turn off firewall
systemctl  stop firewalld  &&  systemctl  disable firewalld

#Set empty rule
yum -y install iptables-services  &&  systemctl  start iptables  &&  systemctl  enable iptables&&  iptables -F  &&  service iptables save

4. Turn off SELinux

#Temporarily shut down SELinux
setenforce 0 

#Permanently close SELinux
vi /etc/sysconfig/selinux 
#Modification content
SELINUX=disabled

5. Set system parameters

Allow route forwarding and make bridged traffic visible to iptables

#create a file
vi /etc/sysctl.d/k8s.conf
#content
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1 
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0 # Prohibit the use of swap space, allowing it only when the system is OOM
vm.overcommit_memory = 1 # Do not check whether physical memory is sufficient
vm.panic_on_oom = 0 # Do not panic on OOM (let the OOM killer handle it)
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv6.conf.all.disable_ipv6 = 1
net.netfilter.nf_conntrack_max = 2310720

Apply the configuration

sysctl -p /etc/sysctl.d/k8s.conf

6. Adjust the system time zone

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai

# Writes the current UTC time to the hardware clock
timedatectl set-local-rtc 0

# Restart system time dependent services
systemctl restart rsyslog
systemctl restart crond

7. Shut down unnecessary services

systemctl stop postfix && systemctl disable postfix

8. Prerequisites for kube-proxy to enable ipvs

vim /etc/sysconfig/modules/ipvs.modules
#content
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

#Execute command
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

9. Close swap for all nodes

#Temporarily disable swap
swapoff -a 

#Permanently disable swap
vi /etc/fstab 
#Comment out the following line
/dev/mapper/cl-swap swap swap defaults 0 0
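As a hedged alternative to editing the file by hand, the swap line can be commented out with sed. Demonstrated here on a temporary copy (the /tmp/fstab.demo path is illustrative); on a real node you would run the sed command against /etc/fstab:

```shell
# Create a demo file containing a swap entry like the one above
printf '/dev/mapper/cl-swap swap swap defaults 0 0\n' > /tmp/fstab.demo

# Prefix every line mentioning "swap" with a '#' comment marker
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

cat /tmp/fstab.demo   # the swap line is now commented out
```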

10. Configure rsyslogd and systemd journald

# Directory where logs are persisted
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d

#Edit file
vim /etc/systemd/journald.conf.d/99-prophet.conf
#content
[Journal]
# Persistent save to disk
Storage=persistent
# Compress history log
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum occupied space 10G
SystemMaxUse=10G
# The maximum size of a single log file is 200M
SystemMaxFileSize=200M
# Log retention time: 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no

#restart
systemctl restart systemd-journald

11. Upgrade the system kernel to 4.4

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check whether the corresponding kernel menuentry in / boot/grub2/grub.cfg contains initrd16 configuration. If not, install it again!
yum --enablerepo=elrepo-kernel install -y kernel-lt

# Set boot from new kernel
grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'

12. Install kubelet, kubeadm, and kubectl

  • kubeadm: the command used to initialize the cluster
  • kubelet: starts Pods and containers on each node in the cluster
  • kubectl: the command-line tool for communicating with the cluster

Clear the yum cache

yum clean all

Set yum installation source

vim /etc/yum.repos.d/kubernetes.repo
# content
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Installation:

yum install -y kubelet-1.17.2 kubeadm-1.17.2 kubectl-1.17.2

Tip: if installation fails with "public key is not installed", append --nogpgcheck to the yum install command to skip the GPG check.

Enable kubelet at boot (note: do not start it yet; starting it before kubeadm init will report an error)

systemctl enable kubelet

View version

kubelet --version

2.2.2 Steps to complete on the master node

1. Run the initialization command

kubeadm init --kubernetes-version=1.17.2 --apiserver-advertise-address=192.168.0.1 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16

Note: --apiserver-advertise-address must be the IP address of the master machine

Common errors:
Error 1: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The cgroup driver recommended by Kubernetes is "systemd".
Solution: modify the Docker configuration (vi /etc/docker/daemon.json) and add:

{
"exec-opts":["native.cgroupdriver=systemd"]
}

Then restart Docker

Be sure to write down the join command it prints; you will need it to add the worker nodes:

kubeadm join 192.168.0.1:6443 --token 754xxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

2. Start kubelet

systemctl restart kubelet

3. Configure kubectl tool

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

4. Install Calico

mkdir k8s
cd k8s
wget https://docs.projectcalico.org/v3.10/gettingstarted/kubernetes/installation/hosted/kubernetes-datastore/caliconetworking/1.7/calico.yaml

sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml

kubectl apply -f calico.yaml

5. Wait a few minutes and check the status of all pods to ensure that all pods are in Running status

kubectl get pod --all-namespaces -o wide

2.2.3 Steps to complete on the worker nodes

1. Join all worker nodes to the cluster

Use the join command generated by the master node:

kubeadm join 192.168.0.1:6443 --token 754xxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# If you lost the join command, regenerate it on the master:
kubeadm token create --print-join-command

2. Start kubelet

systemctl  start kubelet

3. Go back to the Master node to view the cluster setup status

If the STATUS of every node is Ready, the cluster environment has been built successfully

kubectl get nodes

4. kubectl common commands

kubectl get nodes                                 # view the status of all master and worker nodes
kubectl get ns                                    # get all namespaces
kubectl get pods -n {$nameSpace}                  # get the pods in the specified namespace
kubectl describe pod <pod-name> -n {$nameSpace}   # view the details of a pod
kubectl logs --tail=1000 <pod-name> | less        # view the last 1000 lines of a pod's log
kubectl create -f xxx.yml                         # create a cluster resource object from a configuration file
kubectl delete -f xxx.yml                         # delete a cluster resource object via its configuration file
kubectl delete pod <pod-name> -n {$nameSpace}     # delete a pod by name
kubectl get service -n {$nameSpace}               # view the services in a namespace

2.3 kubernetes unloading

#Revert the changes made to the node by kubeadm init or kubeadm join
kubeadm reset

#Close kubelet service
systemctl stop kubelet

#Close docker
systemctl stop docker

#Delete configuration
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1

#Start docker
systemctl start docker

#Uninstall the related installation of kubernetes
yum remove -y kubelet-1.17.2 kubeadm-1.17.2 kubectl-1.17.2

3 kubernetes use

3.1 kubernetes connects to a Docker registry

kubectl create secret docker-registry my-secret --docker-server=192.168.0.1:85 --docker-username=admin --docker-password=Harbor12345 --docker-email=admin@163.com
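The secret created above only takes effect when a Pod references it via imagePullSecrets. A hedged sketch (the Pod name is illustrative), reusing the my-secret name and an image from a private registry as in this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  imagePullSecrets:
  - name: my-secret            # the docker-registry secret created above
  containers:
  - name: app
    image: 192.168.0.1:85/library-name/auth:1.0.0   # image pulled from the private registry
```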

3.2 kubernetes deployment service

3.2.1 create namespace

kubectl create namespace my-namespace

3.2.2 write the Deployment and Service configuration file

apiVersion: v1
kind: Service
metadata:
  name: auth
  namespace: my-namespace
  labels:
    app: auth
spec:
  type: NodePort
  ports:
  - port: 9200
    targetPort: 9200
    protocol: TCP
    name: http
  selector:
    app: auth-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deployment
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: auth-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: auth-pod
    spec:
      containers:
      - name: container
        image: 192.168.105.224:85/library-name/auth:1.0.0  #Image name + version of local private image library
        ports:
        - containerPort: 9200

3.2.3 create Deployment and Service resources

#Create the resources defined in the deployment.yaml file you just wrote
kubectl create -f deployment.yaml

# View the details of the pod, that is, you can view which node the pod is running on (ip address information)
kubectl get pod -o wide

# View the details of the service, showing the service name, type, cluster ip, port, time and other information
kubectl get svc
kubectl get svc -n kube-system

4 common commands

4.1 kubeadm common commands

kubeadm init          # bootstrap a Kubernetes master node
kubeadm join          # bootstrap a Kubernetes worker node and join it to the cluster
kubeadm upgrade       # upgrade a Kubernetes cluster to a newer version
kubeadm config        # if the cluster was initialized with kubeadm v1.7.x or earlier, configure it for kubeadm upgrade
kubeadm token         # manage the tokens used by kubeadm join
kubeadm reset         # revert the changes made to the node by kubeadm init or kubeadm join
kubeadm version       # print the kubeadm version
kubeadm alpha         # preview a set of new features to gather feedback from the community
kubeadm token create --print-join-command   # generate a node join command

4.2 kubectl common commands

4.2.1 basic command

**create command:** create resources from a file or from stdin

#Create Deployment and Service resources
kubectl create -f demo-deployment.yaml
kubectl create -f demo-service.yaml

**delete command:** delete resources

# Delete the resources defined in the yaml file (the yaml file itself is not deleted)
kubectl delete -f demo-deployment.yaml 
kubectl delete -f demo-service.yaml
# Resources can also be deleted by name, e.g. to remove a deployment or service directly
kubectl delete <resource-type> <resource-name>

**get command:** get resource information

# View all resource information
kubectl get all
kubectl get all --all-namespaces

# View pod list
kubectl get pod

# Displays the label information of the pod node
kubectl get pod --show-labels

# Match to the specific pod according to the specified tag
kubectl get pods -l app=example

# View node list
kubectl get node 

# Displays label information for node nodes
kubectl get node --show-labels

# View the details of the pod, that is, you can view which node the pod is running on (ip address information)
kubectl get pod -o wide

# View the details of the service, showing the service name, type, cluster ip, port, time and other information
kubectl get svc
kubectl get svc -n kube-system

# View namespace
kubectl get ns
kubectl get namespaces

# View the namespace each pod belongs to
kubectl get pod --all-namespaces

# View the namespace each pod belongs to and which node it is running on
kubectl get pod --all-namespaces  -o wide

# View all current ReplicaSets, showing the desired replicas, available replicas, status, and other information
kubectl get rs

# View all deployed applications, showing their containers and the images, labels, and other information they use
kubectl get deploy -o wide
kubectl get deployments -o wide

**run command:** create and run one or more container images in the cluster.

# For example, run a container instance with the name of nginx, the number of copies is 3, the label is app=example, the image is nginx:1.10 and the port is 80
kubectl run nginx --replicas=3 --labels="app=example" --image=nginx:1.10 --port=80
# For example, run a container instance with the name of nginx, the number of copies is 3, the label is app=example, the image is nginx:1.10 and the port is 80, and bind it to k8s-node1
kubectl run nginx --image=nginx:1.10 --replicas=3 --labels="app=example" --port=80 --overrides='{"apiVersion":"apps/v1","spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"k8s-node1"}}}}}'

Expose command: create a service and expose the port for external access

# Create an nginx service and expose the port so that the outside world can access it
kubectl expose deployment nginx --port=88 --type=NodePort --target-port=80 --name=nginx-service

set command: configure some specific resources of the application, or modify the existing resources of the application

kubectl set resources command: this command is used to set some range limits of resources

# Limit the cpu of nginx container of deployment to "200m" and set the memory to "512Mi"
kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi

# Set Requests and Limits in all nginx containers
kubectl set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi

# Delete the calculated resource value of the container in nginx
kubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0

**kubectl set selector command:** set the selector of a resource. If a selector already exists when "set selector" is called, the new selector overwrites the original one.

kubectl set image command: used to update the container image of existing resources.

# Set the nginx container image in deployment to "nginx: 1.9.1"
kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1

# Update the nginx container image of all deployments and RCs to "nginx:1.9.1"
kubectl set image deployments,rc nginx=nginx:1.9.1 --all

# Update all container images of daemonset abc to "nginx:1.9.1"
kubectl set image daemonset abc *=nginx:1.9.1

# Update nginx container image from local file
kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml

explain command: used to display resource document information

kubectl explain rs

Edit command: used to edit resource information

# Edit some information of Deployment nginx
kubectl edit deployment nginx

# Edit some information of nginx of service type
kubectl edit service/nginx

4.2.2 setting command

label command: used to update (add, modify, or delete) labels on resources

# Add label unhealthy=true to the Pod named foo
kubectl label pods foo unhealthy=true

# Change the "status" label of the Pod named foo to "unhealthy", overwriting the existing value
kubectl label --overwrite pods foo status=unhealthy

# Add a label to all pods in the namespace
kubectl label  pods --all status=unhealthy

# The label on the Pod named foo is updated only when resource version = 1
kubectl label pods foo status=unhealthy --resource-version=1

# Delete the label named "bar" (by appending a "-" minus sign)
kubectl label pods foo bar-

annotate command: updates the annotations on one or more resources. Annotations record supplementary information, such as what operations have been performed, for easy review.

# Update the Pod "foo" and set the value "my frontend" of the annotation "description". If the same annotation is set multiple times, only the last set value will be used
kubectl annotate pods foo description='my frontend'

# Update the annotation of pod according to the type and name in "pod.json"
kubectl annotate -f pod.json description='my frontend'

# Update the Pod"foo", set the value "my frontend running nginx" of the annotation "description", and overwrite the existing value
kubectl annotate --overwrite pods foo description='my frontend running nginx'

# Update all pod s in the namespace
kubectl annotate pods --all description='my frontend running nginx'

# Update pod 'foo' only when resource version is 1
kubectl annotate pods foo description='my frontend running nginx' --resource-version=1

# Update pod 'foo' by deleting the annotation named "description".
# The --overwrite flag is not required.
kubectl annotate pods foo description-

Completion command: used to set automatic completion of kubectl command

# To set the automatic completion of the current shell in Bash, install bash completion package first
source <(kubectl completion bash)

# Permanently add auto completion in your bash shell
echo "source <(kubectl completion bash)" >> ~/.bashrc 

4.2.3 deployment command

rollout command: used to manage the rollout of a resource

# grammar
kubectl rollout SUBCOMMAND

# Rollback to previous deployment
kubectl rollout undo deployment/abc

# View the status of the daemon
kubectl rollout status daemonset/foo

rolling-update command: performs a rolling update of the specified ReplicationController.

# Update the pod of frontend-v1 with the new RC data in frontend-v2.json
kubectl rolling-update frontend-v1 -f frontend-v2.json

# Update the pod of frontend-v1 with JSON data
cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -

# Some other rolling updates
kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2

kubectl rolling-update frontend --image=image:v2

kubectl rolling-update frontend-v1 frontend-v2 --rollback

scale command: expand or shrink the number of pods in a Deployment, ReplicaSet, Replication Controller, or Job

# Set the number of pod copies named foo to 3.
kubectl scale --replicas=3 rs/foo
kubectl scale deploy/nginx --replicas=30

# Set the Pod resource copy identified by the resource object and name specified in the "foo.yaml" configuration file to 3
kubectl scale --replicas=3 -f foo.yaml

# If the current number of replicas is 2, expand it to 3.
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql

# Set the number of Pod copies in multiple RC
kubectl scale --replicas=5 rc/foo rc/bar rc/baz

autoscale command: This is more powerful than scale. It is also an elastic scaling strategy. It automatically expands or shrinks according to the amount of traffic.

# Use the Deployment "foo" setting and the default auto scaling policy to specify the target CPU utilization so that the number of pods is between 2 and 10
kubectl autoscale deployment foo --min=2 --max=10

# Use the RC "foo" setting to make the number of pods between 1 and 5, and maintain the CPU utilization at 80%
kubectl autoscale rc foo --max=5 --cpu-percent=80

4.2.4 cluster management commands

Certificate command: used for certificate resource management, authorization, etc

# For example, if a node wants to request from the master, the master node must be authorized
kubectl certificate approve node-csr-81F5uBehyEyLWco5qavBsxc1GzFcZk3aFM3XW5rT3mw node-csr-Ed0kbFhc_q7qx14H3QpqLIUs0uKo036O2SnFpIheM18

Cluster info command: displays cluster information

kubectl cluster-info

top command: used to view the utilization of resources such as cpu, memory and disk

# heapster was required before, and then replaced with metrics server
kubectl top pod --all-namespaces

cordon command: used to mark a node as unschedulable

uncordon command: used to mark a node as schedulable

drain command: used to evict pods from a node in preparation for maintenance

taint command: used to set a taint on a Node

4.2.5 cluster troubleshooting and debugging commands

describe command: displays the details of a specific resource

Logs command: used to print the logs of a container in a pod. If there is only one container in the pod, the container name can be omitted

# Returns a log snapshot of pod nginx that contains only one container
kubectl logs nginx

# Returns the log snapshot of the stopped container web-1 in pod ruby
kubectl logs -p -c ruby web-1

# Continuously output the log of web-1 container in pod ruby
kubectl logs -f -c ruby web-1

# Only the last 20 logs in pod nginx are output
kubectl logs --tail=20 nginx

# Output all logs generated in the last hour in pod nginx
kubectl logs --since=1h nginx

exec command: enter the container for interaction and execute the command in the container

# Enter the nginx container and execute some commands
kubectl exec -it nginx-deployment-58d6d6ccb8-lc5fp bash

attach command: connect to a running container.

# Get the output of the running pod 123456-7890, which is connected to the first container by default
kubectl attach 123456-7890

# Get the output of ruby container in pod 123456-7890
kubectl attach 123456-7890 -c ruby-container

# Switch to terminal mode and send console input to the "bash" command of the ruby container in pod 123456-7890;
# stdout/stderr from the container is sent back to the client.
kubectl attach 123456-7890 -c ruby-container -i -t

cp command: copy files or directories to the pod container

4.2.6 other commands

api-versions command: print supported api version information

# Print the api version supported by the current cluster
kubectl api-versions

Help command: used to view command help

# Displays all command help prompts
kubectl --help

# Specific subcommand help, such as
kubectl create --help

config command: used to modify the kubeconfig file (which configures API access, such as authentication information)

# View the merged kubeconfig settings
kubectl config view

Version command: print client and server version information

# Print client and server version information
kubectl version

plugin command: run a command line plug-in

4.2.7 advanced commands

Apply command: apply configuration to resources by file name or standard input

# Apply the configuration in pod.json to pod
kubectl apply -f ./pod.json

# Apply the JSON configuration entered by the console to the Pod
cat pod.json | kubectl apply -f -

Patch command: use patch modification to update the fields of resources, that is, modify some contents of resources

# Partially update a node using strategic merge patch
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

# Update a container's image; spec.containers[*].name is required because it's a merge key
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

Replace command: replace the original resource through file or standard input

# Replace a pod using the data in pod.json.
kubectl replace -f ./pod.json

# Replace a pod based on the JSON passed into stdin.
cat pod.json | kubectl replace -f -

# Update a single-container pod's image version (tag) to v4
kubectl get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | kubectl replace -f -

# Force replace, delete and then re-create the resource
kubectl replace --force -f ./pod.json

Convert command: convert configuration files between different versions

# Convert 'pod.yaml' to latest version and print to stdout.
kubectl convert -f pod.yaml

# Convert the live state of the resource specified by 'pod.yaml' to the latest version
# and print to stdout in json format.
kubectl convert -f pod.yaml --local -o json

# Convert all files under current directory to latest version and create them all.
kubectl convert -f . | kubectl create -f -

5 kubernetes extensions

5.1 kubernetes integration with rancher

Download Image

#The stable version is downloaded here
docker pull rancher/rancher:stable

Note: rancher is a separate server and is not in the k8s cluster.

Installing the Rancher

docker run -d --restart=always --name rancher -p 80:80 -p 443:443 --privileged rancher/rancher:stable

Note: you must access the page over HTTPS; even if you access it over HTTP, you will be forcibly redirected to HTTPS

Configure a k8s cluster in rancher

Click Add Cluster > Use existing Kubernetes cluster > Import

6 node allocable resource limit

6.1 introduction

k8s node health states

| State | Interpretation |
| --- | --- |
| NodeHasSufficientMemory | The node has sufficient memory |
| NodeHasNoDiskPressure | The node has no disk pressure |
| NodeHasSufficientPID | The node has sufficient PIDs |
| NodeNotReady | The node is not ready |

View the Capacity and Allocatable of the node

 kubectl describe node <node_name>

6.2 configuring cgroup drivers

Confirm Docker's cgroup driver

docker info | grep "Cgroup Driver"
#If Docker's cgroup driver is not cgroupfs, configure it as follows.

docker configures cgroup driver as cgroupfs

#Edit vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  },
  "insecure-registries":["192.168.0.1:5000"]
}

Change the kubelet cgroup driver from systemd to cgroupfs

KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=nexus.10010sh.cn/pause:3.1"

6.3 Kubelet Node Allocatable

Modify the configuration in /var/lib/kubelet/kubeadm-flags.env

KUBELET_KUBEADM_ARGS="--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=nexus.10010sh.cn/pause:3.1 \
    --enforce-node-allocatable=pods,kube-reserved,system-reserved \
    --kube-reserved-cgroup=/system.slice/kubelet.service \
    --system-reserved-cgroup=/system.slice \
    --kube-reserved=cpu=1,memory=1Gi \
    --system-reserved=cpu=1,memory=1Gi  \
    --eviction-hard=memory.available<5%,nodefs.available<10%,imagefs.available<10% \
    --eviction-soft=memory.available<10%,nodefs.available<15%,imagefs.available<15% \
    --eviction-soft-grace-period=memory.available=2m,nodefs.available=2m,imagefs.available=2m \
    --eviction-max-pod-grace-period=30 \
    --eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=500Mi,imagefs.available=500Mi"

Configuration explanation

  1. Enable reserving resources for kube components and system daemons
--enforce-node-allocatable=pods,kube-reserved,system-reserved
  2. Set the cgroup for k8s components
--kube-reserved-cgroup=/system.slice/kubelet.service
  3. Set the cgroup for system daemons
--system-reserved-cgroup=/system.slice
  4. Configure the resources (CPU and memory) reserved for k8s components
--kube-reserved=cpu=1,memory=1Gi
  5. Configure the resources (CPU and memory) reserved for system processes (such as sshd, udev, and other system daemons)
--system-reserved=cpu=1,memory=1Gi
  6. Pod eviction configuration: hard thresholds (guarantees that memory utilization never exceeds 95%)
--eviction-hard=memory.available<5%,nodefs.available<10%,imagefs.available<10%
  7. Pod eviction configuration: soft thresholds
--eviction-soft=memory.available<10%,nodefs.available<15%,imagefs.available<15%
  8. How long a soft threshold must be exceeded before eviction is triggered
--eviction-soft-grace-period=memory.available=2m,nodefs.available=2m,imagefs.available=2m
  9. Maximum grace period before a pod is evicted = min(pod.spec.terminationGracePeriodSeconds, eviction-max-pod-grace-period), in seconds
--eviction-max-pod-grace-period=30
  10. Minimum amount of resources to reclaim before eviction stops
--eviction-minimum-reclaim=memory.available=0Mi,nodefs.available=500Mi,imagefs.available=500Mi
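The reserved and eviction values above determine what is left for pods: Allocatable = Capacity - kube-reserved - system-reserved - eviction-hard. A small shell sketch with illustrative numbers (a hypothetical 16 GiB node, matching the 1 GiB reservations and 5% hard memory threshold configured above):

```shell
capacity=16384        # node memory capacity in MiB (illustrative)
kube_reserved=1024    # --kube-reserved=memory=1Gi
system_reserved=1024  # --system-reserved=memory=1Gi

# --eviction-hard=memory.available<5% reserves 5% of capacity
eviction_hard=$((capacity * 5 / 100))

# Memory the scheduler may hand out to pods
allocatable=$((capacity - kube_reserved - system_reserved - eviction_hard))
echo "$allocatable MiB allocatable for pods"
```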

6.4 modify Kubelet startup service file

Open / lib/systemd/system/kubelet.service

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/

[Service]
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

6.5 restart kubelet and docker services

Reload the systemd configuration

systemctl daemon-reload

Restart docker and kubelet

systemctl restart docker && systemctl restart kubelet

Posted by sneakyimp on Mon, 08 Nov 2021 08:03:36 -0800