1. Labels
Labels are "key value" types of data that can be specified directly at the time of resource creation or added on demand at any time, and then checked for matching by the tag selector to complete resource selection.
You need to be aware of:
- An object can carry more than one label, and the same label can be attached to more than one resource.
- We can attach labels of different dimensions to resources to group them flexibly, such as version labels and environment labels, so that different versions and environments of the same resource can be cross-identified.
2. Label selector
Label selectors express query or selection criteria against labels. Two kinds of selectors are supported:
1. Equality-based
=, == and != (three operators)
2. Set-based
in, notin, etc.
Here are some common label-based query commands:
#label's help command query
kubectl label -h

#label-based related query commands
[root@centos-1 dingqishi]# kubectl get pod --show-labels
NAME                     READY   STATUS    RESTARTS   AGE     LABELS
ngx-new-cb79d555-gqwf8   1/1     Running   0          4h57m   app=ngx-new,pod-template-hash=cb79d555
ngx-new-cb79d555-hcdr9   1/1     Running   0          5h9m    app=ngx-new,pod-template-hash=cb79d555

[root@centos-1 dingqishi]# kubectl get pod --show-labels -A -l app=flannel
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE     LABELS
kube-system   kube-flannel-ds-amd64-bc56m   1/1     Running   7          2d23h   app=flannel,controller-revision-hash=67f65bfbc7,pod-template-generation=1,tier=node
kube-system   kube-flannel-ds-amd64-ltp9p   1/1     Running   0          2d23h   app=flannel,controller-revision-hash=67f65bfbc7,pod-template-generation=1,tier=node
kube-system   kube-flannel-ds-amd64-v9gmq   1/1     Running   10         2d23h   app=flannel,controller-revision-hash=67f65bfbc7,pod-template-generation=1,tier=node

[root@centos-1 dingqishi]# kubectl get pod -A -l app=flannel -L app
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE     APP
kube-system   kube-flannel-ds-amd64-bc56m   1/1     Running   7          2d23h   flannel
kube-system   kube-flannel-ds-amd64-ltp9p   1/1     Running   0          2d23h   flannel
kube-system   kube-flannel-ds-amd64-v9gmq   1/1     Running   10         2d23h   flannel
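The queries above are all equality-based. For completeness, here is a sketch of set-based selectors against the same labels shown in the output above (adjust the label values to your own cluster):

# select pods whose app label is either flannel or ngx-new
kubectl get pod -A -l 'app in (flannel, ngx-new)' --show-labels

# select pods that carry a tier label with any value other than node
kubectl get pod -A -l 'tier, tier notin (node)'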
3. Resource annotation
Annotations are not limited in length. The important distinction is that annotations cannot be used for label selection; they only attach "metadata" to a resource.
#help command query for resource annotations
kubectl annotate -h
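As a quick illustration (the annotation key and value here are made up for the example; the pod name is taken from the earlier label query output):

# attach an annotation to an existing pod
kubectl annotate pod ngx-new-cb79d555-gqwf8 owner="dingqishi"
# inspect the annotations on that pod
kubectl get pod ngx-new-cb79d555-gqwf8 -o jsonpath='{.metadata.annotations}'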
4. Probes
Probes are part of the Pod container lifecycle and are used to determine whether a container is healthy.
1. liveness
A liveness probe checks whether the container is still healthy; when the check fails, the container is restarted.
1) You can use explain to query the field documentation of the liveness probe (this is useful!)
kubectl explain pods.spec.containers.livenessProbe
2) We edit liveness-exec.yaml, which adds a liveness probe that checks for the existence of the /tmp/healthy file, and then use kubectl apply -f to create the Pod
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-exec
  name: liveness-exec
spec:
  containers:
  - name: liveness-demo
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - test
        - -e
        - /tmp/healthy
3) Observing the pod, we find that after /tmp/healthy is removed at the 30-second mark, the liveness probe can no longer find the file and the container is restarted; RESTARTS becomes 1
[root@centos-1 chapter4]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
liveness-exec            1/1     Running   1          2m39s
ngx-new-cb79d555-gqwf8   1/1     Running   0          2d2h
ngx-new-cb79d555-hcdr9   1/1     Running   0          2d2h

# kubectl describe pod liveness-exec
State:          Running
  Started:      Sat, 30 Nov 2019 14:17:31 +0800
Last State:     Terminated
  Reason:       Error
  Exit Code:    137
  Started:      Sat, 30 Nov 2019 14:16:03 +0800
  Finished:     Sat, 30 Nov 2019 14:17:25 +0800
Ready:          True
Restart Count:  1
Liveness:       exec [test -e /tmp/healthy] delay=0s timeout=1s period=10s #success=1 #failure=3
4) Similarly, you can use liveness-http.yaml for learning and practice; I have provided the related manifest (a minimal sketch follows below).
All you need to do is: 1. read the YAML manifest first; 2. apply it and test.
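The actual liveness-http.yaml is in the repository; as a minimal sketch of what an httpGet liveness probe looks like (the image, path and port here are assumptions for illustration, not necessarily the exact manifest):

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness-http
  name: liveness-http
spec:
  containers:
  - name: liveness-demo
    image: nginx
    livenessProbe:
      httpGet:
        # the probe succeeds while this path returns an HTTP success code
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10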
2. readiness
A readiness probe checks whether the container is ready to serve; it never restarts the Pod, and its result is used by Services to decide whether to route traffic to the Pod.
1) You can use explain to query the field documentation of the readiness probe (this is useful!)
kubectl explain pods.spec.containers.readinessProbe
2) Edit readiness-exec.yaml and use kubectl apply -f to create the Pod. Here we add a readiness probe that checks for the existence of /tmp/ready; the first check runs after 5 seconds and the check period is 5 seconds.
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness-exec
  name: readiness-exec
spec:
  containers:
  - name: readiness-demo
    image: busybox
    args: ["/bin/sh", "-c", "while true; do rm -f /tmp/ready; sleep 30; touch /tmp/ready; sleep 300; done"]
    readinessProbe:
      exec:
        command: ["test", "-e", "/tmp/ready"]
      initialDelaySeconds: 5
      periodSeconds: 5
3) Watching the pod, we observe that readiness-exec does not enter the ready state right after it starts; it only becomes 1/1 once the probe detects /tmp/ready.
readiness-exec   0/1   Pending             0   0s    <none>        <none>            <none>   <none>
readiness-exec   0/1   Pending             0   0s    <none>        centos-2.shared   <none>   <none>
readiness-exec   0/1   ContainerCreating   0   0s    <none>        centos-2.shared   <none>   <none>
readiness-exec   0/1   Running             0   11s   10.244.1.34   centos-2.shared   <none>   <none>
readiness-exec   1/1   Running             0   43s   10.244.1.34   centos-2.shared   <none>   <none>
5. Phase of Pod object
A Pod has five phases: Pending, Running, Succeeded, Failed, and Unknown, where:
Pending: the Pod has not finished scheduling, usually because no node satisfies its scheduling requirements
Running: the Pod has been scheduled successfully and its containers have been created by the kubelet
Succeeded: all containers in the Pod have terminated successfully and will not be restarted
Failed: all containers have terminated and at least one of them terminated in failure
Unknown: the API server cannot obtain the Pod's state, usually because it cannot communicate with the node's kubelet
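To check the phase of a specific Pod, the status field can be read directly (assuming the liveness-exec Pod from the probe section still exists; replace the name as needed):

# print only the phase of one pod
kubectl get pod liveness-exec -o jsonpath='{.status.phase}'
# list all pods currently in the Running phase
kubectl get pods -A --field-selector=status.phase=Running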
6. Pod Security
The security context of a Pod object is used to configure permissions and access control for the Pod or its containers. Commonly configurable properties include the following:
1) Controlling access to objects such as files based on user ID (UID) and group ID (GID)
2) Running as privileged or unprivileged
3) Granting a subset of privileges through Linux capabilities
4) Filtering a process's system calls via seccomp
5) SELinux-based security labels
6) Whether privilege escalation is allowed
The security context can be set at two levels (Pod level and container level):
kubectl explain pod.spec.securityContext
kubectl explain pod.spec.containers.securityContext.capabilities
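As a hedged sketch of the container-level capabilities field mentioned above (the pod name and the chosen capabilities are illustrative only, not from the original article):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-capabilities
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "sleep 86400"]
    securityContext:
      capabilities:
        # drop everything, then grant back only what the workload needs
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]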
Finally, take a look at a manifest that runs the busybox container as an unprivileged user with UID 1000 and disables privilege escalation:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-securitycontext
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "sleep 86400"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      allowPrivilegeEscalation: false
The tests are as follows:
[root@centos-1 ~]# kubectl exec -it pod-with-securitycontext -- /bin/sh
/ $ ps -ef | grep busy
   25 1000      0:00 grep busy
/ $ mkdir 1
mkdir: can't create directory '1': Permission denied
7. Pod Resource Quota
1) Querying the configuration documentation for container resources
kubectl explain pod.spec.containers.resources
2) Description of parameters
limits: the upper bound, the maximum amount of resources the container is allowed to consume
requests: the lower bound, the amount of resources that must be available on a node, otherwise the Pod cannot be scheduled and started
3) Frequent system-level OOM is usually caused by:
Too little node memory, or a memory limit that is set too small
4) Resource Quota demo
apiVersion: v1
kind: Pod
metadata:
  name: stress-pod
spec:
  containers:
  - name: stress
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-c 1", "-m 1", "--metrics-brief"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"
        cpu: "400m"
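If the metrics-server add-on is installed in the cluster, the actual consumption can be compared against the requests and limits above (the node name is taken from the earlier watch output; adjust it to your environment):

# live CPU/memory usage of the stress pod (requires metrics-server)
kubectl top pod stress-pod
# how much of the node's allocatable resources the scheduler has already reserved
kubectl describe node centos-2.shared | grep -A 8 "Allocated resources"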
8. Pod Quality of Service Category (QoS Class)
kubectl describe pod shows the QoS class assigned to a Pod; there are three classes:
1. Guaranteed
Guaranteed: every container has both requests and limits set, and they are equal; highest priority
2. Burstable
Burstable: requests or limits are set for at least one container, but the Guaranteed conditions are not met; medium priority
3. BestEffort
BestEffort: no requests or limits are set on any container; lowest priority
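You can verify the class assigned to a Pod directly from its status; for example, the stress-pod from the previous section, whose requests are lower than its limits, should land in the Burstable class:

# QoS class as reported by describe
kubectl describe pod stress-pod | grep "QoS Class"
# or read it straight from the status field
kubectl get pod stress-pod -o jsonpath='{.status.qosClass}'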
9. Pod Disruption Budget
A PodDisruptionBudget (PDB), introduced in Kubernetes 1.4, lets you plan a budget for voluntary disruptions.
It limits the maximum number of Pod replicas that may be voluntarily disrupted, or guarantees a minimum number of available replicas, so that the service remains highly available.
1) Command to query PDB objects
kubectl get pdb
2) demo reference
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: ngx-new
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: ngx-new
1. Involuntary disruption
Pod terminations caused by uncontrollable external factors, such as hardware or system failures, network failures, node failures, etc.
2. Voluntary disruption
Pod terminations caused by administrative operations initiated by the user, such as draining nodes or deleting Pod objects manually (see the drain example below).
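A typical voluntary disruption is draining a node for maintenance. The eviction mechanism used by drain respects PDBs, so evictions that would push ngx-new below minAvailable=1 are blocked until another replica is available. A sketch (the node name is taken from the earlier output; the flags reflect kubectl of that era):

# evict all pods from the node; daemonset pods are skipped
kubectl drain centos-2.shared --ignore-daemonsets --delete-local-data
# bring the node back into scheduling once maintenance is done
kubectl uncordon centos-2.shared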
10. Notes
The original version of this article lives on my Github. I will keep updating topics one after another, including docker, k8s, ceph, istio and prometheus, to share cloud technology knowledge and hands-on practice. If it is useful to you, follow and star my Github; that is also what motivates me to keep updating and sharing. Thank you~