Hello everyone, after two months our Kubernetes learning notes are finally updated again. If you have read the earlier articles on Deployment, Service and StatefulSet, you should already understand what each of them does and when to use it. If they left no impression, or you haven't read them yet, I recommend starting with these:
- K8s getting-started notes -- deploying services with Deployment
- Understanding StatefulSet in depth -- orchestrating stateful applications with Kubernetes
- K8s getting-started notes -- exposing services with Service
The Deployment, Service and StatefulSet controllers in K8s sit closest to our application layer, so we deal with them most often. Today I'd like to introduce another controller, DaemonSet, which could be called the unsung hero behind the cluster.
An analogy makes the role of DaemonSet immediately clear. If the K8s cluster is a big house, then Deployment, Service and StatefulSet do the soft furnishing: with them you turn the house into functional rooms such as a dining room, living room and bedroom. DaemonSet, on the other hand, does the hard renovation — the plumbing, wiring and wall plastering. For a K8s cluster, that "plumbing and wiring" is networking, storage, monitoring and so on.
Well, now that we understand that DaemonSet plays the hard-renovation role in a K8s cluster, let's talk about its features and how to use it.
What is a DaemonSet
The job of a DaemonSet is to ensure that a copy of a daemon Pod runs on all eligible nodes. Such a Pod has the following three characteristics:
- The Pod runs on every node in the K8s cluster.
- Each node runs exactly one Pod instance created by a given DaemonSet (of course, we can create multiple DaemonSets with different functions, and each node then gets one Pod per DaemonSet).
- When a new node joins the Kubernetes cluster, the Pod is automatically created on the new node; when an old node is removed, the Pod on it is recycled accordingly.
DaemonSets are an essential part of a K8s cluster: they let cluster administrators easily run a service (Pod) across all nodes, or a chosen subset of them.
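In fact, most clusters rely on DaemonSets out of the box: on many distributions, kube-proxy and the CNI network plugin run as DaemonSets in the kube-system namespace. You can check what your own cluster ships with (the exact entries depend on the distribution):

```shell
# List the DaemonSets that come with the cluster itself.
# The exact entries vary by distribution; kube-proxy is a common one.
kubectl get daemonsets --namespace kube-system
```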
Usage scenarios of DaemonSet
DaemonSets deploy Pods across the nodes to carry out cluster-maintenance and supporting tasks, improving the overall health of a K8s cluster. They are ideal for long-running services such as monitoring or log collection. Typical usage scenarios include (a sketch of the monitoring case follows this list):
- Running a cluster storage daemon on every node, such as glusterd or ceph, to make Volume directories easy to manage.
- Running a log-collection daemon on every node, such as Fluentd or Logstash.
- Running a node-monitoring daemon on every node, such as the Prometheus Node Exporter, collectd, or the Datadog agent.
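As a minimal sketch of the monitoring scenario, here is roughly what a Node Exporter DaemonSet could look like. The public prom/node-exporter image is assumed, and all names here are illustrative, not a definitive production setup:

```yaml
# Sketch: run the Prometheus Node Exporter on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter        # illustrative name
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.3.1   # assumed public image
        ports:
        - containerPort: 9100  # default metrics port of node-exporter
          hostPort: 9100       # expose the metrics on each node's own IP
```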
Scheduling of DaemonSet Pod
By default, the node a Pod runs on is chosen by the K8s scheduler. DaemonSet Pods, however, used to be created and scheduled by the DaemonSet controller itself, which led to inconsistent Pod behavior and problems with Pod priority preemption.
To solve this, K8s lets DaemonSet Pods be scheduled to their target nodes by the regular K8s scheduler, by setting a NodeAffinity rule on each DaemonSet Pod.
The following is an example NodeAffinity configuration:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
```
Above, we configured NodeAffinity so that the Pod can only be scheduled to nodes carrying the disktype=ssd label.
- requiredDuringSchedulingIgnoredDuringExecution: the rule must be satisfied every time the Pod is scheduled, but is ignored for Pods that are already running (the preferredDuringScheduling variant turns it into a soft preference instead).
- The disktype In (ssd) expression under nodeSelectorTerms means this Pod may only run on nodes whose metadata.labels contain disktype=ssd.
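For the affinity rule above to match anything, at least one node must actually carry the disktype=ssd label. Labels are attached with kubectl label; the node name below is a placeholder:

```shell
# Attach the label that the nodeAffinity rule above matches on.
# "worker-1" is hypothetical; use a real name from `kubectl get nodes`.
kubectl label nodes worker-1 disktype=ssd

# Verify which nodes carry the label.
kubectl get nodes --selector disktype=ssd
```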
For a more detailed explanation of node affinity, refer to the NodeAffinity section of my earlier article on K8s Pod rolling updates.
In addition, the DaemonSet controller automatically adds a toleration for the node.kubernetes.io/unschedulable:NoSchedule taint to its Pods:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  tolerations:
  - key: node.kubernetes.io/unschedulable
    operator: Exists
    effect: NoSchedule
```
A node newly joining the K8s cluster is marked with taints until its preparation finishes, and ordinary Pods are not allowed to schedule onto it. Because the DaemonSet controller adds the toleration for that taint to its Pods automatically, they get the chance to land on the node early; once the various pieces of "hard renovation" are in place, the node can accept regularly scheduled Pods.
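unschedulable is not the only toleration added automatically; according to the Kubernetes documentation, the DaemonSet controller also tolerates several node-condition taints. Roughly, they would appear in the Pod spec like this (an illustrative excerpt, not an exhaustive list):

```yaml
# Tolerations the DaemonSet controller adds automatically (per the
# Kubernetes docs), shown as they would appear in the Pod spec.
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/disk-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/memory-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/unschedulable
  operator: Exists
  effect: NoSchedule
```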
The YAML template of a DaemonSet
Like other components in K8s, DaemonSets are configured with YAML files. Let's look at the structure of a DaemonSet configuration file:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test-daemonset
  namespace: test-daemonset-namespace
  labels:
    app-type: test-app-type
spec:
  template:
    metadata:
      labels:
        name: test-daemonset-container
    spec:
      containers:
      - name: test-daemonset-container
        image: "image to use for container"
  selector:
    matchLabels:
      name: test-daemonset-container
```
Like other component configurations, apiVersion, kind and metadata are mandatory fields in each K8s component configuration.
- template: the Pod template, which we have already seen in the configuration of Deployment and StatefulSet. It defines the Pod to be deployed to each node. The Pod template in a DaemonSet must have its restartPolicy set to Always (which is also the default when no restartPolicy is specified).
- selector: the label selector for the Pods managed by the DaemonSet. Its value must be a label specified in the Pod template (in the example above we used name: test-daemonset-container as the selector). This value is fixed and cannot be changed after the DaemonSet is first created.
Other optional configuration fields are:
- template.spec.affinity – configures the node affinity of the Pod. As mentioned earlier in the article, through this NodeAffinity configuration the Pods created by the DaemonSet controller can be restricted so the scheduler only places them on nodes matching the affinity rules (a sketch follows).
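As a sketch, here is how the earlier disktype=ssd affinity rule would sit inside a DaemonSet's Pod template, limiting the daemon to SSD nodes only. Names and the image here are illustrative:

```yaml
# Sketch: restrict a DaemonSet to nodes labeled disktype=ssd
# by placing nodeAffinity under template.spec.affinity.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-only-daemon        # illustrative name
spec:
  selector:
    matchLabels:
      name: ssd-only-daemon
  template:
    metadata:
      labels:
        name: ssd-only-daemon
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
      containers:
      - name: main
        image: nginx           # placeholder image
```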
Create DaemonSet
Now let's create a sample DaemonSet. It manages a Pod that runs a container from the fluentd-elasticsearch image. The Pod will run on every node of the K8s cluster, and Fluentd will forward the logs of the Docker containers on each node to ElasticSearch.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch-test
  namespace: default # Namespace
  labels:
    k8s-app: fluentd-logging
spec:
  selector: # Selector
    matchLabels:
      name: fluentd-elasticsearch-test-deamonset
  template: # Pod Template
    metadata:
      labels:
        name: fluentd-elasticsearch-test-deamonset
    spec:
      tolerations: # Tolerations
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers: # Container Details
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```
With the DaemonSet's YAML configuration file in hand, let's create the DaemonSet with the kubectl apply command and then retrieve the DaemonSet and the Pod information it creates.
Create the DaemonSet:
```shell
➜ ✗ kubectl apply -f daemonset.yaml
daemonset.apps/fluentd-elasticsearch-test created
```
Check whether the creation succeeded:
```shell
➜ ✗ kubectl get pods
NAME                               READY   STATUS              RESTARTS   AGE
fluentd-elasticsearch-test-pb5qw   0/1     ContainerCreating   0          18s
➜ ✗ kubectl describe pod fluentd-elasticsearch-test-pb5qw
......
Events:
  Type    Reason     Age        From                     Message
  ----    ------     ----       ----                     -------
  Normal  Scheduled  <unknown>                           Successfully assigned default/fluentd-elasticsearch-test-pb5qw to docker-desktop
  Normal  Pulling    32s        kubelet, docker-desktop  Pulling image "quay.io/fluentd_elasticsearch/fluentd:v2.5.2"
  Normal  Pulled     6s         kubelet, docker-desktop  Successfully pulled image "quay.io/fluentd_elasticsearch/fluentd:v2.5.2" in 25.3910795s
  Normal  Created    6s         kubelet, docker-desktop  Created container fluentd-elasticsearch
  Normal  Started    5s         kubelet, docker-desktop  Started container fluentd-elasticsearch
➜ ✗ kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP          NODE             NOMINATED NODE   READINESS GATES
fluentd-elasticsearch-test-pb5qw   1/1     Running   0          5m7s   10.1.0.38   docker-desktop   <none>           <none>
```
The output above shows that our DaemonSet deployed successfully. Notice that, unlike a Deployment, the definition file specifies no Pod count via replicas: a DaemonSet automatically scales its Pods out to the eligible nodes in the cluster. Because my cluster is the single-node one that Docker Desktop runs on my laptop, the DaemonSet created only one Pod.
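To see those node-driven counts directly, you can also query the DaemonSet itself; its DESIRED and CURRENT columns reflect the number of matching nodes rather than any replicas field:

```shell
# Inspect the DaemonSet's own status. DESIRED/CURRENT track the number
# of eligible nodes, not a replicas field; on this single-node
# docker-desktop cluster both would read 1.
kubectl get daemonset fluentd-elasticsearch-test
```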
Update and delete DaemonSet
Updating and deleting a DaemonSet works the same as for other components, so I won't dwell on it here; the commands are:
```shell
## after updating the configuration file, apply it directly
kubectl apply -f daemonset.yaml
## delete
kubectl delete daemonset <<daemonset-name>>
```
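One update-related field worth knowing: a DaemonSet's rollout behavior is controlled by spec.updateStrategy, where RollingUpdate is the default type. A small sketch of tuning it:

```yaml
# Sketch: tune how a DaemonSet rolls out updates. RollingUpdate is the
# default type; maxUnavailable caps how many node Pods are replaced at
# a time. The alternative type, OnDelete, waits for manual deletion.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
```

After an apply, you can watch the rollout progress with kubectl rollout status daemonset/<<daemonset-name>>.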
Summary
Well, that's all for this article on DaemonSet. Unless your work touches K8s operations, you may rarely deal with DaemonSets day to day, so treat this article as background knowledge. Remember the metaphor: the Deployment and StatefulSet controllers do the soft furnishing of the K8s cluster, while DaemonSet does the hard renovation of the cluster's big house. That's why a DaemonSet moves in to "renovate" before a node's "house" is ready, and only afterwards does Deployment do the "soft furnishing".