Kubernetes Pod Controller

Keywords: Linux Kubernetes Nginx Tomcat kubelet

In robotics and automation, a control loop is a non-terminating loop that regulates the state of a system.

A familiar example of a control loop is the automatic thermostat in a room.
When you set a temperature, you are telling the thermostat your "desired state"; the actual room temperature is the "current state". By switching the heating or cooling device on and off, the thermostat continuously drives the current state toward the desired state.
In the same way, a Kubernetes controller watches the shared state of the cluster through the kube-apiserver and works to move the current state toward the desired state.
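To make the loop concrete, here is a toy shell sketch of that pattern (not real controller code; the deployment name myapp-deployment is only an illustrative placeholder): it keeps comparing the desired state against the current state reported by the apiserver.

while true; do
  # ask the apiserver for desired vs. current replica counts
  desired=$(kubectl get deployment myapp-deployment -o jsonpath='{.spec.replicas}')
  current=$(kubectl get deployment myapp-deployment -o jsonpath='{.status.readyReplicas}')
  # a real controller would take corrective action here whenever the two differ
  echo "desired=$desired current=$current"
  sleep 5
done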

Pod Controllers in Kubernetes

A controller is the mechanism Kubernetes uses to manage Pods; it keeps Pods in the state the user originally defined or expects. If a node goes down, or a Pod dies for some other reason, an identical Pod is created on another node to replace it.

  • Common types of built-in controllers, which typically interact with the cluster API server:
    ReplicaSet: an upgraded version of the Replication Controller; the difference is support for set-based selectors;
    Deployment: manages ReplicaSets and provides functions such as rolling Pod updates; it is recommended for managing ReplicaSets unless you need a custom update orchestration;
    DaemonSet: ensures that each node in the cluster runs exactly one copy of the Pod, typically for system-level background tasks;
    StatefulSet: usually used to manage stateful applications;
    Job: one-off task execution;
    CronJob: scheduled task execution;
    For details on any of these controllers, refer to the official (Chinese) documentation: Kubernetes built-in controllers
  • For instance:
    Before running a Pod, pull the image onto the node ahead of time whenever possible; based on where the Pod is scheduled, images can be exported and imported with docker save / docker load (a short sketch follows below)
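A minimal sketch of that pre-pull workflow (the archive name httpd.tar and the image httpd match the ReplicaSet example below; adjust for your own images):

# on a node that already has the image
docker save httpd -o httpd.tar
# copy the archive to the node where the Pod will be scheduled, then load it
docker load -i httpd.tar

The ReplicaSet manifest below then uses imagePullPolicy: IfNotPresent, so the locally loaded image is reused instead of being pulled again.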
[root@node1 controllers]# cat replicaset-demo.yaml 
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 1
  selector: 
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-httpd
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: httpd
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
[root@node1 controllers]# 
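Assuming the manifest above is saved as replicaset-demo.yaml, a typical way to create it and watch the controller at work (deleting a Pod and seeing the ReplicaSet replace it) looks like this; <pod-name> stands for one of the Pod names reported by the get command:

[root@node1 controllers]# kubectl apply -f replicaset-demo.yaml
[root@node1 controllers]# kubectl get rs,pods -l app=myapp -o wide
# delete one of the Pods listed above; the ReplicaSet immediately creates a replacement
[root@node1 controllers]# kubectl delete pod <pod-name>
[root@node1 controllers]# kubectl get pods -l app=myapp -w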

Pod, the smallest unit of execution in Kubernetes; to understand what a Pod is, you first need to understand containers

A container is essentially a process whose view is isolated and whose resources are limited.
Containers are designed around a "single process" model. This does not mean only one process can run inside a container; it means the lifetime of the container equals the lifetime of the process with PID 1, and the container only manages that PID 1 process. You can start additional processes, but they are left unsupervised. If the PID 1 process is killed or fails and nobody notices, what happens to the resources of the other processes? Nobody looks after them, and nobody cleans them up.
This is why it is often difficult to run a complex application inside a single container, and it is where Kubernetes comes in: by grouping containers and managing them with controllers, it has established its place, at least at this stage, for running applications made up of multiple containers.

What is a Pod

The basic building block that a Kubernetes cluster runs is called a Pod.

  • A Pod is the basic execution unit of a Kubernetes application, that is, the smallest and simplest unit you create or deploy in the Kubernetes object model. A Pod represents a process running on the cluster.
  • A Pod encapsulates one or more application containers, storage resources, a unique network IP, and options that control how the container(s) should run. A Pod represents a unit of deployment, "a single instance of an application in Kubernetes", which may consist of a single container or a small number of tightly coupled containers that share resources.
  • Docker is the most common container runtime used in Kubernetes Pods, but Pods support other container runtimes as well.

Pod = "Process Group"

Inside Kubernetes, a Pod is a concept that the Kubernetes project has abstracted for you; it can be thought of as analogous to a process group.
Put simply, Kubernetes packages the parts of an application as containers and then runs those containers inside a single Pod resource; you could also say a Pod is a named combination of containers. When Kubernetes runs the containers defined in a Pod, you will see multiple containers running, and they share certain low-level system resources (the network, UTS, and IPC namespaces, and so on, all of which belong to the Pod).
A Pod is only a logical unit in Kubernetes, and it is the unit in which Kubernetes allocates resources. Because the containers inside it share those resources, the Pod is also the atomic scheduling unit of Kubernetes.
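As an illustration of that sharing, here is a minimal two-container Pod (the names shared-ns-demo, web, and probe are made up for this sketch): because both containers live in the same network namespace, the second container can reach nginx over the loopback address.

apiVersion: v1
kind: Pod
metadata:
  name: shared-ns-demo
  namespace: default
spec:
  containers:
  - name: web
    image: nginx
    imagePullPolicy: IfNotPresent
  - name: probe
    image: busybox
    imagePullPolicy: IfNotPresent
    # 127.0.0.1 is the Pod's shared network namespace, so this reaches the nginx container
    command: [ "sh", "-c", "while true; do wget -qO- http://127.0.0.1:80 > /dev/null; sleep 10; done" ]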

Pod's working characteristics

  • An autonomous Pod (one not managed by a controller) manages itself;
  • A Pod joins containers together and encapsulates them as a single abstraction;
  • A Pod can contain multiple containers that share the same underlying UTS, IPC, Network, and other namespaces;
  • A Pod emulates a traditional virtual machine; it is recommended to run only one application container per Pod;
  • Storage volumes are shared at the Pod level, no longer owned by an individual container;
  • Pods run on nodes, subject to node scheduling constraints such as taints and tolerations;
  • Pod controllers: Replication Controller, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job;

How Pod works in Kubernetes

  • Pods that run a single container
    The "one container per Pod" model is the most common Kubernetes use case. In this case, you can think of the Pod as a wrapper around a single container, and Kubernetes manages the Pod rather than managing the container directly.
  • Pods that run multiple collaborating containers
    A Pod may encapsulate an application composed of multiple tightly coupled containers that need to coexist and share resources. In the sidecar pattern, the Pod wraps this set of tightly coupled, resource-sharing, co-located containers into a single manageable entity.
    For instance:
    A self-contained sidecar design example; the image can be packaged and pushed to a private registry, or uploaded to GitHub
[root@node1 controllers]# cat pod-tomcat-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: web-2
  namespace: default
spec:
  initContainers:
  - image: ik8s.io/sample:v2
    imagePullPolicy: IfNotPresent
    name: war
    command: [ "cp", "/sample.war", "/app" ]
    volumeMounts:
    - mountPath: /app
      name: app-volume
  containers:
  - image: ik8s.io/tomcat:8.0
    imagePullPolicy: IfNotPresent
    name: tomcat8
    command: [ "sh", "-c", "/root/apache-tomcat-8.0.5/bin/start.sh" ]
    volumeMounts:
    - mountPath: /root/apache-tomcat-8.0.5/webapps
      name: app-volume
    ports:
    - containerPort: 8080
      hostPort: 8008
  volumes:
  - name: app-volume
    emptyDir: {}
[root@node1 controllers]# 
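One way to try the example above. Note the assumptions: the context path /sample is inferred from the sample.war file name, and <node-ip> stands for the IP of the node where the Pod lands (shown by the get command):

[root@node1 controllers]# kubectl apply -f pod-tomcat-demo.yaml
[root@node1 controllers]# kubectl get pods web-2 -o wide
# the war file copied by the init container is served by Tomcat through the hostPort mapping
[root@node1 controllers]# curl http://<node-ip>:8008/sample/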

Rolling Updates and Canary (Grayscale) Releases with the Deployment Controller

  • Creating an application with a Deployment requires three core components:
    the desired number of Pod replicas, a label selector, and a Pod template (the template is used whenever the number of existing Pods falls short of the desired replica count)
    Command help:
[root@node1 controllers]# kubectl explain deploy
[root@node1 controllers]# kubectl explain deploy.spec
[root@node1 controllers]# kubectl explain deploy.spec.strategy

This example shows how to perform rolling updates, version rollbacks, replica count changes, and so on for a set of applications.

[root@node1 controllers]# cat deployment-myapp-demo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: default
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
[root@node1 controllers]# 
[root@node1 controllers]# kubectl apply -f deployment-myapp-demo.yaml 
deployment.apps/myapp-deployment created
[root@node1 controllers]# 
[root@node1 controllers]# kubectl delete -f deployment-myapp-demo.yaml 
deployment.apps "myapp-deployment" deleted
[root@node1 controllers]# 
[root@node1 controllers]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
myapp-deployment-5b776d9cf7-29s7b   1/1     Running   0          9m26s
myapp-deployment-5b776d9cf7-8hb8c   1/1     Running   0          9m26s
[root@node1 controllers]# 
[root@node1 controllers]# kubectl logs myapp-deployment-5b776d9cf7-8hb8c
[root@node1 controllers]# 
[root@node1 controllers]# kubectl describe pods myapp-deployment-5b776d9cf7-8hb8c
.............
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/myapp-deployment-5b776d9cf7-8hb8c to node2
  Normal  Pulled     10h        kubelet, node2     Container image "nginx" already present on machine
  Normal  Created    10h        kubelet, node2     Created container myapp
  Normal  Started    10h        kubelet, node2     Started container myapp
[root@node1 controllers]# 
  • Changing the number of Pods (rolling scale)
    The replica count can be changed with kubectl patch or by editing the object directly; the default value is 1
# Update the current replica set to 5
[root@node1 controllers]# kubectl patch deployment myapp-deployment -p '{"spec":{"replicas":5}}'
# View update status in real time
[root@node1 controllers]# kubectl get pods -w 
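Equivalently, the replica count can be changed with kubectl scale, or by opening the live object in an editor:

# scale directly to 5 replicas
[root@node1 controllers]# kubectl scale deployment myapp-deployment --replicas=5
# or edit the live object and change .spec.replicas
[root@node1 controllers]# kubectl edit deployment myapp-deployment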
  • Updating the image version (rolling update)
    Commands that can be used: kubectl set, kubectl edit, kubectl apply, kubectl rollout
# Update container image version to latest
[root@node1 controllers]# kubectl set image deployment/myapp-deployment myapp=nginx:latest
# View scrolling update history
[root@node1 controllers]# kubectl rollout history deployment myapp-deployment 
# View the image version used by each ReplicaSet
[root@node1 controllers]# kubectl get rs -l app=myapp -o wide
# View image fields
[root@node1 controllers]# kubectl describe pods myapp-deployment-5b776d9cf7-8hb8c | grep 'Image' 
# See how the update process unfolded and what it did
[root@node1 controllers]# kubectl describe deployment/myapp-deployment 
  • Simulate Canary Release
    Change maxSurge and maxUnavailable update policies
# Command Help
[root@node1 controllers]# kubectl explain deploy.spec.strategy.rollingUpdate
# Patch first, change current update policy to simulate Canary release
[root@node1 controllers]# kubectl patch deployment myapp-deployment -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
# See
[root@node1 controllers]# kubectl describe deploy myapp-deployment
# Update again (Canary Release)
[root@node1 controllers]# kubectl set image deployment myapp-deployment myapp=nginx:v1 && kubectl rollout pause deployment myapp-deployment
Waiting for deployment "myapp-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
# View historical revisions; if the '--record' parameter was used, each revision shows the command that created it
[root@node1 controllers]# kubectl rollout history deployment myapp-deployment 
# Roll back without specifying a revision; by default this falls back to the previous revision
[root@node1 controllers]# kubectl rollout undo deployment myapp-deployment
# Use "--to-reversion=[N]" to roll back to the specified version
[root@node1 controllers]# kubectl rollout undo deploy myapp-deployment --to-reversion=1
# If everything above looks good, resume the paused rollout to finish the update
[root@node1 controllers]# kubectl rollout resume deploy myapp-deployment
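A convenient way to confirm that the resumed rollout completes:

# blocks until the rollout finishes (or reports a failure)
[root@node1 controllers]# kubectl rollout status deployment myapp-deployment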

For more on using Pods and ReplicaSets with the Deployment controller, see the official documentation ---> Deployment controller (Chinese docs)

Running a Pod replica on every node with DaemonSet, using a host system directory as a storage volume
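The manifest below is a minimal sketch of that idea, not taken from the original article: a log-collector DaemonSet (the name log-collector, the image fluentd, and the file name daemonset-demo.yaml are illustrative) that runs one Pod per node and mounts the host's /var/log directory as a volume.

[root@node1 controllers]# cat daemonset-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: default
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluentd
        imagePullPolicy: IfNotPresent
        volumeMounts:
        # the host's /var/log directory appears at the same path inside the container
        - mountPath: /var/log
          name: varlog
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
[root@node1 controllers]# kubectl apply -f daemonset-demo.yaml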

More on using the DaemonSet controller is available in the official documentation ---> DaemonSet controller (Chinese docs)

Add more...

Posted by joseph on Mon, 02 Mar 2020 17:09:55 -0800