Kubernetes container probe mechanism: the core implementation of state management

Keywords: Linux kubelet

k8s builds a Manager component to manage container probe workers. It is responsible for managing the underlying probe workers, caching the containers' current probe state, and synchronizing that state. Today we will analyze some of its core pieces.

1. Overview of the core design


The state cached by the Manager is consumed mainly by the kubelet core loop and the status component. When a Pod is synchronized, the ready and started state of the Pod's containers is updated from the probe results currently cached in the Manager. Let's look at some key parts of the Manager implementation.

2. Probe result management

This is the results.Manager component (pkg/kubelet/prober/results/results_manager.go). Its main job is to store probe results and to notify subscribers when a result changes.
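
As of the source version this article follows, the component's public surface is the results.Manager interface, roughly (the exact comments and method set may drift slightly across releases):

// Manager provides a probe results cache and a channel of updates.
type Manager interface {
    // Get returns the cached result for the container with the given ID.
    Get(kubecontainer.ContainerID) (Result, bool)
    // Set stores the result for the container and publishes an Update if it changed.
    Set(kubecontainer.ContainerID, Result, *v1.Pod)
    // Remove clears the cached result for the container with the given ID.
    Remove(kubecontainer.ContainerID)
    // Updates returns a channel that receives an Update whenever a result changes.
    Updates() <-chan Update
}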

2.1 core data structure

cache stores each container's probe result, and updates is the channel through which external subscribers are notified of state changes. By comparing a new result with the state in cache, the manager decides whether to notify subscribers.

// Manager implementation.
type manager struct {
    // Protect cache
    sync.RWMutex
    // Container ID -> probe result
    cache map[kubecontainer.ContainerID]Result
    // Update pipeline
    updates chan Update
}
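
For reference, the Update published on the channel bundles the result with the container and pod it belongs to; Result itself is a small enum-like type whose values are Success and Failure. From the same package, roughly:

// Update carries a changed probe result to subscribers.
type Update struct {
    ContainerID kubecontainer.ContainerID
    Result      Result
    PodUID      types.UID
}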

2.2 cache updates and change notification

When the cache is updated, the new result is compared with the previous state to decide whether to publish a change event, which notifies external subscribers (the kubelet core loop) of the container change.

func (m *manager) Set(id kubecontainer.ContainerID, result Result, pod *v1.Pod) {
    // Modify internal status
    if m.setInternal(id, result) {
        // Publish the change as an update event
        m.updates <- Update{id, result, pod.UID}
    }
}

The internal state modification and the change check are implemented as follows:

// Returns true (triggering an update event) if no previous result exists or the result has changed
func (m *manager) setInternal(id kubecontainer.ContainerID, result Result) bool {
    m.Lock()
    defer m.Unlock()
    prev, exists := m.cache[id]
    if !exists || prev != result {
        m.cache[id] = result
        return true
    }
    return false
}

2.3 exposing the updates channel

func (m *manager) Updates() <-chan Update {
    return m.updates
}
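
A minimal usage sketch (not from the kubelet source) showing the compare-and-notify behavior end to end; the container ID and pod below are placeholders, and results.NewManager is the package's constructor:

import (
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    kubecontainer "k8s.io/kubernetes/pkg/kubelet/container"
    "k8s.io/kubernetes/pkg/kubelet/prober/results"
)

func demo() {
    m := results.NewManager()
    id := kubecontainer.ContainerID{Type: "docker", ID: "abc123"}
    pod := &v1.Pod{}

    // Subscribe to change events
    go func() {
        for update := range m.Updates() {
            fmt.Printf("container %v -> %v\n", update.ContainerID, update.Result)
        }
    }()

    m.Set(id, results.Success, pod) // first result: publishes an Update
    m.Set(id, results.Success, pod) // unchanged: no Update published
    m.Set(id, results.Failure, pod) // changed: publishes an Update

    time.Sleep(100 * time.Millisecond) // let the subscriber print before returning
}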

3. Probe manager

The probe manager is the Manager component in pkg/kubelet/prober (prober_manager.go). It manages the probe workers on the current kubelet, caches and synchronizes probe results, and pushes container state to the apiserver through the status manager.
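
Its public interface, abridged here to the operations this article walks through, looks roughly like:

type Manager interface {
    // AddPod creates probe workers for every configured probe of every container in the pod (see 3.6).
    AddPod(pod *v1.Pod)
    // RemovePod stops the pod's probe workers.
    RemovePod(pod *v1.Pod)
    // UpdatePodStatus overwrites the Ready/Started fields of the given PodStatus with cached probe results (see 3.7).
    UpdatePodStatus(types.UID, *v1.PodStatus)
    // Start launches the goroutines that sync probe results into the status manager (see 3.5).
    Start()
}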

3.1 container probe key

Each probe key identifies the probe target: the pod's UID, the container name, and the probe type.

type probeKey struct {
    podUID        types.UID
    containerName string
    probeType     probeType
}
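
A container that defines both a readiness and a liveness probe therefore gets two distinct keys, differing only in probeType, so both workers can coexist in the same map:

// Illustrative only: two keys for the same container, one per probe type
readinessKey := probeKey{podUID: pod.UID, containerName: "app", probeType: readiness}
livenessKey := probeKey{podUID: pod.UID, containerName: "app", probeType: liveness}
// readinessKey != livenessKey, so neither worker overwrites the other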

3.2 core data structure

The statusManager component will be analyzed in detail in a later chapter. livenessManager stores liveness probe results; if a container fails its liveness probe, the kubelet handles it locally. readinessManager and startupManager results must be synchronized to the apiserver through the statusManager.

type manager struct {
    // Probe key -> worker mapping
    workers map[probeKey]*worker
    // Read write lock
    workerLock sync.RWMutex

    // The statusManager cache provides pod IPs and container IDs for probes.
    statusManager status.Manager

    // Store readiness probe results
    readinessManager results.Manager

    // Store liveness probe results
    livenessManager results.Manager

    // Store startup probe results
    startupManager results.Manager

    // Perform probe operation
    prober *prober
}

3.3 syncing startup probe results

func (m *manager) updateStartup() {
    // Receive one result from the updates channel (blocks until one is available)
    update := <-m.startupManager.Updates()

    started := update.Result == results.Success
    m.statusManager.SetContainerStartup(update.PodUID, update.ContainerID, started)
}

3.4 syncing readiness probe results

func (m *manager) updateReadiness() {
    update := <-m.readinessManager.Updates()

    ready := update.Result == results.Success
    m.statusManager.SetContainerReadiness(update.PodUID, update.ContainerID, ready)
}

3.5 starting the background sync tasks

func (m *manager) Start() {
    // Start syncing readiness.
    go wait.Forever(m.updateReadiness, 0)
    // Start syncing startup.
    go wait.Forever(m.updateStartup, 0)
}
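
wait.Forever(f, period) calls f in a loop, sleeping period between calls. A period of 0 does not busy-spin here, because each call blocks on a channel receive until a result arrives; the readiness goroutine is roughly equivalent to:

for {
    m.updateReadiness() // blocks in <-m.readinessManager.Updates() until a result arrives
}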

3.6 adding probes for a Pod

When a Pod is added, AddPod iterates over all of the Pod's containers and builds a probe worker for each configured probe type; each worker runs in its own goroutine (a condensed sketch of the worker loop follows the code below).

func (m *manager) AddPod(pod *v1.Pod) {
    m.workerLock.Lock()
    defer m.workerLock.Unlock()

    key := probeKey{podUID: pod.UID}
    for _, c := range pod.Spec.Containers {
        key.containerName = c.Name

        // Build a probe worker for the startup probe
        if c.StartupProbe != nil && utilfeature.DefaultFeatureGate.Enabled(features.StartupProbe) {
            key.probeType = startup
            if _, ok := m.workers[key]; ok {
                klog.Errorf("Startup probe already exists! %v - %v",
                    format.Pod(pod), c.Name)
                return
            }
            // Build a new worker
            w := newWorker(m, startup, pod, c)
            m.workers[key] = w
            go w.run()
        }

        // Build a probe worker for the readiness probe
        if c.ReadinessProbe != nil {
            key.probeType = readiness
            if _, ok := m.workers[key]; ok {
                klog.Errorf("Readiness probe already exists! %v - %v",
                    format.Pod(pod), c.Name)
                return
            }
            w := newWorker(m, readiness, pod, c)
            m.workers[key] = w
            go w.run()
        }

        // Build a probe worker for the liveness probe
        if c.LivenessProbe != nil {
            key.probeType = liveness
            if _, ok := m.workers[key]; ok {
                klog.Errorf("Liveness probe already exists! %v - %v",
                    format.Pod(pod), c.Name)
                return
            }
            w := newWorker(m, liveness, pod, c)
            m.workers[key] = w
            go w.run()
        }
    }
}
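
Each worker runs its probe on a timer and writes the outcome into the matching results.Manager. A condensed sketch of the loop, based on pkg/kubelet/prober/worker.go (cleanup and bookkeeping omitted):

func (w *worker) run() {
    // Probe at the interval configured in the container's probe spec
    probeTicker := time.NewTicker(time.Duration(w.spec.PeriodSeconds) * time.Second)
    defer probeTicker.Stop()

probeLoop:
    for w.doProbe() { // doProbe runs one probe and stores the result; false means stop probing
        select {
        case <-w.stopCh:
            break probeLoop
        case <-probeTicker.C:
            // proceed to the next probe iteration
        }
    }
}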

3.7 updating Pod status

Updating the Pod status means overwriting the status of each container in the Pod with the probe results cached in the Manager; these are the containers' latest known probe states, telling us whether each container is ready and started, and they become the base data for the subsequent status update flow. The snippets below are excerpts from the manager's UpdatePodStatus method.

3.7.1 container status update

    for i, c := range podStatus.ContainerStatuses {
        var ready bool
        // Detect container status
        if c.State.Running == nil {
            ready = false
        } else if result, ok := m.readinessManager.Get(kubecontainer.ParseContainerID(c.ContainerID)); ok {
            // Check the result cached in readinessManager; success means ready
            ready = result == results.Success
        } else {
            // Ready only if no readiness probe worker exists for this container (i.e. no probe configured)
            _, exists := m.getWorker(podUID, c.Name, readiness)
            ready = !exists
        }
        podStatus.ContainerStatuses[i].Ready = ready

        var started bool
        if c.State.Running == nil {
            started = false
        } else if !utilfeature.DefaultFeatureGate.Enabled(features.StartupProbe) {
            // The container is running; with the StartupProbe feature disabled, it is assumed started
            started = true
        } else if result, ok := m.startupManager.Get(kubecontainer.ParseContainerID(c.ContainerID)); ok {
            // If the status in startupManager is successful, it is considered to have been started
            started = result == results.Success
        } else {
            // Started only if no startup probe worker exists for this container (i.e. no probe configured)
            _, exists := m.getWorker(podUID, c.Name, startup)
            started = !exists
        }
        podStatus.ContainerStatuses[i].Started = &started
    }

3.7.2 init container status update

An init container is considered ready if it has terminated with exit code 0.

    for i, c := range podStatus.InitContainerStatuses {
        var ready bool
        if c.State.Terminated != nil && c.State.Terminated.ExitCode == 0 {
            // The init container terminated successfully
            ready = true
        }
        podStatus.InitContainerStatuses[i].Ready = ready
    }

3.8 liveness result notification

Liveness results are consumed in the kubelet's core sync loop. If a container's liveness probe fails, the kubelet immediately triggers a sync of the corresponding pod so it can decide what to do next.

    case update := <-kl.livenessManager.Updates():
        // Only a liveness failure triggers an immediate pod sync
        if update.Result == proberesults.Failure {
            // ... (elided: look up the pod for update.PodUID) ...
            handler.HandlePodSyncs([]*v1.Pod{pod})
        }

That is the overall design of the prober. Next, we will dig into its statusManager component, which synchronizes probe state with the apiserver. The e-book for this k8s source code reading series: https://www.yuque.com/baxiaoshi/tyado3
