How to realize Pod expulsion by Eviction Manager in K8S

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

Today I will walk you through how the Eviction Manager in K8S evicts Pods. Many people may not be familiar with it, so I have summarized the following for you; I hope you can get something out of this article.

To keep Node nodes stable, kubelet proactively evicts some Pods to free resources when resources (memory/storage) run short. The component that implements this is the Eviction Manager.

When evicting a Pod, kubelet kills all containers in the pod and sets the pod's status to Failed. The killed pod may then be scheduled onto another node.

Thresholds can be user-defined to tell kubelet under what circumstances to evict pods. There are two types of thresholds:

Soft Eviction Thresholds - when crossed, eviction is not triggered immediately; it is only triggered after a user-configured grace period has elapsed.

Hard Eviction Thresholds - when crossed, pods are killed immediately.

Implementation

The Eviction Manager code lives in the package /pkg/kubelet/eviction, and the core logic is the managerImpl.synchronize method. The eviction manager calls synchronize periodically from a separate goroutine to carry out eviction.

The synchronize method mainly includes the following steps:

1. Initialize configuration

```go
func (m *managerImpl) synchronize(diskInfoProvider DiskInfoProvider, podFunc ActivePodsFunc, capacityProvider CapacityProvider) []*v1.Pod {
	// 1. Read all thresholds from the configuration
	thresholds := m.config.Thresholds
	if len(thresholds) == 0 {
		return nil
	}
	// ...
	// 2. Initialize rank funcs / reclaim funcs, etc.
	if m.dedicatedImageFs == nil {
		hasImageFs, ok := diskInfoProvider.HasDedicatedImageFs()
		if ok != nil {
			return nil
		}
		m.dedicatedImageFs = &hasImageFs
		m.resourceToRankFunc = buildResourceToRankFunc(hasImageFs)
		m.resourceToNodeReclaimFuncs = buildResourceToNodeReclaimFuncs(m.imageGC, m.containerGC, hasImageFs)
	}
	// 3. Get the active pods via the injected podFunc
	activePods := podFunc()
	// 4. Get current resource usage through the summary provider
	observations, statsFunc, err := makeSignalObservations(m.summaryProvider, capacityProvider, activePods)
	// ...
	// 5. Faster memory-usage notification via memcg; entered only when notifiersInitialized is false
	if m.config.KernelMemcgNotification && !m.notifiersInitialized {
		// ...
		m.notifiersInitialized = true // initialization complete
		err = startMemoryThresholdNotifier(m.config.Thresholds, observations, true, func(desc string) {
			// callback: a memcg notification immediately triggers synchronize
			glog.Infof("hard memory eviction threshold crossed at %s", desc)
			m.synchronize(diskInfoProvider, podFunc, capacityProvider)
		})
		// ...
	}
	// ...
}
```

2. Calculate Thresholds

```go
func (m *managerImpl) synchronize(diskInfoProvider DiskInfoProvider, podFunc ActivePodsFunc, capacityProvider CapacityProvider) []*v1.Pod {
	// ...
	// 1. Compute the crossed thresholds from the configured thresholds and current resource usage
	thresholds = thresholdsMet(thresholds, observations, false)
	// 2. Merge in the thresholds computed in the previous round
	if len(m.thresholdsMet) > 0 {
		thresholdsNotYetResolved := thresholdsMet(m.thresholdsMet, observations, true)
		thresholds = mergeThresholds(thresholds, thresholdsNotYetResolved)
	}
	// 3. Filter out soft thresholds whose grace period has not yet elapsed
	now := m.clock.Now()
	thresholdsFirstObservedAt := thresholdsFirstObservedAt(thresholds, m.thresholdsFirstObservedAt, now)
	// ...
	thresholds = thresholdsMetGracePeriod(thresholdsFirstObservedAt, now)
	// 4. Record the results
	m.Lock()
	m.nodeConditions = nodeConditions
	m.thresholdsFirstObservedAt = thresholdsFirstObservedAt
	m.nodeConditionsLastObservedAt = nodeConditionsLastObservedAt
	m.thresholdsMet = thresholds
	// determine the set of thresholds whose stats have been updated since the last sync
	thresholds = thresholdsUpdatedStats(thresholds, observations, m.lastObservations)
	debugLogThresholdsWithObservation("thresholds - updated stats", thresholds, observations)
	m.lastObservations = observations
	m.Unlock()
	// ...
}
```
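The threshold check itself boils down to comparing each configured threshold against the current observation for its signal. A self-contained simplification (types and names are mine, not kubelet's):

```go
package main

import "fmt"

// observation and simpleThreshold are simplified stand-ins for the eviction
// manager's signal observations and configured thresholds.
type observation struct{ available int64 } // bytes still available for a signal

type simpleThreshold struct {
	signal string
	value  int64 // the threshold is crossed when available < value
}

// thresholdsMet returns the thresholds whose observed availability has
// fallen below the configured value.
func thresholdsMet(ts []simpleThreshold, obs map[string]observation) []simpleThreshold {
	var met []simpleThreshold
	for _, t := range ts {
		if o, ok := obs[t.signal]; ok && o.available < t.value {
			met = append(met, t)
		}
	}
	return met
}

func main() {
	ts := []simpleThreshold{
		{signal: "memory.available", value: 100 << 20}, // 100Mi
		{signal: "nodefs.available", value: 1 << 30},   // 1Gi
	}
	obs := map[string]observation{
		"memory.available": {available: 50 << 20}, // below threshold
		"nodefs.available": {available: 5 << 30},  // plenty left
	}
	for _, t := range thresholdsMet(ts, obs) {
		fmt.Println(t.signal) // only "memory.available" is printed
	}
}
```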

3. Select the Resource for this round of Eviction

In each round of eviction, kubelet kills at most one Pod. Since the Eviction Manager may be dealing with shortages of several resources (memory/storage) at once, it first selects the resource type that this round of eviction targets, then ranks the Pods by their usage of that resource to pick the Pod to kill.
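As a toy illustration of the resource-selection order (the sort predicate here is my own simplification of byEvictionPriority, which ranks memory ahead of other starved resources):

```go
package main

import (
	"fmt"
	"sort"
)

// sortStarved orders starved resources so that memory comes first,
// mirroring the idea behind byEvictionPriority: memory pressure is
// the most urgent signal to relieve.
func sortStarved(resources []string) {
	sort.SliceStable(resources, func(i, j int) bool {
		return resources[i] == "memory" && resources[j] != "memory"
	})
}

func main() {
	starved := []string{"nodefs", "memory"}
	sortStarved(starved)
	fmt.Println(starved[0]) // memory
}
```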

```go
func (m *managerImpl) synchronize(diskInfoProvider DiskInfoProvider, podFunc ActivePodsFunc, capacityProvider CapacityProvider) []*v1.Pod {
	// ...
	// 1. Collect all resource types that are currently starved
	starvedResources := getStarvedResources(thresholds)
	if len(starvedResources) == 0 {
		glog.V(3).Infof("eviction manager: no resources are starved")
		return nil
	}
	// 2. Sort and pick one resource to reclaim
	sort.Sort(byEvictionPriority(starvedResources))
	resourceToReclaim := starvedResources[0]
	// determine if this is a soft or hard eviction associated with the resource
	softEviction := isSoftEvictionThresholds(thresholds, resourceToReclaim)
	// ...
	// 3. Rank the Pods by their usage of the selected resource
	rank, ok := m.resourceToRankFunc[resourceToReclaim]
	// ...
	rank(activePods, statsFunc)
	// ...
}
```

4. Kill Pod

Kill the first Pod in the ranked list:

```go
func (m *managerImpl) synchronize(diskInfoProvider DiskInfoProvider, podFunc ActivePodsFunc, capacityProvider CapacityProvider) []*v1.Pod {
	// ...
	for i := range activePods {
		pod := activePods[i]
		// ...
		status := v1.PodStatus{
			Phase:   v1.PodFailed,
			Message: fmt.Sprintf(message, resourceToReclaim),
			Reason:  reason,
		}
		// ...
		gracePeriodOverride := int64(0)
		if softEviction {
			gracePeriodOverride = m.config.MaxPodGracePeriodSeconds
		}
		// the actual kill
		err := m.killPodFunc(pod, status, &gracePeriodOverride)
		if err != nil {
			glog.Warningf("eviction manager: error while evicting pod %s: %v", format.Pod(pod), err)
		}
		return []*v1.Pod{pod}
	}
	glog.Infof("eviction manager: unable to evict any pods from the node")
	return nil
}
```
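The at-most-one-pod-per-round behavior can be illustrated with a self-contained toy (the real rank functions are more involved, weighing QoS class and requests, and the names below are mine):

```go
package main

import (
	"fmt"
	"sort"
)

// pod is a simplified stand-in; usage is the pod's consumption of the
// starved resource.
type pod struct {
	name  string
	usage int64
}

// evictOne ranks pods by descending usage and "kills" the first one via the
// injected kill func, mirroring how each synchronize round evicts at most
// one pod and returns it.
func evictOne(pods []pod, kill func(pod) error) *pod {
	if len(pods) == 0 {
		return nil
	}
	sort.Slice(pods, func(i, j int) bool { return pods[i].usage > pods[j].usage })
	victim := pods[0]
	if err := kill(victim); err != nil {
		return nil
	}
	return &victim
}

func main() {
	pods := []pod{{"a", 10}, {"b", 42}, {"c", 7}}
	victim := evictOne(pods, func(p pod) error { return nil })
	fmt.Println(victim.name) // b, the heaviest user of the starved resource
}
```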

That covers how the Eviction Manager implements Pod eviction in K8S. I hope this article has given you a clearer picture of it.
