What's the use of Kubernetes Scheduler?

2025-03-28 Update From: SLTechnology News&Howtos


This article explains what Kubernetes Scheduler is used for. The content is practical, so it is shared here as a reference; follow along for a closer look.

Let's take a look at the basic functions of Kubernetes Scheduler.

The role of Kubernetes Scheduler is to schedule a Pod onto a worker node (Node) according to a specific scheduling algorithm, a process also known as bind. The Scheduler's input is the Pod to be scheduled together with the set of schedulable Nodes; its output is the Node selected by the scheduling algorithm, to which the Pod is then bound.

The scheduling algorithm in Kubernetes Scheduler is divided into two phases:

Pre-selection (Predicates): filter out the Nodes that do not satisfy the configured Predicates Policies (by default, the default predicates policy set defined in DefaultProvider); the remaining Nodes serve as input to the prioritization phase.

Prioritization (Priorities): score and rank the pre-selected Nodes according to the configured Priorities Policies (by default, the default priorities policy set defined in DefaultProvider). The Node with the highest score is regarded as the most suitable Node, and the Pod is bound to it.
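As a rough sketch, the two phases above amount to a boolean filter followed by a weighted scoring pass. The function and variable names below are illustrative, not the actual kube-scheduler API:

```python
# Illustrative sketch of the two-phase scheduling algorithm.
# `predicates` are boolean filters; `priorities` are (func, weight) pairs
# where each func returns a 0-10 score. All names are hypothetical.

def schedule(pod, nodes, predicates, priorities):
    # Phase 1 (pre-selection): keep only nodes that pass every predicate.
    feasible = [n for n in nodes if all(pred(pod, n) for pred in predicates)]
    if not feasible:
        raise RuntimeError("no feasible node for pod")

    # Phase 2 (prioritization): weighted sum of priority scores; highest wins.
    def final_score(node):
        return sum(weight * func(pod, node) for func, weight in priorities)

    return max(feasible, key=final_score)
```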

Detailed description of preselection rules

Preselection rules are used to filter out the Node nodes that do not conform to the rules; the remaining nodes serve as input to the prioritization phase. In version 1.6.1, the preselection rules include the following:

Detailed description of the rules:

(1) NoDiskConflict: checks for volume conflicts on the host. If the host has already mounted a given volume, other Pods using that volume cannot be scheduled onto it. The rules for GCE, Amazon EBS, and Ceph RBD are as follows:

1. GCE allows multiple volumes to be mounted at the same time, as long as they are read-only.

2. Amazon EBS does not allow different Pod to mount the same volume.

3. Ceph RBD does not allow any two Pods to share the same monitor, pool, and image.

Note: iSCSI, like GCE, allows two volumes with the same IQN to be mounted at the same time, as long as both are read-only.

(2) NoVolumeZoneConflict: checks whether deploying the Pod on this host would cause a volume conflict under the given zone restrictions. Currently this means checking PV resources (the NewVolumeZonePredicate predicate function).

(3) MaxEBSVolumeCount: ensures that the number of mounted EBS storage volumes does not exceed the configured maximum (the default is 39). It examines both volumes used directly and PVCs that indirectly use this type of storage, counting the distinct volumes involved; if deploying the new Pod would push the volume count over the configured maximum, the Pod cannot be scheduled to this host.
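The volume-count predicates boil down to counting the distinct volumes a node would carry if the Pod landed on it. A minimal sketch with hypothetical names, using the EBS default limit of 39:

```python
def fits_volume_count(node_volume_ids, pod_volume_ids, max_volumes=39):
    # Union of the volumes already on the node and the pod's volumes:
    # if the combined count exceeds the limit, the pod cannot land here.
    # Names and signature are illustrative, not the kube-scheduler API.
    return len(set(node_volume_ids) | set(pod_volume_ids)) <= max_volumes
```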

(4) MaxGCEPDVolumeCount: make sure that the mounted GCE storage volume does not exceed the maximum value set. The default value is 16. The rules are the same as MaxEBSVolumeCount.

(5) MaxAzureDiskVolumeCount: make sure that the mounted Azure storage volume does not exceed the maximum value set. The default value is 16. The rules are the same as MaxEBSVolumeCount.

(6) CheckNodeMemoryPressure: determines whether the node has entered a memory-pressure state. If so, only Pods whose memory request is 0 are allowed to be scheduled onto it.

(7) CheckNodeDiskPressure: determines whether the node has entered a disk-pressure state; if so, no new Pods are scheduled onto it.

(8) PodToleratesNodeTaints: checks whether the Pod's tolerations can tolerate the node's taints.

(9) MatchInterPodAffinity: inter-Pod affinity filtering.

(10) GeneralPredicates: contains some basic filtering rules (PodFitsResources, PodFitsHostPorts, HostName, MatchNodeSelector).

(11) PodFitsResources: check whether the free resources (CPU, Memory, GPU resources) on the node meet the needs of Pod.

(12) PodFitsHostPorts: checks whether any HostPort required by the Pod's containers is already occupied by another container. If a required HostPort is already taken, the Pod cannot be scheduled to this host.
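The HostPort check is effectively a set-intersection test. A minimal sketch with hypothetical names:

```python
def pod_fits_host_ports(pod_host_ports, used_host_ports):
    # The pod fits only if none of the host ports it requests is
    # already occupied by another container on the node.
    return not (set(pod_host_ports) & set(used_host_ports))
```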

(13) HostName: checks whether the host's name matches the HostName specified by the Pod.

(14) MatchNodeSelector: checks whether the host's labels satisfy the Pod's nodeSelector attribute.

Detailed description of preference rules

The preference rules score the list of hosts that passed preselection, and the host with the highest score is finally selected to deploy the Pod. Kubernetes runs a set of priority functions against each candidate host. Each priority function returns a score from 0 to 10 (the higher the score, the better the host), and each function has an associated weight. The final score of a host is calculated with the following formula:

FinalScoreNode = (weight1 × priorityFunc1) + (weight2 × priorityFunc2) + … + (weightn × priorityFuncn)
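Plugging made-up numbers into the formula: two priority functions scoring 8 and 5, with weights 1 and 2:

```python
# (weight, priority-function score) pairs; values are made up for illustration.
weighted_scores = [(1, 8), (2, 5)]
final_score_node = sum(weight * score for weight, score in weighted_scores)
print(final_score_node)  # 1*8 + 2*5 = 18
```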

Detailed description of the rules:

(1) SelectorSpreadPriority: Pods belonging to the same service or replication controller should be distributed across different hosts as far as possible. If a zone is specified, Pods are also spread across hosts in different zones as much as possible. When scheduling a Pod, the scheduler first looks up the Pod's service or replication controller, then finds the Pods already running under it. The fewer of those existing Pods a host is running, the higher that host's score.

(2) LeastRequestedPriority: when a new Pod is to be placed on a node, the node's priority is determined by the ratio of the node's idle capacity to its total capacity, i.e. (total capacity − sum of requests of Pods on the node − request of the new Pod) / total capacity. CPU and memory are weighted equally, and the node with the highest ratio gets the highest score. Note that this priority function spreads Pods across nodes according to resource consumption. The calculation formula is as follows:

score = (cpu((capacity − sum(requested)) × 10 / capacity) + memory((capacity − sum(requested)) × 10 / capacity)) / 2
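In Python, the formula works out as follows; the capacity and request figures are made up for illustration:

```python
def least_requested_score(cpu_capacity, cpu_requested, mem_capacity, mem_requested):
    # Each resource scores 0-10 by its idle fraction; the two are averaged.
    cpu_score = (cpu_capacity - cpu_requested) * 10 / cpu_capacity
    mem_score = (mem_capacity - mem_requested) * 10 / mem_capacity
    return (cpu_score + mem_score) / 2

# 25% of CPU and 50% of memory requested -> scores 7.5 and 5.0, average 6.25.
print(least_requested_score(4000, 1000, 8000, 4000))  # 6.25
```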

(3) BalancedResourceAllocation: tries to choose the machine whose resource usage is most balanced after the Pod is deployed. BalancedResourceAllocation cannot be used alone; it must be used together with LeastRequestedPriority. It computes the CPU and memory usage fractions on the host, and the host's score is determined by the "distance" between the two fractions. The calculation formula is as follows: score = 10 − abs(cpuFraction − memoryFraction) × 10
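A sketch of the BalancedResourceAllocation formula, again with made-up figures:

```python
def balanced_resource_score(cpu_capacity, cpu_requested, mem_capacity, mem_requested):
    # Score peaks at 10 when the CPU and memory usage fractions are equal,
    # and drops as the two fractions drift apart.
    cpu_fraction = cpu_requested / cpu_capacity
    mem_fraction = mem_requested / mem_capacity
    return 10 - abs(cpu_fraction - mem_fraction) * 10

# Fractions 0.25 and 0.50 differ by 0.25 -> score 10 - 2.5 = 7.5.
print(balanced_resource_score(4000, 1000, 8000, 4000))  # 7.5
```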

(4) NodeAffinityPriority: the affinity mechanism in Kubernetes scheduling. Node Selectors (which restrict a Pod to specified nodes at scheduling time) support a variety of operators (In, NotIn, Exists, DoesNotExist, Gt, Lt) and are not limited to exact matching of node labels. In addition, Kubernetes supports two types of selectors. One is the "hard" (requiredDuringSchedulingIgnoredDuringExecution) selector, which guarantees that the selected host satisfies all of the Pod's rules for hosts; this selector is much like the earlier nodeSelector, with a more expressive syntax layered on top. The other is the "soft" (preferredDuringSchedulingIgnoredDuringExecution) selector, which serves as a hint to the scheduler: it tries, but does not guarantee, to satisfy all of the NodeSelector's requirements.

(5) InterPodAffinityPriority: computed by iterating over the elements of weightedPodAffinityTerm and summing them: if the corresponding PodAffinityTerm is satisfied for the node, its "weight" is added to the sum. The node with the highest sum is the most preferred.
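The weighted-term summation can be sketched as follows; the term predicates, weights, and node labels are all hypothetical:

```python
def inter_pod_affinity_score(node, weighted_terms):
    # Add each term's weight to the sum when the node satisfies that term;
    # the node with the highest sum is the most preferred.
    return sum(weight for term, weight in weighted_terms if term(node))

# Two illustrative affinity terms on made-up node labels.
terms = [(lambda n: n["zone"] == "us-east-1a", 50),
         (lambda n: n["disk"] == "ssd", 30)]
print(inter_pod_affinity_score({"zone": "us-east-1a", "disk": "hdd"}, terms))  # 50
```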

(6) NodePreferAvoidPodsPriority (weight 10000): if the node's annotations do not set the key-value scheduler.alpha.kubernetes.io/preferAvoidPods = "...", the node scores 10 for this policy; multiplied by the weight of 10000, the node scores at least 100000 for this policy. If the node's annotations do set scheduler.alpha.kubernetes.io/preferAvoidPods = "..." and the Pod's controller is a ReplicationController or ReplicaSet, the node scores 0 for this policy.

(7) TaintTolerationPriority: matches the node's taints against the entries in the Pod's toleration list. The more taints that are matched, the lower the score.

In addition, among the preferred scheduling rules, there are several rules that are not used by default:

(1) ImageLocalityPriority: scores a host based on whether it already has the images the Pod needs to run. ImageLocalityPriority checks whether the images required by the Pod are already present on the host and returns a score of 0 to 10 based on the total size of the images that are present. If none of the required images exist on the host, it returns 0; if some of them are present, the score is determined by their size: the larger the images, the higher the score.

(2) EqualPriority: EqualPriority is a priority function that gives all nodes an equal weight.

(3) ServiceSpreadingPriority: functionally the same as SelectorSpreadPriority, and has been replaced by it.

(4) MostRequestedPriority: in ClusterAutoscalerProvider, it replaces LeastRequestedPriority, giving higher priority to nodes that already use more resources. The calculation formula is: score = (cpu(10 × sum(requested) / capacity) + memory(10 × sum(requested) / capacity)) / 2
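MostRequestedPriority inverts LeastRequestedPriority's preference; with the same made-up capacity and request figures:

```python
def most_requested_score(cpu_capacity, cpu_requested, mem_capacity, mem_requested):
    # Each resource scores 0-10 by its used fraction; the two are averaged,
    # so busier nodes score higher (which packs pods and helps the autoscaler
    # shrink the cluster).
    cpu_score = 10 * cpu_requested / cpu_capacity
    mem_score = 10 * mem_requested / mem_capacity
    return (cpu_score + mem_score) / 2

# 25% CPU and 50% memory requested -> scores 2.5 and 5.0, average 3.75.
print(most_requested_score(4000, 1000, 8000, 4000))  # 3.75
```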

Thank you for reading! That concludes this article on "What's the use of Kubernetes Scheduler?". I hope the content above is helpful to you. If you found the article good, please share it so more people can see it!
