This article introduces what Predicates Policies are and how the Kubernetes scheduler uses them. Many people run into exactly these questions in real-world cases, so let the editor walk you through the policies and their code implementations. I hope you read it carefully and get something out of it!
## Predicates Policies Analysis

The scheduler implements the following preselection (predicate) policies in plugin/pkg/scheduler/algorithm/predicates.go:
NoDiskConflict: check for volume conflicts on this host. If the host already has a volume mounted, another Pod that uses the same volume cannot be scheduled to it. The rules used for GCE PD, Amazon EBS, and Ceph RBD are as follows:
GCE allows multiple volumes to be mounted at the same time, as long as they are read-only.
Amazon EBS does not allow different Pods to mount the same volume.
Ceph RBD does not allow any two Pods to share the same monitors, pool, and image.
NoVolumeZoneConflict: check for volume zone conflicts if the Pod were deployed on this host, given the zone restrictions of its volumes. Some volumes may carry zone scheduling constraints; VolumeZonePredicate evaluates whether the Pod satisfies them based on the requirements of the volumes themselves. The necessary condition is that the zone labels of every volume must match the zone labels on the node. A node may carry multiple zone-label constraints (for example, a hypothetical replicated volume might allow zone-wide access). Currently this is only supported for PersistentVolumeClaims, and the labels are only looked up on the bound PersistentVolume. Handling volumes defined inline in the Pod spec (that is, not via a PersistentVolume) may be harder, because determining a volume's zone during scheduling would likely require a call to the cloud provider.
PodFitsResources: check whether the host's resources satisfy the Pod's requests. The check is based on the amount of resources already requested (allocated) on the node, not the amount actually in use.
PodFitsHostPorts: check whether any HostPort required by the Pod's containers is already occupied by other containers on the host. If a required HostPort is unavailable, the Pod cannot be scheduled to this host.
HostName: check whether the host's name matches the node name specified by the Pod.
MatchNodeSelector: check whether the host's labels satisfy the Pod's nodeSelector requirements; a simplified sketch of this kind of label check follows this list.
MaxEBSVolumeCount: ensure that the number of attached EBS volumes does not exceed the configured maximum (the default is 39). It considers both volumes used directly and PVCs that indirectly use this type of storage, counting the number of distinct volumes; if deploying the new Pod would push that count over the maximum, the Pod cannot be scheduled to this host.
MaxGCEPDVolumeCount: ensure that the number of attached GCE PD volumes does not exceed the configured maximum (the default is 16). The rule is the same as above.
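To make the general shape of a predicate concrete, here is a minimal, self-contained sketch in the spirit of MatchNodeSelector. It is not the scheduler's actual implementation (which operates on *v1.Pod and *v1.Node objects and also supports set-based selectors); the podFitsNodeSelector helper, its map-based signature, and the sample labels below are illustrative assumptions.

```go
package main

import "fmt"

// podFitsNodeSelector returns true if every key/value pair in the Pod's
// nodeSelector is present, with the same value, in the node's labels.
// This mirrors the filtering idea of MatchNodeSelector in simplified form.
func podFitsNodeSelector(nodeSelector, nodeLabels map[string]string) bool {
	for k, v := range nodeSelector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	nodeSelector := map[string]string{"disktype": "ssd"}
	nodeLabels := map[string]string{"disktype": "ssd", "zone": "us-east-1a"}
	fmt.Println(podFitsNodeSelector(nodeSelector, nodeLabels)) // true: the node passes the filter
}
```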
The following is the code implementation of NoDiskConflict. Like the other Predicates Policies implementations, it matches the function prototype `type FitPredicate func(pod *v1.Pod, meta interface{}, nodeInfo *schedulercache.NodeInfo) (bool, []PredicateFailureReason, error)`:
```go
func NoDiskConflict(pod *v1.Pod, meta interface{}, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
	for _, v := range pod.Spec.Volumes {
		for _, ev := range nodeInfo.Pods() {
			if isVolumeConflict(v, ev) {
				return false, []algorithm.PredicateFailureReason{ErrDiskConflict}, nil
			}
		}
	}
	return true, nil, nil
}

func isVolumeConflict(volume v1.Volume, pod *v1.Pod) bool {
	// fast path if there is no conflict checking targets.
	if volume.GCEPersistentDisk == nil && volume.AWSElasticBlockStore == nil && volume.RBD == nil && volume.ISCSI == nil {
		return false
	}

	for _, existingVolume := range pod.Spec.Volumes {
		// ...
		if volume.RBD != nil && existingVolume.RBD != nil {
			mon, pool, image := volume.RBD.CephMonitors, volume.RBD.RBDPool, volume.RBD.RBDImage
			emon, epool, eimage := existingVolume.RBD.CephMonitors, existingVolume.RBD.RBDPool, existingVolume.RBD.RBDImage
			// two RBD images are the same if they share the same Ceph monitor, are in the same RADOS Pool,
			// and have the same image name.
			// only one read-write mount is permitted for the same RBD image.
			// same RBD image mounted by multiple Pods conflicts unless all Pods mount the image read-only
			if haveSame(mon, emon) && pool == epool && image == eimage && !(volume.RBD.ReadOnly && existingVolume.RBD.ReadOnly) {
				return true
			}
		}
	}

	return false
}
```
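The haveSame helper called above is not shown in the excerpt. As a rough sketch, assuming it simply reports whether the two Ceph monitor lists share at least one entry (the actual helper in predicates.go may differ in detail), it could look like this:

```go
// Sketch only: reports whether the two string slices share at least one element.
// This stands in for the haveSame helper referenced by isVolumeConflict above.
func haveSame(a1, a2 []string) bool {
	for _, val1 := range a1 {
		for _, val2 := range a2 {
			if val1 == val2 {
				return true
			}
		}
	}
	return false
}
```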
## Priorities Policies Analysis
The currently supported priority functions include the following:
LeastRequestedPriority: if a new Pod is to be assigned to a node, the node's priority is determined by the ratio of the node's free capacity to its total capacity, i.e. (total capacity - sum of requests of Pods already on the node - request of the new Pod) / total capacity. CPU and memory are weighted equally, and the node with the highest ratio gets the highest score. Note that this priority function has the effect of spreading Pods across nodes according to resource consumption. The calculation formula is: score = (cpu((capacity - sum(requested)) * 10 / capacity) + memory((capacity - sum(requested)) * 10 / capacity)) / 2
BalancedResourceAllocation: try to choose a node whose resource usage is more balanced after the Pod is deployed. BalancedResourceAllocation cannot be used alone and must be used together with LeastRequestedPriority. It calculates the CPU and memory utilization fractions on the host, and the host's score is determined by the "distance" between those two fractions. The calculation formula is: score = 10 - abs(cpuFraction - memoryFraction) * 10 (a worked example of both formulas follows this list).
SelectorSpreadPriority: Pods belonging to the same Service or ReplicationController should be spread across different hosts as much as possible. If zones are specified, the Pods are also spread across hosts in different zones as much as possible. When scheduling a Pod, first find the Service or ReplicationController it belongs to, then find the Pods that already belong to it; the fewer of those existing Pods running on a host, the higher that host's score.
CalculateAntiAffinityPriority: for Pods belonging to the same Service, spread them across hosts carrying the specified labels as much as possible.
ImageLocalityPriority: score based on whether the host already has the images the Pod needs to run. ImageLocalityPriority checks whether the images required by the Pod already exist on the host and returns a score of 0-10 based on the total size of the images already present. If none of the required images exist on the host, it returns 0; if some required images are present, the score is determined by their total size: the larger the total size, the higher the score.
NodeAffinityPriority (experimental feature in Kubernetes 1.2): the affinity mechanism in Kubernetes scheduling. Node affinity (restricting which nodes a Pod may be scheduled to) supports a variety of operators (In, NotIn, Exists, DoesNotExist, Gt, Lt) and is not limited to exact matching of node labels. Kubernetes supports two types of selectors: the "hard" (requiredDuringSchedulingIgnoredDuringExecution) selector guarantees that the selected host satisfies all of the Pod's rules for hosts; it is similar to the earlier nodeSelector, but with a more expressive syntax. The "soft" (preferredDuringSchedulingIgnoredDuringExecution) selector serves as a hint to the scheduler, which tries, but does not guarantee, to satisfy all of its requirements.
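As the worked example promised above, here is a small, self-contained sketch applying the LeastRequestedPriority and BalancedResourceAllocation formulas to hypothetical numbers: a node with 4000 millicores of CPU and 8000 MiB of memory, with 2000m CPU and 2000 MiB requested once the new Pod is counted in. The figures and variable names are illustrative assumptions, not scheduler code.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Hypothetical node capacity and already-requested resources (including the new Pod).
	cpuCapacity, cpuRequested := 4000.0, 2000.0 // millicores
	memCapacity, memRequested := 8000.0, 2000.0 // MiB

	// LeastRequestedPriority: average of the per-resource "free fraction" scores.
	cpuScore := (cpuCapacity - cpuRequested) * 10 / cpuCapacity // 5
	memScore := (memCapacity - memRequested) * 10 / memCapacity // 7.5
	leastRequested := (cpuScore + memScore) / 2                 // 6.25
	fmt.Printf("LeastRequestedPriority: %.2f\n", leastRequested)

	// BalancedResourceAllocation: the closer the CPU and memory fractions, the higher the score.
	cpuFraction := cpuRequested / cpuCapacity             // 0.5
	memFraction := memRequested / memCapacity             // 0.25
	balanced := 10 - math.Abs(cpuFraction-memFraction)*10 // 7.5
	fmt.Printf("BalancedResourceAllocation: %.2f\n", balanced)
}
```

On these numbers the node scores 6.25 for LeastRequestedPriority and 7.5 for BalancedResourceAllocation: it has plenty of free capacity, but its CPU is proportionally busier than its memory.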
The following is the code implementation of ImageLocalityPriority. Like the other Priorities Policies implementations, it matches the function prototype `type PriorityMapFunction func(pod *v1.Pod, meta interface{}, nodeInfo *schedulercache.NodeInfo) (schedulerapi.HostPriority, error)`:
```go
func ImageLocalityPriorityMap(pod *v1.Pod, meta interface{}, nodeInfo *schedulercache.NodeInfo) (schedulerapi.HostPriority, error) {
	node := nodeInfo.Node()
	if node == nil {
		return schedulerapi.HostPriority{}, fmt.Errorf("node not found")
	}

	var sumSize int64
	for i := range pod.Spec.Containers {
		sumSize += checkContainerImageOnNode(node, &pod.Spec.Containers[i])
	}
	return schedulerapi.HostPriority{
		Host:  node.Name,
		Score: calculateScoreFromSize(sumSize),
	}, nil
}

func calculateScoreFromSize(sumSize int64) int {
	var score int
	switch {
	case sumSize == 0 || sumSize < minImgSize:
		// score == 0 means none of the images required by this pod are present on this
		// node or the total size of the images present is too small to be taken into further consideration.
		score = 0
	case sumSize >= maxImgSize:
		// If existing images' total size is larger than max, just make it highest priority.
		score = 10
	default:
		score = int((10 * (sumSize - minImgSize) / (maxImgSize - minImgSize)) + 1)
	}
	// Return which bucket the given size belongs to
	return score
}
```
The score for each node is therefore computed as: score = int((10 * (sumSize - minImgSize) / (maxImgSize - minImgSize)) + 1)
Where minImgSize int64 = 23 * mb, maxImgSize int64 = 1000 * mb, and sumSize is the total size of the Pod's container images that are already present on the node.
It follows that the larger the total size of the Pod's required images already present on a node, the higher that node's score, and the more likely it is to be chosen as the target node.
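As a quick worked example (the 500 MB figure is hypothetical): with minImgSize = 23 MB and maxImgSize = 1000 MB, a Pod whose required images already present on the node total 500 MB scores int((10 * (500 - 23)) / (1000 - 23)) + 1 = int(4770 / 977) + 1 = 4 + 1 = 5, while a total of 1000 MB or more maps to the maximum score of 10.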
That concludes the walkthrough of "what is the use of Predicates Policies". Thank you for reading. If you want to learn more about the topic, you can follow this site; the editor will keep publishing high-quality practical articles for you!