Kubernetes ships with one of the most sophisticated schedulers, which handles Pod placement policies. Based on the resource requirements declared in the Pod specification, the Kubernetes scheduler automatically selects the most appropriate node to run each Pod.
In many real-world scenarios, however, we need to intervene in the scheduling process, either to match a Pod with a particular node or to co-locate (or separate) two specific Pods. Kubernetes therefore provides powerful mechanisms to manage and control Pod placement logic.
This article explores the key features that influence the default scheduling decisions in Kubernetes.
Node affinity / anti-affinity
Kubernetes has always relied on labels and selectors to group resources. For example, a Service uses a selector to filter Pods carrying specific labels, and only those Pods receive traffic. Labels and selectors can evaluate rules using simple equality-based conditions (= and !=). This technique extends to nodes through the nodeSelector field, which forces a Pod to be scheduled onto nodes carrying specific labels.
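A minimal sketch of nodeSelector in a Pod manifest (the disktype: ssd label and the nginx image are placeholders assumed for illustration, not taken from this article):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  # Only nodes labelled disktype=ssd are eligible for this Pod.
  nodeSelector:
    disktype: ssd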
In addition, labels and selectors support set-based queries, which bring advanced filtering based on the In, NotIn, and Exists operators. Combined with equality-based requirements, set-based requirements provide powerful techniques for filtering resources in Kubernetes.
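For example, a set-based label query can be issued with kubectl (the label keys environment and tier are hypothetical):
# List Pods whose environment label is production or qa and whose tier label is not frontend.
kubectl get pods -l 'environment in (production, qa), tier notin (frontend)'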
Node affinity / anti-affinity uses set-based filtering expressions over node labels to define the placement logic of Pods on specific nodes. Annotations, by contrast, hold additional metadata that is not exposed to selectors, which means annotation keys cannot be used to query and filter resources. Anti-affinity ensures that a Pod is not scheduled onto nodes that match the rules.
Beyond supporting complex logic in queries, node affinity / anti-affinity can impose hard or soft rules on the placement logic. A hard rule enforces a strict policy and may prevent a Pod from being scheduled onto nodes that do not meet the criteria. A soft rule first checks whether any node matches the specified conditions and, if none does, falls back to the default scheduling behaviour to place the Pod. The expressions requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution implement hard and soft rules, respectively.
The following are examples of node affinity / anti-affinity with soft and hard rules:
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: In
          values: ["asia-south2-a"]
The above rule instructs the Kubernetes scheduler to try to place the Pod on nodes running in the asia-south2-a zone of the GKE cluster. If no such nodes are available, the scheduler falls back to the standard placement logic.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "failure-domain.beta.kubernetes.io/zone"
          operator: NotIn
          values: ["asia-south2-a"]
The above rule enforces anti-affinity by using the NotIn operator. It is a hard rule that ensures the Pod is never scheduled onto GKE nodes running in the asia-south2-a zone.
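For context, a fragment like the one above goes under the Pod's spec.affinity field (or under spec.template.spec.affinity in a Deployment). A minimal sketch, assuming a placeholder nginx container:
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
  # The anti-affinity rule from above, embedded in a full Pod spec.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "failure-domain.beta.kubernetes.io/zone"
            operator: NotIn
            values: ["asia-south2-a"]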
Pod affinity / anti-affinity
While node affinity / anti-affinity handles the matching between Pods and nodes, there are scenarios where we need to ensure that certain Pods run together, or that two specific Pods never run on the same node. Pod affinity / anti-affinity helps us apply fine-grained placement logic.
Similar to the expressions in node affinity / anti-affinity, pod affinity / anti-affinity can enforce hard and soft rules through requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. Node affinity and pod affinity can also be mixed and matched to define complex placement logic.
To better understand the concept, imagine we have a web deployment and a cache deployment, each with three replicas, running in a 3-node cluster. To ensure low latency between the web and cache Pods, we want them to run on the same node. At the same time, we do not want more than one cache Pod to run on the same node. The policy we need to implement is: each node runs exactly one web Pod and exactly one cache Pod.
First, we deploy the cache with an anti-affinity rule that prevents more than one cache Pod from running on the same node:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - redis
      topologyKey: "kubernetes.io/hostname"
The topologyKey refers to a default label attached to every node (kubernetes.io/hostname), so the rule is evaluated per node name. Notice how we use a podAntiAffinity expression with the In operator to apply the rule.
Assuming the three cache Pods are scheduled, one on each node in the cluster, we now want to deploy the web Pods on the same nodes as the cache Pods. We will use podAffinity to implement this logic:
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - redis
    topologyKey: "kubernetes.io/hostname"
The above code instructs the Kubernetes scheduler to find nodes that are already running a cache Pod and deploy the web Pods there.
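The snippet above only co-locates web Pods with cache Pods. To satisfy the other half of the policy, one web Pod per node, the web deployment can also carry a podAntiAffinity rule keyed on its own label. A sketch, assuming the web Pods are labelled app: web:
affinity:
  # Run web Pods only on nodes that already host a cache (redis) Pod.
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - redis
      topologyKey: "kubernetes.io/hostname"
  # Never place two web Pods on the same node.
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - web
      topologyKey: "kubernetes.io/hostname"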
Beyond node and Pod affinity / anti-affinity, we can also use taints and tolerations to define custom placement logic. We can even write a custom scheduler that takes over the scheduling logic from the default scheduler.
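As a brief illustration of taints and tolerations (a minimal sketch; the node name node-1 and the key dedicated=cache are assumed placeholders): a node is tainted so that only Pods carrying a matching toleration can be scheduled onto it.
# Taint the node; Pods without a matching toleration will not be scheduled here.
kubectl taint nodes node-1 dedicated=cache:NoSchedule
A Pod that should still be allowed onto the tainted node declares a matching toleration in its spec:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "cache"
  effect: "NoSchedule"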