

Understanding Kubernetes Affinity, nodeSelector, Taints, and Tolerations


In most cases, the Kubernetes scheduler places Pods on appropriate nodes in the cluster on its own. Sometimes, however, users need more control over which node a Pod lands on: deploying a Pod to a node with SSD storage, spreading the replicas of one service across different racks for resilience, or placing services that communicate frequently in the same availability zone to shorten communication paths. All of this user control over Pod placement is based on label selectors.

NodeSelector (Node Selector)

nodeSelector is a label selector and is the simplest, most direct way to control which nodes a Pod can be deployed to; it is also what a DaemonSet uses to filter eligible nodes. A common example of its use follows.

Step 1: label the node

kubectl get nodes -o wide   # list all nodes and their details

1. kubectl label nodes daily-k8s-n01 node=node01
2. kubectl label nodes daily-k8s-n01 disktype=ssd

These commands add the label node=node01 to the node and mark its disk type as ssd, respectively.
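Step 2: reference the label in the Pod spec. A minimal sketch of this second step (the Pod name and image here are illustrative, not from the original):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd              # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.21          # illustrative image
  nodeSelector:
    disktype: ssd              # schedule only onto nodes labeled disktype=ssd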

nodeSelector can only control Pod placement based on node labels, and it supports only "AND" logic between labels. The affinity and anti-affinity features (still in beta when this was written) are more flexible and powerful than nodeSelector in three ways:

1. Beyond plain "AND" matching, they support richer operators (such as In, NotIn, and Exists).

2. nodeSelector is a hard requirement; affinity and anti-affinity support both soft (preferred) and hard (required) rules.

3. Importantly, beyond node labels, affinity and anti-affinity can select nodes based on the Pods already running there. For example, if you do not want two compute-intensive Pods on the same node, you can filter against the labels of Pods that are already deployed.

Affinity is subdivided into two kinds of selectors: "node affinity" and "inter-pod affinity/anti-affinity".

Node affinity is similar to nodeSelector and brings advantages 1 and 2 above. Inter-pod affinity relies on the labels of Pods already on a node rather than on node labels, so it combines all three advantages. Because node affinity does everything nodeSelector does and more, nodeSelector, although still available, is no longer actively developed and may be removed in the future.
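As a hedged sketch of the inter-pod anti-affinity case from point 3, assuming the compute-intensive Pods carry a label such as app=compute (the label key and value are illustrative):

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app                             # illustrative label key
            operator: In
            values:
            - compute
        topologyKey: kubernetes.io/hostname      # "same node" is the exclusion scope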

NodeAffinity (node affinity) is an attribute defined on a Pod that makes the Pod schedule onto the Nodes we require. Taints work in the opposite direction: they let a Node refuse to run Pods, or even evict Pods already running on it.
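A minimal nodeAffinity sketch, reusing the disktype=ssd and node=node01 labels from step 1, with one hard rule and one soft preference (the weight value is illustrative):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:  # soft preference
      - weight: 1                                       # illustrative weight (1-100)
        preference:
          matchExpressions:
          - key: node
            operator: In
            values:
            - node01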

A Taint is an attribute of a Node. Once a Taint is set, Kubernetes will not schedule Pods onto that Node because of the taint, so Kubernetes gives Pods a matching attribute, Tolerations. As long as a Pod tolerates the taints on a Node, Kubernetes ignores those taints and may (but is not guaranteed to) schedule the Pod there. Taints are therefore usually used together with Tolerations.

Set a taint:

kubectl taint node [node] key=value:[effect]

Available values for [effect]: NoSchedule | PreferNoSchedule | NoExecute

NoSchedule: new Pods must not be scheduled onto the Node.

PreferNoSchedule: the scheduler tries to avoid the Node, but scheduling there is not forbidden.

NoExecute: new Pods are not scheduled, and existing Pods on the Node that do not tolerate the taint are evicted.

For example, to set taints:

kubectl taint node 10.10.0.111 node=111:NoSchedule

kubectl taint node 10.10.0.111 node=111:NoExecute

Remove a taint with the specified key and effect:

kubectl taint nodes node_name key:[effect]-   # (the key here does not need a value)

Remove all effects of the specified key:

kubectl taint nodes node_name key-

Example:

kubectl taint node 10.10.0.111 node:NoSchedule-

kubectl taint node 10.10.0.111 node:NoExecute-

kubectl taint node 10.10.0.111 node-
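To confirm the result, the node's current taints can be inspected, for example:

kubectl describe node 10.10.0.111 | grep Taints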

When writing the tolerations attribute, the key, value, and effect must match the Taint set on the Node. Note the following:

If operator is Exists, the value property can be omitted.

If operator is Equal, the toleration's key and value must be equal to the taint's key and value.

If the operator attribute is not specified, it defaults to Equal.

In addition, there are two special cases:

An empty key with operator Exists matches every key and value, which means it can tolerate all taints on all Nodes.

An empty effect matches all effects.

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"

or:

spec:
  tolerations:             # set tolerations
  - key: "test"
    operator: "Equal"      # if operator is Exists, value can be omitted; if operator is not specified, it defaults to Equal
    value: "16"
    effect: "NoSchedule"   # this Pod tolerates Node taints whose key is test, whose value equals 16, and whose effect is NoSchedule
  # All values under the tolerations attribute must be quoted, and they must match the values given when the Node's taints were set.
  containers:
  - name: pod-tains
    image: 10.3.1.15:5000/ubuntu:16.04

With this understanding of Taints and Tolerations, we can see how they let certain applications monopolize a Node:

Set a Taint on a specific Node so that only certain applications tolerate it; once tolerated, those Pods may be scheduled onto that particular Node.

However, a toleration does not guarantee placement there: it does not stop the scheduler from placing the Pod on other Nodes. So how do we make a specific application's Pods schedulable only onto that specific Node? Combine the toleration with NodeAffinity (or label the node and use nodeSelector) and set it in the Pod spec. In this way, the requirement can be met.

Summary: to run a specific service on a dedicated node, taint the node to keep ordinary Pods from being scheduled there, then set a matching toleration on the target Pods so that no other Pods land on that node, and finally use nodeSelector or NodeAffinity to pin the Pods to that node.
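Putting this together, a minimal sketch of the dedicated-node pattern, reusing the taint and label values from this article's earlier examples:

# 1. Taint and label the dedicated node
kubectl taint node 10.10.0.111 node=111:NoSchedule
kubectl label nodes 10.10.0.111 node=node01

# 2. In the Pod spec: tolerate the taint and pin the Pod to the node
spec:
  tolerations:
  - key: "node"
    operator: "Equal"
    value: "111"
    effect: "NoSchedule"
  nodeSelector:
    node: node01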
