K8s scheduler


Node hard affinity: the Pod must be scheduled to a node that satisfies the rule; if no node satisfies it, the Pod stays in the Pending state.

Node soft affinity: the scheduler prefers nodes that satisfy the rule, but if none do, the Pod is still scheduled to a non-matching node.

Pod hard affinity and soft affinity work the same way, except the rules match the labels of Pods already running on a node rather than the labels of the node itself.
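All four rule types live under spec.affinity in the Pod template. A minimal skeleton (values elided) showing where the hard and soft variants sit; the field names are the actual Kubernetes API fields:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: ...    # hard: Pod stays Pending if no node matches
      preferredDuringSchedulingIgnoredDuringExecution: ...   # soft: a weighted preference, not a requirement
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: ...    # hard: matches labels of Pods already running
      preferredDuringSchedulingIgnoredDuringExecution: ...   # soft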

1. Node hard affinity (no node satisfies the rule)

[root@k8s01 yaml]# cat pod-affinity01.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-01
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: zone, operator: In, values: ["one"]}   # the node's zone label (key) must be one (value)
  containers:
  - name: pod-01
    image: nginx:latest
    imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity01.yaml

pod/pod-01 created

[root@k8s01 yaml]# kubectl get pods --show-labels

NAME READY STATUS RESTARTS AGE LABELS

pod-01   0/1   Pending   0   103s   # no node satisfies the Pod's node affinity rule

[root@k8s01 yaml]# kubectl describe pods pod-01

...

Events:

Type Reason Age From Message

Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.

Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.

[root@k8s01 yaml]#

- {key: zone, operator: In, values: ["one"]}          # zone is the key, In is the operator, one is the value
- {key: zone, operator: In, values: ["one", "two"]}   # In: the label value must be in the list
- {key: ssd, operator: Exists, values: []}            # Exists: the label key must exist (values stays empty)
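Besides In and Exists, node affinity matchExpressions also accept NotIn, DoesNotExist, Gt, and Lt. A short reference sketch; the label keys here are illustrative, not from the cluster above:

- {key: zone, operator: NotIn, values: ["three"]}   # NotIn: the label value must not be in the list
- {key: gpu, operator: DoesNotExist, values: []}    # DoesNotExist: the label key must be absent
- {key: cpus, operator: Gt, values: ["8"]}          # Gt/Lt: numeric comparison on the label value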

2. Node hard affinity (a node satisfies the rule)

[root@k8s01 yaml]# kubectl label node k8s02 zone=one   # create the zone=one label on the k8s02 node

node/k8s02 labeled

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

pod-01   1/1   Running   0   6m13s   10.244.1.37   k8s02

[root@k8s01 yaml]#
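To reset the experiment, the label could be removed with kubectl's trailing-dash idiom (this step is not part of the original session):

kubectl label node k8s02 zone-   # removes the zone label from k8s02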

3. Node soft affinity

[root@k8s01 yaml]# cat pod-affinity02.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-02
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 60   # scoring weight (1-100): nodes with zone=one score higher; not a percentage of Pods
            preference:
              matchExpressions:
              - {key: zone, operator: In, values: ["one"]}
      containers:
      - name: pod-02
        image: nginx:latest
        imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity02.yaml

deployment.apps/pod-02 created

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide   # Pods are created whether or not any node satisfies the preferred rule

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

pod-02-77d87986b-9bzg5   1/1   Running   0   16s   10.244.1.39   k8s02   app=myapp,pod-template-hash=77d87986b

pod-02-77d87986b-dckjq   1/1   Running   0   16s   10.244.2.42   k8s03   app=myapp,pod-template-hash=77d87986b

pod-02-77d87986b-z7v47   1/1   Running   0   16s   10.244.1.38   k8s02   app=myapp,pod-template-hash=77d87986b

[root@k8s01 yaml]#

4. Pod hard affinity (no existing Pod matches the rule)

[root@k8s01 yaml]# kubectl get pods --show-labels

NAME READY STATUS RESTARTS AGE LABELS

nginx   1/1   Running   0   4s   app=web

[root@k8s01 yaml]# cat pod-affinity03.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-01
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["web1"]}   # no running Pod carries the label app=web1
        topologyKey: kubernetes.io/hostname
  containers:
  - name: pod-01
    image: nginx:latest
    imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity03.yaml

pod/pod-01 created

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

nginx    1/1   Running   0   8m9s   10.244.1.42   k8s02   app=web

pod-01   0/1   Pending   0   28s   # stays Pending: no running Pod carries app=web1

[root@k8s01 yaml]#

5. Pod hard affinity (an existing Pod matches the rule)

[root@k8s01 yaml]# kubectl get pods --show-labels   # view the Pod labels

NAME READY STATUS RESTARTS AGE LABELS

nginx   1/1   Running   0   4s   app=web

[root@k8s01 yaml]# cat pod-affinity04.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-01
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["web"]}   # the nginx Pod carries app=web
        topologyKey: kubernetes.io/hostname
  containers:
  - name: pod-01
    image: nginx:latest
    imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity04.yaml

pod/pod-01 created

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide   # the new Pod is placed with the Pod labeled app=web

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

nginx    1/1   Running   0   4m14s   10.244.1.42   k8s02   app=web

pod-01   1/1   Running   0   17s     10.244.1.43   k8s02

[root@k8s01 yaml]#
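The topologyKey decides what "together" means: with kubernetes.io/hostname the new Pod must land on the same node as a matching Pod, while with a zone label it only needs the same zone. A hedged variant of the manifest above, assuming the nodes carry a zone label:

    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["web"]}
        topologyKey: zone   # any node in the same zone as an app=web Pod qualifies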

6. Pod soft affinity

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

nginx   1/1   Running   0   14m   10.244.1.42   k8s02   app=web

[root@k8s01 yaml]# cat pod-affinity04.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-02
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 60
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["web12"]}   # no running Pod carries app=web12
              topologyKey: zone
      containers:
      - name: pod-02
        image: nginx:latest
        imagePullPolicy: Never

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide   # with soft Pod affinity the Pods are created even though no Pod matches

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

nginx                    1/1   Running   0   16m   10.244.1.42   k8s02   app=web

pod-02-6f96fbfdf-52gbf   1/1   Running   0   11s   10.244.2.43   k8s03   app=myapp,pod-template-hash=6f96fbfdf

pod-02-6f96fbfdf-dl5z5   1/1   Running   0   11s   10.244.1.44   k8s02   app=myapp,pod-template-hash=6f96fbfdf

pod-02-6f96fbfdf-f8bzn   1/1   Running   0   11s   10.244.0.55   k8s01   app=myapp,pod-template-hash=6f96fbfdf

[root@k8s01 yaml]#

7. Taints and tolerations

NoSchedule: new Pods that do not tolerate the taint cannot be scheduled to the node; this is enforced.

PreferNoSchedule: the scheduler tries not to place new Pods on the node, but will still use it if no other node fits; this is a soft rule.

NoExecute: new Pods that do not tolerate the taint cannot be scheduled to the node, and Pods already running on it are evicted; this is enforced.
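A Pod opts in to a tainted node through spec.tolerations. A minimal sketch that would tolerate the node-type=production:NoSchedule taint applied below; the Pod name is illustrative, not from the session:

apiVersion: v1
kind: Pod
metadata:
  name: pod-tolerant   # illustrative name
spec:
  tolerations:
  - key: node-type
    operator: Equal    # Equal matches key=value; Exists matches any value of the key
    value: production
    effect: NoSchedule
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: Never

For NoExecute taints, an optional tolerationSeconds field bounds how long an already-running Pod may stay before it is evicted.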

[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i taints   # view the node's taints

Taints:             <none>

[root@k8s01 yaml]# kubectl taint node k8s02 node-type=production:NoSchedule   # taint the k8s02 node

node/k8s02 tainted

[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i taints

Taints: node-type=production:NoSchedule

[root@k8s01 yaml]# cat pod-affinity05.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-02
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: pod-02
        image: nginx:latest
        imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity05.yaml

deployment.apps/pod-02 created

[root@k8s01 yaml]# kubectl get pods -o wide --show-labels   # k8s02 is tainted and the Pods carry no toleration, so none land there

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

pod-02-5c54dc6489-j7tsh   1/1   Running   0   29s   10.244.0.56   k8s01   app=myapp,pod-template-hash=5c54dc6489

pod-02-5c54dc6489-sk5dm   1/1   Running   0   29s   10.244.2.44   k8s03   app=myapp,pod-template-hash=5c54dc6489

pod-02-5c54dc6489-sn5wd   1/1   Running   0   29s   10.244.2.45   k8s03   app=myapp,pod-template-hash=5c54dc6489

[root@k8s01 yaml]#

8. Removing taints

[root@k8s01 yaml]# kubectl taint node k8s02 node-type:NoSchedule-   # the trailing dash removes the NoSchedule taint

node/k8s02 untainted

[root@k8s01 yaml]# kubectl taint node k8s02 node-type=production:PreferNoSchedule   # add a PreferNoSchedule taint

node/k8s02 tainted

[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i node-type

Taints: node-type=production:PreferNoSchedule

[root@k8s01 yaml]# kubectl taint node k8s02 node-type:PreferNoSchedule-   # remove the PreferNoSchedule taint

node/k8s02 untainted

[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i node-type

[root@k8s01 yaml]# kubectl taint node k8s02 node-type-   # the bare key with a trailing dash removes all taints with that key

node/k8s02 untainted

[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i node-type

[root@k8s01 yaml]#

9. Evicting Pods from a node (cordon and drain)

[root@k8s01 yaml]# kubectl cordon k8s03   # new Pods can no longer be scheduled to k8s03; Pods already running there are unaffected

node/k8s03 cordoned

[root@k8s01 yaml]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s01   Ready    master   71d   v1.16.0

k8s02   Ready    <none>   70d   v1.16.0

k8s03   Ready,SchedulingDisabled   <none>   30d   v1.16.0

[root@k8s01 yaml]# kubectl get pods -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

pod-02-5c54dc6489-442kx   1/1   Running   0   12m   10.244.2.48   k8s03

pod-02-5c54dc6489-92l8m   1/1   Running   0   12m   10.244.2.49   k8s03

pod-02-5c54dc6489-k4bc7   1/1   Running   0   12m   10.244.0.58   k8s01

[root@k8s01 yaml]# kubectl uncordon k8s03   # re-enable scheduling

node/k8s03 uncordoned

[root@k8s01 yaml]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s01   Ready    master   71d   v1.16.0

k8s02   Ready    <none>   70d   v1.16.0

k8s03   Ready    <none>   30d   v1.16.0

[root@k8s01 yaml]# kubectl drain k8s03 --ignore-daemonsets   # cordon k8s03 and evict its Pods

node/k8s03 already cordoned

WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-cg795, kube-system/kube-proxy-h5dkf

evicting pod "pod-02-5c54dc6489-92l8m"

evicting pod "pod-02-5c54dc6489-442kx"

pod/pod-02-5c54dc6489-92l8m evicted

pod/pod-02-5c54dc6489-442kx evicted

node/k8s03 evicted

[root@k8s01 yaml] # kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s01   Ready    master   71d   v1.16.0

k8s02   Ready    <none>   70d   v1.16.0

k8s03   Ready,SchedulingDisabled   <none>   30d   v1.16.0

[root@k8s01 yaml]# kubectl get pods -o wide   # all Pods that were on k8s03 have moved to other nodes

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

pod-02-5c54dc6489-k4bc7   1/1   Running   0   14m   10.244.0.58   k8s01

pod-02-5c54dc6489-mxk46   1/1   Running   0   25s   10.244.1.46   k8s02

pod-02-5c54dc6489-vmb8l   1/1   Running   0   25s   10.244.1.45   k8s02

[root@k8s01 yaml]# kubectl uncordon k8s03   # resume scheduling

node/k8s03 uncordoned

[root@k8s01 yaml]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s01   Ready    master   71d   v1.16.0

k8s02   Ready    <none>   70d   v1.16.0

k8s03   Ready    <none>   30d   v1.16.0

[root@k8s01 yaml]#
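drain is cordon plus eviction in one step; by default it refuses to evict Pods that use emptyDir volumes or that are not managed by a controller. A hedged sketch of a fuller invocation; these flags exist in kubectl v1.16:

kubectl drain k8s03 --ignore-daemonsets --delete-local-data --force --grace-period=30
# --delete-local-data   also evict Pods using emptyDir volumes (their local data is lost)
# --force               evict Pods not managed by a ReplicaSet, Deployment, or similar controller
# --grace-period=30     give each Pod 30 seconds to shut down cleanly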
