About the author: Liu Haiping (HappyLau) is a senior cloud computing consultant currently working on public cloud at Tencent Cloud. He previously worked at KuGou and EasyStack, has many years of experience in public and private cloud architecture design, operations, and delivery, and took part in building large private cloud platforms for KuGou, China Southern Power Grid, Guotai Junan, and others. He is proficient in open source technologies such as Linux, Kubernetes, OpenStack, and Ceph, has rich hands-on experience in cloud computing, and has RHCA/OpenStack/Linux teaching experience.
Written at the front
The previous article, kubernetes series tutorials (6) - kubernetes resource management and quality of service, covered resource scheduling and quality of service (QoS) in kubernetes: how to define a pod's resources, how those resources drive scheduling, and the QoS priority that results from the resource settings. This article continues the series with the pod scheduling mechanism.
1. Pod scheduling

1.1 Overview of pod scheduling
Kubernetes is a container orchestration engine, and one of its most important functions is container scheduling, which kube-scheduler performs fully automatically. A scheduling attempt is divided into a scheduling cycle (Scheduling Cycle) and a binding cycle (Binding Cycle). The scheduling cycle is further subdivided into filtering and scoring: nodes capable of running the pod are selected according to the configured scheduling policies and then ranked. In the binding cycle, once kube-scheduler has chosen a node for the pod, the kubelet on that node watches for the assignment and runs the pod.
The scheduling cycle therefore consists of pre-selection (Predicate) and scoring/ranking (Priority): pre-selection filters out the nodes that meet the pod's requirements, and scoring ranks the remaining nodes to find the best fit. The pre-selection (predicate) algorithms include:
CheckNodeConditionPred: whether the node is Ready
MemoryPressure: whether the node is under memory pressure (i.e. whether memory is sufficient)
DiskPressure: whether the node is under disk pressure (i.e. whether disk space is sufficient)
PIDPressure: whether the node is under PID pressure (i.e. whether enough PIDs are available)
GeneralPred: matches the pod.spec.hostname field
MatchNodeSelector: matches the pod.spec.nodeSelector labels
PodFitsResources: whether the node can satisfy the resources defined in the pod's resource requests
PodToleratesNodeTaints: whether the pod can tolerate the node's taints (pod.spec.tolerations)
CheckNodeLabelPresence
CheckServiceAffinity
CheckVolumeBinding
NoVolumeZoneConflict
The conditions checked by these predicates are reported on each node and can be viewed with kubectl describe node node-id, as shown below:
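The following is a representative excerpt of the Conditions section reported by kubectl describe node on a healthy node; the original article's screenshot is not available, so the timestamp columns are omitted and the values shown here are illustrative:

[root@node-1 ~]# kubectl describe node node-2
...
Conditions:
  Type             Status   Reason                       Message
  ----             ------   ------                       -------
  MemoryPressure   False    KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False    KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False    KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True     KubeletReady                 kubelet is posting ready status
...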
The scoring (priority) algorithms include:
least_requested: favors the node with the least resource consumption
balanced_resource_allocation: favors the node whose resource consumption is most evenly balanced
node_prefer_avoid_pods: honors a node's preference to avoid certain pods
taint_toleration: taint check; the more taints on a node that the pod cannot tolerate, the lower the node's score
selector_spreading: spreads pods belonging to the same selector (service or controller) across nodes
interpod_affinity: walks the nodes according to pod affinity rules
most_requested: favors the node with the most resource consumption
node_label: scores nodes according to their labels

1.2 Scheduling by specifying nodeName
nodeName is a field in PodSpec. By setting pod.spec.nodeName you can assign a pod to a specific node. The field is special and is normally empty; if nodeName is set, kube-scheduler skips scheduling entirely and the kubelet on the named node starts the pod directly. Scheduling via nodeName is not intelligent, cluster-wide scheduling, and pinning pods to nodes this way may lead to uneven resource usage; it is recommended to give such pods the Guaranteed QoS class to prevent them from being evicted when resources become unbalanced. Take creating a pod that runs on node-3 as an example:
Write a yaml that pins the pod to the node-3 node:

[root@node-1 demo]# cat nginx-nodeName.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-run-on-nodename
  annotations:
    kubernetes.io/description: "Running the Pod on specific nodeName"
spec:
  containers:
  - name: nginx-run-on-nodename
    image: nginx:latest
    ports:
    - name: http-80-port
      protocol: TCP
      containerPort: 80
  nodeName: node-3    # run nginx-run-on-nodename on the specific node node-3 via nodeName

Apply the yaml configuration to make it take effect:

[root@node-1 demo]# kubectl apply -f nginx-nodeName.yaml
pod/nginx-run-on-nodename created

Check and confirm that the pod is running on the node-3 node:

[root@node-1 demo]# kubectl get pods nginx-run-on-nodename -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
nginx-run-on-nodename   1/1     Running   0          6m52s   10.244.2.15   node-3   <none>           <none>

1.3 Scheduling through nodeSelector

nodeSelector is a field in PodSpec and the simplest way to run a pod on a specific node. It works through key/value pairs: the node must carry a matching set of labels, and the pod specifies the labels it requires when it is scheduled. Add the label app:web to node-2 as follows, then select that label via nodeSelector when scheduling the pod.

Add the label to node-2:

[root@node-1 demo]# kubectl label node node-2 app=web
node/node-2 labeled

Check the label settings; node-2 now carries app=web:

[root@node-1 demo]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE   VERSION   LABELS
node-1   Ready    master   15d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
node-2   Ready    <none>   15d   v1.15.3   app=web,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2,kubernetes.io/os=linux
node-3   Ready    <none>   15d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-3,kubernetes.io/os=linux

Schedule the pod to the node carrying that label:

[root@node-1 demo]# cat nginx-nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-run-on-nodeselector
  annotations:
    kubernetes.io/description: "Running the Pod on specific node by nodeSelector"
spec:
  containers:
  - name: nginx-run-on-nodeselector
    image: nginx:latest
    ports:
    - name: http-80-port
      protocol: TCP
      containerPort: 80
  nodeSelector:    # schedule the pod to nodes carrying the label app: web via nodeSelector
    app: web

Apply the yaml file to create the pod:

[root@node-1 demo]# kubectl apply -f nginx-nodeselector.yaml
pod/nginx-run-on-nodeselector created

Check and verify that the pod is running on the node-2 node:

[root@node-1 demo]# kubectl get pods nginx-run-on-nodeselector -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx-run-on-nodeselector   1/1     Running   0          51s   10.244.1.24   node-2   <none>           <none>
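The two examples above pin standalone pods. In practice, nodeName and nodeSelector usually appear inside a controller's pod template, and, per the earlier recommendation for pinned pods, setting requests equal to limits gives the pod the Guaranteed QoS class. The following is a minimal sketch under those assumptions; the Deployment name and the resource figures are illustrative and not from the original article:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web                # hypothetical name, for illustration only
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      nodeSelector:              # same mechanism as the bare-pod example: only nodes labeled app=web qualify
        app: web
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        resources:               # requests == limits => Guaranteed QoS, as recommended for pinned pods
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 100m
            memory: 128Mi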
By default, the system pre-defines a number of built-in labels on every node that identify node attributes such as the CPU architecture, operating system, and hostname (a brief example of using one of these labels follows the list below):
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=node-3
kubernetes.io/os=linux
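These built-in labels can be used for scheduling in the same way as custom ones. For example, a nodeSelector on kubernetes.io/hostname pins a pod to a single node without setting pod.spec.nodeName; the pod name below is hypothetical and only for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-hostname            # hypothetical name, for illustration only
spec:
  nodeSelector:
    kubernetes.io/hostname: node-3   # built-in label: restricts the pod to the node whose hostname label is node-3
  containers:
  - name: nginx
    image: nginx:latest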
1.4 Node affinity and anti-affinity

Affinity/anti-affinity is similar to nodeSelector, but its expressions are much richer, and it is expected to eventually replace nodeSelector. Affinity adds the following enhancements:
The expressions are richer: matching operators such as In, NotIn, Exists, DoesNotExist, Gt and Lt are supported.
Both hard and soft rules can be specified: requiredDuringSchedulingIgnoredDuringExecution expresses the conditions that must be satisfied, while preferredDuringSchedulingIgnoredDuringExecution expresses the preferred conditions.
Two levels of affinity and anti-affinity are provided: node affinity, which matches labels on nodes, and inter-pod affinity/anti-affinity, which matches labels on pods; the two operate at different scopes (a brief sketch of inter-pod anti-affinity follows this list).
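Node affinity is demonstrated with a full example below; inter-pod affinity/anti-affinity, which matches labels on pods rather than on nodes, is only mentioned in this article, so here is a minimal illustrative sketch (the names are hypothetical and not from the original article) that keeps pods carrying the label app=nginx-web off nodes that already run one:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-anti-affinity      # hypothetical name, for illustration only
  labels:
    app: nginx-web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["nginx-web"]
        topologyKey: kubernetes.io/hostname   # avoid nodes that already run a pod labeled app=nginx-web
  containers:
  - name: nginx
    image: nginx:latest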
Here is an example demonstrating node affinity: requiredDuringSchedulingIgnoredDuringExecution specifies the conditions that must be met, preferredDuringSchedulingIgnoredDuringExecution specifies the preferred conditions, and the required conditions are evaluated first, with the preferred conditions used to rank the nodes that remain.
Query the node labels; each node carries several labels by default, for example kubernetes.io/hostname:

[root@node-1 ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE   VERSION   LABELS
node-1   Ready    master   15d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
node-2   Ready    <none>   15d   v1.15.3   app=web,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2,kubernetes.io/os=linux
node-3   Ready    <none>   15d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-3,kubernetes.io/os=linux

Implement scheduling through node affinity: requiredDuringSchedulingIgnoredDuringExecution specifies the condition that must be satisfied (kubernetes.io/hostname must match one of the listed nodes), and preferredDuringSchedulingIgnoredDuringExecution specifies the preferred condition (the label app=web):

[root@node-1 demo]# cat nginx-node-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-run-node-affinity
  annotations:
    kubernetes.io/description: "Running the Pod on specific node by node affinity"
spec:
  containers:
  - name: nginx-run-node-affinity
    image: nginx:latest
    ports:
    - name: http-80-port
      protocol: TCP
      containerPort: 80
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - node-1
            - node-2
            - node-3
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: app
            operator: In
            values: ["web"]

Apply the yaml file to create the pod:

[root@node-1 demo]# kubectl apply -f nginx-node-affinity.yaml
pod/nginx-run-node-affinity created

Confirm which node the pod landed on; the node that satisfies both the required and preferred conditions is node-2:

[root@node-1 demo]# kubectl get pods --show-labels nginx-run-node-affinity -o wide
NAME                      READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES   LABELS
nginx-run-node-affinity   1/1     Running   0          106s   10.244.1.25   node-2   <none>           <none>            <none>

Written at the end
This article introduced the scheduling mechanism in kubernetes. By default, pod creation uses the fully automatic scheduling mechanism implemented by kube-scheduler, whose work is divided into two phases: the scheduling phase (filtering and scoring) and the binding phase (running the pod on the selected node). There are four ways to intervene in scheduling:
Specifying nodeName
Using nodeSelector
Node affinity and anti-affinity
Pod affinity and anti-affinity

Appendix
Introduction to the scheduling framework: https://kubernetes.io/docs/concepts/configuration/scheduling-framework/
Pod scheduling method: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
When your talent can't support your ambition, you should calm down and study.
Return to the kubernetes series tutorial directory