K8s practice (14): Pod eviction migration and Node node maintenance

2025-04-14 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Environment description:

Hostname   OS version        IP             Docker version   kubelet version   Configuration   Remarks
master     CentOS 7.6.1810   172.27.9.131   Docker 18.09.6   v1.14.2          2C2G            master host
node01     CentOS 7.6.1810   172.27.9.135   Docker 18.09.6   v1.14.2          2C2G            node
node02     CentOS 7.6.1810   172.27.9.136   Docker 18.09.6   v1.14.2          2C2G            node

For more information on k8s cluster deployment, please see Centos7.6 deployment k8s (v1.14.2) cluster.

For more information on K8s learning materials, see: basic concepts, kubectl commands and data sharing.

For more information on emptyDir, please see: storage volumes and data persistence (Volumes and Persistent Storage)

For more information on k8s high availability cluster deployment, please see Centos7.6 deployment k8s v1.16.4 high availability cluster (active / standby mode)

I. Background

When a node needs patching or an operating-system upgrade, it must be taken out of service for maintenance, which involves evicting and migrating its pods. This article walks through the whole node-maintenance process in detail.

II. Brief introduction to PDB

PDB is short for PodDisruptionBudget, which protects applications against voluntary disruptions such as evictions. Without a PDB, draining a node that hosts several pods of one service can interrupt or degrade that service. For example, suppose a service has five pods and needs at least three to maintain quality of service; if four of its pods sit on node01 and node01 is taken down for maintenance, only one pod keeps serving while the other four are migrated, and responses slow down. A PDB guarantees that no fewer than a specified number of the application's pods stay running during node maintenance, preserving quality of service.

III. Create the test deployment

1. New pod

```
[root@master ~]# more nginx-master.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-master
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always
      containers:
      - name: nginx
        image: nginx:latest
[root@master ~]# kubectl apply -f nginx-master.yml
deployment.extensions/nginx-master created
[root@master ~]# kubectl get po -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE
nginx-master-9d4cf4f77-47vfj   1/1     Running   0          28s   10.244.0.129   master
nginx-master-9d4cf4f77-69jn6   1/1     Running   0          28s   10.244.2.206   node02
nginx-master-9d4cf4f77-6drhg   1/1     Running   0          28s   10.244.1.218   node01
nginx-master-9d4cf4f77-b7zfd   1/1     Running   0          28s   10.244.1.219   node01
nginx-master-9d4cf4f77-fxsjd   1/1     Running   0          28s   10.244.2.204   node02
nginx-master-9d4cf4f77-ktnvk   1/1     Running   0          28s   10.244.0.128   master
nginx-master-9d4cf4f77-mzrx7   1/1     Running   0          28s   10.244.1.217   node01
nginx-master-9d4cf4f77-pcznk   1/1     Running   0          28s   10.244.2.203   node02
nginx-master-9d4cf4f77-px98b   1/1     Running   0          28s   10.244.2.205   node02
nginx-master-9d4cf4f77-wtcwt   1/1     Running   0          28s   10.244.1.220   node01
```

A deployment nginx-master is created with the latest nginx image and 10 replicas. The 10 pods are distributed across the three hosts master, node01 and node02.

2. New pdb

```
[root@master ~]# more pdb-nginx.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: pdb-nginx
spec:
  minAvailable: 9
  selector:
    matchLabels:
      app: nginx
[root@master ~]# kubectl apply -f pdb-nginx.yaml
poddisruptionbudget.policy/pdb-nginx created
[root@master ~]# kubectl get pdb
NAME        MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
pdb-nginx   9               N/A               1                     18s
```

The new PDB pdb-nginx uses the same label selector app: nginx as the deployment; minAvailable: 9 means at least 9 nginx pods must remain available.
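The arithmetic behind this PDB is simple: allowed disruptions equals healthy pods minus minAvailable. A minimal sketch in Python (the function name is illustrative, not part of any Kubernetes client API):

```python
def allowed_disruptions(healthy_pods: int, min_available: int) -> int:
    """How many voluntary evictions a minAvailable-style PDB permits right now."""
    return max(0, healthy_pods - min_available)

# With 10 nginx pods and minAvailable: 9, one eviction is allowed at a time;
# while a pod is down, further evictions are blocked until it is rescheduled.
print(allowed_disruptions(10, 9))  # 1
print(allowed_disruptions(9, 9))   # 0
```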

IV. Node maintenance

This section takes the maintenance of node02 as an example.

1. Mark the node unschedulable

```
[root@master ~]# kubectl cordon node02
node/node02 cordoned
[root@master ~]# kubectl get node
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready                      master   184d   v1.14.2
node01   Ready                      <none>   183d   v1.14.2
node02   Ready,SchedulingDisabled   <none>   182d   v1.14.2
[root@master ~]# kubectl get po -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE
nginx-master-9d4cf4f77-47vfj   1/1     Running   0          30m   10.244.0.129   master
nginx-master-9d4cf4f77-69jn6   1/1     Running   0          30m   10.244.2.206   node02
nginx-master-9d4cf4f77-6drhg   1/1     Running   0          30m   10.244.1.218   node01
nginx-master-9d4cf4f77-b7zfd   1/1     Running   0          30m   10.244.1.219   node01
nginx-master-9d4cf4f77-fxsjd   1/1     Running   0          30m   10.244.2.204   node02
nginx-master-9d4cf4f77-ktnvk   1/1     Running   0          30m   10.244.0.128   master
nginx-master-9d4cf4f77-mzrx7   1/1     Running   0          30m   10.244.1.217   node01
nginx-master-9d4cf4f77-pcznk   1/1     Running   0          30m   10.244.2.203   node02
nginx-master-9d4cf4f77-px98b   1/1     Running   0          30m   10.244.2.205   node02
nginx-master-9d4cf4f77-wtcwt   1/1     Running   0          30m   10.244.1.220   node01
```

node02 is now unschedulable. Checking node status shows node02 as SchedulingDisabled: the scheduler will not place new pods on it, but the pods already on node02 keep running normally.
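Cordoning only flips the node's spec.unschedulable flag: running pods stay put, and only the placement of new pods is affected. A toy filter under that assumption (not the real scheduler, just an illustration):

```python
# Node state after `kubectl cordon node02`: only the unschedulable flag changed.
nodes = {
    "master": {"unschedulable": False},
    "node01": {"unschedulable": False},
    "node02": {"unschedulable": True},
}

def schedulable_nodes(nodes: dict) -> list:
    """New pods may only land on nodes not marked unschedulable."""
    return sorted(n for n, spec in nodes.items() if not spec["unschedulable"])

print(schedulable_nodes(nodes))  # ['master', 'node01']
```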

2. Evict the pods

```
[root@master ~]# kubectl drain node02 --delete-local-data --ignore-daemonsets --force
node/node02 already cordoned
```

Parameter description:

--delete-local-data: delete the pod even if it uses emptyDir (the local data is lost).
--ignore-daemonsets: skip pods managed by a DaemonSet controller; without this flag drain refuses, because a DaemonSet pod deleted from the node would immediately be recreated on it, producing an endless loop.
--force: also delete "naked" pods not bound to any controller; without this flag drain only deletes pods created by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job.
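The flag behaviour described above can be sketched as a per-pod decision function (a hypothetical helper mimicking drain's logic, not the real kubectl implementation):

```python
def drain_decision(pod: dict, delete_local_data=False, ignore_daemonsets=False, force=False) -> str:
    """Return 'evict', 'skip', or 'error' for one pod, mimicking kubectl drain's flags."""
    if pod.get("owner") == "DaemonSet":
        # DaemonSet pods would be recreated on the node immediately,
        # so drain refuses unless told to leave them alone.
        return "skip" if ignore_daemonsets else "error"
    if pod.get("owner") is None and not force:
        # A "naked" pod is not recreated elsewhere; deleting it loses it for good.
        return "error"
    if pod.get("emptyDir") and not delete_local_data:
        # emptyDir data is lost on eviction, so explicit confirmation is required.
        return "error"
    return "evict"

pods = [
    {"name": "nginx",   "owner": "ReplicaSet"},
    {"name": "fluentd", "owner": "DaemonSet"},
    {"name": "debug",   "owner": None, "emptyDir": True},
]
for p in pods:
    print(p["name"], drain_decision(p, delete_local_data=True, ignore_daemonsets=True, force=True))
# nginx evict / fluentd skip / debug evict
```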

You can see that only one pod migrates at a time, so there are always 9 pods providing service.

Migrate pod nginx-master-9d4cf4f77-pcznk to node01

Migrate pod nginx-master-9d4cf4f77-px98b to master, after the previous pod nginx-master-9d4cf4f77-pcznk has finished migrating.

Migrate pod nginx-master-9d4cf4f77-69jn6 to master

Migrate pod nginx-master-9d4cf4f77-fxsjd to master

This again verifies that only one pod migrates at a time and that nine nginx pods are always serving.
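The behaviour just observed, one eviction at a time while the PDB holds, can be reproduced with a small simulation (pod names taken from the output above; the wait-for-replacement step is reduced to a comment):

```python
def drain_node(pods_by_node: dict, node: str, min_available: int, total: int) -> list:
    """Evict pods from `node` one at a time, never letting running pods drop below min_available."""
    evictions = []
    while pods_by_node[node]:
        running = total - 1  # one pod is down while its replacement starts
        if running < min_available:
            break  # the PDB would be violated; drain waits (omitted in this sketch)
        evictions.append(pods_by_node[node].pop())
        # the replacement is scheduled on another node before the next eviction
    return evictions

pods = {"node02": ["pcznk", "px98b", "69jn6", "fxsjd"]}
print(drain_node(pods, "node02", min_available=9, total=10))
# all four pods are evicted, one by one
```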

3. End of maintenance

```
[root@master ~]# kubectl uncordon node02
node/node02 uncordoned
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   184d   v1.14.2
node01   Ready    <none>   183d   v1.14.2
node02   Ready    <none>   183d   v1.14.2
```

After maintenance ends, node02 is schedulable again.

V. Move the pods back

There is no built-in way to move pods back. Here we delete the pods and let the scheduler recreate them on node02.

```
[root@master ~]# kubectl get po -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE
nginx-master-9d4cf4f77-2vnvk   1/1     Running   0          33m   10.244.1.222   node01
nginx-master-9d4cf4f77-47vfj   1/1     Running   0          73m   10.244.0.129   master
nginx-master-9d4cf4f77-6drhg   1/1     Running   0          73m   10.244.1.218   node01
nginx-master-9d4cf4f77-7n7pt   1/1     Running   0          32m   10.244.0.131   master
nginx-master-9d4cf4f77-b7zfd   1/1     Running   0          73m   10.244.1.219   node01
nginx-master-9d4cf4f77-ktnvk   1/1     Running   0          73m   10.244.0.128   master
nginx-master-9d4cf4f77-mzrx7   1/1     Running   0          73m   10.244.1.217   node01
nginx-master-9d4cf4f77-pdkst   1/1     Running   0          32m   10.244.0.130   master
nginx-master-9d4cf4f77-pskmp   1/1     Running   0          32m   10.244.0.132   master
nginx-master-9d4cf4f77-wtcwt   1/1     Running   0          73m   10.244.1.220   node01
[root@master ~]# kubectl delete po nginx-master-9d4cf4f77-47vfj
pod "nginx-master-9d4cf4f77-47vfj" deleted
[root@master ~]# kubectl delete po nginx-master-9d4cf4f77-2vnvk
pod "nginx-master-9d4cf4f77-2vnvk" deleted
[root@master ~]# kubectl get po -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE
nginx-master-9d4cf4f77-6drhg   1/1     Running   0          76m   10.244.1.218   node01
nginx-master-9d4cf4f77-7n7pt   1/1     Running   0          35m   10.244.0.131   master
nginx-master-9d4cf4f77-b7zfd   1/1     Running   0          76m   10.244.1.219   node01
nginx-master-9d4cf4f77-f92hp   1/1     Running   0          44s   10.244.2.207   node02
nginx-master-9d4cf4f77-ktnvk   1/1     Running   0          76m   10.244.0.128   master
nginx-master-9d4cf4f77-mzrx7   1/1     Running   0          76m   10.244.1.217   node01
nginx-master-9d4cf4f77-pdkst   1/1     Running   0          35m   10.244.0.130   master
nginx-master-9d4cf4f77-pskmp   1/1     Running   0          35m   10.244.0.132   master
nginx-master-9d4cf4f77-tdghn   1/1     Running   0          15s   10.244.2.208   node02
nginx-master-9d4cf4f77-wtcwt   1/1     Running   0          76m   10.244.1.220   node01
```

During a low-traffic window, delete pods nginx-master-9d4cf4f77-47vfj and nginx-master-9d4cf4f77-2vnvk. Since node02 was drained earlier, it now has the lowest resource utilization, so the scheduler places the recreated pods on node02, completing the migration back.
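The move-back relies on the scheduler preferring the least-loaded node. A minimal sketch of that preference, with made-up utilization figures:

```python
def pick_node(utilization: dict) -> str:
    """The scheduler's spreading preference, reduced to 'pick the least-loaded node'."""
    return min(utilization, key=utilization.get)

# node02 was just drained, so it carries none of this deployment's pods.
util = {"master": 0.6, "node01": 0.7, "node02": 0.1}
print(pick_node(util))  # node02
```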

VI. Delete a node

1. Delete the node

A node may need to be deleted during day-to-day operations. This section takes node02 as an example of how to delete a node.

```
[root@master ~]# kubectl cordon node02
[root@master ~]# kubectl drain node02 --delete-local-data --ignore-daemonsets --force
[root@master ~]# kubectl delete node node02
```

```
[root@node02 ~]# kubeadm reset
```

2. Rejoin the node

Run on the master node

```
[root@master ~]# kubeadm token create --print-join-command
kubeadm join 172.27.9.131:6443 --token kpz40z.tuxb4t4m1q37vwl1 --discovery-token-ca-cert-hash sha256:5f656ae26b5e7d4641a979cbfdffeb7845cc5962bbfcd1d5435f00a25c02ea50
```

Node02 rejoins the cluster

```
[root@node02 ~]# kubeadm join 172.27.9.131:6443 --token svrip0.lajrfl4jgal0ul6i --discovery-token-ca-cert-hash sha256:5f656ae26b5e7d4641a979cbfdffeb7845cc5962bbfcd1d5435f00a25c02ea50
```

View node

All scripts and configuration files for this article have been uploaded: Pode Eviction and Node Manage
