2025-02-27 Update From: SLTechnology News&Howtos shulou
Shulou(Shulou.com)06/02 Report--
I. The Relationship Among Deployment, ReplicaSet, and Pod
Let's continue from the previous article; if anything is unclear, please check the previous blog post: https://blog.51cto.com/wzlinux/2322616
As we learned earlier, Kubernetes manages the lifecycle of Pods through a variety of Controllers. To meet different business scenarios, Kubernetes provides several kinds of Controller, such as Deployment, ReplicaSet, DaemonSet, StatefulSet, Job, and so on. Let's start with the most commonly used one: Deployment.
1. Run Deployment
Starting with the example, run a Deployment:
kubectl run nginx-deployment --image=nginx:1.7.9 --replicas=2
The above command deploys a Deployment named nginx-deployment with two replicas, using the container image nginx:1.7.9.
2. View Deployment (deploy)
Take a look at the deployment you just created, which can be abbreviated to deploy.
[root@master ~]# kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2         2         2            2           4m56s
Use the command kubectl describe deploy to view the internal contents.
[root@master ~]# kubectl describe deploy nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Thu, 29 Nov 2018 17:47:16 +0800
Labels:                 run=nginx-deployment
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               run=nginx-deployment
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx-deployment
  Containers:
   nginx-deployment:
    Image:        nginx:1.7.9
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-5fd98dbf5f (2 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  6m11s  deployment-controller  Scaled up replica set nginx-deployment-5fd98dbf5f to 2
Most of the output is descriptive information. Look at the last line: it tells us that the Deployment created the ReplicaSet nginx-deployment-5fd98dbf5f. Events is the Deployment's log, recording the startup process of the ReplicaSet.
This analysis confirms that a Deployment manages its Pods through a ReplicaSet.
3. View ReplicaSet (rs)
Let's check what ReplicaSets we have.
[root@master ~]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-5fd98dbf5f   2         2         2       12m
Use the command kubectl describe rs to view its details.
[root@master ~]# kubectl describe rs nginx-deployment-5fd98dbf5f
Name:           nginx-deployment-5fd98dbf5f
Namespace:      default
Selector:       pod-template-hash=5fd98dbf5f,run=nginx-deployment
Labels:         pod-template-hash=5fd98dbf5f
                run=nginx-deployment
Annotations:    deployment.kubernetes.io/desired-replicas: 2
                deployment.kubernetes.io/max-replicas: 3
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/nginx-deployment
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  pod-template-hash=5fd98dbf5f
           run=nginx-deployment
  Containers:
   nginx-deployment:
    Image:        nginx:1.7.9
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  13m   replicaset-controller  Created pod: nginx-deployment-5fd98dbf5f-8g7nm
  Normal  SuccessfulCreate  13m   replicaset-controller  Created pod: nginx-deployment-5fd98dbf5f-58c4z
We can see Controlled By: Deployment/nginx-deployment, indicating that this ReplicaSet is created by Deployment nginx-deployment.
Events records the creation of the two Pod replicas, so let's take a look at the Pods next.
4. View Pod
View the current Pod.
[root@master ~]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5fd98dbf5f-58c4z   1/1     Running   0          19m
nginx-deployment-5fd98dbf5f-8g7nm   1/1     Running   0          19m
Pick one of the Pods and view its details.
[root@master ~]# kubectl describe pod nginx-deployment-5fd98dbf5f-58c4z
Name:               nginx-deployment-5fd98dbf5f-58c4z
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node02.wzlinux.com/172.18.8.202
Start Time:         Thu, 29 Nov 2018 17:47:16 +0800
Labels:             pod-template-hash=5fd98dbf5f
                    run=nginx-deployment
Annotations:        <none>
Status:             Running
IP:                 10.244.2.3
Controlled By:      ReplicaSet/nginx-deployment-5fd98dbf5f
Containers:
  nginx-deployment:
    Container ID:   docker://69fa73ed16d634627b69b8968915d9a5704f159206ac0d3b2f1179fa99acd56f
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 29 Nov 2018 17:47:28 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sm664 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-sm664:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sm664
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                         Message
  ----    ------     ----  ----                         -------
  Normal  Scheduled  20m   default-scheduler            Successfully assigned default/nginx-deployment-5fd98dbf5f-58c4z to node02.wzlinux.com
  Normal  Pulling    20m   kubelet, node02.wzlinux.com  pulling image "nginx:1.7.9"
  Normal  Pulled     20m   kubelet, node02.wzlinux.com  Successfully pulled image "nginx:1.7.9"
  Normal  Created    20m   kubelet, node02.wzlinux.com  Created container
  Normal  Started    20m   kubelet, node02.wzlinux.com  Started container
We can see Controlled By: ReplicaSet/nginx-deployment-5fd98dbf5f, indicating that this Pod was created by ReplicaSet nginx-deployment-5fd98dbf5f.
Events records the startup process of Pod.
5. Summary
The user creates a Deployment through kubectl. The Deployment creates a ReplicaSet, and the ReplicaSet creates the Pods.
As the names above show, objects are named as follows: the child object's name = the parent object's name + a hash or random string.
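This naming convention can be sketched with a toy example (plain Python, not Kubernetes code; the suffix generator here is just an illustration of the hashes and random strings Kubernetes appends):

```python
import random
import string

def suffix(n=5):
    """Generate a lowercase alphanumeric suffix, similar in spirit to
    the hash/random strings Kubernetes appends to child object names."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

deployment = "nginx-deployment"
# ReplicaSet name = Deployment name + pod-template hash (e.g. 5fd98dbf5f)
replicaset = f"{deployment}-{suffix(10)}"
# Pod name = ReplicaSet name + random suffix (e.g. 58c4z)
pods = [f"{replicaset}-{suffix(5)}" for _ in range(2)]

for pod in pods:
    # every Pod name starts with its parent ReplicaSet's name
    assert pod.startswith(replicaset + "-")
print(replicaset, pods)
```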
II. Scaling
Scale up/down means increasing or decreasing the number of Pod replicas online.
Let's recreate it.
[root@master ~]# kubectl run nginx --image=nginx:1.7.9 --replicas=2
deployment.apps/nginx created
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE
nginx-699ff78c9-2xxnj   1/1     Running   0          51s   10.244.1.11   node01.wzlinux.com   <none>
nginx-699ff78c9-j5w6c   1/1     Running   0          51s   10.244.3.6    node02.wzlinux.com   <none>
Let's change the number of replicas to 5 and check the result.
[root@master ~]# kubectl scale --replicas=5 deploy/nginx
deployment.extensions/nginx scaled
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE                 NOMINATED NODE
nginx-699ff78c9-2xxnj   1/1     Running   0          2m21s   10.244.1.11   node01.wzlinux.com   <none>
nginx-699ff78c9-4qq9h   1/1     Running   0          18s     10.244.1.12   node01.wzlinux.com   <none>
nginx-699ff78c9-b6dt4   1/1     Running   0          18s     10.244.3.7    node02.wzlinux.com   <none>
nginx-699ff78c9-j5w6c   1/1     Running   0          2m21s   10.244.3.6    node02.wzlinux.com   <none>
nginx-699ff78c9-zhwsz   1/1     Running   0          18s     10.244.3.8    node02.wzlinux.com   <none>
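What kubectl scale triggers can be sketched as a reconciliation loop (illustrative Python, not the real controller code): the ReplicaSet controller compares the desired replica count against the Pods that exist, then creates or deletes Pods until they match.

```python
import itertools

_counter = itertools.count(1)

def reconcile(pods, desired):
    """Create or delete pods until len(pods) == desired (toy model)."""
    pods = list(pods)
    while len(pods) < desired:
        pods.append(f"nginx-{next(_counter)}")   # scale up: add a pod
    while len(pods) > desired:
        pods.pop()                               # scale down: remove a pod
    return pods

pods = reconcile([], 2)      # initial deployment: 2 replicas
pods = reconcile(pods, 5)    # kubectl scale --replicas=5
assert len(pods) == 5
pods = reconcile(pods, 3)    # kubectl scale --replicas=3
assert len(pods) == 3
```

The key design point is that scaling is declarative: you state the desired count, and the controller converges the actual state toward it.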
Three new replicas are created and scheduled to node01 and node02. For security reasons, Kubernetes does not schedule Pods to the Master node by default. If you want to use the Master as a Node as well, execute the following command:
kubectl taint node master node-role.kubernetes.io/master-
If you want to restore the Master Only state, execute the following command:
kubectl taint node master node-role.kubernetes.io/master="":NoSchedule
Reducing the number of replicas works the same way; just specify the new count. Let's reduce it to 3 replicas with kubectl scale --replicas=3 deploy/nginx and check again.
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE                 NOMINATED NODE
nginx-699ff78c9-2xxnj   1/1     Running   0          2m55s   10.244.1.11   node01.wzlinux.com   <none>
nginx-699ff78c9-4qq9h   1/1     Running   0          52s     10.244.1.12   node01.wzlinux.com   <none>
nginx-699ff78c9-j5w6c   1/1     Running   0          2m55s   10.244.3.6    node02.wzlinux.com   <none>
III. Fail-over
The three replicas are now running across two machines. Let's shut down node02 to simulate a node failure, then take a look at the Pods.
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE                 NOMINATED NODE
nginx-699ff78c9-2xxnj   1/1     Running   0          8m49s   10.244.1.11   node01.wzlinux.com   <none>
nginx-699ff78c9-4qq9h   1/1     Running   0          6m46s   10.244.1.12   node01.wzlinux.com   <none>
nginx-699ff78c9-j5w6c   1/1     Unknown   0          8m49s   10.244.3.6    node02.wzlinux.com   <none>
nginx-699ff78c9-wqd5k   1/1     Running   0          32s     10.244.1.13   node01.wzlinux.com   <none>
After waiting a while, we see that the Pod on node02 is marked Unknown and a new Pod has been created on node01, keeping the number of running replicas at 3.
Then we restart the server. Normally, if there is no problem with the configuration, the node rejoins the cluster automatically; let's check the status.
[root@master ~]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE                 NOMINATED NODE
nginx-699ff78c9-2xxnj   1/1     Running   0          14m     10.244.1.11   node01.wzlinux.com   <none>
nginx-699ff78c9-4qq9h   1/1     Running   0          12m     10.244.1.12   node01.wzlinux.com   <none>
nginx-699ff78c9-wqd5k   1/1     Running   0          6m37s   10.244.1.13   node01.wzlinux.com   <none>
When node02 recovers, the Unknown Pod is deleted, but the Pods that are already running are not rescheduled back to node02.
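The fail-over behavior just observed can be modeled the same way (a toy Python sketch under the assumption that pod placement is tracked as a simple name/node record): Pods on a failed node stop counting as available, so the controller starts replacements on healthy nodes, and a recovered node gets nothing rescheduled back onto it.

```python
import itertools

_ids = itertools.count(1)

def reconcile(pods, desired, healthy_nodes):
    """Keep `desired` pods running on healthy nodes (toy model)."""
    # pods on a failed node become 'Unknown' and no longer count
    running = [p for p in pods if p["node"] in healthy_nodes]
    while len(running) < desired:
        # replacement pods land on a healthy node
        running.append({"name": f"nginx-{next(_ids)}", "node": healthy_nodes[0]})
    return running

pods = [{"name": "a", "node": "node01"},
        {"name": "b", "node": "node01"},
        {"name": "c", "node": "node02"}]

# node02 fails: its pod is replaced on node01
pods = reconcile(pods, 3, ["node01"])
assert all(p["node"] == "node01" for p in pods)

# node02 recovers: already-running pods stay where they are
pods = reconcile(pods, 3, ["node01", "node02"])
assert all(p["node"] == "node01" for p in pods)
```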
IV. Label
By default, the Scheduler places Pods on any available Node. In some situations, however, we want to deploy Pods to specific Nodes: for example, a Pod with heavy disk I/O should run on a Node equipped with an SSD, and a Pod that needs a GPU should run on a Node equipped with a GPU.
We use the mytest.yaml file to create a Deployment with the following contents:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytest
  namespace: default
spec:
  replicas: 5
  template:
    metadata:
      labels:
        run: mytest
    spec:
      containers:
      - image: wangzan18/mytest:v1
        imagePullPolicy: IfNotPresent
        name: mytest
Use the following command to create the application.
[root@master ~]# kubectl create -f mytest.yaml
deployment.extensions/mytest created
Kubernetes implements this function through labels. A label is a key-value pair that can be set on all kinds of resources, flexibly adding custom attributes. For example, execute the following command to mark node01 as a node equipped with an SSD.
kubectl label node node01.wzlinux.com disktype=ssd
Then view the labels with the command kubectl get node --show-labels.
NAME                 STATUS   ROLES    AGE   VERSION   LABELS
master.wzlinux.com   Ready    master   26h   v1.12.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master.wzlinux.com,node-role.kubernetes.io/master=
node01.wzlinux.com   Ready    <none>   25h   v1.12.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node01.wzlinux.com
node02.wzlinux.com   Ready    <none>   91m   v1.12.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02.wzlinux.com
disktype=ssd has been successfully added to node01. Besides disktype, each Node also carries several labels maintained by Kubernetes itself.
With the custom label disktype in place, we can now require the Pods to be deployed to node01. Edit mytest.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytest
  namespace: default
spec:
  replicas: 5
  template:
    metadata:
      labels:
        run: mytest
    spec:
      containers:
      - image: wangzan18/mytest:v1
        imagePullPolicy: IfNotPresent
        name: mytest
      nodeSelector:
        disktype: ssd
In the Pod template's spec, nodeSelector specifies that the Pod may only be deployed to Nodes carrying the label disktype=ssd.
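The scheduling rule that nodeSelector expresses boils down to a simple label match (illustrative Python sketch, with node labels modeled as plain dicts): a Node is eligible only if its labels contain every key/value pair in the selector.

```python
def matches(node_labels, selector):
    """True if every selector key/value pair is present in the node's labels."""
    return all(node_labels.get(k) == v for k, v in selector.items())

# the two worker nodes from this article, with node01 labeled disktype=ssd
nodes = {
    "node01.wzlinux.com": {"kubernetes.io/hostname": "node01.wzlinux.com",
                           "disktype": "ssd"},
    "node02.wzlinux.com": {"kubernetes.io/hostname": "node02.wzlinux.com"},
}
selector = {"disktype": "ssd"}  # the nodeSelector from mytest.yaml

eligible = [name for name, labels in nodes.items() if matches(labels, selector)]
print(eligible)  # only node01 carries disktype=ssd
```

Note that an empty selector matches every node, which is why Pods without a nodeSelector can land anywhere.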
Redeploy Deployment and view the running node of Pod:
[root@master ~]# kubectl apply -f mytest.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions/mytest configured
[root@master ~]# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE
mytest-6f7fbbfdc7-2tr6s   1/1     Running   0          63s   10.244.1.19   node01.wzlinux.com   <none>
mytest-6f7fbbfdc7-5g9tj   1/1     Running   0          61s   10.244.1.21   node01.wzlinux.com   <none>
mytest-6f7fbbfdc7-bnfxv   1/1     Running   0          61s   10.244.1.22   node01.wzlinux.com   <none>
mytest-6f7fbbfdc7-bqzqq   1/1     Running   0          60s   10.244.1.23   node01.wzlinux.com   <none>
mytest-6f7fbbfdc7-v6cqk   1/1     Running   0          63s   10.244.1.20   node01.wzlinux.com   <none>
All 5 replicas run on node01, in line with our expectations.
To delete the label disktype, execute the following command:
[root@master ~]# kubectl label node node01.wzlinux.com disktype-
node/node01.wzlinux.com labeled
However, the Pods will not be redeployed and will keep running on node01, unless you delete the nodeSelector setting in mytest.yaml and redeploy with kubectl apply.
To explore the settings available on object resources, use the command kubectl explain. For example, to view the nodeSelector parameter of a Pod, run kubectl explain pod.spec.nodeSelector.
Minor problem: manually re-adding a node to the cluster
If a node cannot rejoin the cluster automatically because of some problem, we need to reinitialize it and add it again manually.
First, delete node02 from the master node.
kubectl delete node node02.wzlinux.com
Reset on node02.
kubeadm reset
We then want to rejoin with kubeadm join, but find that the original token has expired, so we need to generate a new token on the master node.
[root@master ~]# kubeadm token create
v269qh.2mylwtmc96kd28sq
Generate the value of ca-cert-hash sha256.
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
> openssl dgst -sha256 -hex | sed 's/^.* //'
84e50f7beaa4d3296532ae1350330aaf79f3f0d45ec8623fae6cd9fe9a804635
Then run kubeadm join on node02, passing the new token and the ca-cert-hash, to add it back to the cluster.