Application Policy rules of K8s Autonomous pod

2025-01-16 Update From: SLTechnology News&Howtos


Autonomous pod application

Most of the pods we come into contact with are controller-managed pods, so today we are talking about the autonomous pod: a pod created directly from a yaml file, which manages itself and is not supervised by any controller. If an autonomous pod is killed, no controller will recreate it.

1. First, let's create a pod resource object for nginx:

Before creating the pod, let's take a look at the image pull policy:

[root@master yaml]# kubectl explain pod.spec.containers    # view the imagePullPolicy field
imagePullPolicy
  Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
  if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
  More info: https://kubernetes.io/docs/concepts/containers/images#updating-images

The policies are explained as follows:

Always: always pull the image from the specified repository, i.e. the image is pulled again every time a pod is created.

Never: never pull images from a repository; only local images are used. If the image is not present locally, the pod will not run.

IfNotPresent: pull from the target repository only if the image is not present locally.

The default pull policy is determined as follows:

If the image tag is latest, the default pull policy is Always.

If the image tag is custom (anything other than the default latest), the default pull policy is IfNotPresent.
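As a quick illustration of the defaulting rule above, here is a small Python sketch (illustrative only, not the actual Kubernetes implementation) that derives the default pull policy from an image reference:

```python
# Illustrative sketch of the defaulting rule described above:
# ":latest" (or no tag at all) defaults to Always, anything else to IfNotPresent.
def default_image_pull_policy(image: str) -> str:
    # Split off the tag at the last ":"; a reference like "nginx" has no
    # explicit tag, which Kubernetes treats the same as ":latest".
    name, _, tag = image.rpartition(":")
    if not name or "/" in tag:
        # Either no ":" was found, or the ":" belonged to a registry host:port.
        tag = "latest"
    return "Always" if tag == "latest" else "IfNotPresent"

print(default_image_pull_policy("nginx"))                      # Always
print(default_image_pull_policy("172.16.1.30:5000/nginx:v1"))  # IfNotPresent
```

This matches the behaviour seen later in the article: the plain `nginx` image defaults to Always, while the custom-tagged private image `172.16.1.30:5000/nginx:v1` would default to IfNotPresent.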

// Create a pod (defining the pull policy explicitly):

[root@master yaml]# vim test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: myapp
    image: nginx
    imagePullPolicy: Always    # pull policy: always pull the image from the specified repository
    ports:
    - containerPort: 80        # port exposed by the container

// Apply the yaml file:
[root@master yaml]# kubectl apply -f test-pod.yaml

// View the status of the pod:
[root@master yaml]# kubectl get pod test-pod -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
test-pod   1/1     Running   0          7m55s   10.244.2.2   node02   <none>           <none>

// View the details of the pod in json format:
[root@master yaml]# kubectl get pod test-pod -o json

Pod phases (status):

Running: the pod has been successfully scheduled to a node, and its containers have been created and are running.

Pending: the API server has created the pod resource object and stored it in etcd, but the pod has not yet been scheduled, or its image is still being downloaded from the repository.

Succeeded: all containers in the pod have terminated successfully and will not be restarted.

Failed: all containers in the pod have terminated, and at least one container terminated due to a failure, i.e. it exited with a non-zero status or was killed by the system.

Unknown: the API server cannot obtain the state of the pod object, usually because it cannot communicate with the kubelet on the worker node.

Be sure to become familiar with these states; knowing them helps you locate problems accurately and troubleshoot quickly when the cluster fails.
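These phases can also be read programmatically from the pod object. A minimal Python sketch, assuming the (heavily trimmed) JSON shape that `kubectl get pod <name> -o json` prints:

```python
import json

# A trimmed sample of the JSON document that `kubectl get pod -o json` prints;
# the real output contains many more fields.
sample = '{"kind": "Pod", "metadata": {"name": "test-pod"}, "status": {"phase": "Running"}}'

def pod_phase(pod_json: str) -> str:
    # The phase described above lives under .status.phase.
    return json.loads(pod_json)["status"]["phase"]

print(pod_phase(sample))   # Running
```

The same field is what `kubectl get pod` summarizes in its STATUS column.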

2. Restart policy of pod:

// First, let's look at the restart policy of a pod:
[root@master yaml]# kubectl explain pod.spec    # view the restartPolicy field in the spec of the pod

Field explanation:

Always: restart whenever a container in the pod terminates. This is the default policy.

OnFailure: restart only when a container exits with an error; if the container is in the Completed state (i.e. it exited normally), it is not restarted.

Never: never restart.
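The decision table above can be sketched in a few lines of Python (illustrative only, not kubelet source code):

```python
# Illustrative sketch: whether a terminated container is restarted, given the
# pod's restartPolicy and the container's exit code, per the rules above.
def should_restart(restart_policy: str, exit_code: int) -> bool:
    if restart_policy == "Always":
        return True                 # restart no matter how the container exited
    if restart_policy == "OnFailure":
        return exit_code != 0       # restart only on an abnormal (non-zero) exit
    return False                    # "Never": leave the container terminated

print(should_restart("OnFailure", 1))   # True
print(should_restart("OnFailure", 0))   # False
```

This is exactly the difference the experiments below demonstrate: a container that exits with status 1 is restarted under OnFailure but left dead under Never.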

3. Let's simulate creating a pod that uses an image from the company's internal private repository. The image pull policy is Never (use the local image only), and the pod's restart policy is Never (never restart).

[root@master yaml]# vim nginx-pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mynginx
  labels:                    # define a label to associate with the service
    test: myweb
spec:
  restartPolicy: Never       # set the restart policy to Never
  containers:
  - name: myweb
    image: 172.16.1.30:5000/nginx:v1
    imagePullPolicy: Never   # set the image pull policy to Never
---
apiVersion: v1
kind: Service
metadata:
  name: mynginx
spec:
  type: NodePort
  selector:
    test: myweb
  ports:
  - name: nginx
    port: 8080           # cluster IP port
    targetPort: 80       # exposed container port
    nodePort: 30000      # port exposed to the public network via the host

[root@master yaml]# kubectl apply -f nginx-pod1.yaml
pod/mynginx created
service/mynginx created

// View the current status of the pod:

Checking the pod information, we can see that the image pull failed, so we examine the pod's details to troubleshoot:

[root@master yaml]# kubectl describe pod mynginx

You can see that, because of the policy, Never means only images already present locally on the node can be used, so we need to pull the image from the private repository onto that node (node02):

[root@node02 ~]# docker pull 172.16.1.30:5000/nginx:v1
v1: Pulling from nginx
Digest: sha256:189cce606b29fb2a33ebc2fcecfa8e33b0b99740da4737133cdbcee92f3aba0a
Status: Downloaded newer image for 172.16.1.30:5000/nginx:v1

// Back on the master, check whether the pod is running again:
[root@master yaml]# kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
mynginx    1/1     Running   0          7m32s   10.244.2.4   node02   <none>           <none>
test-pod   1/1     Running   1          7h20m   10.244.2.3   node02   <none>           <none>

2) Test the pod restart policy:

We simulate an abnormal exit of the pod:

[root@master yaml]# vim nginx-pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mynginx
  labels:
    test: myweb
spec:
  restartPolicy: Never
  containers:
  - name: myweb
    image: 172.16.1.30:5000/nginx:v1
    imagePullPolicy: Never
    args: ["/bin/sh", "-c", "sleep 10; exit 1"]   # added args field: sleep 10 seconds, then exit abnormally

// Re-create the pod:
[root@master yaml]# kubectl delete -f nginx-pod1.yaml
pod "mynginx" deleted
service "mynginx" deleted
[root@master yaml]# kubectl apply -f nginx-pod1.yaml
pod/mynginx created
service/mynginx created

PS: the restart policy in the pod manifest is set to Never, and a pod's restart policy cannot be modified in place, so we need to delete the pod and re-apply the yaml file to generate a new pod.
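To see outside the cluster what the container's args actually do, here is a Python sketch that reproduces the same command and captures its exit status (the sleep is shortened to 0 seconds here, and `/bin/sh` is assumed to be available):

```python
import subprocess

# Re-create the container's command locally: sleep, then exit with status 1.
result = subprocess.run(["/bin/sh", "-c", "sleep 0; exit 1"])
print(result.returncode)   # 1 -- the non-zero exit code that restart policies react to
```

A non-zero exit like this is what Kubernetes records as a container failure; with restartPolicy Never the pod simply stays failed, as the next steps show.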

// Check the pod status for the first time:

We see that the new pod landed on the node01 node, so we also need to pull the image locally on node01:

[root@node01 ~]# docker pull 172.16.1.30:5000/nginx:v1
v1: Pulling from nginx
Digest: sha256:189cce606b29fb2a33ebc2fcecfa8e33b0b99740da4737133cdbcee92f3aba0a
Status: Downloaded newer image for 172.16.1.30:5000/nginx:v1

To better see the effect, we re-execute the yaml file to generate a new pod:

[root@master yaml]# kubectl delete -f nginx-pod1.yaml

[root@master yaml]# kubectl apply -f nginx-pod1.yaml

// View the final status of the pod:

// View the details of the pod (to see why it failed to run):
[root@master yaml]# kubectl describe pod mynginx

From the information above, we can see that the container exited abnormally after it was created, which is why the pod ended up in a failed state.

3) We modify the policy rule, setting restartPolicy to OnFailure:

// Re-create the pod:
[root@master yaml]# kubectl delete -f nginx-pod1.yaml
pod "mynginx" deleted
service "mynginx" deleted
[root@master yaml]# kubectl apply -f nginx-pod1.yaml
pod/mynginx created
service/mynginx created

// Check the pod status (failed):
[root@master yaml]# kubectl get pod -o wide
NAME      READY   STATUS              RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
mynginx   0/1     RunContainerError   3          55s   10.244.2.8   node02   <none>           <none>

[root@master yaml]# kubectl get pod -o json

We can see that the current pod failed to create the container.

// Let's monitor the current status of the pod in real time:

You can see that the pod keeps restarting: we set the restart policy to OnFailure, and we also configured the container to exit abnormally after sleeping for 10 seconds, so the pod restarts continuously.
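The restarts are not instantaneous: per the Kubernetes documentation (an assumption here, not something this article states), the kubelet restarts a failing container with an exponential back-off, roughly doubling a 10-second base delay up to a 5-minute cap (the familiar CrashLoopBackOff behaviour). A sketch of that schedule:

```python
# Illustrative sketch of the kubelet's crash-loop back-off schedule:
# the delay before restart N is base * 2**N, capped at 5 minutes.
def backoff_delay(restart_count: int, base: int = 10, cap: int = 300) -> int:
    return min(base * 2 ** restart_count, cap)

delays = [backoff_delay(n) for n in range(6)]
print(delays)   # [10, 20, 40, 80, 160, 300]
```

This is why the RESTARTS counter in the watch output climbs more and more slowly over time.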

4) We modify the policy rules to restore the pod to normal operation:

// Re-apply the yaml file to generate a new pod:
[root@master yaml]# kubectl delete -f nginx-pod1.yaml
pod "mynginx" deleted
service "mynginx" deleted
[root@master yaml]# kubectl apply -f nginx-pod1.yaml
pod/mynginx created
service/mynginx created

// Check whether the pod is back to normal operation (it is running normally):
[root@master yaml]# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
mynginx   1/1     Running   0          71s   10.244.1.8   node01   <none>           <none>

5) Check the service information to verify that the nginx web interface can be accessed properly:

[root@master yaml]# kubectl get svc mynginx
NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
mynginx   NodePort   10.101.131.36   <none>        8080:30000/TCP   4m23s
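The three ports in the service manifest form a chain: an external client connects to any node's IP on the nodePort, kube-proxy forwards to the service's cluster IP on its port, and traffic finally lands on the container's targetPort. A small sketch, with a hypothetical node IP (192.168.1.100) standing in for whatever address your node actually has:

```python
# The three ports from the NodePort service manifest in this article, and how a
# request flows: client -> nodePort -> service port -> container targetPort.
ports = {"nodePort": 30000, "port": 8080, "targetPort": 80}

def access_url(node_ip: str) -> str:
    # External clients connect to <any node's IP>:nodePort.
    return f"http://{node_ip}:{ports['nodePort']}"

print(access_url("192.168.1.100"))   # http://192.168.1.100:30000
```

Opening that URL in a browser (with your real node IP) should show the nginx welcome page served from port 80 inside the container.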

This is the end of this article. Thank you for reading.
