
Labels, DaemonSet, and Job resource objects in K8s


Label

Why do we use labels?

As the number of resource objects of the same type grows, labels let you divide them into groups, making them easier to manage and improving the efficiency of resource administration.

A label is a key-value pair attached to an object, such as a Pod. It can be specified when the object is created, or added and changed at any time afterwards. Label values carry no meaning for the system itself; they are meaningful only to users.

"labels": {"key1": "value1", "key2": "value2"}

Syntax and character set

Composition of a label key:

* No more than 63 characters.
* May include an optional prefix, separated from the name by "/". The prefix must be a DNS subdomain no longer than 253 characters. Labels created by automation components in the system must specify a prefix, and the kubernetes.io/ prefix is reserved by Kubernetes.
* Must begin and end with a letter (upper or lower case) or a digit; hyphens, underscores, and dots are allowed in the middle.

Composition of a label value:

* No more than 63 characters.
* Must begin and end with a letter (upper or lower case) or a digit; hyphens, underscores, and dots are allowed in the middle.
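For instance, a simple key and a prefixed key side by side (an illustrative sketch; the prefix and key names here are made up):

metadata:
  labels:
    app: nginx                       # simple key, no prefix
    example.com/release-track: beta  # prefixed key: DNS-subdomain prefix + "/" + name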

Commonly used multi-dimensional label classifications:

* Version (release): stable (stable version), canary (canary version), beta (beta version)
* Environment (environment): dev (development), qa (test), production (production), op (operations)
* Application (application): ui (design), as (application software), pc (desktop side), sc (network side)
* Architecture tier (tier): frontend (front end), backend (back end), cache (cache)
* Partition (partition): customerA (customer A), customerB (customer B)
* Quality control track (track): daily (daily), weekly (weekly)

Practice using labels with the following example:

[root@master yaml]# vim label-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: label-pod
  labels:              # use the labels field to define labels; several can be defined at once, here three
    release: stable    # version: stable
    env: qa            # environment: test
    tier: frontend     # architecture tier: front end
spec:
  containers:
  - name: testapp
    image: nginx       # deploy the nginx service
---
kind: Service          # an associated Service resource object
apiVersion: v1
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:            # use the label selector
    release: stable    # only one label needs to be defined in the selector field; all objects carrying it are matched
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 32134

[root@master yaml]# kubectl apply -f label-pod.yaml
pod/label-pod created
service/nginx-svc unchanged

// View all Pods and show their label key:value pairs:
[root@master yaml]# kubectl get pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
label-pod   1/1     Running   0          30m   env=qa,release=stable,tier=frontend

// View the key:value pairs of a specific Pod:

[root@master yaml]# kubectl get pod label-pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
label-pod   1/1     Running   0          40m   app=as,env=qa,release=stable,tier=frontend

// Display only the values of the specified labels:

[root@master yaml]# kubectl get pod label-pod -L env,release,tier
NAME        READY   STATUS    RESTARTS   AGE   ENV   RELEASE   TIER
label-pod   1/1     Running   0          41m   qa    stable    frontend

Other label operations on the command line: adding, modifying, and deleting labels.

// Add a label on the command line:

[root@master yaml]# kubectl label pod label-pod app=sc
pod/label-pod labeled
[root@master yaml]# kubectl get pod -L app
NAME        READY   STATUS    RESTARTS   AGE   APP
label-pod   1/1     Running   0          36m   sc

// Modify a label:

[root@master yaml]# kubectl label pod label-pod app=as
error: 'app' already has a value (sc), and --overwrite is false
[root@master yaml]# kubectl label pod label-pod app=as --overwrite
pod/label-pod labeled

As you can see, if you want to change an existing label, you must use the --overwrite option.

// Delete a label:

[root@master yaml]# kubectl label pod label-pod app-
pod/label-pod labeled
[root@master yaml]# kubectl get pod -L app
NAME        READY   STATUS    RESTARTS   AGE   APP
label-pod   1/1     Running   0          43m
// the APP column is now empty: the label has been deleted

// Test whether the nginx service is running properly:
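The output of this check is not preserved in the source. A minimal sketch of such a test, assuming the nodePort 32134 defined above and a hypothetical node IP of 192.168.1.21:

[root@master yaml]# curl 192.168.1.21:32134

If the Service and Pod are healthy, this should return the nginx welcome page.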

Label selector

A label selector is the query and filter criterion for labels.

Labels are not unique; many objects may carry the same label. With a label selector, a client or user can specify a set of objects and operate on that whole set.

Currently, the Kubernetes API supports two kinds of label selectors:

1) Equality-based (matchLabels): "=", "==", "!="
2) Set-based (matchExpressions): In (within the given set), NotIn (not within the given set), Exists (the key exists or does not exist)

The logic of label selector operations:

1) When multiple selectors are specified at the same time, they are combined with a logical AND.

2) A label selector with an empty value means that every resource object will be selected.

3) An empty label selector selects no resources at all.

4) In a set-based selector, when using the In or NotIn operator, the values field must be a non-empty list of strings; when using Exists or DoesNotExist, values must be empty.

For example, the selector syntax is as follows:

[root@master yaml]# vim selector.yaml
selector:
  matchLabels:           # equality-based
    app: nginx
  matchExpressions:      # set-based
  - {key: name, operator: In, values: [zhangsan, lisi]}   # key, operator, and values are fixed fields
  - {key: age, operator: Exists, values:}                 # when Exists is specified, values must be empty
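Label selectors can also be used directly on the command line via kubectl's -l option. A brief sketch (the labels match the examples above):

[root@master yaml]# kubectl get pod -l release=stable       # equality-based
[root@master yaml]# kubectl get pod -l 'env in (qa, dev)'   # set-based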

DaemonSet

1) What is a DaemonSet?

A DaemonSet ensures that each node in the cluster runs exactly one copy of a Pod. When a node joins the cluster, a Pod is added to it as well; when a node is removed from the cluster, its Pod is garbage-collected. Deleting a DaemonSet deletes all the Pods it created.

2) Points to note when writing a DaemonSet:

A DaemonSet is written the same way as resources such as Deployment and ReplicaSet, except that it does not support the replicas field.

3) Typical DaemonSet usage scenarios:

It is often used for per-node log collection and for monitoring the running state of each node.

Practice with a DaemonSet:

[root@master yaml]# vim daemonset.yaml
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: qa
        env: dev
    spec:
      containers:
      - name: nginx
        image: nginx
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-dsvc
spec:
  type: NodePort
  selector:
    app: qa
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30003

[root@master yaml]# kubectl apply -f daemonset.yaml
daemonset.extensions/nginx-ds created
service/nginx-dsvc created

// View the distribution of the Pods:

[root@master yaml]# kubectl get pod -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-ds-dh529   1/1     Running   0          76s   10.244.2.2   node02   <none>           <none>
nginx-ds-xz4d8   1/1     Running   0          76s   10.244.1.3   node01   <none>           <none>

There are only two nodes in my cluster, and you can see that the DaemonSet runs exactly one copy of the Pod on each node.

Job resource object

Unlike the service-type containers above, where a resource object provides a continuously running service, a Job is responsible for short-lived, one-off batch tasks, i.e. tasks that are executed only once. It guarantees that one or more Pods of the batch task finish successfully.

1) Kubernetes supports the following kinds of Job:

* Non-parallel Jobs: usually a single Pod is created, and the Job completes when that Pod finishes successfully.
* Jobs with a fixed completion count: set .spec.completions; Pods are created until .spec.completions of them have finished successfully.
* Parallel Jobs with a work queue: set .spec.parallelism but not .spec.completions; the Job is considered successful when all Pods have finished and at least one of them has succeeded.

2) Job Controller

The Job controller is responsible for creating Pods based on the Job spec and continuously monitoring their status until they finish successfully. If a Pod fails, the controller decides, based on restartPolicy (only OnFailure and Never are supported, not Always), whether to create a new Pod and retry the task.

Practice Jobs with the following example:

// Create a Job resource object:

kind: Job
apiVersion: batch/v1
metadata:
  name: test-job
spec:
  template:
    metadata:
      labels:
        app: job
    spec:
      containers:
      - name: job
        image: busybox
        command: ["echo", "hello job!"]
      restartPolicy: Never

[root@master yaml]# kubectl apply -f job.yaml
job.batch/test-job created

If you forget how a field is used in a production environment, the kubectl explain command can help.
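For example, to list the fields available under a Job's spec:

[root@master yaml]# kubectl explain job.spec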

// View the status of the Pod resource object:
[root@master yaml]# kubectl get pod test-job-dcv6g
NAME             READY   STATUS      RESTARTS   AGE
test-job-dcv6g   0/1     Completed   0          2m8s

You can see that a Job differs from other resource objects: it performs a one-off task. By default, the Job ends once its Pod has finished running, and the status shows Completed.

// Check the Pod's log to confirm that the task completed:

[root@master yaml]# kubectl logs test-job-dcv6g
hello job!

After the task has completed, if there is nothing else we need it for, we can delete the Job:

[root@master yaml]# kubectl delete jobs.batch test-job
job.batch "test-job" deleted

2. Methods for improving the efficiency of Job execution:

This is achieved by defining fields in the YAML file:

kind: Job
apiVersion: batch/v1
metadata:
  name: test-job
spec:                    # optimizations
  parallelism: 2         # run 2 Pods in parallel
  completions: 8         # run 8 Pods in total
  template:
    metadata:
      labels:
        app: job
    spec:
      containers:
      - name: job
        image: busybox
        command: ["echo", "hello job!"]
      restartPolicy: Never

An explanation of the Job fields (a placement sketch follows):

completions: the number of Pods that must finish successfully for the Job to be marked complete. Defaults to 1.

parallelism: the number of Pods running in parallel. Defaults to 1.

activeDeadlineSeconds: the maximum time allowed for retrying failed Pods; once this deadline is reached, no further retries are made.
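All three fields sit at the top level of the Job spec. A minimal sketch, with an illustrative deadline value:

spec:
  completions: 8
  parallelism: 2
  activeDeadlineSeconds: 120   # illustrative value: stop retrying after 120 seconds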

// After re-running the Job, check the Pod status:

[root@master yaml]# kubectl get pod
NAME             READY   STATUS      RESTARTS   AGE
test-job-28ww5   0/1     Completed   0          50s
test-job-5wt95   0/1     Completed   0          46s
test-job-6s4p6   0/1     Completed   0          44s
test-job-8s2v7   0/1     Completed   0          50s
test-job-bt4ch   0/1     Completed   0          45s
test-job-bzjz6   0/1     Completed   0          48s
test-job-fhnvc   0/1     Completed   0          44s
test-job-kfn9l   0/1     Completed   0          48s
[root@master yaml]# kubectl logs test-job-28ww5
hello job!

You can see that there are 8 Pods in total, run 2 at a time in parallel; their ages differ slightly, but not by much.

3. Run Job tasks on a schedule:

This is the equivalent of a crontab scheduled task in Linux.

[root@master yaml]# vim cronjob.yaml
kind: CronJob                  # the type is CronJob
apiVersion: batch/v1beta1
metadata:
  name: cronjob
spec:                          # define the scheduled task with the spec.schedule field
  schedule: "*/1 * * * *"      # run the task every minute; same format as Linux crontab (minute, hour, day of month, month, day of week)
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob
            image: busybox
            command: ["echo", "hello job!"]
          restartPolicy: OnFailure   # restart only when the Pod fails

// Apply the YAML file, then monitor the status of the Pods:

[root@master yaml]# kubectl apply -f cronjob.yaml
cronjob.batch/cronjob created
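The monitoring output itself did not survive in the source; a straightforward way to watch the Pods appear each minute is kubectl's watch flag:

[root@master yaml]# kubectl get pod -w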

You can see that it executes the Job task every minute, and each execution generates a new Pod.

View the logs to verify that the task is being performed:

[root@master ~]# kubectl logs cronjob-1577505180-4ss84
hello job!
[root@master ~]# kubectl logs cronjob-1577505240-d5gf8
hello job!

Extension: adding an apiVersion

1) Check the corresponding API versions in the current Kubernetes cluster:

[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

Checking the list, there is no development (alpha) version; only stable and test (beta) versions are present.

Add an API version:

[root@master ~]# cd /etc/kubernetes/manifests/
[root@master manifests]# ll
total 16
-rw------- 1 root root 1900 Nov  4 16:32 etcd.yaml
-rw------- 1 root root 2602 Nov  4 16:32 kube-apiserver.yaml
-rw------- 1 root root 2486 Nov  4 16:32 kube-controller-manager.yaml
-rw------- 1 root root  990 Nov  4 16:32 kube-scheduler.yaml
[root@master manifests]# vim kube-apiserver.yaml

Under the command field in this file, add the corresponding version in the matching format; the version added here is the development (alpha) version of the batch group.
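The screenshot showing the edit did not survive. As a sketch, enabling a batch alpha API typically means adding a --runtime-config flag to the kube-apiserver command list, for example (the exact version string, batch/v2alpha1, is an assumption here):

spec:
  containers:
  - command:
    - kube-apiserver
    - --runtime-config=batch/v2alpha1=true   # added line: enables the batch development (alpha) API

The existing flags in the manifest stay unchanged.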

// Restart kubelet so that the static Pod manifests are reloaded:
[root@master manifests]# systemctl restart kubelet.service

// When you view the API versions again, the development version can be seen:
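The final listing is missing from the source; assuming the flag took effect, a quick filter such as the following should now include the alpha version (e.g. batch/v2alpha1) alongside batch/v1 and batch/v1beta1:

[root@master manifests]# kubectl api-versions | grep batch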
