
How to use Pod in Kubernetes


This article shares how to use Pod in Kubernetes. The editor finds it quite practical and shares it here for your reference; I hope you will get something out of it.

Pod is the basic unit of the Kubernetes system: the smallest component in the resource object model that users can create or deploy, and the resource object that runs containerized applications on a Kubernetes cluster.

The relationship between containers and Pods

Docker recommends one process per container, but because of the isolation between containers, processes in different containers cannot communicate via IPC (Inter-Process Communication). This makes cooperation between functionally related containers difficult, for example between a main container and a container responsible for log collection. The Pod resource abstraction solves exactly this kind of problem. A Pod object is a collection of containers that share the Network, UTS (UNIX Time-sharing System) and IPC namespaces, so they have the same domain name, hostname and network interfaces, and can communicate directly through IPC. It is the underlying pause container that provides the shared network namespace and other sharing mechanisms for the containers in a Pod object. Although a Pod can run multiple containers, as a best practice, processes should be split across multiple Pods unless they are closely related; that way the Pods can be scheduled onto different hosts, improving resource utilization and simplifying scaling.
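
As an illustration of the shared namespaces, a minimal sketch; the names shared-ns-demo, web, and checker are hypothetical. The second container can reach the first via localhost because both share the Pod's network namespace:

apiVersion: v1
kind: Pod
metadata:
  name: shared-ns-demo
spec:
  containers:
  - name: web
    image: nginx:1.12-alpine          # serves HTTP on port 80
  - name: checker
    image: busybox
    command: ["sh", "-c", "while true; do wget -q -O /dev/null http://localhost/; sleep 10; done"]
    # localhost works here because the containers share one network namespace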

Sidecar pattern

When multiple processes are closely related, the containers are generally organized following the sidecar pattern, which adds a collaborating auxiliary container alongside the Pod's main application container. A typical scenario: when logs from the main application container are shipped to a log server by an agent, the agent runs as the auxiliary (sidecar) container.
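
A minimal sidecar sketch under these assumptions: the main container writes logs to a shared emptyDir volume, and a hypothetical log-agent container reads them (a stand-in for a real log-shipping agent):

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  containers:
  - name: main-app
    image: nginx:1.12-alpine
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx       # nginx writes access/error logs here
  - name: log-agent                   # the sidecar; a real agent would ship logs onward
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}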

Managing containers for Pod objects

An example Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2

Under the spec field, containers is required, and its sub-field name is required. image is required when the Pod is created manually, but optional when the Pod is managed by a higher-level controller such as a Deployment, because the field may be overridden there.

Defining the image pull policy

The core function of a Pod is to run containers, and the image pull policy can be customized through the imagePullPolicy field.

Always: always pull the image from the specified registry when the container starts.

IfNotPresent: pull the image from the target registry only if it is not present locally.

Never: never pull from a registry; only local images are used.

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    imagePullPolicy: Always

The default pull policy is Always for images tagged latest, and IfNotPresent for all other tags.

Exposing ports

Exposing a port in a Pod means something different from exposing a port on a Docker container. In Docker's network model, a containerized application on the default network must be "exposed" to the external network through NAT before container clients on other nodes can reach it. In Kubernetes, every Pod's IP address already sits on the same network plane; whether or not a container port is declared, Pod clients on other nodes of the cluster can reach it. Declared ports are therefore only informational, though explicitly specifying them also makes the ports easier to reference.

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    ports:
    - name: http
      containerPort: 80
      protocol: TCP

This configuration exposes TCP port 80 of the container and names it http. Pod IP addresses are only reachable within the current cluster; they cannot directly receive request traffic from clients outside it. Their reachability is not constrained by worker node boundaries, but it is constrained by the cluster boundary. How to make Pod objects accessible from outside the cluster is covered later.

Running custom applications in containers

The command field specifies an application different from the image's default one, and the args field can pass parameters to it at the same time; together they override the defaults defined in the image. If only args is defined, its values are passed as parameters to the image's default application; if only command is defined, it overrides both the program and the parameters defined in the image and runs the application without parameters.

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    imagePullPolicy: Never
    command: ["/bin/sh"]
    args: ["-c", "while true; do sleep 30; done"]
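
By contrast, a hedged sketch with only args defined: the values are passed to the image's default application (the -port flag here is hypothetical, for illustration only):

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    args: ["-port", "8080"]   # hypothetical flag, passed to the image's default ENTRYPOINT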

Environment variables

Environment variables are another way to pass configuration to containerized applications. There are two ways to set container environment variables in a Pod object: env and envFrom. Only the first is described here; the second is covered when introducing the ConfigMap and Secret resources. An environment variable entry consists of name and value fields.

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    env:
    - name: REDIS_HOST
      value: do.macOS
    - name: LOG_LEVEL
      value: info

Labels and label selectors

Label management

A label selector picks out resource objects that carry a given label so you can operate on them. An object can have multiple labels, and the same label can be attached to multiple resources. Resources can be labeled along different dimensions to enable flexible grouping, for example version labels, environment labels, and architecture-tier labels, used to cross-identify the version, environment, and architectural tier a resource belongs to. Example of defining labels:

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    env: qa
    tier: frontend

After the resource is created, add the --show-labels option to the kubectl get pods command to display label information; the -L option adds a column for each listed label key. Labels of live objects can be managed directly:

kubectl label pods/pod-example release=beta

This adds release=beta to pod-example. To modify an existing key-value pair, the --overwrite option is required.

Label selectors

Label selectors express query or selection criteria over labels. The Kubernetes API currently supports two kinds of selectors:

Equality-based: the available operators are "=", "==", and "!=", the first two being equivalent.

Set-based: supports the in, notin, and exists operators. In addition, specifying just KEY matches all resources carrying a label with that key, and !KEY matches all resources lacking it (kubectl examples follow the list below). The following logic applies when using label selectors:

Multiple selectors specified at the same time are combined with logical AND.

A label selector with a null value selects every resource object.

An empty label selector cannot select any resource.
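
For illustration, a few hedged kubectl queries; the label keys come from the examples above, and the production value is hypothetical:

kubectl get pods -l env=qa,tier=frontend        # equality-based; comma means AND
kubectl get pods -l 'env in (qa, production)'   # set-based
kubectl get pods -l '!release'                  # Pods without a release label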

Many Kubernetes resource objects must associate with Pod objects via label selectors, for example resources of type Service, Deployment, and ReplicaSet. Their selectors are specified in the spec field through a nested selector field, in either of two ways:

matchLabels: specifies the selector by giving key-value pairs directly.

matchExpressions: specifies a list of expression-based selectors, each of the form {key: KEY_NAME, operator: OPERATOR, values: [VALUE1, VALUE2, …]}. The listed selectors are combined with logical AND. When using the In or NotIn operator, values must be a non-empty list of strings; when using Exists or DoesNotExist, values must be empty. Format example:

selector:
  matchLabels:
    component: redis
  matchExpressions:
  - {key: tier, operator: In, values: [cache]}
  - {key: environment, operator: Exists, values: }

Resource annotations

Besides labels, Pods and other resources can also carry resource annotations (annotations). These are likewise key-value data, but they cannot be used to tag and select Kubernetes objects; they only attach "metadata" to resources. The metadata in annotations is not limited in length, may be structured or unstructured, and has no restrictions on character types. Annotations typically hold build, release, or image-related information, pointers to logging, monitoring, analytics, or audit repositories, or debugging information produced by client libraries or tools, such as name, version, and build information.

Viewing resource annotations: use the kubectl get -o yaml and kubectl describe commands to display a resource's annotations.

kubectl describe pods pod-example | grep "Annotations"

Managing resource annotations: define annotations in the manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  annotations:
    created-by: "cluster admin"

Append annotations:

kubectl annotate pods pod-example created-by2="admin"

The lifecycle of the Pod object

Over its lifetime, a Pod object passes through several phases and lifecycle events, described below.

Phase

A Pod object is always in one of the following phases during its lifetime:

Pending: the API Server has created the Pod resource object and stored it in etcd, but the Pod has not yet been scheduled, or it is still downloading images from the registry.

Running: the Pod has been scheduled to a node, and all of its containers have been created by the kubelet.

Succeeded: all containers in the Pod terminated successfully and will not be restarted.

Failed: all containers have terminated, and at least one container failed, that is, returned a non-zero exit status or was terminated by the system.

Unknown: the API Server cannot obtain the Pod object's state, usually because it cannot communicate with the kubelet on the worker node.

The Pod creation process

Creating a Pod means creating the Pod object itself, its main container, and any auxiliary containers. The steps are as follows (a minimal submission sketch follows the list):

Users submit PodSpec to API Server through kubectl or other API clients.

API Server stores the Pod object's information in etcd; once the write completes, API Server returns a confirmation to the client.

API Server begins to reflect state changes in etcd.

All Kubernetes components use the "watch" mechanism to track and check for changes on the API Server.

The kube-scheduler notices through its "watcher" that API Server has created a new Pod object that has not yet been bound to any worker node.

kube-scheduler picks a worker node for the Pod object and updates the result to API Server.

The scheduling result information is updated from API Server to the etcd storage system, and API Server begins to reflect the scheduling result of this Pod object.

The kubelet on the worker node the Pod was scheduled to starts the container through Docker on that node and sends the resulting container state back to API Server.

API Server stores the Pod status information in the etcd system.

After etcd confirms that the write succeeded, API Server sends the acknowledgment to the relevant kubelet, which accepts the event.
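
To observe this flow end to end, a hedged sketch; the file name pod-example.yaml is assumed to contain the manifest shown earlier:

kubectl apply -f pod-example.yaml   # submit the PodSpec to API Server
kubectl get pods --watch            # watch the phase move from Pending to Running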

Important behaviors in the Pod Lifecycle

In addition to the application containers (the main container and its auxiliary containers), users can define various behaviors in a Pod object's lifecycle, such as init containers, liveness probes, and readiness probes.

Init containers

An init container runs before the Pod's main application container starts. It is often used to perform preparatory work for the main container, for example:

Running specific utilities that, for security or other reasons, are not suitable for inclusion in the main container image.

Providing utilities or custom code that the main container image lacks.

Letting image builders and deployers work independently, without having to collaborate on a single image.

Using a different filesystem view from the main container, so that sensitive data such as Secrets resources can be handled separately and safely.

Running serially to completion before the application containers start, so application startup can be delayed until its preconditions are met.

Init containers are defined through the initContainers field of the manifest:

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
  initContainers:
  - name: init-something
    image: busybox
    command: ['sh', '-c', 'sleep 10']

Lifecycle hook functions

Kubernetes provides two lifecycle hooks for containers:

postStart: runs immediately after the container is created, though Kubernetes cannot guarantee that it runs before the container's ENTRYPOINT.

preStop: runs immediately before the container terminates; it is called synchronously, so it blocks the container's deletion until it completes.

A hook handler can be implemented in two ways: Exec and HTTP. The former runs a user-defined command inside the current container when the hook fires; the latter issues an HTTP request against a URL served by the current container. Hook handlers are defined in the container's spec.lifecycle field.
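
A minimal sketch of both hooks on an nginx container; the hook commands are illustrative, not required by the image:

spec:
  containers:
  - name: myapp
    image: nginx:1.12-alpine
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]   # marker file; illustrative
      preStop:
        exec:
          command: ["/usr/sbin/nginx", "-s", "quit"]                  # graceful shutdown before deletion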

Container restart policy

A container program crash or a container requesting resources beyond its limits, among other causes, can terminate a Pod object. Whether the Pod is then restarted depends on its restart policy (restartPolicy) attribute:

Always: restart the Pod object whenever it terminates; this is the default.

OnFailure: restart the Pod object only when it terminates in error.

Never: never restart.

After a container fails and is restarted, each subsequent restart is delayed for an increasingly long interval: 10, 20, 40, 80, and 160 seconds, capped at 300 seconds. Note that restartPolicy is set at the Pod spec level, as in the sketch below.
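
A minimal sketch, with the policy value chosen for illustration:

spec:
  restartPolicy: OnFailure   # Pod-level field; applies to all containers in the Pod
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2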

The Pod termination process

The user sends a command to delete the Pod object.

The Pod object in the API server is updated with a grace period (30 seconds by default) beyond which the Pod is considered "dead".

The Pod is marked "Terminating".

(simultaneously with step 3) The kubelet notices that the Pod object has turned "Terminating" and starts the Pod shutdown process.

(simultaneously with step 3) The endpoints controller notices the Pod is shutting down and removes it from the endpoints lists of all Service resources that match it.

If the Pod object defines a preStop hook handler, it starts executing synchronously once the Pod is marked "Terminating"; if the preStop handler has not finished when the grace period expires, step 2 is re-executed with a small additional grace period of 2 seconds.

The container process in the Pod object receives a TERM signal.

When the grace period expires, any process still running in the Pod object is killed with a SIGKILL signal.

The kubelet completes the deletion by asking API Server to set this Pod resource's grace period to 0, after which the Pod is no longer visible to the user.

If the kubelet or the container manager restarts while waiting for processes to terminate, the termination starts over with the full deletion grace period.
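
The grace period can also be adjusted at deletion time; an illustrative sketch against the pod-example object used earlier:

kubectl delete pods pod-example --grace-period=60        # allow up to 60s for graceful shutdown
kubectl delete pods pod-example --grace-period=0 --force # skip the grace period; use with care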

Pod liveness probes

The kubelet can use liveness probes to decide when a container needs restarting. Three probe handlers are supported, defined under spec.containers.livenessProbe:

exec

httpGet

tcpSocket

exec

An exec probe determines container health by executing a user-defined command inside the target container. An exit status of 0 means the probe passed; any other value means failure. It has a single property, command, specifying the command to execute. For example:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
  labels:
    test: liveness-exec-demo
spec:
  containers:
  - name: liveness-exec-demo
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 60; rm -rf /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/healthy"]

This manifest starts a container from the busybox image and runs the command given in args, which creates the /tmp/healthy file at container start and deletes it 60 seconds later. The liveness probe runs test -e /tmp/healthy to check that /tmp/healthy exists; while the file exists the command returns status 0 and the probe passes. So 60 seconds in, the probe begins to fail, and the describe command shows an event recording the container restart.

httpGet

An httpGet probe issues an HTTP GET request to the target container and judges the result by the response code: 2xx or 3xx means the probe passed. The configurable fields are:

host: the host address to request; defaults to the Pod IP. It can also be set with a "Host:" header in httpHeaders.

port: the port to request; required.

httpHeaders: custom request headers.

path: the HTTP resource path to request.

scheme: the protocol for the connection, HTTP or HTTPS only; defaults to HTTP.

Example

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
  labels:
    test: liveness-http-demo
spec:
  containers:
  - name: liveness-http-demo
    image: nginx:1.12-alpine
    ports:
    - name: http
      containerPort: 80
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Healthy > /usr/share/nginx/html/healthz"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: http
        scheme: HTTP

This manifest uses a postStart hook to create a page file, healthz, dedicated to the httpGet check. The probe path is /healthz, the address defaults to the Pod IP, and the port references the container port named http. The health check passes after the container starts, but after the healthz page is deleted with the command below, the events show Container liveness-http-demo failed liveness probe and the container is restarted.

kubectl exec liveness-http-demo -- rm /usr/share/nginx/html/healthz

In general, a dedicated URL path should be defined for HTTP probes, and the web resource behind it should perform a lightweight yet thorough internal check of the application's key components, to confirm that it can serve clients completely and normally.

tcpSocket

A TCP-based liveness probe attempts to open a TCP connection to a specific container port. Compared with an HTTP-based check it is more efficient and cheaper, but less accurate. The configurable fields are:

host: the destination IP address of the connection; defaults to the Pod IP.

port: the destination port of the connection; required.

For example:

spec:
  containers:
  - name: liveness-tcp-demo
    image: nginx:1.12-alpine
    livenessProbe:
      tcpSocket:
        port: 80

Liveness probe behavior attributes

For a Pod with liveness configured, the describe command shows a line like the following; delay, timeout, and the other settings are at their default values because none were specified. The fields are described below (a combined example follows the list):

Liveness: tcp-socket :80 delay=0s timeout=1s period=10s #success=1 #failure=3

initialDelaySeconds: the probe delay, i.e. how long after container start the first probe runs; shown as the delay attribute. Integer; defaults to 0 seconds.

timeoutSeconds: the probe timeout; shown as the timeout attribute. Integer; defaults to 1s, minimum 1s.

periodSeconds: the probe frequency; shown as the period attribute. Integer; defaults to 10s, minimum 1s. Probing too often adds overhead to the Pod object; probing too rarely delays the response to failures.

successThreshold: while in a failed state, the minimum number of consecutive successful probes needed to count as passing; shown as the #success attribute. Integer; default 1, minimum 1.

failureThreshold: while in a successful state, the minimum number of consecutive failed probes needed to count as failing; shown as the #failure attribute. Integer; default 3, minimum 1.
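
Combining these fields with the earlier tcpSocket probe, a minimal sketch with illustrative values:

spec:
  containers:
  - name: liveness-tcp-demo
    image: nginx:1.12-alpine
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15   # wait 15s after start before the first probe
      timeoutSeconds: 2         # each probe times out after 2s
      periodSeconds: 5          # probe every 5s
      failureThreshold: 5       # 5 consecutive failures mark the container unhealthy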

Note, too, that a liveness probe only reflects the current service. If a backend it depends on (such as a database or cache service) fails, restarting the current service will not fix anything; the container will simply be restarted again and again until the backend recovers.

Pod readiness probes

After a Pod object starts, the containerized application usually needs some time to finish initializing, loading configuration or data, perhaps even warming up. A Pod object should therefore not handle client requests immediately after starting, but only once container initialization has completed and it has transitioned to the Ready state, especially when other Pod objects already provide the same service. The readiness probe is a periodic check of whether a container is ready; when the probe returns "success", the container is considered ready. It supports the same three handlers as the liveness probe, but is defined under the name readinessProbe. For example:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-tcp-demo
  labels:
    test: readiness-tcp-demo
spec:
  containers:
  - name: readiness-tcp-demo
    image: nginx:1.12-alpine
    readinessProbe:
      tcpSocket:
        port: 80

A Pod object without a readiness probe is marked ready as soon as it enters the "Running" state. In production, readiness probes must be defined for containers that need initialization time and for containers in critical Pod resources.

Resource requirements and resource constraints

The "computing resources" in K8S that can be requested or consumed by containers or Pod refer to CPU and memory, where CPU belongs to compressible resources, which can be contracted on demand, while memory is incompressible resources, and shrinking operations on it may lead to unpredictable problems. Currently, resource isolation belongs to the container level, so the configuration of CPU and memory resources needs to be carried out on the container in Pod. Two attributes are supported:

requests: defines the guaranteed amount of a resource, i.e. an amount the container may not always use, but which must be available whenever it needs it.

limits: caps the maximum amount of a resource the container may use.

In Kubernetes, one unit of CPU equals one virtual CPU (vCPU) on a virtual machine or one hyperthread (logical CPU) on a physical machine, and fractional amounts are supported: one core (1) equals 1000 millicores (m), so 500m equals 0.5 cores. Memory is measured as usual, in bytes by default, with E (Ei), P (Pi), T (Ti), G (Gi), M (Mi), and K (Ki) available as unit suffixes.

Resource requirements

apiVersion: v1
kind: Pod
metadata:
  name: stress-demo
spec:
  containers:
  - name: stress-demo
    image: ikubernetes/stress-ng
    command: ["/usr/bin/stress-ng", "-m", "1", "-c", "1", "--metrics-brief"]
    resources:
      requests:
        memory: "128Mi"
        cpu: "200m"

This manifest defines the container's resource requests as 128Mi of memory and 200m (0.2) CPU cores. It runs the stress-ng image (a multi-purpose system stress-testing tool), starting one memory stress worker (-m 1) and one dedicated CPU stress worker (-c 1). Run kubectl exec stress-demo -- top to see the actual usage. On my machine (6 cores, 16 GB of memory), the displayed memory footprint is about 262M and the CPU usage about 2/6, since the two test workers run at full load on two CPU cores; both figures are far above the values in requests, because resources are currently plentiful. Once resources become tight, the node only guarantees the container 1/5 of a CPU core (about 3.33% of a 6-core node), and the excess CPU is compressed. Memory, however, is incompressible, so this Pod may be OOM-killed when memory gets scarce. If requests is not defined, a CPU-starved container may be squeezed to a very low share by other Pods, or the Pod may not even be schedulable, while for incompressible memory the process may be killed by OOM. Therefore, when running business-critical Pods on a Kubernetes system, always use the requests attribute to define the containers' guaranteed resources.

The CPU and memory of each cluster node are fixed. When scheduling a Pod, the Kubernetes scheduler decides which nodes can accept and run it based on its containers' requests attributes. For each node, the amounts defined in the requests of every Pod object placed on it are reserved, cumulatively, until all Pod objects are allocated.

Resource limits

Resource requests only guarantee a container its minimum resources; to cap what a container can use, define resource limits. With a CPU limit defined, the container's processes cannot get CPU time beyond their quota; when a process tries to allocate memory beyond the limits definition, it is killed by the OOM killer. A sketch follows.
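
Extending the earlier stress-demo manifest with limits; the limit values are illustrative:

spec:
  containers:
  - name: stress-demo
    image: ikubernetes/stress-ng
    resources:
      requests:
        memory: "128Mi"
        cpu: "200m"
      limits:
        memory: "512Mi"   # allocations beyond this are OOM-killed
        cpu: "500m"       # CPU use beyond this quota is throttled, not killed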

Pod quality of service classes

Kubernetes allows a node's limits to be overcommitted, meaning the node may be unable to run all of its Pod objects at full resource load at once. Pod objects are therefore prioritized, and low-priority Pods are terminated first when memory is scarce. The priority is derived from the requests and limits attributes and falls into one of three QoS (Quality of Service) classes:

Guaranteed: every container in the Pod defines both limits and requests for all resource types, the limits equal the requests and are non-zero (when requests is undefined it defaults to limits); highest priority.

BestEffort: no container in the Pod sets a requests or limits attribute; lowest priority.

Burstable: neither Guaranteed nor BestEffort; medium priority.

This classification only takes effect when memory is scarce; when CPU cannot be satisfied, the Pod merely fails to obtain the resource temporarily. An illustrative Guaranteed-class spec follows.
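
For instance, a container whose requests equal its limits for both CPU and memory makes the Pod Guaranteed; the values here are illustrative:

spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
    resources:
      requests:          # equal to limits for both resources,
        memory: "256Mi"  # so the Pod is classed Guaranteed
        cpu: "500m"
      limits:
        memory: "256Mi"
        cpu: "500m"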

The above is how to use Pod in Kubernetes. The editor believes it touches on knowledge points we may see or use in daily work; I hope you can learn more from this article. For more details, please follow the industry information channel.
