This article introduces the Kubernetes container orchestration system through example analysis. It is meant as a reference for interested readers; I hope you learn a lot from it.
As a key member of the container orchestration ecosystem, Kubernetes is the open-source descendant of Borg, Google's large-scale internal container management system, and it draws on the experience and lessons Google accumulated over the past decade of running containers in production. Kubernetes provides mechanisms for application deployment, maintenance, and scaling, and makes it easy to manage containerized applications running across machines. It currently supports platforms such as GCE, vSphere, CoreOS, OpenShift, and Azure, and it can also run directly on physical machines. Kubernetes is an open container scheduling and management platform that imposes no language restrictions, supporting applications written in Java, C++, Go, Python, and more.
Kubernetes is a complete platform for supporting distributed systems. It provides multi-level security protection and admission mechanisms, multi-tenant application support, transparent service registration and service discovery, built-in load balancing, strong fault discovery and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and multi-granularity resource quota management, together with comprehensive management tools covering development, testing, deployment, and operations monitoring. In short, it is a one-stop platform for building and running distributed systems.
1. System Architecture
By node role, a Kubernetes system consists of masters and nodes.
Master
As the control node, the master schedules and manages the entire system. It includes the following components:
API Server: the entry point of the Kubernetes system. It wraps the add, delete, modify, and query operations on the core objects and exposes them as a RESTful interface to external clients and internal components. The REST objects it maintains are persisted to etcd.
Scheduler: responsible for the cluster's resource scheduling, assigning a machine to each newly created pod. This work is split into its own component, which means it can easily be replaced with a different scheduler.
Controller Manager: responsible for running the various controllers. There are currently two types:
Endpoint Controller: periodically associates services with pods (the association is maintained by endpoint objects), ensuring that the service-to-pod mapping is always up to date.
Replication Controller: periodically associates replication controllers with pods, ensuring that the number of replicas defined by a replicationController always matches the number of pods actually running.
Node
Nodes are the worker machines that run business containers. Each node contains the following components:
Kubelet: responsible for managing and controlling the Docker containers on the node, such as starting and stopping them and monitoring their running state. It periodically fetches the pods assigned to its machine from etcd and starts or stops containers according to the pod information. It also accepts HTTP requests from the API Server and reports pod status to it.
Kube Proxy: responsible for providing a proxy for pods. It periodically fetches all services from etcd and creates proxies according to the service information. When a client pod accesses another pod, the request is forwarded through this local proxy.
The following diagram, borrowed from the web, shows the relationship between these functional components:
2. Basic Concepts
Node
A node is a worker host in the Kubernetes cluster, as opposed to the master; in earlier versions it was called a minion. A node can be a physical host or a virtual machine (VM). Each node runs kubelet, the service used to start and manage pods, and is managed by the master. The service processes running on a node are kubelet, kube-proxy, and the docker daemon.
A node carries the following information:
Node address: the IP address of the host, or the node ID
Node running state: pending, running, or terminated
Node condition: describes the condition of a node in the running state. Currently there is only one condition, Ready, which indicates that the node is healthy and can accept instructions from the master to create pods.
Node system capacity: describes the system resources available on the node, including CPU, memory, the maximum number of schedulable pods, and so on.
Pod
A pod is the most basic operational unit in Kubernetes and contains one or more closely related containers. A pod can be regarded as the "logical host" of the application layer in a containerized environment. The container applications in a pod are usually tightly coupled. Pods are created, started, and destroyed on nodes.
Why does Kubernetes wrap containers in another layer, the pod? One important reason is that communication between Docker containers is limited by the Docker networking model: in Docker's world, one container must use a link to access a service (port) provided by another container, and maintaining links between large numbers of containers is very heavy work. By combining multiple containers into one virtual "host" through the pod concept, containers can communicate with each other simply via localhost.
The application containers in a pod share a set of resources:
PID namespace: the different applications in a pod can see each other's process IDs
Network namespace: the containers in a pod share the same IP address and port range
IPC namespace: the containers in a pod can communicate with each other using System V IPC or POSIX message queues
UTS namespace: the containers in a pod share a hostname
Volumes (shared storage volumes): every container in a pod can access the volumes defined at the pod level
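To make this concrete, here is a minimal sketch of a two-container pod as a YAML manifest (the YAML form is equivalent to the JSON form used later in this article; the names, images, and paths are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
  labels:
    app: web
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}                  # pod-level volume, visible to both containers
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx   # nginx writes its logs here
  - name: log-tailer
    image: busybox
    command: ["sh", "-c", "touch /logs/access.log && tail -f /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs            # same volume, different mount point

Both containers mount the same emptyDir volume and share the pod's network namespace, so they could also reach each other over localhost without any Docker link.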
Label
Label is a core concept in the Kubernetes system. Labels are attached as key/value pairs to objects of all kinds, such as pods, services, RCs, and nodes. Labels define identifying attributes of these objects and are used to manage and select them. A label can be attached to an object when it is created, or managed through the API after the object exists.
Once labels are defined on objects, other objects can use label selectors to select the objects they act on.
Labels are defined as key/value pairs in an object's metadata, and a label selector consists of several comma-separated conditions over those pairs. For example, an object might carry the labels:
"label": {
"key1": "value1",
"key2": "value2"
}
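A selector matching the labels above can be written as key1=value1,key2=value2. In manifest form it appears as a selector field, for example in a service (a sketch; the service name is an illustrative assumption):

apiVersion: v1
kind: Service
metadata:
  name: selector-demo
spec:
  selector:
    key1: value1    # conditions are ANDed: an object is selected
    key2: value2    # only if it carries both label pairs
  ports:
  - port: 80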
Replication controller (RC)
The replication controller (RC) is a core concept in the Kubernetes system that defines the number of pod replicas. The Controller Manager process on the master uses RC definitions to create, monitor, start, and stop pods.
According to the replication controller definition, Kubernetes ensures that the user-specified number of pod "replicas" is running at all times. If too many pod replicas are running, the system stops some; if too few, it starts more. In short, through the RC definition, Kubernetes always maintains the number of replicas the user expects in the cluster.
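A minimal RC sketch (the name, labels, and image are illustrative assumptions):

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-rc
spec:
  replicas: 3                # keep exactly 3 replicas running at all times
  selector:
    app: frontend            # pods with this label count toward the total
  template:                  # pod template used to create missing replicas
    metadata:
      labels:
        app: frontend        # must match the selector above
    spec:
      containers:
      - name: frontend
        image: nginx
        ports:
        - containerPort: 80

If a pod labeled app=frontend dies, a replacement is created from the template; if extra matching pods appear, some are deleted until exactly three remain.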
Service
In the Kubernetes world, although every pod is assigned its own IP address, that address disappears when the pod is destroyed. This raises a question: if a group of pods forms a cluster to provide a service, how do clients access them?
Kubernetes's service is the core concept for solving this problem. A service can be seen as the external access interface of a group of pods that provide the same service. Which pods a service applies to is defined by its label selector.
A pod's IP address is assigned by the docker daemon from the address range of the docker0 bridge, but the cluster IP of a service is a virtual IP inside the Kubernetes system, dynamically assigned by the system. Compared with a pod's IP, a service's cluster IP is relatively stable: it is assigned when the service is created and does not change until the service is destroyed.
Because the IP assigned to a service comes from the cluster IP range pool and can only be reached from inside the cluster, it is accessible to all pods but not to the outside world. If a service acts as a front end and must serve clients outside the cluster, it needs a public entry point.
Kubernetes supports two service types that provide external access: NodePort and LoadBalancer.
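As a hedged sketch, a NodePort service exposing the frontend pods from the RC example might look like this (the port numbers are assumptions; nodePort must fall within the cluster's configured node port range):

apiVersion: v1
kind: Service
metadata:
  name: frontend-external
spec:
  type: NodePort             # expose the service on a port of every node
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80                 # port on the service's cluster IP
    targetPort: 80           # container port on the backend pods
    nodePort: 30080          # external clients reach any node on this port

With type LoadBalancer, a cloud provider would additionally provision an external load balancer in front of the node ports.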
Volume (Storage Volume)
A volume is a shared directory in a pod that multiple containers can access. Kubernetes's volume concept is similar to Docker's, but not exactly the same. A volume in Kubernetes shares its pod's lifecycle rather than the lifecycles of the individual containers: when a container terminates or restarts, the data in the volume is not lost. Kubernetes supports many volume types, and a pod can use any number of volumes at the same time. (A combined example follows the list of types below.)
(1) emptyDir: an emptyDir volume is created when the pod is assigned to a node. As the name suggests, its initial content is empty. All containers in the same pod can read and write the same files in the emptyDir. When the pod is removed from the node, the data in the emptyDir is permanently deleted.
(2) hostPath: mounts a file or directory from the host into the pod. Typical uses:
Container applications whose log files must be kept permanently can store them on the host's high-speed file system.
Container applications that need access to the internal data structures of the Docker engine on the host can define a hostPath of the host's /var/lib/docker directory to access the Docker file system directly.
(3) gcePersistentDisk: this volume type uses files on a persistent disk (PD) in Google Compute Engine (GCE). Unlike with emptyDir, the contents of a PD are preserved permanently: when the pod is deleted, the PD is merely unmounted, not deleted. Note that you must create the persistent disk before you can use a gcePersistentDisk volume.
(4) awsElasticBlockStore: similar to the GCE type, this volume uses an Amazon Web Services (AWS) EBS volume and can be mounted into a pod. Note that you must create the EBS volume before you can use awsElasticBlockStore.
(5) nfs: mounts a shared directory provided by an NFS (Network File System) server into the pod. A deployed NFS server is required.
(6) iscsi: mounts a directory on an iSCSI storage device into the pod.
(7) glusterfs: mounts a directory from the open-source GlusterFS network file system into the pod.
(8) rbd: mounts a Rados Block Device (Linux shared block storage) into the pod.
(9) gitRepo: mounts an empty directory and clones a git repository into it for the pod to use.
(10) secret: a secret volume supplies sensitive information to a pod. Secrets defined in Kubernetes can be mounted directly as files for the pod to access. Secret volumes are backed by tmpfs (a memory file system), so this type of volume is never persisted to disk.
(11) persistentVolumeClaim: requests space from a PersistentVolume (PV). PVs are usually a form of network storage, such as GCEPersistentDisk, AWSElasticBlockStore, NFS, or iSCSI.
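The combined example promised above: a pod using an emptyDir volume for scratch data and a hostPath volume to inspect the host's Docker directory (a sketch; the names, image, and mount paths are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  volumes:
  - name: cache
    emptyDir: {}                        # deleted permanently with the pod
  - name: docker-internals
    hostPath:
      path: /var/lib/docker             # host directory, as described in (2)
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
    - name: docker-internals
      mountPath: /host-docker
      readOnly: true                    # read-only access to host data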
Namespace
The namespace is another very important concept in the Kubernetes system. By "assigning" the objects in the system to different namespaces, they form logically grouped projects, groups, or user groups, so that different groups can share the resources of the whole cluster while being managed separately.
After a Kubernetes cluster starts, it creates a namespace named "default". If no namespace is explicitly specified, the pods, RCs, and services that users create are all placed by the system into the "default" namespace.
Using namespaces to organize the various Kubernetes objects enables the grouping of users, that is, "multi-tenant" management. Resource quotas can also be configured and managed separately for different tenants, which makes the configuration of the whole cluster very flexible and convenient.
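A hedged sketch of creating a namespace and placing a pod inside it (the names are illustrative assumptions):

apiVersion: v1
kind: Namespace
metadata:
  name: development            # one logical group, e.g. a team or tenant
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  namespace: development       # omit this field and the pod lands in "default"
spec:
  containers:
  - name: app
    image: nginx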
Annotation
Annotations, like labels, are defined as key/value pairs. But labels have strict naming rules; they define identifying metadata for Kubernetes objects and are used by label selectors. Annotations, by contrast, are user-defined "additional" information of arbitrary form, intended to be consumed by external tools.
Information typically recorded in annotations includes:
Build information, release information, and Docker image information, such as timestamps, release IDs, PR numbers, image hash values, and registry addresses.
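For example, such details might be attached like this (the keys and values are illustrative assumptions; unlike label keys, annotation keys are never used by selectors):

apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod
  annotations:
    build/commit-id: "9f6e2a1"          # free-form values for external tools
    release/id: "2016-03-15-001"
    image/hash: "sha256:0d4f1e2a"
spec:
  containers:
  - name: app
    image: nginx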
Typical process
Taking the creation of a pod as an example, the typical Kubernetes workflow is shown in the following figure:
3. Components
Replication Controller
To distinguish the Replication Controller (replica controller) in the Controller Manager from the resource object of the same name, we abbreviate the resource object as RC and use Replication Controller to mean the replica controller.
The core role of the Replication Controller is to ensure that, at any time, the intended number of pod replicas associated with each RC is running normally in the cluster. If too many such pod replicas exist, the Replication Controller destroys some; if too few, it creates more until the preset replica count is reached. It is best not to create pods directly, bypassing RC: because the Replication Controller manages pod replicas through RCs, automatically creating, supplementing, replacing, and deleting them, it improves the system's disaster recovery capability and reduces the losses caused by events such as node crashes. Even if an application needs only a single pod replica, it is strongly recommended to define it through an RC.
The objects the Replication Controller manages are pods, so its operation is closely tied to pod state and restart policy.
Common usage patterns for replica controllers:
(1) Rescheduling: whether you want 1 replica or 1000, the replica controller ensures that the specified number of pod replicas exists in the cluster. If a node fails or a replica is terminated, pods are rescheduled until the expected number is running normally.
(2) Scaling: scaling the number of replicas up or down, manually or through an autoscaling agent, is as easy as changing the replica controller's spec.replicas value.
(3) Rolling updates: the replica controller is designed to assist rolling updates of a service by replacing pods one by one. The recommended approach is to create a new RC with one replica, then repeatedly increase the new RC's replica count by 1 and decrease the old RC's count by 1 until the old RC reaches zero replicas, and finally delete the old RC.
The discussion of rolling updates shows that an application may have multiple versions released at once during an update. In fact, it is normal for an application in production to carry several release versions. Through the RC's label selector, we can easily track the multiple release versions of an application.
Node controller
The Node Controller is responsible for discovering, managing, and monitoring the nodes in the cluster. At startup, kubelet registers its node's information through the API Server and then sends updated node information periodically; the API Server writes it to etcd. The node information stored in etcd includes health status, resources, name, address, operating system version, Docker version, kubelet version, and so on. A node's health status can be ready (True), not ready (False), or unknown (Unknown).
(1) When the Controller Manager starts, if the --cluster-cidr parameter is set, it generates a CIDR address for every node whose spec.podCIDR is unset and uses it to set that node's spec.podCIDR attribute, preventing CIDR conflicts between nodes.
(2) It reads node information one by one, retrying several times to update the node state kept in its nodeStatusMap, and compares the incoming information with the saved state. If no status has been received from the node's kubelet, if status is being received from that kubelet for the first time, or if the node's state has become unhealthy during this processing, the node's state is saved into nodeStatusMap, with the system time of the machine running the Node Controller recorded as the probe time and the state-transition time.
If no status for a node has been received within a certain window, the node's state is set to "unknown" and saved through the API Server.
(3) It then reads the node information once more. If a node's state has become non-"ready", the node is added to the deletion queue; otherwise it is removed from that queue. If the node's state is non-"ready" and a Cloud Provider is configured, the Node Controller queries the Cloud Provider about the node. If the node is found to have failed, its entry in etcd is deleted, along with the information about the pods and other resources associated with it.
ResourceQuota controller
As a container cluster management platform, Kubernetes also provides advanced resource quota management. Quota management ensures that specified objects never over-consume system resources at any time, preventing design or implementation defects in individual business processes from throwing the whole system into disorder or even causing unplanned downtime; this matters greatly for the smooth, stable operation of the whole cluster.
Currently, Kubernetes supports three levels of resource quota management:
(1) Container level: quotas on CPU and memory.
(2) Pod level: limits on the resources available to all containers within a pod.
(3) Namespace level: resource limits per namespace (usable for multi-tenancy), including the number of pods, replication controllers, services, ResourceQuotas, secrets, and persistent volumes (PV) that may be held.
Kubernetes quota management is implemented through the admission control mechanism. The two quota-related admission controllers are LimitRanger, which acts on pods and containers, and ResourceQuota, which acts on namespaces. In addition, if a resource quota is defined, the scheduler takes it into account during pod scheduling, ensuring that scheduling never exceeds the quota.
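A namespace-level quota sketch (the namespace name and the limits are illustrative assumptions):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-quota
  namespace: development
spec:
  hard:
    pods: "10"                      # at most 10 pods in this namespace
    services: "5"
    replicationcontrollers: "3"
    secrets: "10"
    persistentvolumeclaims: "4"     # claims against persistent volumes

Once this object exists, the ResourceQuota admission controller rejects API requests that would push the namespace past any of these counts.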
A typical resource control process is shown in the following figure:
Namespace controller
Users can create a new namespace through the API Server; it is saved in etcd, and the Namespace Controller periodically reads namespace information through the API Server. If a namespace is marked by the API for graceful deletion (a deletion period is set, i.e. the deletionTimestamp attribute is set), the namespace's state is set to "terminating" and saved to etcd. Meanwhile, the Namespace Controller deletes the ServiceAccount, RC, Pod, Secret, PersistentVolume, LimitRange, ResourceQuota, Event, and other resource objects under that namespace.
While the namespace's state is "terminating", the NamespaceLifecycle plug-in of the Admission Controller prevents new resources from being created in it. Once the Namespace Controller has deleted all resource objects in the namespace, it performs a finalize operation on the namespace, clearing the namespace's spec.finalizers field.
When the Namespace Controller observes that a namespace has a deletion period set (i.e. deletionTimestamp is set) and that its spec.finalizers field is empty, it deletes the namespace resource through the API Server.
Kubernetes security control
ServiceAccount Controller and Token Controller are two security-related controllers. The ServiceAccount Controller is created when the Controller Manager starts. It listens for ServiceAccount deletion events and for namespace creation and modification events; if a namespace lacks a default ServiceAccount, the ServiceAccount Controller creates one for it.
When "--admission_control=ServiceAccount" is added to the API Server's startup parameters, the API Server creates a key and certificate on startup (/var/run/kubernetes/apiserver.crt and apiserver.key). If kube-controller-manager is then started with --service_account_private_key_file=/var/run/kubernetes/apiserver.key, you will find that after the Kubernetes master starts, the system automatically creates a secret for each ServiceAccount as it is created.
If the Controller Manager is started with the service-account-private-key-file parameter, and the file it points to contains a PEM-encoded RSA private key, the Controller Manager creates a Token Controller object.
Token controller
The Token Controller object listens for ServiceAccount creation, modification, and deletion events and handles each case differently. For creation and modification events, it reads the ServiceAccount's information; if the ServiceAccount lacks a ServiceAccount secret (the secret used to access the API Server), it uses the private key mentioned above to create a JWT token for the ServiceAccount, places the token, along with the root CA if one was given as a startup parameter, in a new secret, attaches that secret to the ServiceAccount, and updates the ServiceAccount in etcd. For deletion events, it deletes the secrets associated with the ServiceAccount.
The Token Controller also listens for secret creation, modification, and deletion events and handles each case differently. For creation and modification events, it reads the ServiceAccount named in the secret's annotations and, if needed, creates a token for the secret tied to that ServiceAccount. For deletion events, it removes the reference between the secret and the related ServiceAccount.
Service controller & endpoint controller
A Kubernetes service is an abstraction that defines a collection of pods together with a policy for accessing them; it is sometimes referred to as a microservice.
A service in Kubernetes is a resource object and, like all other resource objects, a new instance can be created through the API Server's POST interface. The example code below creates a service named "MyService" containing a label selector that picks all pods carrying the label "app=MyApp" as the service's pod collection. Requests to port 80 of the service are forwarded to port 9376 of the pods in the collection, and Kubernetes assigns the service a cluster IP (that is, a virtual IP).
{"kind": "service", "apiVersion": "v1", "metadata": {"name": "MyService"}, "spec": {"selector": {"app": "MyApp"}, "ports": [{"protocol": "TCP" "port": 80, "targetPort": 9376]},} four. Feature Service cluster access process (service discovery)
A process called kube-proxy runs on every node in a Kubernetes cluster. It watches the master for services and endpoints being added and removed, as shown in step 1 of the figure.
For every service, kube-proxy opens a (randomly chosen) port on the local host. Any connection to that port is proxied to one of the corresponding backend pods. Kube-proxy decides which backend pod to use based on a round-robin algorithm and the service's session affinity (SessionAffinity), as shown in step 2.
Finally, as shown in step 3, kube-proxy installs rules into the node's iptables that redirect captured traffic to the random port just mentioned. Traffic arriving at that port is then forwarded by kube-proxy to the corresponding backend pod.
After a service is created, the service endpoint model builds the list of backend pod IPs and ports (held in the endpoints object), from which kube-proxy selects a service backend. Nodes in the cluster can then reach the service's backend pods through the virtual IP and port.
By default, Kubernetes assigns each service a cluster IP (virtual IP), but in some cases you may want to choose the cluster IP yourself. To do so, simply set the desired address in the service's spec.clusterIP field when defining the service.
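A hedged sketch; the address is an assumption and must fall inside the cluster IP range pool configured for the cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: 10.254.0.100      # manually chosen virtual IP
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376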
Scheduler (scheduling)
The scheduler acts as the "connecting link" in the Kubernetes system. Upstream, it receives the new pods created via the Controller Manager and arranges a "home" for each of them: the target node. Downstream, once placement is done, the kubelet service process on the target node takes over and is responsible for the rest of the pod's lifecycle.
Specifically, the scheduler's job is to bind each pod awaiting scheduling (a pod newly created through the API, a pod created by the Controller Manager to restore a replica count, and so on) to a suitable node in the cluster, according to a specific scheduling algorithm and policy, and to write the binding into etcd. The scheduling process involves three objects: the list of pods to be scheduled, the list of available nodes, and the scheduling algorithm and policies. Simply put, the scheduling algorithm selects the most suitable node for each pod on the waiting list.
The kubelet on the target node then hears, through the API Server, the pod binding event generated by the scheduler, fetches the corresponding pod, downloads its image, and starts the container.
Scheduler (scheduling policy)
The default scheduling flow currently provided by the scheduler has two steps:
(1) Pre-selection: traverse all target nodes and filter out the candidates that meet the requirements. Kubernetes has a variety of built-in pre-selection policies (xxx Predicates) for users to choose from.
(2) Prioritization: on the basis of the first step, apply priority policies (xxx Priority) to score each candidate node; the highest score wins.
The scheduler's scheduling flow is implemented through plug-in-loaded "algorithm providers" (AlgorithmProvider). An AlgorithmProvider is in fact a set of pre-selection policies plus a set of priority policies. The function for registering an AlgorithmProvider is:
func RegisterAlgorithmProvider(name string, predicateKeys, priorityKeys util.StringSet)
It takes three parameters:
name: the algorithm name
predicateKeys: the set of pre-selection policies the algorithm uses
priorityKeys: the set of priority policies the algorithm uses
Seven pre-selection policies are available in the scheduler. A node enters the next stage only after passing the five default pre-selection policies: PodFitsPorts, PodFitsResources, NoDiskConflict, PodSelectorMatches, and PodFitsHost.
Each node that passes pre-selection is then scored by the priority policies, and the node with the highest total score is chosen as the result of prioritization (and hence of scheduling). LeastRequestedPriority selects the node with the lowest resource consumption:
(1) compute totalMilliCPU, the CPU usage of the pods running on the candidate node plus the pod being scheduled
(2) compute totalMemory, the memory usage of the pods running on the candidate node plus the pod being scheduled
(3) compute each node's score, roughly as follows:
score = int(((nodeCpuCapacity - totalMilliCPU) * 10 / nodeCpuCapacity + (nodeMemoryCapacity - totalMemory) * 10 / nodeMemoryCapacity) / 2)
CalculateNodeLabelPriority: scores nodes according to the CheckNodeLabelPresence policy
BalancedResourceAllocation: selects the node with the most balanced resource usage:
(1) compute totalMilliCPU, the CPU usage of the pods running on the candidate node plus the pod being scheduled
(2) compute totalMemory, the memory usage of the pods running on the candidate node plus the pod being scheduled
(3) compute each node's score, roughly as follows:
score = int(10 - math.Abs(totalMilliCPU/nodeCpuCapacity - totalMemory/nodeMemoryCapacity) * 10)
Node management
Node management includes node registration, status reporting, pod management, container health checks, and resource monitoring.
Node registration
In a Kubernetes cluster, a kubelet service process starts on each node. This process handles the tasks the master sends down to its node and manages the pods, and the containers within them, on that node. Each kubelet registers its node's information on the API Server, periodically reports the node's resource usage to the master, and monitors container and node resources through cAdvisor.
A node decides whether to register itself via the kubelet startup parameter --register-node. If this parameter is true, kubelet tries to register itself with the API Server. When self-registering, kubelet is also started with the following parameters:
--api-servers: tells kubelet the location of the API Server
--kubeconfig: tells kubelet where to find the certificates used to access the API Server
--cloud-provider: tells kubelet how to read metadata about itself from the cloud service provider (IaaS)
Status report
Kubelet registers its node through the API Server at startup and thereafter periodically sends updated node information; the API Server writes it into etcd. The kubelet startup parameter --node-status-update-frequency sets how often kubelet reports node status to the API Server; the default is 10 seconds.
Pod management
Kubelet obtains the list of pods to run on its own node in the following ways:
(1) File: files in the configuration directory specified by kubelet's --config startup parameter. The interval at which this directory is rechecked is set by --file-check-frequency and defaults to 20 seconds.
(2) HTTP endpoint (URL): set through the --manifest-url parameter. The interval at which the HTTP endpoint is rechecked is set by --http-check-frequency and defaults to 20 seconds.
(3) API Server: kubelet listens on the etcd directory through the API Server and synchronizes the pod list.
Pods created by means other than the API Server are called static pods. Kubelet reports the status of static pods to the API Server, which creates a matching mirror pod for each. The mirror pod's state truly reflects the static pod's state; when the static pod is deleted, its mirror pod is deleted too. Kubelet listens on the /registry/node/ and /registry/pods directories in the same way an API Server client does, using watch+list, and synchronizes the information it obtains into its local cache.
Because kubelet listens on etcd, every operation targeting pods on its node is noticed. If a new pod is bound to the node, kubelet creates it as the pod manifest requires. If a local pod has been modified, kubelet makes the corresponding change, for example removing a container from the pod by deleting it through the Docker client. If a pod of the node has been deleted, kubelet deletes the pod and removes its containers through the Docker client.
When kubelet reads a monitored event that creates or modifies a pod, it does the following:
(1) create a data directory for the pod
(2) read the pod manifest from the API Server
(3) mount the pod's external volumes (External Volume)
(4) download the secrets used by the pod
(5) check the pods already running on the node; if the pod has no containers or its pause container is not started, first stop all container processes in the pod. If containers must be deleted from the pod, delete them.
(6) create each pod's pause container from the "kubernetes/pause" image; this container takes over the network of all the other containers in the pod
(7) for each container in the pod:
Compute a hash value for the container, then look up the corresponding container in Docker by name and compare hash values. If the container is found and the two hashes differ, stop the container process in Docker along with its associated pause container process; if they are the same, do nothing.
If the container has terminated and no restart policy (restartPolicy) is specified, do nothing.
Call the Docker client to pull the container image and run the container.
Container health checks
A pod checks container health with two kinds of probes. The first is the LivenessProbe, which determines whether a container is healthy and tells kubelet when it is not. If the LivenessProbe detects an unhealthy container, kubelet deletes the container and reacts according to the container's restart policy. If a container has no LivenessProbe, kubelet treats the probe as always returning "success". The second is the ReadinessProbe, which determines whether a container has started and is ready to receive requests. If the ReadinessProbe fails, the pod's status is updated and the endpoint controller removes the endpoint entry containing that pod's IP address from the service's endpoints.
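A sketch showing both probe types on one container (the paths, port, and timings are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:               # failure: kubelet kills the container and
      httpGet:                   # applies the pod's restart policy
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
    readinessProbe:              # failure: the pod's IP is removed from the
      httpGet:                   # service's endpoint list
        path: /ready
        port: 80
      initialDelaySeconds: 5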