Introduction to K8s from Scratch | Application Orchestration and Management in K8s


Author | Zhang Zhen, Senior Technical Expert at Alibaba Cloud

I. Resource metadata

1. Kubernetes resource objects

As we know, a Kubernetes resource object consists of two parts: Spec and Status. The Spec describes the desired state, and the Status describes the observed state. Today we introduce another part of a K8s object: its metadata. The metadata mainly includes Labels, which identify a resource; Annotations, which describe a resource; and OwnerReference, which describes the relationship between multiple resources. This metadata plays a very important role in the operation of K8s.

2. Labels

The first and most important kind of metadata is the resource label. Labels are identifying Key:Value metadata; the figure below showed several common labels. The first three labels are attached to Pod objects and identify the application's deployment environment, release maturity, and version, respectively. As the examples show, a label name may carry a domain-name prefix that describes the system or tool that set the label. The last label is attached to a Node object, with a beta version identifier added to the domain name. Labels are mainly used to filter and group resources, and you can query resources by label with SQL-like selectors.
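As a concrete illustration, a Pod manifest carrying the three kinds of application labels just described might look like the following sketch; the label names and values here are illustrative, not copied from the original figure:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx1
      labels:
        environment: dev                  # application deployment environment
        release: stable                   # release maturity
        app.kubernetes.io/version: "1.0"  # version, key carries a domain-name prefix
    spec:
      containers:
      - name: nginx
        image: nginx:1.25

A well-known example of the beta-in-domain style of Node label mentioned above is failure-domain.beta.kubernetes.io/region.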

3. Selectors

The most common selector is the equality-based selector. A simple example: suppose the system has four Pods, each labeled with its tier (Tie) and environment (Env). Through the label Tie=front we can match the Pods in the left column. An equality-based selector can also contain multiple equality conditions, which are combined with a logical AND: the selector Tie=front,Env=dev filters out all Pods with Tie=front and Env=dev, i.e. the Pod in the upper-left corner of the figure below. The other kind is the set-based selector. In this example, the selector Env in (test, gray) filters all Pods whose environment is test or gray. Besides the in set operation there is also notin: for instance, Tie notin (front, back) filters all Pods whose Tie is neither front nor back. You can also filter on the mere existence of a label: the selector release matches all Pods that carry a release label, whatever its value. Set-based and equality-based selectors can be joined with ",", which again expresses a logical AND.
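In a resource spec, these same two kinds of selector appear as matchLabels and matchExpressions. A minimal sketch, reusing the tie/env label keys from the example above (the two stanzas are combined with AND):

    selector:
      matchLabels:              # equality-based: tie=front AND env=dev
        tie: front
        env: dev
      matchExpressions:         # set-based: env in (test, gray)
      - key: env
        operator: In
        values: [test, gray]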

4. Annotations

Another important kind of metadata is the annotation. Annotations are generally used by systems or tools to store non-identifying information about a resource, and can extend the resource's spec/status description. Here are a few examples. The first stores the certificate ID of an Alibaba Cloud load balancer; like a label, an annotation key may carry a domain-name prefix, and here it includes version information as well. The second annotation stores the configuration of an nginx access layer; note that its value contains "," and other special characters that could not appear in a label. The third annotation is typically seen on a resource after a kubectl apply operation: its value is structured data, in fact a JSON string recording the JSON description of the resource at the last kubectl apply.

5. OwnerReference

The last kind of metadata is the OwnerReference. The "owner" generally refers to a collection-class resource, such as the Pod collections ReplicaSet and StatefulSet, which will be discussed in later lessons.
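The three annotation examples just described might appear in a manifest roughly as follows. This is a sketch: the first key is an illustrative reconstruction of an Alibaba Cloud load balancer annotation, and the values are made up; only kubectl.kubernetes.io/last-applied-configuration is guaranteed to appear exactly as named, since kubectl apply writes it itself:

    metadata:
      annotations:
        # certificate ID of a cloud load balancer; key has a domain-name prefix (illustrative key/value)
        service.beta.kubernetes.io/alibaba-cloud-loadbalancer-cert-id: "cert-12345"
        # nginx access-layer configuration; the value contains "," which a label could not hold
        nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,172.16.0.0/12"
        # written automatically by kubectl apply: a JSON snapshot of the last applied configuration
        kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"Pod","metadata":{...}}'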

The controller of a collection-class resource creates the corresponding owned resources. For example, the ReplicaSet controller creates Pods as it operates, and the OwnerReference of each created Pod points back to the ReplicaSet. The OwnerReference lets users easily find the object that created a resource, and it can also be used to implement the effect of cascading deletion.
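Concretely, the metadata of a Pod created by a ReplicaSet looks roughly like this sketch (the Pod name matches the demonstration below; the uid value is hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-replicasets-rhd68
      ownerReferences:
      - apiVersion: apps/v1
        kind: ReplicaSet
        name: nginx-replicasets                      # the owner that created this Pod
        uid: d9607e19-f88f-11e6-a518-42010a800195    # hypothetical UID
        controller: true
        blockOwnerDeletion: true                     # participates in cascading deletion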

II. Hands-on demonstration

Here we use kubectl to connect to a K8s cluster created in advance in ACK, and demonstrate how to view and modify the metadata of K8s objects: a Pod's labels, annotations, and its OwnerReference.

First, look at the current state of the cluster. Listing the Pods shows that there are none yet:

    kubectl get pods

Now create two Pods from YAML files prepared in advance:

    kubectl apply -f pod1.yaml
    kubectl apply -f pod2.yaml

Look at the Pods' labels using the --show-labels option; both Pods carry a deployment-environment label and a tier label:

    kubectl get pods --show-labels

We can also view specific resource information another way. Inspect the first Pod, nginx1, with -o yaml output; its metadata includes a labels field containing two labels:

    kubectl get pods nginx1 -o yaml | less

Now think about how to modify an existing label on the Pod. Let's change its deployment environment from development to test by specifying the Pod name and assigning the value test to env:

    kubectl label pods nginx1 env=test

An error is reported here, which says that the label already has a value. To overwrite it, we must add the --overwrite option, after which the labeling succeeds:

    kubectl label pods nginx1 env=test --overwrite

Looking at the cluster's current label settings, we can see that nginx1 has indeed acquired the deployment-environment label test:

    kubectl get pods --show-labels

To remove a label from a Pod, the operation is the same as adding one, except that instead of key=value we write the label name followed by a minus sign:

    kubectl label pods nginx1 tie-

Checking the configured labels again shows that the removal succeeded: the nginx1 Pod no longer has its tie=front label:

    kubectl get pods --show-labels

With the Pods labeled, let's see how to match them with a label selector. A selector is specified with the -l option. First try an equality-based selector, filtering for Pods whose deployment environment equals test; one Pod is matched:

    kubectl get pods --show-labels -l env=test

Multiple equality conditions are combined with AND, so additionally requiring env=dev matches no Pod at all:

    kubectl get pods --show-labels -l env=test,env=dev

With env=dev,tie=front we match the second Pod, nginx2:

    kubectl get pods --show-labels -l env=dev,tie=front

Next try a set-based selector. This time we want to match all Pods whose deployment environment is either test or dev, so we quote the selector and list the set of environments in parentheses; both of the created Pods are matched:

    kubectl get pods --show-labels -l 'env in (dev,test)'

Adding an annotation to a Pod is the same operation as labeling, except that the label command is replaced by the annotate command; we again specify the type and name, followed by the annotation's key=value. The value may be an arbitrary string, including spaces or commas:

    kubectl annotate pods nginx1 my-annotate='my annotate,ok'

Looking at the Pod's metadata again, we can see the my-annotate annotation among its annotations. We can also see that kubectl apply itself added an annotation, whose value is likewise a JSON string:

    kubectl get pods nginx1 -o yaml | less

Finally, let's see where a Pod's OwnerReference comes from. The original Pods were created directly as Pod resources; this time we create Pods a different way, by creating a ReplicaSet object:

    kubectl apply -f rs.yaml
    kubectl get replicasets nginx-replicasets -o yaml | less

The spec of this ReplicaSet says that two Pods are to be created, and its selector matches Pods labeled with the production deployment environment. Listing the Pods in the cluster now:

    kubectl get pods

we find two additional Pods. Looking closely at one of them, we can see that a Pod created by a ReplicaSet has a distinctive feature: an OwnerReference pointing to an object of type ReplicaSet named nginx-replicasets:

    kubectl get pods nginx-replicasets-rhd68 -o yaml | less
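The rs.yaml file itself is not shown in the demonstration; a minimal manifest consistent with the narration (two replicas, selector on the production environment label) might look like this:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: nginx-replicasets
    spec:
      replicas: 2                 # two Pods are to be created
      selector:
        matchLabels:
          env: production         # matches Pods in the production environment
      template:
        metadata:
          labels:
            env: production
        spec:
          containers:
          - name: nginx
            image: nginx:1.25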

III. The controller pattern

1. The control loop

At the core of the controller pattern is the concept of the control loop. A control loop comprises three logical components: the controller, the system being controlled, and a sensor that observes the system; these components are logical, of course. The outside world controls a resource by modifying its spec. The controller compares the resource's spec with its status and computes a diff, and the diff determines what control operation to perform on the system. The control operation causes the system to produce new output, which the sensor reports back in the form of the resource's status. Each component of the loop runs independently and continuously, driving the system ever closer to the final state expressed by the spec.
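As a toy illustration of the spec-versus-status comparison (purely a sketch of the idea, not how real controllers are implemented), one could poll the diff from the command line against the ReplicaSet created in the demonstration above:

    # compare desired state (spec) with observed state (status) and report the diff
    while true; do
      desired=$(kubectl get rs nginx-replicasets -o jsonpath='{.spec.replicas}')
      observed=$(kubectl get rs nginx-replicasets -o jsonpath='{.status.readyReplicas}')
      if [ "$desired" != "$observed" ]; then
        echo "diff: want $desired replicas, observe $observed; a controller would act here"
      fi
      sleep 5
    done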

2. The Sensor

The logical sensor in the control loop consists mainly of the Reflector, the Informer, and the Indexer. The Reflector obtains resource data by calling List and Watch on the K8s API server. List is used to do a full refresh of the resources when the controller restarts or a Watch is interrupted, while Watch performs incremental updates between Lists. After obtaining new resource data, the Reflector inserts into the Delta queue a Delta record containing the resource object itself and the type of the event. The Delta queue guarantees that it holds at most one record per object, which avoids duplicate records when the Reflector re-Lists and re-Watches.

The Informer component continuously pops Delta records off the Delta queue. On the one hand, it hands the resource object to the Indexer, which records the resource in a local cache that is indexed by the resource's namespace by default and can be shared by the Controller Manager or by multiple controllers. On the other hand, it invokes the controller's event callback functions.

The controller component of the control loop consists mainly of event handler functions and workers. The event handlers watch for add, update, and delete events on resources and decide, according to the controller's own logic, whether they need to be processed. For an event that needs handling, the namespace and name of the associated resource are pushed into a work queue, to be processed later by a worker from the worker pool; the work queue deduplicates the stored keys, so that the same resource is never processed by multiple workers at once. When handling a resource object, a worker generally uses the resource's name to fetch the latest resource data, then creates or updates the resource object, or calls some other external service. If the worker fails while processing, it normally puts the resource's name back into the work queue so that the work is retried later.

3. Control loop example: scaling out

Here is a simple example of how the control loop works. ReplicaSet is a resource that describes the scaling behavior of stateless applications. The ReplicaSet controller maintains the desired number of replicas of the application by listening to ReplicaSet resources, and a ReplicaSet uses a selector to match its associated Pods. Consider the scenario in which the replicas field of ReplicaSet rsA is changed from 2 to 3.

First, the Reflector watches changes to both ReplicaSet and Pod resources (why Pod changes are watched will become clear in a moment). On discovering that the ReplicaSet has changed, it inserts into the Delta queue a record whose object is rsA and whose type is Updated. The Informer, on the one hand, updates the new ReplicaSet in the cache, indexed under the namespace nsA; on the other hand, it calls the Update callback, and the ReplicaSet controller, finding that the ReplicaSet has changed, pushes the string nsA/rsA into the work queue. A worker behind the work queue takes the key nsA/rsA from the queue and fetches the latest ReplicaSet data from the cache. Comparing the ReplicaSet's spec and status, the worker finds that the ReplicaSet needs to scale out, so it creates one more Pod, whose OwnerReference points to ReplicaSet rsA.

The Reflector then watches the Pod's Added event and appends a Delta record of type Added to the Delta queue. On the one hand, the new Pod record is stored in the cache through the Indexer; on the other hand, the Add callback of the ReplicaSet controller is invoked. The Add callback finds the corresponding ReplicaSet by looking up the Pod's ownerReferences, and pushes that ReplicaSet's namespace and name into the work queue. Picking up the new work item, the ReplicaSet worker fetches the latest ReplicaSet record from the cache and lists all the Pods it has created. Because the ReplicaSet's status is not yet up to date (the count of created Pods has changed), the worker updates the status at this point so that spec and status agree.
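The spec change that drives this whole sequence is just a modification of the record the Reflector is watching; assuming a ReplicaSet named rsA in namespace nsA as in the example, either of these commands triggers it:

    kubectl -n nsA scale replicaset rsA --replicas=3
    # or, equivalently, patch the declarative record directly:
    kubectl -n nsA patch replicaset rsA -p '{"spec":{"replicas":3}}'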

IV. Summary of the controller pattern

1. Two approaches to API design

The Kubernetes controller pattern relies on a declarative API; the other common kind is the imperative API. Why did Kubernetes design its whole controller system around a declarative API rather than an imperative one? First, compare how the two styles of API differ in interactive behavior.

In everyday life, a familiar imperative interaction is the way parents communicate with children. Because children lack a sense of the goal and cannot understand their parents' expectations, parents teach them through explicit commands: eat, sleep, and the like. Correspondingly, in a container orchestration system, an imperative API works by issuing explicit operations to the system.

A familiar declarative interaction is the way a boss communicates with employees. A boss usually does not hand down concrete decisions; in fact, the boss may understand the work less clearly than the employee does. Instead, the boss draws on the employees' own initiative by setting quantifiable goals for them. For example, a boss may require a product to reach an 80% market share without spelling out what must be done to achieve it. Similarly, in a container orchestration system, we can declare that the number of replicas of an application should be kept at 3, without explicitly creating Pods or deleting existing Pods to make the count 3.
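The same contrast shows up directly in kubectl usage; a sketch, reusing the nginx1 Pod and rs.yaml from the demonstration above:

    # imperative: issue explicit operations, one action at a time
    kubectl run nginx1 --image=nginx:1.25      # create this Pod now
    kubectl delete pod nginx1                  # delete this Pod now

    # declarative: state the goal; controllers converge on it
    kubectl apply -f rs.yaml                   # "two replicas of nginx should exist"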

2. Problems with the imperative API

Having understood the difference between the two interaction styles, we can analyze the problems of the imperative API.

The biggest problem with a command-style API is error handling. In a large-scale distributed system, errors are everywhere. Once an issued command gets no response, the caller can only try to recover by retrying again and again, but blind retries can cause further problems: if the original command has in fact already executed in the background, a retry executes the operation one extra time. To avoid such retry problems, these systems often need to record the commands to be executed before executing them, and redo the recorded commands after restarts and similar events; during execution they must also handle complicated situations such as the ordering of multiple commands and how commands override one another.

In practice, many imperative systems also run a background patrol system to correct the data inconsistencies caused by command timeouts, retries, and similar scenarios. However, because the patrol logic differs from the everyday operating logic, it is usually not covered well by tests and not rigorous in its error handling, so it carries considerable operational risk; for that reason, many patrol systems can only be triggered manually.

Finally, the imperative API is also error-prone under concurrent access. If a resource receives multiple concurrent operation requests, and an operation errors out and must be retried, then it is hard to confirm, and impossible to guarantee, which operation takes effect in the end. Many imperative systems therefore lock the system before operating on it, which makes the final behavior of the whole system predictable but reduces its operating efficiency.

By contrast, a declarative API system naturally records the current and desired final state of the system, so no extra bookkeeping of operations is needed. And because the state description is idempotent, it can be applied repeatedly at any time. In a declarative system, routine operation is itself an inspection of resource state, so no separate patrol system needs to be developed, and the system's operating logic is tested and hardened in daily operation, which safeguards the stability of the whole system. Finally, because the final state of a resource is explicit, multiple changes to the state can be merged, and concurrent access by multiple parties can be supported without locking.
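The retry problem can be seen in miniature with kubectl; a sketch:

    # declarative retries are safe: the desired state is idempotent
    kubectl apply -f rs.yaml      # first attempt times out? just retry:
    kubectl apply -f rs.yaml      # reports "unchanged"; no double effect

    # imperative retries are not: did the first command actually run?
    kubectl run nginx1 --image=nginx:1.25
    kubectl run nginx1 --image=nginx:1.25     # error: pod "nginx1" already exists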

3. Summary of the controller pattern

Finally, let's sum up. The controller pattern adopted by Kubernetes is driven by the declarative API, and specifically by modifications to Kubernetes resource objects. Behind each Kubernetes resource stands the controller that watches that resource, and these controllers asynchronously drive the controlled system toward the declared final state. The controllers operate autonomously, which makes automated, unattended operation of the system possible. And because both Kubernetes resources and controllers can be customized, the controller pattern is easy to extend; in particular, for stateful applications, we often automate operations by defining custom resources and controllers. That is the Operator scenario that will be introduced later.

V. Summary of this article

Here is a brief summary of the main contents of this article:

The metadata of a Kubernetes resource object mainly includes Labels, which identify the resource; Annotations, which describe the resource; and OwnerReference, which describes the relationship between multiple resources. This metadata plays a very important role in the operation of K8s.

The core of the controller pattern is the concept of the control loop.

There are two approaches to API design, declarative and imperative; the controller pattern adopted by Kubernetes is driven by the declarative API.
