

Understanding the K8s resource update mechanism, starting from an OpenKruise user question

2025-04-02 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Background

OpenKruise is Alibaba Cloud's open-source engine for automated management of applications at scale. It offers workloads similar to native Kubernetes controllers such as Deployment / StatefulSet, but with enhanced features: graceful in-place upgrade, release priority / partition strategies, workload abstractions spanning multiple availability zones, unified sidecar container injection management, and more. These are core capabilities honed in Alibaba's ultra-large-scale application scenarios. They help us cope with more diverse deployment environments and requirements, and give cluster maintainers and application developers more flexible deployment and release strategies.

Currently, in Alibaba's internal cloud-native environment, most applications use OpenKruise for Pod deployment and release management, and many companies in the industry as well as Alibaba Cloud customers have adopted OpenKruise as their application deployment carrier, because native workloads such as the K8s Deployment cannot fully meet their needs.

Today's article starts with a question from a customer using OpenKruise on Alibaba Cloud. Here is a reproduction of what this user did (the YAML below is for demo purposes only):

Prepare a YAML file for an Advanced StatefulSet and submit it for creation, such as:

apiVersion: apps.kruise.io/v1alpha1
kind: StatefulSet
metadata:
  name: sample
spec:
  # ...
  template:
    # ...
    spec:
      containers:
      - name: main
        image: nginx:alpine
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      podUpdatePolicy: InPlaceIfPossible

Then, modify the image version in the YAML and call the K8s update API to submit it. As a result, the following error was returned:

metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

However, if you use the kubectl apply command to make the same update, it succeeds:

statefulset.apps.kruise.io/sample configured

The question is: for the same modified YAML file, why does calling the update API fail, while updating with kubectl apply succeeds? This is not a special check in OpenKruise; it is determined by the update mechanism of K8s itself.

In our experience, the vast majority of users have updated K8s resources through kubectl commands or an SDK, but not many really understand the principles behind these updates. This article focuses on the K8s resource update mechanism and how some of our commonly used update methods are implemented.

Update principles

Have you ever thought about this question: for a K8s resource object such as a Deployment, when we try to modify its image, what happens if someone else is modifying the same Deployment at the same time?

More concretely, two questions arise here:

1. What happens if both parties modify the same field, such as the image field?
2. What happens if they modify different fields, for example one modifies image and the other modifies replicas?

In fact, to "update" a Kubernetes resource object is simply to tell the kube-apiserver component how we want to modify it. K8s defines two ways of expressing such requests: update and patch. In an update request, we submit the entire modified object to K8s; in a patch request, we only submit the changes to certain fields of the object.

So back to the background question: why did the user's update with the modified YAML file fail? It was rejected by the version control mechanism that K8s applies to update requests.

Update mechanism

All resource objects in Kubernetes carry a version number (metadata.resourceVersion). Each resource object has a version number from the moment it is created, and the version number changes every time the object is modified (whether by update or patch).

The official documentation tells us that this version number is an internal K8s mechanism. Users should not assume it is a number, nor decide which of two objects is newer by comparing version numbers numerically. The only valid operation is to compare version numbers for equality, i.e. to determine whether an object is still the same version (whether it has changed). One important use of resourceVersion is version control of update requests.

K8s requires that the object submitted in an update request must carry a resourceVersion, meaning that the data we submit for an update must originate from an object that already exists in K8s. Therefore, a complete update operation flows as follows:

1. First, get an existing object from K8s (you can query it directly from K8s; if the client does a list-watch, it is recommended to get it from the local informer).
2. Then, make modifications based on this object, such as increasing or decreasing replicas in a Deployment, or changing the image field to a new version.
3. Finally, submit the modified object to K8s through an update request. At this point, kube-apiserver verifies that the resourceVersion in the submitted object is identical to the latest resourceVersion of that object currently in K8s; only then does it accept the update. Otherwise, K8s rejects the request and informs the user that a version conflict (Conflict) has occurred.

When multiple users update the same resource object at the same time and a Conflict occurs, what the losing user (say User A) should do is retry: get the latest version of the object again, apply the modification, and resubmit the update.
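The get-modify-update loop with conflict retry can be illustrated with a toy Python simulation. This is not real client-go or kube-apiserver code; FakeApiServer and update_with_retry are invented names that model only the resourceVersion check described above.

```python
import copy

class FakeApiServer:
    """Toy stand-in for kube-apiserver's optimistic concurrency check."""
    def __init__(self, obj):
        self.obj = {**obj, "resourceVersion": "1"}

    def get(self):
        return copy.deepcopy(self.obj)

    def update(self, new_obj):
        # Reject the update unless the submitted resourceVersion matches
        # the latest stored one -- this is the Conflict (HTTP 409) case.
        if new_obj.get("resourceVersion") != self.obj["resourceVersion"]:
            raise RuntimeError("Conflict: object has been modified")
        self.obj = {**copy.deepcopy(new_obj),
                    "resourceVersion": str(int(self.obj["resourceVersion"]) + 1)}

def update_with_retry(server, mutate, max_retries=3):
    """Get-modify-update loop: on Conflict, re-fetch the latest version and retry."""
    for _ in range(max_retries):
        obj = server.get()
        mutate(obj)
        try:
            server.update(obj)
            return True
        except RuntimeError:
            continue  # someone else updated the object; fetch the new version
    return False

server = FakeApiServer({"kind": "Deployment", "image": "nginx:1.19"})

stale = server.get()                            # User A reads version "1"
server.update({**server.get(), "replicas": 3})  # User B bumps it to "2"
stale["image"] = "nginx:1.20"
try:
    server.update(stale)                        # User A's stale write fails
except RuntimeError as e:
    print(e)

# The retry loop succeeds because it re-reads the latest version first.
update_with_retry(server, lambda o: o.__setitem__("image", "nginx:1.20"))
print(server.get()["image"])
```

Note that the retry preserves User B's replicas change: only the re-fetched, up-to-date object is resubmitted.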

Therefore, both of our above questions have been answered:

1. The user's update failed because the YAML file did not contain the resourceVersion field. For update requests, you should take the object currently in K8s, modify it, and submit that.
2. If two users update a resource object at the same time, whether they touch the same field or different fields, the version control mechanism ensures that their update requests do not silently overwrite each other.

Patch mechanism

Compared with the version control of update, the patch mechanism of K8s is simpler.

When a user submits a patch request for a resource object, kube-apiserver does not consider versions: it accepts the request unconditionally (as long as the patch content is legal), applies the patch to the object, and bumps the version number at the same time.

However, patch brings its own complexity: K8s currently supports four patch types: json patch, merge patch, strategic merge patch, and apply patch (server-side apply, supported since K8s 1.14). You can also see these options via kubectl patch -h (the default is strategic):

$ kubectl patch -h
...
--type='strategic': The type of patch being provided; one of [json merge strategic]

Space does not permit a detailed introduction to each strategy, so let's look at their differences with a simple example. Suppose we target an existing Deployment object whose template already has a container called app:

1. If you want to add an nginx container to it, how do you write the patch?
2. If you want to modify the image of the app container, how do you write the patch?

json patch (RFC 6902)

Add a container:

kubectl patch deployment/foo --type='json' -p \
'[{"op": "add", "path": "/spec/template/spec/containers/1", "value": {"name": "nginx", "image": "nginx:alpine"}}]'

Modify the existing container image:

kubectl patch deployment/foo --type='json' -p \
'[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "app-image:v2"}]'

As you can see, with json patch we need to specify the type of operation, such as add or replace, and when modifying the containers list we must address a container by its index.

As a result, if the object has been modified by someone else before our patch lands, the patch may have unintended consequences. For example, when updating the app container's image we specify index 0, but if another container has since been inserted at the first position of the containers list, the new image is incorrectly applied to that unexpected container.
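A tiny toy interpreter for the two json patch operations above makes the index-based addressing concrete. This is an illustrative sketch supporting only "add" and "replace" on simple paths, not the full RFC 6902 implementation used by kube-apiserver; apply_json_patch is an invented helper name.

```python
def apply_json_patch(doc, ops):
    """Apply a minimal subset of RFC 6902: 'add' and 'replace' ops."""
    for op in ops:
        parts = [p for p in op["path"].split("/") if p]
        target = doc
        for p in parts[:-1]:
            # List segments are numeric indices; dict segments are keys.
            target = target[int(p)] if isinstance(target, list) else target[p]
        last = parts[-1]
        if isinstance(target, list):
            idx = int(last)
            if op["op"] == "add":
                target.insert(idx, op["value"])   # insert at the given index
            elif op["op"] == "replace":
                target[idx] = op["value"]
        else:
            target[last] = op["value"]
    return doc

spec = {"containers": [{"name": "app", "image": "app-image:v1"}]}

# Add an nginx container at index 1:
apply_json_patch(spec, [{"op": "add", "path": "/containers/1",
                         "value": {"name": "nginx", "image": "nginx:alpine"}}])
# Replace the image of the container at index 0:
apply_json_patch(spec, [{"op": "replace", "path": "/containers/0/image",
                         "value": "app-image:v2"}])
print(spec["containers"])
```

The hazard described above follows directly: if some other writer inserts a container at index 0 before our "replace" at "/containers/0/image" runs, the patch silently rewrites the wrong container's image.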

merge patch (RFC 7386)

merge patch cannot update a single element of a list. So whether we want to add a container to containers or modify the image, env, or other fields of an existing container, we have to submit the entire containers list in the patch:

kubectl patch deployment/foo --type='merge' -p \
'{"spec": {"template": {"spec": {"containers": [{"name": "app", "image": "app-image:v2"}, {"name": "nginx", "image": "nginx:alpine"}]}}}}'

Obviously, this strategy is not suitable for updating individual fields deep inside a list; it is better suited to wholesale overwrite updates.

However, for map-type fields such as labels/annotations, merge patch can operate on individual keys, which is more convenient and intuitive to write than json patch:

kubectl patch deployment/foo --type='merge' -p '{"metadata": {"labels": {"test-key": "foo"}}}'
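The asymmetry between maps and lists in merge patch can be shown with a small sketch of the RFC 7386 algorithm: objects are merged key by key (a null value deletes a key), while anything else, including lists, replaces the target wholesale. json_merge_patch is an illustrative helper name, not a library function.

```python
def json_merge_patch(target, patch):
    """Minimal RFC 7386 merge: dicts merge recursively, everything else replaces."""
    if not isinstance(patch, dict):
        return patch              # non-object patch (incl. lists) replaces target
    if not isinstance(target, dict):
        target = {}
    for k, v in patch.items():
        if v is None:
            target.pop(k, None)   # explicit null removes the key
        else:
            target[k] = json_merge_patch(target.get(k), v)
    return target

deploy = {"metadata": {"labels": {"app": "foo"}},
          "spec": {"containers": [{"name": "app", "image": "app-image:v1"}]}}

# Convenient for map fields such as labels: only test-key is touched.
json_merge_patch(deploy, {"metadata": {"labels": {"test-key": "foo"}}})

# But a patch touching the containers list replaces the whole list:
json_merge_patch(deploy, {"spec": {"containers": [{"name": "nginx"}]}})
print(deploy["spec"]["containers"])
```

After the second call only the nginx entry survives, which is exactly why merge patch is unsuitable for updating one container in place.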

Strategic merge patch

This patch strategy does not follow a general RFC standard; it is unique to K8s, and it is more powerful than the previous two.

Let's start from the K8s source code. K8s native resource types define additional merge-policy annotations in their data structures. For example, here is an excerpt of the containers list definition in PodSpec (see the source on GitHub):

// ...
// +patchMergeKey=name
// +patchStrategy=merge
Containers []Container `json:"containers" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,2,rep,name=containers"`

There are two key pieces of information here: patchStrategy: "merge" and patchMergeKey: "name". They mean that when the containers list is updated with the strategic merge patch strategy, the name field of each element is treated as the key.

Simply put, when we patch-update containers we no longer need to specify an index; instead we identify the container by name, and K8s computes the merge using name as the key. For example, for the following patch operation:

kubectl patch deployment/foo -p \
'{"spec": {"template": {"spec": {"containers": [{"name": "nginx", "image": "nginx:mainline"}]}}}}'

If K8s finds a container named nginx in the current containers, it only updates that container's image; if there is no nginx container, K8s inserts the container into the containers list.

In addition, note that the strategic strategy currently works only for native K8s resources and custom resources served in Aggregated API mode; it cannot be used for resource objects defined by CRD. This is easy to understand: kube-apiserver does not know the structure or merge policy of CRD resources. If you update a CR with the kubectl patch command, the merge patch strategy is used by default.

kubectl encapsulation

Now that we understand the basic update mechanisms of K8s, let's go back to the original problem: why can't users update the modified YAML file by calling the update API directly, yet kubectl apply succeeds?

In fact, to give command-line users a good interactive experience, kubectl implements fairly complex internal logic; common operations such as apply and edit do not correspond to a single simple update request. After all, update is version-controlled, and running into update conflicts is unfriendly to ordinary users. Below is a brief introduction to the logic behind several kubectl update operations; if you are interested, take a look at the kubectl source code.

Apply

When apply is executed with the default parameters, client-side apply is triggered. The kubectl logic is as follows:

First, kubectl parses the data submitted by the user (YAML/JSON) into an object A, and then calls the Get API to query the resource object from K8s:

If the query finds no existing resource, kubectl records the user-submitted data in an annotation on object A (the key is kubectl.kubernetes.io/last-applied-configuration) and finally submits object A to K8s for creation.

If the resource already exists in K8s, call it object B:
1. kubectl attempts to extract the kubectl.kubernetes.io/last-applied-configuration annotation from object B (it corresponds to the content submitted by the previous apply);
2. kubectl computes a diff between the previous apply content and the current apply content (by default in strategic merge patch format, or merge patch for non-native resources);
3. kubectl adds an update of the kubectl.kubernetes.io/last-applied-configuration annotation to the diff, and finally submits a patch request to K8s to perform the update.
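The essence of the diff in step 2 can be sketched as a simplified three-way merge over flat dicts: a field is deleted only if it appeared in last-applied-configuration and is missing from the new config, while fields set by other parties (absent from last-applied) are left alone. This is a heavily simplified illustration; real kubectl emits a strategic merge patch and handles nested structures. three_way_apply and the field names are invented for the example.

```python
def three_way_apply(last_applied, new_config, live):
    """Simplified client-side apply semantics on flat dicts."""
    merged = dict(live)
    for k in last_applied:
        if k not in new_config:
            merged.pop(k, None)   # we set it in a previous apply, user removed it
    merged.update(new_config)     # apply everything in the new config
    return merged

last_applied = {"image": "nginx:1.19", "replicas": 2}
live = {"image": "nginx:1.19", "replicas": 5,   # e.g. HPA scaled replicas to 5
        "paused": False}                        # set by some other party
new_config = {"image": "nginx:1.20"}            # user dropped replicas from the YAML

print(three_way_apply(last_applied, new_config, live))
```

Note the two-sided behavior: replicas is removed because the user previously applied it and has now dropped it, while paused survives untouched because apply never owned that field.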

This is only a rough outline; the real logic is more complex. Also, since K8s 1.14 server-side apply is supported; interested readers can look at the source code implementation.

Edit

kubectl edit is logically simpler. After the user runs the command, kubectl fetches the current resource object from K8s and opens a command-line editor (vi by default) to provide an editing interface.

When the user finishes editing, saves, and exits, kubectl does not submit the modified object directly as an update (which could hit a Conflict if the object was updated while the user was editing). Instead, it computes the diff between the modified object and the initial object and submits that diff to K8s as a patch request.

Summary

After reading the above, you should have a preliminary understanding of the K8s update mechanism. Now think: since K8s offers two ways to update, how should we choose between update and patch in different scenarios? Our advice:

If the fields to be updated can only be modified by us (for example, some custom labels managed by an operator we wrote), then patch is the simplest way.

If the fields to be updated may also be modified by other parties (for example, replicas, which components such as HPA may also modify), then it is recommended to use the version-controlled update to avoid the two parties overwriting each other.

In the end, our customer switched to getting the object first, modifying it, and submitting it via update, which successfully triggered the in-place upgrade of the Advanced StatefulSet. We also welcome and encourage more people to join the OpenKruise community and build a high-performance application delivery solution for large-scale scenarios together. (Welcome to join the DingTalk group: 23330762)

Course recommendation

To let more developers enjoy the dividends of Serverless, this time we have gathered 10+ Alibaba technical experts in the Serverless field to create an open Serverless course best suited for developers getting started, so that you can easily embrace Serverless, the new paradigm of cloud computing.

Click to watch the course for free: https://developer.aliyun.com/...

"Alibaba Cloud Native focus on micro services, Serverless, containers, Service Mesh and other technology areas, focus on cloud native popular technology trends, cloud native large-scale landing practice, to be the official account of cloud native developers."
