
Interpretation of the New highlights in the Field of KubeCon EU 2019 Application Management


Author | Deng Hongchao, technical expert, Aliyun Intelligence Business Group

Key points

Deng Hongchao, technical expert on the Aliyun container platform, former CoreOS engineer, and one of the core authors of the Kubernetes Operator project, interprets the essence of the "application management" track at KubeCon EU 2019:

The Config Changed

Server-side Apply

GitOps

Automated Canary Rollout

KubeCon EU 2019 has just concluded in Barcelona, where a group of speakers from the Alibaba economy shared the experiences and lessons of running large-scale Kubernetes clusters in Internet-scale scenarios. As the saying goes, "if you want to go fast, go alone; if you want to go far, go together." From the growing community, we see more and more people embracing open source and converging on standards, boarding this high-speed train bound for cloud native.

As we all know, the central project of cloud native architecture is Kubernetes, and Kubernetes revolves around "applications". Deploying applications better and making developers more efficient brings tangible benefits to teams and organizations, and lets cloud-driven technological change play a greater role.

The momentum of change not only sweeps away old, closed internal systems, but also nurtures new developer tools like spring rain. This KubeCon brought plenty of new knowledge about application management and deployment. Which of these ideas can we borrow so that we avoid detours? What direction of technological evolution do they point to?

In this article, we invited Deng Hongchao, a technical expert on the Aliyun container platform, a former CoreOS engineer, and one of the core authors of the Kubernetes Operator project, to analyze and comment on the best content in the field of "application management" for readers.

The Config Changed

Applications deployed on Kubernetes typically store configuration in a ConfigMap, which is then mounted into the Pod's file system. When the ConfigMap changes, only the files mounted in the Pod are updated automatically. This works for applications that do hot reloads on their own, such as nginx. For most applications, however, developers expect a configuration change to trigger a new gradual (gray) release: the containers that reference the ConfigMap should be upgraded step by step.

A gray upgrade not only simplifies user code and improves safety and stability, but also embodies the idea of immutable infrastructure: once deployed, an application is never modified in place. When you need to upgrade, you simply deploy a new version, verify it, and then destroy the old one; if verification fails, rolling back to the old version is also easy.

Based on this idea, engineers from Pusher developed Wave, a tool that automatically watches the ConfigMaps/Secrets referenced by a Deployment and triggers a Deployment upgrade when they change. What makes this tool special is that it automatically discovers the ConfigMaps/Secrets referenced in the Deployment's PodTemplate, hashes all of their data, and writes the hash into the PodTemplate annotations; when the data changes, it recalculates the hash and updates the annotation, which in turn triggers a Deployment rollout.
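The hash-into-annotation trick can be sketched in a few lines of Python. This is an illustration only: the annotation key and function names below are made up for the example, not Wave's actual code.

```python
import hashlib
import json

def config_hash(configmap_data):
    """Deterministically hash all of a ConfigMap's data.
    Keys are sorted so the same data always yields the same hash."""
    blob = json.dumps(configmap_data, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def annotation_patch(configmap_data):
    """Build a PodTemplate annotation patch. Writing a new hash value
    changes the PodTemplate, which makes the Deployment controller
    perform a rolling upgrade. 'config-hash' is an illustrative key."""
    return {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {"config-hash": config_hash(configmap_data)}
                }
            }
        }
    }
```

Because the annotation only changes when the hashed data changes, an unchanged ConfigMap never triggers a spurious rollout.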

A similar open source tool, Reloader, does the same thing, except that Reloader lets users choose which ConfigMaps/Secrets to watch.

Analysis and comment

Upgrade without a gray release, and you will shed tears. Whether you are upgrading the application image or changing configuration, remember to do a new gradual release with verification.

In addition, we see that immutable infrastructure brings a new perspective on building cloud applications. Developing in this direction not only makes the architecture safer and more reliable, but also integrates with other major tools, bringing the cloud native community into full play and letting newcomers overtake traditional application services on the curve. For example, by combining the Wave project above with Istio's weighted routing, a website can verify a new configuration version with a small share of traffic.

Video link: https://www.youtube.com/watch?v=8P7-C44Gjj8

Server-side Apply

Kubernetes is a declarative resource management system. The user declares the desired state locally, and kubectl apply updates only the portion of the cluster's current state that the user specifies. However, it is nowhere near as simple as it sounds.

The original kubectl apply was implemented on the client side. An apply cannot simply replace the entire state of a resource, because other parties also change it: controllers, admission plugins, webhooks. So how do you change a resource without overwriting everyone else's changes?

Hence the existing three-way merge: the user's last applied state is stored in the object's annotations, and on the next apply a three-way diff is computed (live state, last applied state, user-specified state), from which a patch is generated and sent to the APIServer. But there is still a problem! The original intent of apply is to let each party specify which resource fields it manages.
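The three-way logic can be sketched for flat dictionaries. This is an illustration only; the real kubectl implementation handles nested objects, lists, and strategic-merge semantics.

```python
def three_way_patch(last_applied, live, desired):
    """Compute the patch an apply would send:
    - fields the user previously applied but has now removed are
      explicitly deleted (set to None),
    - fields in the new desired state are set when they differ from
      the live object,
    - fields owned by others (present only in the live object) are
      left untouched."""
    patch = {}
    # Fields the user removed since the last apply -> explicit delete.
    for key in last_applied:
        if key not in desired and key in live:
            patch[key] = None
    # Fields the user wants -> include when they differ from live state.
    for key, value in desired.items():
        if live.get(key) != value:
            patch[key] = value
    return patch
```

Note that a label added by a controller (present only in `live`) never appears in the patch, which is exactly how apply avoids clobbering other parties' changes.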

However, the original implementation could neither prevent different parties from clobbering each other's fields, nor report and resolve conflicts when they occurred. For example, when the author worked at CoreOS, both a controller and users of the product would change certain special labels on Node objects, which caused conflicts and cluster failures, and someone had to be sent to fix them.

This Cthulhu-like dread hung over every Kubernetes user, and now we finally see the dawn of victory: server-side apply. The APIServer performs the diff and merge operations, which fixes many previously fragile behaviors. More importantly, compared with the original last-applied annotation, server-side apply provides a new declarative API (called managedFields) to record which party manages which resource fields. When a conflict occurs, for example kubectl and a controller changing the same field, non-admin requests receive an error and a prompt to resolve it.
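The field-ownership idea behind managedFields can be sketched minimally. The names below are illustrative; the real API tracks ownership per field path, per manager, inside the object's metadata.

```python
class Conflict(Exception):
    """Raised when a manager tries to change a field owned by another."""

def apply_fields(obj, managed, update, manager, force=False):
    """Apply `update` to `obj` on behalf of `manager`.
    `managed` maps field name -> owning manager. Changing a field that
    a different manager owns raises Conflict unless force=True, in
    which case ownership transfers to the new manager."""
    for field, value in update.items():
        owner = managed.get(field)
        if owner is not None and owner != manager and obj.get(field) != value:
            if not force:
                raise Conflict(f"field {field!r} is owned by {owner!r}")
        obj[field] = value
        managed[field] = manager
    return obj
```

A controller and kubectl can then safely co-manage one object: each party's fields are protected until someone deliberately forces a takeover.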

Analysis and comment

Mom no longer has to worry about my kubectl apply. Although server-side apply is still in the Alpha phase, it is only a matter of time before it replaces the client-side implementation. It makes concurrent changes to the same resource by different components safer and more reliable.

We also see that as the system evolves, especially with the widespread use of declarative APIs, less logic will live locally and more on the server side. The server side has many advantages: operations such as kubectl dry-run and diff are easier to implement there; providing an HTTP endpoint makes it easier to build apply-like functionality into other tools; and implementing and releasing complex logic on the server side makes it easier to manage and control, so that users enjoy a secure, consistent, high-quality service.

Video link: https://www.youtube.com/watch?v=1DWWlcDUxtA

GitOps

During the conference, a panel discussed the benefits of GitOps. Let's sum them up.

First, GitOps makes the whole team more "democratic". Everything is written down, and anyone can read it. Any change must go through a pull request before release, which not only keeps everyone informed but also lets them participate in the review and leave comments. All changes and discussions are recorded in GitHub or similar tools, and the history can be consulted at any time. All of this makes teamwork smoother and more professional.

Second, GitOps makes releases more secure and stable. Code can no longer be released at will; it must be reviewed by the responsible owners, or even several people. When you need to roll back, the previous version is stored in Git, and there is an audit history of who released what code and when. These processes make release outcomes more stable and reliable.

Analysis and comment

GitOps does not just solve a technical problem; it leverages the versioning, history, auditing, and permission features of GitHub and similar tools to make team collaboration and the release process more professional and procedural.

If GitOps becomes widely adopted, it will have a great impact on the whole industry. For example, an engineer joining any company could quickly get up to speed on releasing code.

The ideas embodied in GitOps, "configuration as code" and "Git as the source of truth", are well worth studying and practicing.

Video link: https://www.youtube.com/watch?v=uvbaxC1Dexc

Automated Canary Rollout

Canary rollout means that during a release, a small portion of traffic is directed to the new version, and its online behavior is analyzed and verified. If everything looks good, traffic is gradually shifted to the new version until the old version receives no traffic and is destroyed. In tools such as Spinnaker, there is a manual validate-and-approve step. That step can be replaced with automated tooling; after all, each check is fairly mechanical, such as checking the success rate and the p99 latency.

Based on this idea, engineers from Amadeus and Datadog shared how to use Kubernetes, Operators, Istio, Prometheus, and other tools to automate canary releases. The idea is to abstract the entire canary release into a CRD, so that performing a canary release reduces to writing a declarative YAML file. On receiving the YAML file created by the user, the Operator automatically carries out the complex operational steps. The main steps are:

Deploy the new version of the service (Deployment + Service)

Change the Istio VirtualService configuration to shift a portion of traffic to the new version

Check Istio metrics to verify that the new version's success rate and p99 response time meet the conditions

If they do, upgrade the entire application to the new version; otherwise, roll back.
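The automated check in step three reduces to a threshold decision. The metric names and thresholds below are illustrative; in practice they would come from Prometheus queries over Istio telemetry.

```python
def canary_decision(metrics, min_success_rate=0.99, max_p99_ms=500.0):
    """Decide whether a canary passes its automated check.
    `metrics` is a dict with keys 'success_rate' (0..1) and 'p99_ms'
    (milliseconds); both key names are hypothetical for this sketch.
    Returns 'promote' when both thresholds hold, else 'rollback'."""
    ok = (metrics["success_rate"] >= min_success_rate
          and metrics["p99_ms"] <= max_p99_ms)
    return "promote" if ok else "rollback"
```

Because the check is mechanical, the Operator can run it on a timer after each traffic shift, removing the manual approval gate entirely.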

Similarly, Weave has open-sourced Flagger, an automated canary release tool. The difference is that Flagger shifts traffic to the new version progressively, for example 5% more on each step, and destroys the old version directly once all traffic has been shifted.
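Flagger-style progressive shifting reduces to a simple weight schedule. This is a sketch of the schedule only; Flagger itself also runs the metric checks between steps and aborts the schedule on failure.

```python
def traffic_steps(step_percent=5, max_percent=100):
    """Yield the canary's traffic weight after each promotion step:
    increase by a fixed increment until the canary carries 100%."""
    weight = 0
    while weight < max_percent:
        weight = min(weight + step_percent, max_percent)
        yield weight
```

For example, `list(traffic_steps(25))` yields `[25, 50, 75, 100]`; once the final step is reached, the old version carries no traffic and can be destroyed.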

Analysis and comment

Once you try canary releases, you never go back. Canary releasing helps improve release success rates and system stability, and is an important part of application management.

We also see that in the cloud native era, complex operational processes like these will be simplified and standardized. Through CRD abstraction, complex process steps become a few short API objects offered to the user. With an Operator performing automated operations, users get these capabilities on any standard Kubernetes platform. As top standardized platforms, Istio and Kubernetes provide strong foundational capabilities that make all of this easier to use.

Video link: https://www.youtube.com/watch?v=mmvSzDEw-JI

Closing thoughts

In this article, we took stock of some of the new knowledge about application management and deployment from this KubeCon:

Why and how to perform a new application release when a configuration file changes.

Client-side kubectl apply has many problems, one of which is different parties clobbering each other's resource fields. These are solved by the server-side apply implementation.

GitOps does not just solve a technical problem; it makes team collaboration and the release process more professional and procedural.

Using top standardized platforms such as Kubernetes, Operator, Istio, and Prometheus, canary release operations can be simplified and the barrier to entry for developers lowered.

These new ideas leave us full of feeling: in the past, we always envied "other people's infrastructure", which was so excellent yet out of reach. Now, open source projects and technical standards are lowering the bar for every developer.

On the other hand, a subtle change is taking place: "self-developed" basic software faces the law of diminishing marginal returns, driving more and more companies, like Twitter, to join the cloud native camp. Embracing the open source ecosystem and technical standards has become an important opportunity and challenge for Internet enterprises. Only by building cloud-native applications and architectures, with the power of the cloud and open source, can we be fully prepared to set sail in this wave of change.
