Through this article you will learn: how Knative gives ordinary applications Serverless capabilities; why Knative is called a Serverless orchestration engine for cloud native applications; and why Knative is composed of the three modules Tekton, Eventing and Serving, and how the three work together. The article has four parts: first we look at the core driving force behind the cloud; then, from the perspective of that driving force, at what a cloud native application looks like; then at the value Knative brings when applications go cloud native; and finally we experience these capabilities through a Demo.

The core driving force of the cloud

Before talking about cloud native, let's think about why enterprises want to move to the cloud, why technical people should learn to program with the cloud in mind, and how we should look at the cloud. Let's first analyze the core driving force at work, and then use it to see what the whole cloud native technology stack looks like.
Social division of labor

Let's start with hot pot. A hot pot may be simple, but it involves many ingredients: all kinds of vegetables, beef and mutton. Think about it: we eat these things all the time, but which of them do we grow or raise ourselves? Practically none. We sit in the office every day and buy them at the market or supermarket on the way home from work; we neither know nor care about how they are produced. I grew up in the countryside of Inner Mongolia. As everyone knows, Inner Mongolia is vast. Every family in the village has a big garden, and in summer every family grows all kinds of vegetables in it: tomatoes, cucumbers, eggplants, peppers and so on. But in winter, because the weather is very cold, there are no fresh vegetables at all; everyone eats pickled vegetables or vegetables dried in autumn. You can see how easily a region is affected by the seasons: all kinds of vegetables grow in summer, yet there is nothing in winter. Now, because professionals do this work, most of us no longer have to worry about how vegetables are grown, and what we engineers do with computers can in turn help the vegetable growers through other channels. Division of labor improves efficiency: this is the opening argument of Adam Smith, the father of modern economics, in his classic The Wealth of Nations more than 200 years ago, and the way modern society operates confirms his theory. I think it is also the root cause of enterprises embracing cloud computing: contract the non-core parts of the business out to professionals, because the parts strongly related to the business itself are the core competitiveness of the enterprise.
Moving applications to the cloud

Now let's look at what an application consists of. To provide services, an application first needs computing, storage and network resources to get its processes running. But a running process alone is not enough: to carry real business it also depends on services such as databases, and a distributed, microservice architecture additionally needs caches, a service registry and message middleware. All of the above concerns running a program that already exists; the other part is how to turn code into a runnable program, which is the CI/CD part. From the code, to the launched application, to the peripheral systems the application depends on, there is also everything related to daily operations: log collection and analysis, monitoring, alerting and so on. We only wanted to write an application that serves the business, yet the effort consumed by these supporting systems around the application is far higher than that spent on the application itself. This is not what we want: we want the business team to focus on the business logic itself rather than on the surrounding systems.
In a company of a certain size, the internal organizational structure basically consists of a platform team and multiple business teams. The platform team provides the common capabilities these applications need, while the business teams focus on the business and directly use the platform team's capabilities. This is the social division of labor embodied in an IT organization: professional people do professional work, and the division of labor improves efficiency, just like the hot pot example above. The same applies across enterprises: although every enterprise has its own platform team, few can provide platform support as rich as the cloud's, given their scale and capital investment; any enterprise that could would itself be a cloud vendor. It is therefore efficient for a business to build all the peripheral systems around its applications on the cloud's infrastructure, contracting those peripheral systems out to cloud vendors. As I understand it, this is the background against which cloud native emerged: the division of labor that improves efficiency is the fundamental driving force for everyone to use the cloud. The concept of cloud native has gradually formed and matured as major enterprises moved to the cloud. The idea is for all participants to gradually form a unified standard for services on the cloud, which both helps enterprises move to the cloud and helps cloud vendors release their capabilities. From the cloud customer's point of view, this unified standard prevents vendor lock-in; Kubernetes, for example, is a very good consensus, because every cloud platform supports Kubernetes.
So what is the value of this standard? Why do cloud vendors and enterprises moving to the cloud actively participate in "developing" it? It is actually a matter of mutual benefit. Before discussing the role of the standard, let's compare two modes of resource allocation. First, queuing (first-come-first-served): to see a doctor you need to register in advance, and the doctor's appointment slots are allocated first-come-first-served. In big cities such as Beijing, Shanghai and Guangzhou, a good doctor's slots are even harder to get; if you absolutely must get one, you have to queue at the hospital earlier than everyone else, and only by waiting in advance can you get a slot first. Now consider the value created by the effort of those who queue in advance. Those who come early get the slots, which merely signals to everyone else competing for slots that the earlier you come, the more likely you are to get one. Patients often even ask the doctor to squeeze in an extra appointment, which the doctor is actually reluctant to grant, because every extra slot costs the doctor some rest time. In queuing, a great deal of effort is simply wasted without bringing any value to society. For the market economy, take the purchase of cloud servers (ECS) as an example. Suppose Aliyun's ECS is the most cost-effective and everyone uses it first; if supply falls short of demand, the price of ECS may rise, and the higher price leads more cloud vendors to supply ECS, which eventually brings the price back down. Now analyze the relationship between the people who buy ECS and the cloud vendors who provide it: a large number of customers choosing ECS is good for the customers themselves, because large demand causes cloud vendors to supply more, and the price cannot stay high. Competing buyers of ECS are therefore not in a zero-sum game but in a cooperative relationship: making the cake bigger together makes the whole supply chain more stable and cheaper. For the cloud vendors providing ECS, the more people buy their services, the better. Customers and cloud vendors mutually reinforce each other. A market economy, with free transactions based on price, promotes cooperation very efficiently: the effort of either party is valuable to the other participants. Compared with the effort wasted by early arrivals in a hospital queue, in the free-trading mode of a market economy each party's effort optimizes the whole system, which is a very good way to allocate and use social resources.
Does it feel like we have wandered off topic? What does this have to do with cloud computing? If we continue the market-economy analysis, the connection becomes clear. Consider this scenario: cloud vendor A provides a cost-effective service, but one day cloud vendor B provides an even more cost-effective one. Will customers immediately migrate to vendor B? The answer is that customers want to migrate, but migration is hard. Because each cloud vendor's service standards differ, migrating a service means adapting to a large amount of differentiated underlying infrastructure, which imposes a huge cost that can even cancel out vendor B's better price-performance ratio, so such migrations are rarely seen. We analyzed above that market-economy resource allocation is very conducive to the optimal allocation of resources across society, but if cloud customers cannot move between cloud vendors at low cost, it is very hard for them to choose the most cost-effective vendor. So from the perspective of the effective allocation of social resources, there is an urgent need for a mechanism that lets customers "flow" between cloud vendors at low cost, and this is the meaning of cloud native. If the applications a customer wants to host on the cloud are compared to water, then the price of cloud services is the elevation: water naturally flows to where the elevation is low. Continually reducing the cost of migration is very valuable to both customers and cloud vendors. Cloud native connects cloud vendors and customers through standardized cloud services. For customers, it reduces the cost of moving to the cloud and migrating across clouds, letting them always keep bargaining power with vendors; for cloud vendors, whoever provides more cost-effective services can easily gather a large number of users. Cloud native keeps driving a virtuous circle in the whole system: customers keep their freedom of choice, and excellent cloud vendors can serve more customers quickly. If customers' business services can flow between cloud vendors at as low a cost as water, then the services cloud vendors provide can circulate between customers like money. This is a win-win. Having discussed the concept, let's look at cloud native applications and how to view the traditional application architecture from a cloud native perspective.
Cloud native infrastructure

Whether an application runs on or off the cloud, the core elements it depends on have not changed; only the form in which those elements are provided has changed. As shown in the figure above, there are seven core elements. Among them, daily operations and maintenance is not a hard dependency: it greatly affects the stability of the business, but the business can run without those services, while the other elements are all on the critical path. So let's see where the elements on these critical paths sit under a cloud native architecture, and then analyze the basic paradigm of cloud native applications.
Cloud native technology stack

Let's first look at the middleware block on the far right, which contains databases, Redis, message middleware and similar components. This block is called directly from application code, and everything in it has a standard protocol: for example, whether we use SQL Server or MySQL, we can operate it through the SQL specification. This part was standardized long ago. As shown in the figure, the three core elements of computing, storage and networking have been unified by the Kubernetes layer, and many cloud services already provide serverless computing, storage and network resources, such as Aliyun's ECI and ASK (Aliyun Serverless Kubernetes). That leaves two blocks, CI/CD and application hosting, that are not yet standardized, and standardizing them is the job of the application orchestration layer. Serverless is not only about having no servers; it also covers the orchestration of the application itself, and that is the value of the application orchestration layer.

Serverless orchestration of cloud native applications

Serverless Kubernetes already provides serverless support at the Pod level, but the application layer still has a lot to handle if it wants to make good use of this capability:

- Elasticity: how to scale down to zero, and how to absorb burst traffic;
- Grayscale release: how to implement grayscale release, and what the relationship between grayscale release and elasticity is;
- Traffic management: during a grayscale release, how to dynamically adjust the traffic ratio between v1 and v2; what the relationship between traffic management and elasticity is; and how elasticity cooperates when traffic bursts, so that burst requests are not lost.
We find that although basic resources can be applied for dynamically, an application still needs an orchestration layer on top of Kubernetes to achieve real-time elasticity, on-demand allocation and pay-per-use. This layer must be responsible not only for elasticity but also for traffic management and grayscale release. This is exactly the problem Knative sets out to solve, and the background against which Knative emerged. Next, let's take a look at Knative.
What is Knative?

Let's first look at the official definition: a "Kubernetes-based platform for building, deploying, and managing modern serverless workloads". Knative is a Serverless orchestration system for applications, built on Kubernetes. In fact Knative covers more than workloads: it also includes a Kubernetes-native pipeline orchestration engine and a complete event system. As mentioned earlier, Kubernetes standardizes computing, storage and networking; on top of Kubernetes, Knative aims to standardize the Serverless orchestration of application workloads.
Knative core modules

Knative consists of three core modules: Tekton, Eventing and Serving.

- Tekton is a Kubernetes-native pipeline orchestration framework, mainly used to build CI/CD systems;
- Eventing is responsible for event handling: it can ingest events from external systems and, after ingestion, run them through a series of flow-processing steps and trigger their consumption by Serving;
- Serving is the core module for managing application workloads, mainly responsible for traffic scheduling, elasticity and grayscale release.
Tekton

Tekton is a Kubernetes-native pipeline orchestration framework, mainly used to build CI/CD systems. For example, compiling source code into an image, testing the services inside the image, and publishing the image as an application can all be done with Tekton. The basic execution unit in Tekton is the Task. A Task can contain multiple Steps that execute in sequence; at run time a Task is ultimately a Pod, and each Step in it is a container. Submitting a TaskRun CRD to Kubernetes triggers one execution of a Task. A Pipeline orchestrates multiple Tasks, and dependencies can be declared between the Tasks in a Pipeline; the Pipeline builds a directed acyclic graph from these dependencies and then executes the Tasks concurrently or serially according to that graph. Each PipelineRun CRD submitted triggers one execution of the Pipeline. A PipelineResource represents a resource used or produced by a Pipeline, such as a GitHub code repository or a dependent or produced image. Since Tekton is a native Kubernetes implementation, information such as resource-authentication keys can be managed through Kubernetes Secrets.
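To make the Task/TaskRun relationship concrete, here is a minimal sketch assuming the Tekton v1beta1 API: a Task with two sequential Steps (each Step runs as a container in the Task's Pod) and the TaskRun that triggers one execution. The Task name, images and scripts are illustrative.

```yaml
# Minimal sketch: a Tekton Task with two sequential Steps.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-and-test          # illustrative name
spec:
  steps:
    - name: build               # first container in the Task's Pod
      image: golang:1.21
      script: |
        echo "compiling sources..."
    - name: test                # runs after "build" completes
      image: golang:1.21
      script: |
        echo "running tests..."
---
# Submitting a TaskRun triggers one execution of the Task above.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-and-test-run-1
spec:
  taskRef:
    name: build-and-test
```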
Eventing

The Eventing module implements a set of event-handling mechanisms based on the CloudEvents standard. Its core capabilities fall into four parts.

- External event ingestion: Eventing has a strong extension mechanism and can ingest events from any external event source, such as commit and pull request events from GitHub, events in Kubernetes, messages from messaging systems, and events from systems such as OSS, Tablestore and Redis.
- The CloudEvents standard: after ingesting external events, Eventing converts them into the CloudEvents format, and events flow internally according to this standard.
- Internal event processing: the Broker and Trigger model introduced by the Eventing module both shields users from the complexity of event processing and provides a rich event subscription and filtering mechanism.
- Transactional event management: Eventing is built on a reliable messaging system and can manage events transactionally; if consuming an event fails, the event can be retried or redelivered.
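As a minimal sketch of the Broker/Trigger model described above: the Trigger subscribes to the default Broker, filters events by their CloudEvents type attribute, and delivers matching events to a Knative Service. The event type and subscriber name are hypothetical.

```yaml
# Minimal sketch: subscribe to filtered events on a Broker.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: github-push-trigger      # illustrative name
spec:
  broker: default                # the Broker this Trigger subscribes to
  filter:
    attributes:
      type: dev.example.github.push   # hypothetical CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-handler        # hypothetical Knative Service that consumes the events
```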
Serving

The core CRD of Serving is the Service. Based on the Service's configuration, the Knative controller automatically operates Kubernetes Ingress, Service and Deployment resources, thereby simplifying application management. A Knative Service corresponds to a resource called Configuration: whenever a Service change requires a new workload, the Configuration is updated, and each Configuration update creates a unique Revision. A Revision can be thought of as the Configuration's version-management mechanism: in principle a Revision is never modified after creation, and different Revisions are generally used for grayscale release. Route is the core of Knative's traffic management. Knative is built on top of Istio, and through the Route's configuration the Knative Route controller automatically generates Istio VirtualService resources, thereby managing traffic. Knative Serving's Serverless orchestration of application workloads starts from traffic. Traffic first reaches Knative's Gateway, which automatically splits it across Revisions according to the Route configuration. Each Revision then has its own independent elasticity policy: when more requests arrive at a Revision, that Revision automatically scales out, and the scaling policies of different Revisions are independent of one another. Grayscale between Revisions is done by traffic percentage, with each Revision keeping its own independent elasticity policy. By controlling traffic, Knative Serving achieves the seamless combination of traffic management, elasticity and grayscale release.
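A minimal sketch of how Service, Revision and traffic splitting fit together: changing the Service template produces a new Revision, the traffic block splits requests between the old and new Revisions by percentage, and autoscaling annotations give each Revision its own elastic policy. Names and percentages are illustrative; the helloworld-go image is the sample from the Knative documentation.

```yaml
# Minimal sketch: a grayscale release driven purely by traffic percentages.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    metadata:
      name: helloworld-v2                        # a changed template creates this new Revision
      annotations:
        autoscaling.knative.dev/minScale: "0"    # allow scale-to-zero for this Revision
        autoscaling.knative.dev/maxScale: "10"   # per-Revision elasticity bound
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "v2"
  traffic:
    - revisionName: helloworld-v1                # existing Revision keeps 90% of requests
      percent: 90
    - revisionName: helloworld-v2                # new Revision receives a 10% grayscale
      percent: 10
```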
Cloud native features of Knative

Kubernetes is the cloud native operating system recognized by the industry, and as a cloud native Serverless orchestration engine, Knative is of course compatible with the Kubernetes API. Knative itself is open source, and you can deploy Knative on any Kubernetes cluster; likewise, your services in one Knative cluster can be migrated seamlessly to another Knative cluster. If your service is built on Knative, it can flow among cloud vendors like water: any cloud vendor's Kubernetes with Knative installed can easily run your service. The following support list shows that Knative is backed by a large number of vendors and platforms:

- Google Cloud Run is based on Knative
- IBM Public Cloud supports Knative
- Aliyun supports Knative
- Pivotal Riff is a FaaS system built on Knative
- OpenShift supports Knative
- Rancher Rio is based on Knative
- The Kubeflow community's KFServing is based on Knative, so Knative is also being used in AI-related frameworks

This is where the power of cloud native lies: open standards are widely supported. As customers using the cloud, on the basis of this open standard we always keep the power to negotiate with service providers, taking our services wherever they are served best instead of being locked in by a single company. For cloud vendors, open standards bring access to more customers, while the concrete implementation beneath the standard can play to each vendor's own strengths, which is precisely a vendor's core competitiveness.
Typical application scenarios of Knative

Having introduced all of this, let's sort out the scenarios in which Knative fits well.

Serverless orchestration: Knative Serving orchestrates applications from the perspective of traffic. Knative takes over a service's traffic through the Istio-based Gateway and can split it by percentage. The split traffic can be used directly for grayscale release, for example routing a fixed percentage to a given Revision, precisely controlling the share of traffic each Revision receives and therefore precisely controlling how much of the online service a grayscale version carries. Knative's elasticity policy acts on each Revision, and different Revisions scale independently at "their own rhythm", combining traffic, grayscale and elasticity. Any elastic application-hosting demand can be met through Knative. The following scenarios are very suitable:

- hosting microservice applications
- hosting web services
- hosting gRPC services

Event-driven: Knative Eventing provides a complete event model that can easily ingest events from various external systems. After ingestion, events flow internally according to the CloudEvents standard, and the Broker/Trigger mechanism provides a very good event-handling pattern (a minimal source-to-sink sketch follows at the end of this section). This complete event system makes event-driven services easy to implement, for example:

- IoT scenarios
- docking with the events of various cloud products, so that an event automatically triggers a service, for example to update the state of a cloud product
- building a CI/CD system on Tekton Pipelines: when code is pushed to GitHub, image building and service release are triggered automatically; when a new image appears in the Docker image registry, image testing and release are triggered automatically

MicroPaaS: deploying services through Knative Serving frees you from operating Kubernetes resources by hand, which greatly lowers the barrier to using Kubernetes. So if you do not maintain a Kubernetes system yourself, or do not want to do complex development on top of Kubernetes, you can manage your services through Knative, which is very convenient.
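To tie the event-driven scenario to concrete resources, here is a minimal source-to-sink sketch assuming the current sources.knative.dev/v1 API: a PingSource emits a CloudEvent on a cron schedule and sinks it to a service, the same source-to-sink pattern that OSS- or TableStore-style sources follow. The schedule, payload and service name are illustrative.

```yaml
# Minimal sketch: a cron-driven event source sinking CloudEvents to a Service.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: minutely-ping            # illustrative name
spec:
  schedule: "*/1 * * * *"        # emit one event per minute
  contentType: "application/json"
  data: '{"message": "hello"}'   # illustrative payload carried in the CloudEvent
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-handler        # hypothetical consumer; can scale from zero on arrival
```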
Customer cases

What are the typical customer cases of Aliyun Knative today?

- Web service hosting: this is the MicroPaaS scenario described above; customers use Knative to reduce the complexity of using Kubernetes, and even without Knative's elasticity it improves the efficiency of application hosting.
- Serverless application orchestration and microservice hosting: web application hosting with elasticity for mini-program and official-account backends and e-commerce backends; AI service hosting with task-queue-based elastic scaling, using ECI for elasticity to effectively reduce the cost of long-held resources.
- SaaS service hosting: automatically building an image after a SaaS user submits code, or automatically publishing the service after a SaaS user pushes an image; CMS-style SaaS providers can easily deploy a new set of services for a user through a Helm Chart.
- Spring Cloud microservice hosting: registering the Knative Service address in the registry and using Knative's capabilities for microservice traffic splitting, grayscale release and elasticity; in this way Knative gives Serverless capabilities to ordinary microservice applications.
- CI/CD systems: automatic building and service release from a git code repository; automatically running tests or releasing the service when a new image appears in the Docker image registry.
- OSS events: when a new file is added to OSS, automatically triggering a machine learning task to analyze and recognize the image; when a new video file is added to OSS, automatically triggering a task to process the video, such as video content recognition.
- TableStore events: feed-stream system design, sending notifications for social messages, and so on.

Demo