2025-03-31 Update From: SLTechnology News&Howtos > Servers
In this issue, the editor brings you an example analysis of ASK and Knative in K8s. The article is rich in content and analyzed from a professional perspective. I hope you gain something after reading it.
I. Why Do We Need Knative?
K8s has become the mainstream operating system of the cloud-native world. K8s exposes infrastructure capabilities through resource abstractions such as Service, Ingress, Pod, and Deployment, offered to users through the K8s native API; K8s also provides standard interfaces for plugging in infrastructure, such as CNI, CRI, and CRD, so that cloud resources can enter the K8s system in a standardized way.
K8s sits in a connecting position. Cloud-native users adopt K8s to deliver and manage applications, including canary releases and scaling in and out. However, implementing these capabilities by manipulating the K8s API directly is inevitably somewhat complex for users. Saving resource costs and gaining elasticity are also increasingly important.
So, how can we use K8s technology simply, achieve on-demand usage, and ultimately reduce cost and improve efficiency? The answer is Knative.
II. Introduction to Knative
1. What is Knative?
Definition
Knative is a Serverless orchestration engine based on Kubernetes. One of Knative's important goals is to set cloud-native cross-platform orchestration standards by integrating container builds, workloads, and event drivers.
The main contributors to the Knative community are Google, Pivotal, IBM, and Red Hat, a strong lineup. In addition, PaaS providers such as CloudFoundry and OpenShift are actively participating in building Knative.
Core modules
The core of Knative consists of two parts: Eventing, an event-driven framework, and Serving, which runs workloads.
2. Traffic-Based Canary Release
Consider a simple scenario:
Traffic-based canary release in K8s
To implement a traffic-based canary release in K8s, you need to create the corresponding Service and Deployment, handle elasticity with an HPA, and then create a new version when releasing.
For example, suppose the original version is v1. To canary traffic to a new version v2, you must create a corresponding Service, Deployment, and HPA for v2, then set the traffic ratio through an Ingress, and finally the canary release takes effect.
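As an illustrative sketch (the service name, host, and weight are hypothetical, and this assumes the NGINX Ingress controller's canary annotations), the extra Ingress for v2 might look like this:

```yaml
# Canary Ingress sending 10% of traffic to the v2 Service.
# A separate "main" Ingress keeps routing the remaining 90% to the v1 Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"        # mark this Ingress as a canary
    nginx.ingress.kubernetes.io/canary-weight: "10"   # 10% of requests go to v2
spec:
  rules:
    - host: helloworld.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: helloworld-v2   # Service for the new version
                port:
                  number: 80
```

Note that the v2 Deployment, Service, and HPA (omitted here) must all be created separately before this Ingress takes effect.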
Traffic-based canary release in Knative
As shown in the figure above, to implement a traffic-based canary release in Knative, you only need to create a Knative Service and then split traffic across versions, represented here by Revision1 and Revision2. Automatic elasticity is already included in each version. From these two simple diagrams, we can see that a canary release in Knative requires significantly fewer resources to be manipulated directly.
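As a sketch (service name, revision names, and image are hypothetical), the same 90/10 split in Knative needs only one resource; the `traffic` block of the Knative Service splits requests across Revisions:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    metadata:
      name: helloworld-v2          # name of the Revision created by this update
    spec:
      containers:
        - image: registry.example.com/helloworld:v2
  traffic:
    - revisionName: helloworld-v1  # previous Revision keeps 90% of traffic
      percent: 90
    - revisionName: helloworld-v2  # new Revision receives 10%
      percent: 10
```

Shifting more traffic to v2 is then a matter of editing the two `percent` values, with no separate Ingress, Service, or HPA objects to manage.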
3. Knative Serving Architecture
Service
Service is the Serverless orchestration abstraction; the application lifecycle is managed through the Service. It consists of two main parts: Route and Configuration.
Route
Route corresponds to the routing policy: it routes requests to Revisions and can forward different proportions of traffic to different Revisions.
Configuration
Configuration holds the desired state of the application. Every time the Service is updated, the Configuration is updated.
Revision
Each time the Configuration is updated, a snapshot is taken. This snapshot is a Revision; multi-version management and canary releases are realized through Revisions.
We can think of it this way: Knative Service ≈ Ingress + Service + Deployment + elasticity (HPA).
4. Rich Elasticity Policies
Of course, a Serverless framework cannot do without elasticity. Knative provides the following rich elasticity policies:
Automatic scaling based on traffic requests: KPA;
Automatic scaling based on CPU and Memory: HPA;
Scheduled scaling combined with HPA;
Event Gateway (precise elasticity based on traffic requests).
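For example, the KPA is configured per Revision through annotations on the Service template. A minimal sketch (image and target values are hypothetical):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev  # use the KPA (the default)
        autoscaling.knative.dev/metric: concurrency  # scale on in-flight requests
        autoscaling.knative.dev/target: "10"         # aim for ~10 concurrent requests per pod
        autoscaling.knative.dev/min-scale: "0"       # allow scale-to-zero
        autoscaling.knative.dev/max-scale: "50"
    spec:
      containers:
        - image: registry.example.com/helloworld:v1
```

Switching `autoscaling.knative.dev/class` to `hpa.autoscaling.knative.dev` with a `cpu` metric selects the HPA-based policy instead.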
III. Fusion of Knative and ASK
1. ASK: Serverless Kubernetes
Using Elastic Container Instance (ECI) resources directly requires capacity planning in advance, which undoubtedly runs counter to the original intent of Serverless. To remove this constraint so that no upfront ECI resource planning is needed, Alibaba Cloud offers Serverless Kubernetes (ASK). Users can deploy container applications directly without purchasing nodes, eliminating node maintenance and capacity planning. ASK provides K8s-compatible capabilities while dramatically lowering the barrier to using K8s, allowing users to focus on the application rather than the underlying infrastructure.
ASK provides the following capabilities:
O&M-free
Out of the box: no node management or O&M, no node security maintenance, no node NotReady issues; simplified K8s cluster management.
Ultimate elastic scaling
No capacity planning; second-level scale-out; 500 pods in 30s.
Low cost
Create pods on demand; support for Spot instances and reserved instance coupons.
Compatible with K8s
Supports Deployment, StatefulSet, Job, Service, Ingress, CRD, etc.
Storage mounts
Support for cloud disk, NAS, and OSS volumes.
Knative on ASK
Automatic elasticity based on application traffic; out of the box; scales down to a minimal-spec instance.
Elastic Workload
Supports mixed scheduling of pay-as-you-go and Spot ECI instances.
Integration with cloud products such as ARMS and SLS
2. Knative Operational Complexity
There are three main problems in operating Knative: the gateway, the Knative control components, and cold starts.
As shown in the figure above, the Knative control plane involves the Activator, the component responsible for scaling from 0 to 1; the Autoscaler, the component responsible for scaling out and in; the Controller, Knative's own control component; and the gateway. If users have to operate and maintain these components themselves, it undoubtedly adds a burden, and the components themselves also consume resources and cost.
In addition, the cold-start problem from 0 to 1 must also be considered. When application requests arrive, it takes time for the first instance to start from scratch. If the service cannot respond in time during this period, requests time out; this is the cold-start problem.
ASK can solve the problems above. Let's see how.
3. Gateway and SLB Fusion
Previously, when Istio provided the gateway capability, we had to operate and maintain the Istio-related components, which undoubtedly increased management costs. In fact, for most scenarios we care mainly about the gateway capability, and some of Istio's own features (such as the service mesh) are not actually needed.
In ASK, we replaced the gateway layer with SLB:
Lower cost: removes more than ten components, greatly reducing O&M and IaaS costs;
More stable: the SLB cloud product is more stable, more reliable, and easier to use.
4. Control Components Hosted by the Platform
ASK hosts the Knative control components:
Out of the box: users use the Serverless framework directly without installing it themselves;
O&M-free, low cost: the Knative components are integrated with the K8s cluster, so users bear neither an O&M burden nor additional resource costs;
Easy to manage: all components are deployed on the managed side, making upgrades and iteration easier.
5. Graceful Reserved Instances
The ASK platform provides graceful reserved instances, which eliminate cold starts. When scaling down to zero, the instance count is not actually reduced to zero; instead, the workload is shrunk to a low-spec reserved instance to reduce costs while avoiding the 0-to-1 cold-start time.
Cold-start free: retaining a low-spec instance eliminates the roughly 30-second cold start from 0 to 1;
Cost controllable: a burstable-performance instance costs 40% less than a standard instance, and combining it with Spot instances can reduce costs further.
The above is the example analysis of ASK and Knative in K8s. If you have similar questions, refer to the analysis above. For more information, please follow the industry information channel.