2025-03-31 Update From: SLTechnology News & Howtos > Servers
This article introduces what KubeVela and KEDA do together, and why Alibaba's EDAS team adopted them for application autoscaling. I hope you find it a worthwhile read!
Scaling Kubernetes
When managing Kubernetes clusters and applications, you need to keep a careful eye on several things, such as:
Cluster capacity: do we have enough available resources to run our workloads?
Application workload: does the application have enough resources available? Can it keep up with the work to be done (for example, queue depth)?
To automate this, you would usually set up alerts to get notified, or even use autoscaling. Kubernetes is a great platform that gives you these features out of the box. Clusters can easily be scaled with the Cluster Autoscaler component, which watches for pods that cannot be scheduled due to resource shortages and adds or removes nodes accordingly. Because Cluster Autoscaler only kicks in once pods are already pending, there may be an interval during which your workloads are not up and running. Virtual Kubelet (a CNCF sandbox project) is a tremendous help here, allowing you to add a "virtual node" to your Kubernetes cluster onto which pods can be scheduled.
This lets platform vendors (such as Alibaba, Azure, HashiCorp, and others) overflow pending pods out of the cluster until the required cluster capacity becomes available. In addition to scaling clusters, Kubernetes also makes it easy to scale applications:
Horizontal Pod Autoscaler (HPA) lets you add or remove Pods from your workload to scale in/out (adding or removing replicas).
Vertical Pod Autoscaler (VPA) lets you add or remove resources from your Pods to scale up/down (adding or removing CPU or memory).
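As a sketch of the first option, a minimal HPA manifest that scales a hypothetical Deployment named my-app on CPU utilization might look like this (the name, replica bounds, and threshold are all illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # hypothetical name
spec:
  scaleTargetRef:           # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above ~70% average CPU
```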
All of this provides a good starting point for your scalable application.
Limitations of HPA
Although HPA is a good starting point, it focuses on metrics of the pods themselves, letting you scale on CPU and memory. That said, you can fully configure how it should autoscale, which makes it powerful. While this is ideal for some workloads, you often want to scale on metrics that live elsewhere, such as Prometheus, Kafka, a cloud provider, or other event sources. Thanks to external metrics support, users can install a metrics adapter that serves a variety of metrics from external services and autoscale on them through the metrics server. However, note that you can run only one metrics server in a cluster, which means you must choose a single source for your custom metrics.
You can use Prometheus together with tools such as Promitor to pull in metrics from other providers and scale on Prometheus as your single source of truth, but that takes a lot of plumbing and work. There has to be an easier way... and there is: Kubernetes Event-Driven Autoscaling (KEDA)!
What is KEDA?
Kubernetes Event-Driven Autoscaling (KEDA) is a single-purpose, event-driven autoscaler for Kubernetes that can easily be added to a cluster to scale applications. It aims to make application autoscaling simple and to optimize cost by supporting scale-to-zero. KEDA takes away all the scaling infrastructure and manages everything for you, letting you scale on more than 30 systems or extend it with your own scalers. Users only need to create a ScaledObject or ScaledJob that defines what you want to scale and which triggers to use; KEDA takes care of the rest!
You can scale anything, even a CRD from another tool you use, as long as it implements the /scale subresource. So, did KEDA reinvent the wheel? Not at all! Instead, it extends Kubernetes by using HPA under the hood, feeding it external metrics served by KEDA's own metrics adapter, which replaces all other adapters.
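To make the ScaledObject idea concrete, here is a sketch of one that scales a hypothetical Kafka consumer Deployment on consumer-group lag; the deployment name, topic, broker address, and thresholds are all illustrative:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler            # hypothetical name
spec:
  scaleTargetRef:
    name: orders-consumer        # the Deployment to scale
  minReplicaCount: 0             # scale to zero when the topic is idle
  maxReplicaCount: 20
  triggers:
    - type: kafka                # KEDA's built-in Kafka scaler
      metadata:
        bootstrapServers: kafka:9092
        consumerGroup: orders
        topic: orders
        lagThreshold: "50"       # target lag per replica
```

Behind the scenes, KEDA creates and manages an HPA for this target and feeds it the lag metric through its metrics adapter.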
Last year, KEDA joined the CNCF as a sandbox project, and there are plans to propose moving it to the incubation stage later this year.
Alibaba's practice based on OAM/KubeVela and KEDA
Enterprise Distributed Application Service (EDAS), the flagship enterprise PaaS product on Alibaba Cloud, has served countless developers on the public cloud at massive scale for many years. Architecturally, EDAS is built with the KubeVela project. Its overall architecture is shown in the figure below.
In production, EDAS integrates the ARMS monitoring service on Alibaba Cloud to provide fine-grained, application-level metrics for monitoring. The EDAS team added an ARMS scaler to the KEDA project to drive autoscaling. They also added some features and fixed some bugs in KEDA v1, including:
When there are multiple triggers, these values are summed rather than left as separate values.
When creating a KEDA HPA, the name length is limited to 63 characters to avoid violating DNS naming rules.
Triggers cannot be disabled, which can cause trouble in production.
The EDAS team is actively contributing these fixes upstream to KEDA, and some of them have already been added in v2.
Why Alibaba Cloud standardized on KEDA as the autoscaler for its applications
For autoscaling, EDAS initially used upstream Kubernetes HPA with CPU and memory as its two metrics. However, as the user base grew and requirements diversified, the EDAS team quickly ran into the limitations of upstream HPA:
Limited support for custom metrics, especially fine-grained application-level metrics. Upstream HPA focuses on container-level metrics such as CPU and memory, which are too coarse for applications. Metrics that reflect application load, such as RT and QPS, are not readily supported. Yes, HPA can be extended, but that extensibility is limited when it comes to application-level metrics, and the EDAS team was often forced to fork code to introduce fine-grained application-level metrics.
No scale-to-zero support. Many users need their microservices to scale to zero when they are not in use. This requirement is not limited to FaaS/serverless workloads; it saves costs and resources for all users. Currently, upstream HPA does not support this feature.
No scheduled scaling support. Another strong demand from EDAS users is scheduled scaling. Again, upstream HPA does not provide this functionality, and the EDAS team needed an alternative that is not vendor-locked.
Based on these requirements, the EDAS team began planning a new version of the EDAS autoscaling feature. At the same time, EDAS adopted OAM in early 2020, overhauling its underlying core components. OAM gives EDAS a standardized, pluggable application definition to replace its internal Kubernetes application CRD, and the model's extensibility lets EDAS easily integrate new functionality from the Kubernetes community. In this case, the EDAS team set out to align EDAS's new autoscaling requirements with a standard OAM implementation of autoscaling. From the use cases, the EDAS team distilled three criteria:
The auto-scaling feature should present itself as a simple atomic function without the need for any complex solutions.
Metrics should be pluggable, so the EDAS team can customize them and build on them to support various requirements.
It needs to support scale-to-zero out of the box.
After a detailed evaluation, the EDAS team chose the KEDA project, which was open-sourced by Microsoft and Red Hat and has been donated to the CNCF. KEDA provides several useful scalers by default and supports scale-to-zero out of the box. It provides fine-grained autoscaling for applications. It has the concepts of scalers and metrics adapters, supports a powerful plug-in architecture, and provides a unified API layer. Most importantly, KEDA is designed to focus solely on autoscaling, so it can easily be integrated as an OAM feature. Overall, KEDA is a good fit for EDAS.
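As an illustration of how KEDA covers the scale-to-zero and scheduled-scaling requirements above, a ScaledObject can combine a minimum replica count of zero with KEDA's cron trigger. Everything here is a hypothetical sketch; the deployment name, timezone, schedule, and replica count are illustrative:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: business-hours-scaler    # hypothetical name
spec:
  scaleTargetRef:
    name: web-frontend           # the Deployment to scale
  minReplicaCount: 0             # scale to zero outside the schedule
  triggers:
    - type: cron                 # KEDA's built-in cron scaler
      metadata:
        timezone: Asia/Shanghai
        start: 0 9 * * *         # scale up at 09:00 every day
        end: 0 18 * * *          # scale back down at 18:00
        desiredReplicas: "10"
```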
This concludes the overview of what KubeVela and KEDA can do together. Thank you for reading!