Getting started with Serverless Kubernetes: subtracting from Kubernetes

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Background

As a general-purpose container orchestration system, Kubernetes carries a wide range of applications and scenarios, including CI/CD, data computing, online applications, and AI. But precisely because of its generality and complexity, managing a Kubernetes cluster is challenging for many users, mainly in three ways: the learning cost is high; cluster operation and maintenance are expensive, covering node management, capacity planning, and diagnosing all kinds of node failures; and the compute cost is not optimal in many scenarios. For example, for a cluster that runs Jobs on a schedule, holding a resource pool long-term is wasteful for users, and resource utilization stays low.

Serverless Kubernetes (ASK) is the Alibaba Cloud Container Service team's exploration of how Kubernetes may evolve: by subtracting from Kubernetes, it reduces the operation and maintenance burden and simplifies cluster management, turning Kubernetes from complex to simple. We believe that in the future users will focus more on developing applications than on maintaining infrastructure.

Nodeless management

In the cluster itself, we want users to focus on application orchestration semantics such as pod/service/ingress/job while paying much less attention to the underlying nodes. With no nodes to manage, cluster operation costs drop significantly: statistics show that most common Kubernetes problems are node-related, such as Node NotReady, and users no longer need to worry about node security or about upgrading and maintaining base system software. In an ASK cluster, the ECS nodes are replaced by a virtual-kubelet virtual node whose capacity can be regarded as "infinite"; users need not worry about cluster capacity or plan it in advance.
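Concretely, the nodeless model means a standard workload manifest is all a user manages. The following is a minimal sketch (names are illustrative, not from the original article); note there is no node-level configuration anywhere, and on ASK/ECI the resource requests are what size the backing container instance (an assumption to verify against the current ECI docs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:          # on ASK/ECI, requests size the container instance
            cpu: "1"
            memory: 2Gi
```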

No Master management

As in the managed version of ACK, the Master components of an ASK cluster (apiserver, ccm, kcm, etc.) are hosted by the Container Service platform; users neither manage the upgrade and maintenance of these core components nor pay anything for them.

A minimal Kubernetes base environment

Beyond removing node and Master management, we have also simplified cluster management in many other ways, including hosting many addons by default, so users no longer need to manage basic addons or pay for them. Relying on Alibaba Cloud's native network and storage capabilities, together with a unique managed architecture, we provide a greatly simplified yet fully functional Kubernetes base environment:

Storage: ACK deploys aliyun-disk-controller/flexvolume; ASK needs no deployment (support in progress).
CNI network: ACK deploys a terway/flannel daemonset; ASK needs no deployment, with VPC-based network communication.
coredns: ACK deploys 2 replicas of coredns; ASK needs no deployment, with DNS based on PrivateZone.
kube-proxy: ACK deploys a kube-proxy daemonset; ASK needs no deployment, with service access based on PrivateZone.
Ingress: ACK deploys nginx-ingress-controller; ASK needs no deployment, with SLB layer-7 forwarding.
ACR image pull: ACK deploys acr-credential-helper; ASK needs no deployment, supported by default.
Log collection: ACK deploys a logtail daemonset; ASK needs no deployment, supported by default.
Metrics: ACK deploys metrics-server; ASK needs no deployment, with metrics statistics supported by default.
EIP: ACK deploys terway; ASK needs no deployment, with EIP mounting out of the box via annotation.
Cloud disk mounting: ACK deploys aliyun-disk-controller; ASK needs no deployment, with the disk to create and mount with a pod specified via annotation.
Auto scaling: ACK deploys cluster-autoscaler; ASK needs no deployment, supported by default.
GPU: ACK deploys the nvidia-docker plugin; ASK needs no deployment.

As this summary shows, an ACK cluster needs at least 2 ECS machines just to run these basic addons, while an ASK cluster makes them invisible, so an out-of-the-box Kubernetes cluster can be created at zero base cost.

Simplified auto scaling

Because there is no node management and no capacity planning, scaling a cluster out requires no node-level expansion, only pod-level expansion, which greatly improves scaling speed and efficiency. Some customers already choose ASK/ECI specifically to absorb peaks in business traffic. Currently ASK/ECI can fully launch 500 pods within 30s (to Running status), and a single pod can start in under 10s.

Lower cost

Beyond the low cost of creating an ASK cluster itself, on-demand pods also improve resource utilization in many scenarios. In many Job or data-computing scenarios, users do not need to maintain a fixed resource pool long-term, and ASK/ECI supports these needs well. Experience shows that when a pod runs less than 16 hours a day, the ASK/ECI approach is more economical than holding an ECS resource pool.

ECI: elastic container instances for fast delivery of container resources

Speaking of ASK, we must also speak of ECI, the resource base of ASK. ECI is a stable, efficient, and highly elastic container instance service that Alibaba Cloud provides on top of the ECS IaaS resource pool. ECI makes the container a first-class citizen of the public cloud: users can deploy container applications directly without purchasing or managing ECS. This simplified container instance product form is a perfect match for ASK.
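The 16-hour figure above can be checked with a simple break-even sketch (the prices are placeholder symbols, not published rates): with an ECI hourly price p_eci and an equivalent ECS instance held all day at hourly price p_ecs, the daily costs for t hours of pod runtime are

```latex
C_{\mathrm{ECI}}(t) = t \cdot p_{\mathrm{eci}}, \qquad
C_{\mathrm{ECS}} = 24 \cdot p_{\mathrm{ecs}}
```

so on-demand pods win whenever t < t* = 24 p_ecs / p_eci; the quoted break-even of t* = 16 hours corresponds to p_eci being roughly 1.5 times p_ecs.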
Users can create container instance resources directly through the ECI Open API, but in container scenarios they generally need an orchestration system to handle container scheduling, highly available orchestration, and other capabilities, and ASK is exactly such a Kubernetes orchestration layer. For ASK, ECI means the Container Service does not need to build its own backend compute resource pool, let alone worry about the capacity of that pool. Building on ECI means building on the entire Alibaba Cloud IaaS resource pool, which naturally brings advantages in inventory and elasticity (for example, the ECS specification backing an ECI instance can be specified through an annotation, and most ECS specifications can be used in ASK to meet the needs of a variety of computing scenarios). In addition, because ECI and ECS share the same resource pool, economies of scale can be passed on to users as lower-cost computing services.

Container ecosystem support

ASK provides complete support for the Kubernetes container ecosystem, and a large number of customers already use ASK in the following scenarios:

CI/CD: gitlab-runner, jenkins/jenkins-x
Data computing: spark/spark-operator, flink, presto, argo
AI: tensorflow/arena
ServiceMesh: istio, knative
Testing: locust, selenium
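As one example of the annotation mechanism mentioned above, the ECS specification backing a pod can be pinned with a pod annotation. The annotation key and instance type below follow Alibaba Cloud's ECI documentation as best the author recalls and should be checked against the current docs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spec-pinned-pod              # illustrative name
  annotations:
    # ask ECI to back this pod with a specific ECS instance type
    k8s.aliyun.com/eci-use-specs: "ecs.c6.xlarge"
spec:
  containers:
  - name: app
    image: nginx:1.25
```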

This article is content from Alibaba Cloud and may not be reproduced without permission.





© 2024 shulou.com SLNews company. All rights reserved.
