
How to Build a Serverless Container Service Based on K8s Multi-Tenancy

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

At present, Kubernetes has become the de facto enterprise container orchestration standard, and many cloud platforms offer container services compatible with the Kubernetes API. For multi-tenancy, however, most platforms simply provide each user a dedicated virtual machine cluster, leaving users to spend considerable effort managing cluster size, resource utilization, cost, and so on.

This talk shares Huawei Cloud's exploration and practice in building an enterprise-grade Serverless Container platform on K8s, covering container security isolation, multi-tenancy management, and how the Serverless concept lands on the Kubernetes platform.

The Evolution of Kubernetes at Huawei Cloud

First, let's look at Huawei Cloud's history with Kubernetes. Huawei Cloud began studying and using Kubernetes in 2014, with an early focus on applying it in private cloud environments. In 2016, Huawei released the Cloud Container Engine (CCE) on its public cloud; like most public cloud Kubernetes services on the market (such as GKE and AKS), it provides users with fully hosted K8s clusters. At the beginning of this year, Huawei Cloud released the Kubernetes Container Instance service (Serverless Container), which differs from the traditional container instance services in the industry.

The Three Benefits of Containers: Born for Applications

It is well known that container technology brings three major benefits. First, it provides resource isolation, making it easy for users to improve utilization by co-locating applications. Second, it scales in seconds: because containers carry no heavy virtualization layer, they can be started and stopped very quickly. Third, container image technology solves the consistency problem for an application and its dependent environment, simplifying business delivery.

But in real environments, how much of this convenience does container technology actually deliver to end users? That depends on how Kubernetes is used.

Common usage patterns of Kubernetes

Deploying Kubernetes in a Private Cloud

A common way to use Kubernetes is to build clusters in one's own data center.

The advantages of this approach: first, you can enjoy the fun and sense of accomplishment of the DIY process (though it may turn into suffering as problems accumulate with use). Second, under a fully private model, data requests are processed locally, so there are no privacy concerns. Third, resource planning, cluster installation, deployment, and upgrades are all under the user's end-to-end control.

But the shortcomings are also obvious. First, many self-builders focus only on Kubernetes itself and do not research the surrounding supporting systems in depth, so during implementation they face selection problems for networking, storage, and other components. Second, users bear 100% of the operation and maintenance cost, and resource investment is often one-off (or phased), so the cost threshold is high. In addition, self-built environments rarely run many clusters or very large single clusters, so when the deployed business grows large, auto scaling is limited by the underlying resource pool, and hardware expansion is often unimaginably slow. Finally, developers habitually over-reserve resources, so utilization stays low; in other words, self-builders pay for the full set of resources regardless of how much is actually used.

Semi-Managed Dedicated Kubernetes Clusters on Public Cloud

The second common form of Kubernetes is a (semi-)managed cluster on a public cloud. Essentially, the user buys a set of virtual machines and the cloud platform automatically deploys Kubernetes on them; "semi-managed" means that on some platforms the control plane is bundled and run by the platform itself.

The advantages of this form are:

(1) Users own their clusters, so they need not worry about the interference problems that come from sharing one Kubernetes with other users.

(2) Cloud platforms do extensive testing and tuning before offering Kubernetes services, so the cluster configuration they ship reflects best practice on their own platform. Running Kubernetes on the cloud in this mode gives users a much better experience than deploying and operating it themselves.

(3) After the Kubernetes community releases a new version, the cloud platform runs at least one additional round of testing and bug fixing before bringing it online and recommending that users upgrade. This saves users the work of judging when to upgrade. Users who consume the open source version directly will hit many pitfalls if they follow new releases too quickly; if instead they postpone upgrades, they must keep tracking the community's bugs and fixes to decide which version to land on, which is time-consuming and laborious.

(4) When users hit problems with their Kubernetes, they can get professional technical support from the cloud platform. Using (semi-)managed Kubernetes services on the public cloud is therefore a good way to transfer costs: operation and maintenance are shared with the cloud platform.

Of course, there are still obvious shortcomings. The first is price: when the user buys a set of virtual machines, the price is the VM flavor's unit price multiplied by the number of nodes N. Second, because each user owns a full Kubernetes cluster, the cluster will not be very large and overall resource utilization stays low; tuning does not help much, and in most cases users cannot fully customize the control plane components' configuration. In addition, when the cluster has little free capacity and the business needs to grow, the cluster itself must be expanded first, so end-to-end scale-out is limited by virtual machine creation time.

Container Instance Services

The third form, strictly speaking, is using containers through a public cloud's container instance service.

Its advantages are obvious: users do not perceive the underlying cluster and need no operation and maintenance; resource pricing is fine-grained enough that you buy only what you use; and both scaling and billing happen in seconds.

The disadvantage is that many platforms' container instance services expose mainly private APIs that are not compatible with the Kubernetes API, making vendor lock-in likely.

To meet users' demand for the K8s API, these container instance services have also launched compatibility solutions based on the virtual-kubelet project: the entire container instance service is virtualized as a single node in a Kubernetes cluster and connected to the Kubernetes master to handle Pod operations.

However, because the entire container instance service is virtualized into one super node, a series of high-availability features that Kubernetes designed for multi-node deployments no longer work. Another problem is that this virtual-kubelet-based compatibility solution is incomplete on the data plane: the project has wavered on how kube-proxy should be deployed, and there is still no news on compatibility with container storage.
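The virtual-kubelet docking described above can be sketched roughly as follows. This is a heavily simplified, hypothetical Python sketch (the class names `FakeInstanceAPI` and `VirtualNodeProvider` are illustrative inventions, not the real project's interfaces): a provider presents the instance service as one node and translates Pod create/delete calls into calls against the platform's private container-instance API.

```python
class FakeInstanceAPI:
    """Stand-in for a cloud platform's private container instance API."""
    def __init__(self):
        self.instances = {}

    def run(self, name, image):
        self.instances[name] = {"image": image, "status": "Running"}

    def delete(self, name):
        self.instances.pop(name, None)


class VirtualNodeProvider:
    """Maps kubelet-style Pod lifecycle operations onto the instance API,
    so the whole service looks like a single 'super node' to the master."""
    def __init__(self, api, node_name="virtual-node"):
        self.api = api
        self.node_name = node_name

    def create_pod(self, pod):
        # One container instance per Pod container (simplified).
        for c in pod["spec"]["containers"]:
            self.api.run(f'{pod["metadata"]["name"]}-{c["name"]}', c["image"])

    def delete_pod(self, pod):
        for c in pod["spec"]["containers"]:
            self.api.delete(f'{pod["metadata"]["name"]}-{c["name"]}')
```

This also makes the text's criticism concrete: every Pod lands on the same virtual node, so anti-affinity and other multi-node high-availability features have nothing to spread across.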

Building a Serverless Container Service on K8s Multi-Tenancy

With all this background, you cannot help but ask: why not build a Serverless Container service on a Kubernetes multi-tenancy scheme? In fact, building container instance services on Kubernetes multi-tenancy has many advantages, the biggest being support for the native K8s API and command line: applications users have developed around Kubernetes deploy and run directly on the K8s-based Serverless Container. Because containers are billed by the second, users enjoy the lower price threshold of container instance services. In addition, in this form the cloud platform operates one large shared resource pool; users pay only for the resources their business containers consume, without caring about the underlying cluster's utilization or bearing any cluster operation and maintenance cost.

The main challenge of this form is that native K8s supports only soft multi-tenancy, which falls short in isolation and related aspects.

Next, let's review the typical multi-tenancy scenarios for K8s.

The first is a SaaS platform, or another service built on top of K8s that does not directly expose the K8s API. Because there is a layer of its own API on top, the platform can do a lot of extra work, such as implementing its own tenant definition, so the tenant isolation requirements on the K8s control plane are low. The applications come from end users and are untrusted, however, so at container runtime the platform needs strong data-plane resource isolation and access control.

The second is the internal platform of a small company. Users and applications come from within the company and mutual trust is relatively high, so neither the control plane nor the data plane needs much additional isolation hardening; vanilla K8s meets the need.

The third is a large enterprise platform. Here the K8s users come from departments across the enterprise, and applications go online only after internal verification, so application behavior is trusted and the data plane needs little extra isolation. What matters more is protective control on the control plane, to avoid management interference between departments and businesses; for example, API calls need per-tenant rate limiting.

The fourth scenario is a multi-tenant K8s platform on the public cloud, which places the highest demands on both the control plane and the data plane. The source of applications is uncontrolled and may well contain malicious code, and the K8s API is exposed directly to end users, so control plane isolation capabilities such as API rate limiting and tenant-aware access control are indispensable.
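The per-tenant API rate limiting mentioned in the last two scenarios is typically a token bucket keyed by tenant ID. Here is a minimal sketch, assuming the tenant ID has already been extracted from the request (the class `TenantRateLimiter` and its parameters are illustrative, not any platform's actual implementation):

```python
import time


class TenantRateLimiter:
    """Token-bucket limiter keyed by tenant ID: each tenant gets its own
    bucket, so one noisy tenant cannot exhaust the shared API server."""

    def __init__(self, rate, burst):
        self.rate = rate      # tokens refilled per second, per tenant
        self.burst = burst    # bucket capacity (max burst size)
        self.buckets = {}     # tenant -> (tokens, last_refill_timestamp)

    def allow(self, tenant, now=None):
        """Return True if this tenant's request should be admitted."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(tenant, (self.burst, now))
        # Refill proportionally to elapsed time, capped at burst.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[tenant] = (tokens - 1, now)
            return True
        self.buckets[tenant] = (tokens, now)
        return False
```

The key design point is the per-tenant bucket: a global limiter would let one tenant's burst starve everyone else, which is exactly the management interference the text warns about.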

To sum up, K8s must address three major challenges to provide Serverless Container services in a public cloud scenario. First, introducing the concept of a tenant and implementing access control: K8s still has no native tenant concept, and using Namespace as the tenant boundary does not adapt well to multi-tenant scenarios. Second, isolating nodes (computing resources) and securing the runtime. Third, network isolation: K8s's default network access model causes many problems in this situation.
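To make the Namespace-as-boundary baseline concrete, here is a sketch of the manifests a platform might generate per tenant with stock Kubernetes objects: a dedicated Namespace, a ResourceQuota to cap consumption, and a default-deny NetworkPolicy, since by default Pods in different namespaces can reach each other freely. The function name and quota values are illustrative assumptions; the text's point is that even this full set only approximates a real tenant boundary.

```python
def tenant_namespace_manifests(tenant):
    """Build per-tenant manifests using only stock K8s primitives."""
    ns = f"tenant-{tenant}"
    return [
        # Namespace: the (soft) tenant boundary.
        {"apiVersion": "v1", "kind": "Namespace",
         "metadata": {"name": ns}},
        # ResourceQuota: cap what this tenant can consume.
        {"apiVersion": "v1", "kind": "ResourceQuota",
         "metadata": {"name": "quota", "namespace": ns},
         "spec": {"hard": {"requests.cpu": "20",
                           "requests.memory": "64Gi",
                           "pods": "100"}}},
        # Default-deny NetworkPolicy: close the default open network.
        {"apiVersion": "networking.k8s.io/v1", "kind": "NetworkPolicy",
         "metadata": {"name": "default-deny", "namespace": ns},
         "spec": {"podSelector": {},  # selects all Pods in the namespace
                  "policyTypes": ["Ingress", "Egress"]}},
    ]
```

Even so, all tenants still share one API server and one kernel per node, which is why the control-plane access control and runtime security challenges above remain.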
