
Is serverless the next generation computing paradigm?


This article examines whether serverless is the next-generation computing paradigm. It goes into some detail, and I hope interested readers will find it a useful reference.

Introduction

At the recent HC2020, Huawei, facing an era of diversified computing power, released three development kits for data-center (DC) distributed computing, one of which is the Yuanrong component. Yuanrong is a distributed parallel application development framework built on function computing, intended to help developers define both the development model and the runtime model of DC distributed computing. Regarding the function computing mentioned here, colleagues keep asking: what is its relationship to, or difference from, Serverless?

I have spent the past two years promoting serverless technology in different scenarios within the company, so I will take this opportunity to share my understanding.

1. The essence of Serverless

The current, relatively formal definition of Serverless (the CNCF white paper) highlights several features: it is a further development of the cloud computing model that brings two key benefits over current cloud computing, NoOps and Pay as You Run. Today, Serverless implementations are represented by AWS Lambda, along with Microsoft Azure Functions, Google Cloud Functions, and others. In the 2019 Berkeley paper "Cloud Programming Simplified", Serverless is described as the next-generation computing paradigm of cloud computing. By looking at the essence of cloud computing as it evolved from microservice technology to Serverless technology, we can better understand the logic behind these technologies, and why Berkeley, after successfully predicting the rise of cloud computing, turned its attention to Serverless.

Figure 1: The current stage and form of serverless technology

1.1 The evolution from the rise of cloud computing to the cloud-native ecosystem

The rise of cloud computing, coming after the rapid development of CPU hardware capabilities, benefited from the OS+ISV software ecosystem and the maturity of virtualization technology. Cloud computing cleverly carried the OS+ISV ecosystem forward so that ISV software could be migrated to the cloud seamlessly. Cloud vendors use virtualization technology to provide IaaS services to customers, which satisfies two customer requirements: (1) the operating environment of the application software does not change; (2) there is no longer any need to maintain physical hosts, only the application software itself.

First, from the perspective of the service form of cloud computing, enterprise applications and their infrastructure are now split across two levels, users and infrastructure providers, as shown in the following figure. This logical division is very important. In the original software ecosystem, both the infrastructure platform and the application software were managed and maintained by users themselves; now the role of a professional platform provider emerges to supply the infrastructure.

Figure 2: Cloud computing introduces the concept of the infrastructure provider

Second, let us return to the process by which cloud computing rose. As shown in Figure 3, cloud vendors used the maturity of virtualization technology to provide IaaS services to users without changing the OS+ISV way of working, so that users' software could be migrated to the cloud vendors' infrastructure almost seamlessly. In this way, cloud vendors quickly gathered enterprise users onto the cloud. After this stage, cloud vendors such as AWS innovated rapidly: beyond IaaS, cloud middleware, cloud security, and third-party services integrated a large number of services for running cloud applications and business logic, gradually building up the environment the cloud-native ecosystem needs. Container technology then continued to evolve, the cloud-native software ecosystem began to take shape, and the interface of the software ecosystem clearly rose from the guest OS to the container level; deployment of application software is now also handled by the platform provider, so users no longer have to care about the operating system the infrastructure runs. In this software stack, the scope covered by cloud vendors, that is, by platform providers, has advanced another step. This change is not only a consequence of the cloud-native ecosystem; it is also required by the cloud vendors' business logic.

Figure 3: Schematic diagram of the emergence and evolution of cloud computing

Why say that? See the next section.

1.2 The business logic of cloud computing is based on economies of scale

At present, cloud computing is concentrated in a handful of cloud vendors, and the successful ones all started from their own businesses that consumed large amounts of infrastructure, from which the cloud business gradually expanded. For example, AWS and Aliyun grew out of their own e-commerce platforms, while Google Cloud and Azure gradually gained market share after finding scale operations in their own mobile user services and SaaS services respectively.

Observing how cloud computing has developed, we can say that cloud vendors follow a development model driven by economies of scale. Around economies of scale there are two important phenomena, or laws, and understanding them helps us understand the direction in which cloud technology will evolve.

The first is the economy of scale itself. Put simply, as the scale of production (here, cloud computing) expands, the average cost (infrastructure cost) per unit of output (service benefit) tends to decline. The British-born physicist Geoffrey West, studying the laws governing urban population and industrial development, concluded that output under scale grows superlinearly while costs follow a sublinear law, as shown in the chart below.
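The relationship can be sketched as a pair of power laws. The exponents below (about 1.15 for output and 0.85 for cost) are the rough values reported in West's urban-scaling work and are used purely for illustration; they are assumptions, not measured cloud-vendor figures.

```python
# Illustrative sketch: superlinear output vs. sublinear cost under scale.
# The exponents (~1.15 and ~0.85) follow West's urban-scaling results and are
# assumptions for illustration, not measured cloud-industry data.

def output(scale, beta_out=1.15):
    """Output (service benefit) grows superlinearly with scale."""
    return scale ** beta_out

def cost(scale, beta_cost=0.85):
    """Infrastructure cost grows sublinearly with scale."""
    return scale ** beta_cost

for n in (1, 10, 100, 1_000, 10_000):
    unit_cost = cost(n) / output(n)   # cost per unit of output falls with scale
    print(f"scale={n:>6}  cost per unit of output={unit_cost:.3f}")
```

Under these assumed exponents the cost per unit of output shrinks roughly as scale^-0.3, which is the mathematical shape of the "bigger is cheaper" incentive described above.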

Knowing this phenomenon, we can understand why cloud vendors strive for scale. AWS launched in 2002 and kept pushing cloud services, but it did not enter the profitable phase of its economies of scale until it first disclosed results in 2013. AWS now invests roughly US$10 billion a year in capex and keeps building out its cloud, with more than 5 million servers worldwide. On the basis of this scale-cost advantage it has built a virtuous circle driven by long-term value, cost, and its technology ecosystem, and it controls the pricing strategy of cloud services: at re:Invent 2019 it claimed to have cut prices roughly 70 times as a matter of routine, while still achieving an operating margin of around 20% for the cloud business.

Figure 4: Cloud computing follows economies of scale

The second phenomenon is the limit of scale effects. As the scale of production keeps expanding, unit infrastructure cost falls until it reaches a minimum at the optimal scale of production; if production technology does not change, expanding the scale beyond that point makes the average unit cost of output rise again. AWS, which has already entered the virtuous circle of scale effects, keeps its capex-to-revenue ratio roughly stable at around 40-50%, but it still needs to look for room to keep reducing costs.

Figure 5: The LAC (long-run average cost) curve of economies of scale

At the same time, the main service cloud vendors offer today, IaaS, which provides virtual machine resources to tenants, suffers from low resource utilization in both CPU and memory. Industry data suggest that CPU utilization in the data centers cloud vendors provide is no more than 20-30%. When tenants buy virtual machines with fixed vCPU and memory configurations, the cloud vendor effectively runs a bin-packing algorithm on the platform, fitting tenant requests into the spare capacity of the data center. Tenants purchase resources sized for their business peaks, so a large portion of tenant resources sit in an off-peak state most of the time, and there is basically nothing cloud vendors can do about that utilization. By contrast, when running their own workloads, cloud vendors use techniques such as colocating different businesses and SLA-aware scheduling; Google, for example, has long claimed that the improved version of Borg can reach 90% CPU utilization in its data centers. This is also why cloud vendors offer shared (burstable) compute instances such as the AWS T instances: through a user-facing SLA policy, with the user's knowledge, the vendor regains some control over vCPU sharing and achieves higher CPU utilization.
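To make the bin-packing idea concrete, here is a minimal first-fit sketch that places fixed-size VM requests onto hosts. It only illustrates the general technique referred to above; real cloud placement engines consider far more dimensions, and this is not any vendor's actual algorithm.

```python
# Minimal first-fit bin-packing sketch: place VM requests (vCPU, GiB RAM) onto
# hosts with fixed capacity. Illustrative only -- real placement also weighs
# network, anti-affinity, SLA classes, and more.
from dataclasses import dataclass, field

HOST_CPU, HOST_MEM = 64, 256  # assumed per-host capacity

@dataclass
class Host:
    cpu_free: int = HOST_CPU
    mem_free: int = HOST_MEM
    vms: list = field(default_factory=list)

def place(requests):
    hosts = []
    for cpu, mem in requests:
        for h in hosts:
            if h.cpu_free >= cpu and h.mem_free >= mem:      # first host that fits
                h.cpu_free -= cpu; h.mem_free -= mem; h.vms.append((cpu, mem))
                break
        else:                                                # nothing fits: new host
            h = Host(cpu_free=HOST_CPU - cpu, mem_free=HOST_MEM - mem, vms=[(cpu, mem)])
            hosts.append(h)
    return hosts

hosts = place([(16, 64), (8, 32), (32, 128), (16, 64), (4, 16)])
for i, h in enumerate(hosts):
    print(f"host {i}: {HOST_CPU - h.cpu_free}/{HOST_CPU} vCPU allocated, vms={h.vms}")
```

Note that packing only governs how requested capacity is fitted in; it cannot recover the idle headroom inside VMs that tenants sized for peak load, which is exactly the utilization gap the paragraph describes.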

Returning to the earlier point, we mentioned both users and platform providers. On the one hand, cloud vendors want more control over resources so that hyperscale cloud computing keeps enjoying economies of scale and unit resource cost keeps falling. On the other hand, tenants care about the stability of their business and want to focus more on the business itself. From this we can see the direction of cloud computing technology: the level of the software stack managed by cloud vendors will certainly keep rising, and cloud technology must deliver elasticity and high scalability for user workloads. Cloud vendors gain maximal control over the resources on which applications run, pursuing high resource utilization and low cost, while tenants get applications backed by business SLAs.

Serverless technology is an option for cloud vendors based on economies of scale.

1.3 Serverless technology is the choice that matches cloud-native economies of scale

As shown in Figure 3, Serverless takes the computing abstraction a step beyond the container runtime, and the software stack managed by the cloud vendor rises to include the runtime itself. Here the author separates function computing from Serverless technology. Function computing is an abstraction of a computational paradigm that splits the computation into two levels, the function (code logic) and the function runtime (resources, libraries, etc.), that is,

Function computing = function + function runtime

Serverless computing builds on this function computing abstraction. In a cloud-native environment, users can focus even more narrowly on business code logic and directly use the runtime provided by the cloud vendor; compared with containers, the software stack managed by the cloud vendor rises another level. The author classifies Serverless as a cloud-native technology, because serving tenants with serverless necessarily relies on a large number of back-end services provided by cloud vendors along with their runtimes, that is,

Serverless = FaaS + BaaS

Function computing at this granularity, applied to end-user, IoT, and other event-driven applications, separates the code from its runtime; the cloud vendor provides the runtime for the function code together with its physical resources. As shown in the following figure, the platform provider gains the largest controllable range of the software stack, while users only need to focus on their code. Applications at function granularity therefore give platform providers the maximum technical room to maneuver, and within that room the scale cost of cloud computing can be reduced further, which is why Serverless technology is a necessary choice for cloud vendors.

Figure 6: Serverless gives platform providers the largest controllable technology space in the software stack
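To make the function/runtime split concrete, here is a minimal sketch in which the "function" is pure business logic and a tiny "runtime" loop stands in for what the cloud vendor operates. The handler signature and event shape are invented for illustration and do not correspond to any particular vendor's FaaS API.

```python
# Minimal sketch of "function computing = function + function runtime".
# The handler is the user's part (pure business logic); the runtime loop is a
# stand-in for what the cloud vendor operates. Event shape and signature are
# illustrative assumptions, not any real vendor's FaaS API.
import json

def handler(event: dict, context: dict) -> dict:
    """User code: only business logic, no servers, no lifecycle management."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello, {name}"})}

def tiny_runtime(fn, events):
    """Vendor side (greatly simplified): receive events, invoke, return results."""
    context = {"function_name": fn.__name__, "memory_mb": 128}  # assumed metadata
    return [fn(event, context) for event in events]

if __name__ == "__main__":
    for result in tiny_runtime(handler, [{"name": "serverless"}, {}]):
        print(result)
```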

At the current stage of Serverless, however, the scope of application is mainly event-driven, short-lived tasks. The user writes the task as a function, which is constrained in execution time and resources; on that basis the platform provider gains maximum scheduling authority and can therefore offer pay-per-use, on-demand pricing. Event-driven scenarios with massive numbers of terminals have been well served, making full use of serverless's on-demand elasticity and on-demand billing. Obviously this scope is not enough to meet the expectations of cloud vendors, so starting from Serverless = FaaS + BaaS, cloud vendors also want to push the other aspect, BaaS, to drive the rapid evolution of serverless computing.
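As a concrete illustration of pay-per-use billing, the sketch below computes a FaaS-style charge from invocation count, duration, and memory (GB-seconds). The unit prices are made-up placeholders, not any vendor's actual price list.

```python
# Sketch of FaaS-style pay-per-use billing:
#   charge = requests * per-request fee + GB-seconds * per-GB-second fee.
# Unit prices below are placeholders for illustration, not real vendor pricing.
PRICE_PER_MILLION_REQUESTS = 0.20   # assumed, USD
PRICE_PER_GB_SECOND = 0.0000167     # assumed, USD

def monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    request_fee = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_fee = gb_seconds * PRICE_PER_GB_SECOND
    return request_fee + compute_fee

# Example: 5 million invocations a month, 120 ms each, 256 MB functions.
print(f"${monthly_cost(5_000_000, 120, 256):.2f} per month")
```

The point of the model is that an idle function costs nothing: when invocations drop to zero, the bill drops to zero, which is exactly the "Pay as You Run" property discussed below.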

NoOps and Pay as You Run are the two key features of the official Serverless definition as seen from the user's point of view. Anything that satisfies these two characteristics counts as Serverless technology, so the concept of Serverless is broader than function computing: it does not have to be built on the function computing abstraction, and any service that gives users NoOps and Pay as You Run can be classified as Serverless. Accordingly, cloud vendors keep pushing two things.

The first is to make BaaS serverless, which we have all seen: serverless cloud database and cloud storage products have already been launched. The second is to push the serverless transformation of existing applications, that is, users keep writing their own application services and runtimes, but the autoscaling and parallelization templates for those services are provided by the cloud vendor. This is clearly visible in Google's product strategy.
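As one small example of the vendor-provided autoscaling just mentioned, the sketch below derives a replica count from in-flight requests and a target concurrency per replica, in the spirit of concurrency-based autoscalers. The parameters and the exact formula are illustrative assumptions, not a specific product's implementation.

```python
# Sketch of concurrency-based autoscaling:
#   replicas = ceil(in-flight requests / target concurrency per replica),
# clamped to [min, max], with scale-to-zero when idle. Parameters are illustrative.
import math

def desired_replicas(in_flight: int, target_concurrency: int = 10,
                     min_replicas: int = 0, max_replicas: int = 100) -> int:
    if in_flight == 0:
        return min_replicas                      # scale to zero when idle (if allowed)
    wanted = math.ceil(in_flight / target_concurrency)
    return max(min_replicas, min(wanted, max_replicas))

for load in (0, 3, 25, 480, 5000):
    print(f"{load:>5} in-flight requests -> {desired_replicas(load)} replicas")
```

The user supplies only the service and the target; deciding when and how far to scale is exactly the piece the platform provider takes over.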

Google has two clear technology lines for Serverless. One is Cloud Run, a Serverless platform built on the Kubernetes container platform, effectively a Serverless application platform that pushes existing microservice applications toward serverless. The other is Cloud Functions plus MBaaS products for mobile applications. Together the two lines drive the evolution of Serverless technology. AWS will certainly not lag behind: although it does not have Google's strong ecosystem control over the runtime platform, it also directly provides Auto Scaling, ASM, and other services to guide application services toward serverless.

Given the business logic behind serverless technology described above, it has become a must for cloud vendors, and that is the basis on which Berkeley asserts that serverless is the next-generation computing paradigm of the cloud era.

That is my take on whether serverless is the next-generation computing paradigm. I hope the content above is helpful; if you think the article is good, feel free to share it with others.
