What are the relevant knowledge points of Serverless

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces the essential knowledge points of Serverless: the problem it solves, how FaaS works, and the scenarios it fits. The concepts are illustrated through concrete cases, so the material is practical and easy to follow. I hope it helps you build a working understanding of Serverless.

The evolution from Serverfull to Serverless

The figure above shows a typical deployment of a Web application with an MVC architecture. The blue region marks the boundary of the server, i.e. everything involved in running and maintaining the application online. That server boundary is exactly the problem boundary of Serverless: server-side operation and maintenance.

Let's first walk through the history of server-side operation and maintenance, from the very beginning up to Serverless. Suppose we have a Web application whose development involves two roles: an R&D engineer and an operations engineer. The R&D engineer cares only about the application's business logic. Specifically, he develops the entire MVC Web application, from the View layer serving the interface, to the Control layer holding business logic, to the Model layer handling data storage; version management and fixing bugs online also belong to him. The operations engineer, on the other hand, cares only about the server-side concerns of the application. He deploys the Web application online, binds domain names, and monitors logs. When traffic is heavy, he scales the application out; when traffic is light, he scales it back in; and when a server dies, he restarts or replaces it.

The Serverfull era. In the beginning, R&D engineers did not have to worry about anything related to deployment. Every time the R&D engineer released a new version, the operations engineer deployed the latest code online. The operations engineer managed release iterations, merged branches, launched the application, and rolled back problems. If there was an online failure, he also had to pull the logs and send them to the R&D engineer.

The Serverfull era completely separates R&D from operations. The benefit of this separation is obvious: R&D engineers can concentrate on their own business. But operations engineers become "tool people", stuck in a large amount of repetitive operations work, handling an endless stream of trivial chores.

The DevOps era. The operations engineers found that much of their work was repetitive, and that pulling logs and relaying them to R&D whenever something broke online was very inefficient. So they built an operations console that lets R&D engineers handle deployment and log retrieval themselves.

This makes the operations engineers' life a little easier, but they are still responsible for optimizing the architecture and for capacity planning. The R&D engineers, in addition to development work, now release new versions and troubleshoot online failures through the operations console. This is DevOps: the R&D engineer takes on part of the operations engineer's job, but it is the part that arguably belongs to R&D anyway (version control, online failures, and so on). Meanwhile, the operations engineer has turned that work into tools, which is more efficient, and the amount of manual operations work is trending downward.

The industrial age. Building on the R&D workflow, the operations engineers further improve the console to automate code release: code scan, test, grayscale verification, rollout. Now the R&D engineer only needs to merge the latest code into the designated develop branch of the Git repository, and the automated release pipeline takes care of the rest. At this point R&D engineers need no operations work at all; with NoOps, they are back where they started, caring only about their own application's business.
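The automated release line described here can be sketched as a tiny pipeline. The stage names and checks below are illustrative stand-ins, not a real CI/CD system:

```python
def code_scan(commit):
    # toy static check: reject diffs that still contain TODO markers
    return "TODO" not in commit.get("diff", "")

def run_tests(commit):
    return commit.get("tests_pass", True)

def grayscale(commit):
    # ship to a small slice of traffic; promote only if error rate stays low
    return commit.get("gray_error_rate", 0.0) < 0.01

def rollout(commit):
    return True

def release_pipeline(commit):
    """Run scan -> test -> grayscale verification -> rollout in order,
    stopping at the first failing stage."""
    for stage in (code_scan, run_tests, grayscale, rollout):
        if not stage(commit):
            return {"released": False, "failed_at": stage.__name__}
    return {"released": True}
```

The point of the sketch is the shape, not the checks: once every stage is a gate in an automated line, the R&D engineer's only manual step is the merge.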

At the same time, the operations engineers found that resource optimization and scaling decisions can also be automated through performance monitoring and traffic estimation. With that, all of the operations work is automated. To R&D engineers, the operations engineer's presence grows fainter and fainter: less and less of the work needs a human, because automated tooling has replaced it. This is Serverless.

The future. Having automated themselves out of day-to-day operations, operations engineers move down the stack: they build infrastructure and provide smarter, more resource-efficient, more considerate services. R&D engineers, no longer bothered by operations at all, can focus on doing their own business well, improving user experience, and thinking about business value.

NoOps does not mean server-side operations cease to exist. It means the service covers every deployment need of R&D so thoroughly that R&D engineers become less and less aware of it. NoOps is an ideal: we can only approach it asymptotically, which is why the word is "less", not "ServerZero".

The "Server" in Serverless defines the boundary of the problem Serverless solves: server-side operation and maintenance. The "less" states the goal: free of operations, NoOps. So Serverless really means "server-side maintenance-free", and that is the problem Serverless sets out to solve.

What is Serverless?

What Serverless needs to achieve is to make the operations engineer's work completely transparent: R&D engineers care only about business logic, not about the many concerns of deployment, operation, and release. Reaching that state requires an extreme abstraction over server-side operations for the entire Internet. And the more abstract a concept is, the more it has to cover, and therefore the harder it is to define.

However, in general, Serverless has two meanings:

In the narrow sense (the most common one), Serverless refers to the Serverless computing architecture, i.e. the FaaS architecture: Trigger (event-driven) + FaaS (Function as a Service) + BaaS (Backend as a Service: persistence or third-party services). In short: Serverless = FaaS + BaaS.

In the broad sense, Serverless refers to being free of server-side operations, i.e. any cloud service with Serverless characteristics.

Narrow sense of Serverless

The Serverless we talk about in daily work usually means the narrow sense.

This is mainly for historical reasons. In November 2014, Amazon launched the first true Serverless FaaS service: Lambda. From then on, the concept of Serverless entered the mainstream, and for a while Serverless was simply equated with FaaS. FaaS, Function as a Service, also goes by the name Serverless Computing; it lets us create, use, and destroy a function anytime, anywhere.

The usual lifecycle of a function: it is loaded from code into memory, i.e. instantiated, and then executed when some other function calls it. The same holds in FaaS, where the function is instantiated and then invoked by a Trigger. The biggest difference between the two lies in the Runtime, i.e. the context in which the function executes. The Runtime of FaaS is preconfigured and provided by the cloud vendor: we can use it but cannot control it. And the FaaS Runtime is ephemeral: after the function is called, the cloud vendor destroys the Runtime and reclaims its resources, so the temporary Runtime dies together with the function. This is why FaaS recommends stateless functions: as long as a function's parameters are fixed, its result must be fixed too.
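The statelessness requirement can be made concrete with a small sketch (the event shape here is a made-up assumption, not any vendor's format):

```python
def stateless_handler(event):
    # Depends only on its input: any instance, fresh or reused,
    # returns the same result for the same event.
    return {"sum": event["a"] + event["b"]}

# Anti-pattern: keeping state in the instance. Instances are destroyed
# and scaled independently, so `counter` would differ between instances
# and be lost whenever the Runtime is reclaimed.
counter = 0

def stateful_handler(event):
    global counter
    counter += 1
    return {"calls_seen_by_this_instance": counter}
```

The second handler "works" on one warm instance but gives inconsistent answers as soon as the platform scales out or recycles the Runtime.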

So what happens if the original MVC Web application is converted to Serverless? The View layer is the content rendered by the client and usually needs no function compute. The Control layer is the typical use case for functions: in MVC, one HTTP request usually maps to one Control function, so the Control function can be replaced one-for-one by a FaaS function. When HTTP traffic is heavy, the FaaS function automatically scales out to multiple instances running in parallel; when traffic is light, it automatically scales in; and when there are no requests at all, it scales down to zero instances. As shown in the following figure:
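As a sketch, one MVC Control action rewritten as a FaaS-style handler might look like this (the event and response shapes are generic assumptions, not any particular vendor's format):

```python
import json

def list_articles(event, context=None):
    """A Control-layer action as a function: parse the HTTP event,
    run the business logic, return an HTTP-shaped response."""
    params = event.get("queryParameters") or {}
    page = int(params.get("page", 1))
    # business logic: three items per page (illustrative data)
    articles = [f"article-{(page - 1) * 3 + i}" for i in range(3)]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"page": page, "articles": articles}),
    }
```

The platform, not the application, decides how many instances of `list_articles` exist at any moment.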

The Control function is now stateless, and its instances are constantly being created and destroyed, so what if we need to persist data? Of course, the Control function could still issue database commands directly. But that approach makes little sense: the Control layer has changed, and if the Model layer stays the way it was, the architecture falls apart. This is where BaaS comes in: we turn the Model layer into BaaS, which is built to be consumed by FaaS. Taking MySQL as an example, the Model layer should wrap its database commands into an HTTP OpenAPI offered to FaaS functions, with request-rate control and throttling on that API. The Model layer itself can then be optimized internally through connection pooling, MySQL clustering, and so on. As shown in the following figure:
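A minimal sketch of this idea, wrapping a data store behind API-style calls with a crude per-second rate limit (the class, the limits, and the in-memory "database" are all illustrative assumptions):

```python
class ModelAPI:
    """BaaS-style facade: FaaS functions call these endpoints instead of
    opening database connections themselves."""

    def __init__(self, max_per_second=100, clock=None):
        import time
        self._store = {}                   # stand-in for MySQL behind a pool
        self._max = max_per_second
        self._clock = clock or (lambda: int(time.time()))
        self._window = (self._clock(), 0)  # (second, requests this second)

    def _allow(self):
        now = self._clock()
        sec, count = self._window
        if now != sec:
            sec, count = now, 0
        if count >= self._max:
            return False                   # caller would see HTTP 429
        self._window = (sec, count + 1)
        return True

    def put(self, key, value):
        if not self._allow():
            return {"status": 429}
        self._store[key] = value
        return {"status": 200}

    def get(self, key):
        if not self._allow():
            return {"status": 429}
        return {"status": 200, "value": self._store.get(key)}
```

The throttle protects the stateful layer from the burstiness of a FaaS layer that can scale out without warning.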

At this point, the traditional MVC architecture has been fully transformed, under Serverless, into View + FaaS + BaaS. There is no doubt that Serverless owes its popularity to the FaaS architecture: the Serverless we commonly speak of is the Serverless computing architecture, i.e. applications composed of Trigger, FaaS, and BaaS.

Broad sense of Serverless

In the broad sense, Serverless means the server side is maintenance-free, and this is where things are heading. Achieving NoOps requires:

No need for users to care about servers (fault tolerance, disaster recovery, security, automatic scaling, log debugging).

Pay by actual usage (number of calls, duration, etc.): low cost with high parallel performance, cheaper in most scenarios.

Fast iteration and trial-and-error capability (multi-version control, grayscale release, CI/CD, etc.).

Why do you need Serverless?

In 2009, there were two competing approaches to cloud virtualization:

Amazon EC2. An EC2 instance looks like physical hardware, and the user can control the entire software stack from the kernel upward.

Google App Engine, a domain-specific application platform whose application structure separates the stateless compute layer from the stateful storage layer.

In the end, the market chose Amazon's low-level virtual machines for cloud computing. The main reason for their success is that early cloud users wanted to recreate, in the cloud, the same computing environment they had on their local machines, to simplify migrating their workloads. That practical need clearly mattered more than rewriting programs specifically for the cloud, especially at a time when it was unclear whether cloud computing would succeed at all.

The drawback of this approach is that developers have to manage the virtual machines themselves, so they either become system administrators or have to work alongside one to set up the environment. This pushed customers with simple applications to ask cloud providers for an easier way to run them. For example, suppose a mobile app wants to send pictures to the cloud, where thumbnails are created and put on the web. The task itself may take only a few dozen lines of JavaScript, which is negligible compared to the effort of setting up a proper server environment to run that code.

Driven by these needs, Amazon launched the AWS Lambda service. The user only writes code; the provider takes care of server provisioning and task management. Code packaged this way is FaaS (Function as a Service), the core of Serverless computing, but cloud platforms also provide specialized Serverless offerings for particular application needs, namely BaaS (Backend as a Service). Put simply, Serverless computing is defined as: Serverless Computing = FaaS + BaaS. For a service to count as Serverless, it must also scale automatically and be billed according to actual usage.

★ Cloud functions (i.e., FaaS) provide general compute and are complemented by an ecosystem of specialized Backend as a Service (BaaS) offerings such as object storage, databases, or messaging.

Serverless vs. Serverful

In Serverless, users only write cloud functions in a high-level language and choose the event that triggers them, for example uploading an image to cloud storage or inserting a thumbnail into a database. The user writes just that code; everything else is handled by the Serverless system: instance selection, scaling, deployment, fault tolerance, monitoring, logging, security patches, and so on. Next, we summarize the differences between Serverless and the traditional approach, which we call Serverful.

The three most critical differences between Serverless and Serverful computing are:

**Decoupled compute and storage.** Storage and compute resources are provisioned separately, which means the two are allocated and priced independently. Generally, storage is provided by a separate cloud service, and compute is stateless.

**Executing code without managing resource allocation.** Unlike traditional cloud computing, where the user requests resources, in Serverless the user submits a piece of code and the cloud automatically allocates resources and executes it.

**Paying for resources actually used rather than resources allocated.** Serverless billing is based on execution-related factors, such as the code's execution time, not on what the platform allocated, such as the size and number of VMs.

Described in terms of assembly versus high-level languages, Serverful computing is like programming in low-level assembly, while Serverless computing is like programming in a high-level language such as Python. For a simple expression like c = a + b, the assembly programmer must explicitly select one or more registers, load the values into them, perform the operation, and store the result. This mirrors the steps of Serverful cloud programming: first provision or identify resources, then load them with the necessary code and data, perform the computation, return or store the results, and finally release the resources. Serverless offers the convenience of a high-level language, and the analogy runs deeper: just as automatic memory management frees the programmer from managing memory, Serverless computing frees the programmer from managing server resources.

Attractiveness of Serverless Computing

For cloud service providers

Serverless promotes business growth because it makes cloud computing easier to program, which attracts new customers and helps existing ones use the cloud more. For example, a recent survey found that about 24% of Serverless users are new to cloud computing, and 30% of existing serverful customers also use Serverless computing.

Short run times, small memory footprints, and statelessness make it easier for cloud providers to find unused resources to run these tasks, improving resource reuse.

Providers can make use of less popular machines (the instance type is chosen by the provider), such as older servers that are less attractive to serverful customers.

★ The last two points let providers squeeze more out of existing resources and increase revenue.

For users

Programming efficiency improves: novices can deploy functions without understanding cloud infrastructure, and experienced users save deployment time and focus on the application itself.

Costs drop, because the provider runs the underlying servers at higher utilization, and functions are billed only while an event is being handled, at fine granularity (typically 100 milliseconds). Users pay for what they actually use rather than for what is reserved for them.
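The fine-grained billing can be illustrated with back-of-the-envelope arithmetic. The prices below are placeholders, not any provider's actual rate card; the 100 ms billing granularity follows the text:

```python
import math

def monthly_cost(invocations, avg_duration_ms, memory_gb,
                 gb_second_price=0.0000167, per_million_requests=0.20):
    """Pay only for execution: duration (rounded up to 100 ms) times
    memory, plus a per-request fee. Idle time costs nothing."""
    billed_seconds = math.ceil(avg_duration_ms / 100) * 0.1
    compute = invocations * billed_seconds * memory_gb * gb_second_price
    requests = invocations / 1_000_000 * per_million_requests
    return compute + requests
```

With these placeholder rates, a million 120 ms invocations at 128 MB come to well under a dollar, and a function that is never invoked costs exactly zero, which is the point of scaling to zero.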

How does FaaS work?

Before Serverless, deploying even a "Hello World" application was tedious:

Buy a virtual machine service.

Initialize the virtual machine: install the application's runtime environment, keeping it as consistent as possible with the local development environment.

Then, so that users can reach the application, purchase a domain name, point it at the virtual machine's IP, configure Nginx, and start Nginx.

Upload the application code.

Start the application.

With Serverless, only three simple steps remain. Serverless is an extreme abstraction of the server-side operations stack (abstraction here meaning that the full link of an HTTP request does not change qualitatively; only the model of that link is simplified):

Building the code's runtime environment on the server becomes the function service.

Load balancing and reverse proxying become the HTTP function trigger.

Uploading code and starting the application become the function code.

The entire startup process is shown in the following figure:

When a user first accesses the HTTP function trigger, the trigger holds the user's HTTP request and emits an HTTP Request event to notify the function service.

The function service checks for an idle function instance. If none exists, it pulls your code from the function code repository, initializes and starts a function instance, then passes in the HTTP Request object as the function's argument and executes the function.

The function's result, an HTTP Response, is returned to the function trigger, which returns it to the waiting client.
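The steps above can be modeled in a few lines (a toy in-process model; a real function service does this across machines):

```python
class FunctionService:
    """Toy model of the flow above: reuse an idle instance if one exists,
    otherwise cold-start one by 'pulling' code from the repository."""

    def __init__(self, code_repo):
        self.code_repo = code_repo   # function name -> handler
        self.idle = {}               # warm, ready-to-run instances

    def handle_http_event(self, name, request):
        handler = self.idle.get(name)
        cold_start = handler is None
        if cold_start:
            handler = self.code_repo[name]   # pull code + init instance
            self.idle[name] = handler        # instance stays warm afterwards
        response = handler(request)          # request passed as the argument
        return {"cold_start": cold_start, "response": response}
```

The first request to a function pays the cold start; subsequent requests hit the warm instance.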

★ The biggest difference between FaaS and PaaS platforms lies in resource utilization, which is also FaaS's biggest innovation. A FaaS application can scale down to 0 instances, while a PaaS platform must keep at least one service or container alive. This is because FaaS can start function instances extremely quickly, whereas PaaS usually takes tens of seconds to create an instance; to guarantee availability, a PaaS deployment must always keep at least one instance of the application running.

Extremely fast start of FaaS

A cold start in FaaS is the whole process from the function being invoked to a function instance being ready. What matters about a cold start is the startup time: the shorter it is, the higher the resource utilization. For today's cloud providers, depending on language characteristics, average cold start time is between 100 and 700 milliseconds.

The following figure shows the cold start process of a FaaS application. The blue parts are the cloud provider's responsibility; the red parts are the user's. Providers continually optimize their parts; after all, the faster the startup, the higher the utilization. For example, downloading the function code takes a large share of the cold start, so as soon as you update your code, the provider quietly starts scheduling resources and downloading it to build a function instance image. On the first request, the provider can use this cached image, skip the code-download step entirely, and launch the container directly from the image; this is called a prewarmed cold start. There are also reserved-instance policies that can further shorten or bypass the cold start.

★ A FaaS service can go from 0 instances to executing a function in about 100 milliseconds, which is why FaaS dares to scale down to zero. A common benchmark for opening a web page is that a response time under 1 second counts as excellent; against that, a 100-millisecond startup has almost no effect on perceived page speed. Why can't a PaaS application hosting platform start this fast? Because PaaS must support many languages and provide traditional backend services such as MySQL and Redis to accommodate diverse users. That means PaaS has to handle a large number of dependencies and language versions when initializing the environment, and accommodating many users' application code lengthens the build process.

FaaS was designed by sacrificing user control and generality of application scenarios in order to simplify the code model, and its layered structure further improves resource utilization.

Layering of FaaS

When a FaaS instance executes, it has at least a three-layer structure, as shown in the following figure: the container, the runtime (Runtime), and the function code itself.

In current FaaS implementations, the container layer may be a Docker container, a VM, or even a sandbox environment.

The Runtime is the context in which your function executes. It includes the language and version the code runs on, such as Node.js v10 or Python 3.6; callable objects, such as a cloud provider's SDK (e.g. the Aliyun SDK); and system information such as environment variables.

The advantage of this layering is that the container layer is the most broadly applicable: providers can prewarm large numbers of container instances, slicing physical servers into schedulable compute. The Runtime is less broadly applicable and can be prewarmed in smaller quantities. Once the container and Runtime are fixed, only your code needs to be downloaded and executed. Through layering, resources are optimized as a whole, so your code runs quickly and cheaply.

In addition, once a container and Runtime have started, they are kept alive for a while, during which the function instance can serve user requests directly. If no request event arrives within that window (each provider's keep-alive time and policy differ), the function instance is terminated.
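The keep-alive behavior can be sketched like this (the 10-minute idle window is an arbitrary assumption; real policies vary by provider):

```python
class WarmInstance:
    """A container + Runtime that stays warm between requests and is
    reclaimed after sitting idle past the timeout."""

    def __init__(self, started_at, idle_timeout_s=600):
        self.idle_timeout_s = idle_timeout_s
        self.last_used = started_at

    def invoke(self, now):
        if now - self.last_used > self.idle_timeout_s:
            # provider reclaimed the instance; next call pays a cold start
            return "reclaimed: cold start needed"
        self.last_used = now
        return "handled warm"
```

Steady traffic keeps refreshing `last_used` and rides the warm instance; a long quiet stretch hands the next caller a cold start.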

FaaS process model

From the point of view of the process running the function instance, there are two models:

Destroy after use: once the function instance is ready, executing the function ends the process. This is the purest use of FaaS.

Resident process: once the function instance is ready, executing the function does not end the process; it returns and waits for the next invocation. Even in the resident-process model, if no event triggers for a while, the cloud provider still terminates the function instance.
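The two models can be contrasted in a toy sketch (the idle-poll policy below is an assumption standing in for the provider's termination policy):

```python
def destroy_after_use(event, handler):
    """Model 1: the instance serves exactly one event, then is torn down."""
    return handler(event)   # instance is reclaimed after this returns

def resident_process(event_queue, handler, max_idle_polls=3):
    """Model 2: the instance loops, serving events until it has sat idle
    too long, at which point the provider terminates it anyway."""
    results, idle = [], 0
    while idle < max_idle_polls:
        if event_queue:
            results.append(handler(event_queue.pop(0)))
            idle = 0
        else:
            idle += 1        # nothing to do this poll
    return results           # instance terminated after the idle window
```

Either way the instance is mortal; the resident model only amortizes its startup over more than one event.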

★ You can see from the figure below that the trigger itself is effectively a resident process, but the trigger is managed by the cloud provider.

Suppose we are deploying a Web service with an MVC architecture. Then:

Before, without FaaS, we would deploy the application to a PaaS hosting platform. When the Web service starts, the main process initializes the connection to MongoDB and then listens on port 80 of the server until the listening handle is closed or the main process receives a termination signal. When a client establishes a TCP connection on port 80 and an HTTP request arrives, the server hands the request to the Web service's main process, which creates a child process to handle it.

In the FaaS resident-process model, we first need to modify the code: the Node.js Server object is replaced by the Server object provided by the FaaS Runtime, and instead of listening on a port we listen for HTTP events. When the Web service starts, the main process initializes the MongoDB connection and then keeps listening for HTTP events, until the parent process controlled by the cloud provider shuts it down.

When an HTTP event arrives, our Web service's main process creates a child process to handle it, just as before. The main process is the blue dot in the figure above; the child process it creates on each HTTP event is the blue arc arrow, and when the child process finishes, the main process reclaims it.

From this example, you can see that the resident-process model exists specifically so that traditional MVC deployments can run on FaaS (it feels unnatural; what is native to FaaS is destroy-after-use). It is also possible to deploy an MVC Web service with the destroy-after-use model, but that is not recommended, because retrofitting a traditional MVC application for it is too expensive.

★ From the perspective of controllability and migration cost, the most suitable deployment for a traditional Web service is still a PaaS hosting platform, or self-managed services on IaaS. As I said before, FaaS must be used within FaaS's constraints, and the best practice is to choose FaaS from the start. The scenarios that fit the destroy-after-use model are data orchestration and service orchestration.

Data orchestration

Today, the most successful and widespread design pattern is MVC. But as front-end MVVM frameworks have grown popular, the View layer has gradually moved forward into the front end, evolving into SPA single-page applications, while the back-end Control and Model layers have sunk into service-oriented back-end applications. The front and back ends are now thoroughly decoupled: front-end developers can work entirely against mock data interfaces, free of back-end constraints, while back-end developers can focus on the data interfaces. But this also produces a data gateway layer with heavy network I/O.

Node.js, with its asynchronous non-blocking model and JavaScript's natural affinity with front-end engineers, naturally took over this data gateway layer. Thus the Node.js BFF layer (Backend For Frontend) was born: a glue layer between front and back ends that orchestrates and adapts back-end data and interfaces into the data structures the front end needs.

★ Raw data is almost unreadable to ordinary users, so we need to combine the useful pieces and process them into something valuable. This combining and processing of data is what we call data orchestration.

The BFF layer is usually handled by Node.js applications, which are good at heavy network I/O. But running a Node.js application with traditional server-side operations is still relatively heavyweight: it requires buying virtual machines or using a PaaS application hosting platform. Since the BFF layer is only stateless data orchestration, however, we can replace the BFF Node.js application with FaaS in the destroy-after-use model; in recent circles this has been called SFF (Serverless For Frontend).

Now let's walk the new request chain. A data request arrives from the front end; the function trigger fires and invokes our function service; our function starts, calls the metadata interfaces provided by the back end, and processes the returned metadata into the format the front end needs; then our FaaS function can retire completely.
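A sketch of such an SFF function (the endpoints, field names, and the injected `fetch` stand-in are all hypothetical):

```python
def product_page_sff(event, fetch):
    """Destroy-after-use orchestration: call two backend metadata
    interfaces, reshape the results for the front end, and exit."""
    product = fetch("/api/product/" + event["id"])
    reviews = fetch("/api/reviews?product=" + event["id"])
    return {
        "title": product.get("name"),
        "price": product.get("price_cents", 0) / 100,   # cents -> display units
        "reviewCount": len(reviews.get("items", [])),
    }
```

Because the function holds no state between requests, it is a perfect fit for the destroy-after-use model.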

Service orchestration

Service orchestration is very similar to data orchestration; the main difference is that service orchestration combines and processes the services offered by cloud providers (that is, providers expose APIs, and we integrate those APIs to build the functionality we want).

Service orchestration existed before FaaS, but it was limited by the SDK languages a service supported. To use a service or API, we had to find an SDK for a language we knew, load it in our code, and call its methods with our secret key. If no SDK existed, we had to implement one ourselves against the platform's interface or protocol.

With FaaS, this becomes much more convenient. If a provider does not offer an SDK in the language we know, we can write the orchestration program in another language that does have one, and then simply invoke that program, which runs and is destroyed on demand. For example, suppose our Web service needs to send a verification-code email. Looking at Aliyun's mail service documentation, we find SDKs only for Java, PHP, and Python, with no SDK for Node.js. We can then follow the mail service's PHP documentation and use the PHP SDK to build a small FaaS mail-sending service (sending mail is a very simple function).
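A sketch of that call path: the Node.js side does not need the mail SDK at all; it just invokes the mail function over HTTP. Here `invoke` stands in for the HTTP call to the (e.g. PHP) FaaS function's trigger, and the URL and payload fields are made up for illustration:

```python
def send_verification_email(to_addr, code, invoke):
    """Orchestrate a provider service via a FaaS function written in
    whatever language the provider's SDK happens to support."""
    payload = {
        "to": to_addr,
        "subject": "Your verification code",
        "body": "Code: " + code,
    }
    resp = invoke("/trigger/send-mail", payload)   # HTTP call to the function
    return resp.get("status") == 200
```

The language boundary collapses into an HTTP boundary, which is exactly the "language independence" described next.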

This is another highlight of FaaS: language independence. Your team is no longer tied to a single development language and can exploit the respective strengths of Java, PHP, Python, and Node.js to build complex applications. Cloud providers pay special attention to FaaS service orchestration precisely because of this openness: FaaS enables all kinds of complex, language-independent orchestration scenarios, which greatly broadens the usage of the providers' services. Of course, this also asks more of developers, who need to learn about the many services cloud providers offer.

This is the end of the content on the essential knowledge points of Serverless. Thank you for reading.
