
Benefit or Challenge? What Exactly Does Serverless Bring to the Front End?


Author | Huang Ziyi (Ziyi), Alibaba front-end technical expert

Introduction: Front-end developers were the first to enjoy the benefits of "Serverless", because the browser is an out-of-the-box environment that does not even charge for compute! Serverless brings the front-end development experience to the back end, using FaaS and BaaS to create an out-of-the-box back-end development environment. In this article, the author discusses the benefits and challenges of Serverless from a front-end perspective.

Introduction

Serverless is a "serverless architecture" that lets developers focus on business logic without worrying about the program's environment, resources, or instance count.

The company has already achieved DevOps and is now moving towards Serverless, but why should the front end care about Serverless?

For business-facing front-end developers:

It will change how front-end and back-end interfaces are specified; it will certainly change how front end and back end are jointly debugged, allowing the front end to participate in server-side logic development and even in mixed Node and Java setups; and it greatly lowers the threshold for maintaining Node.js servers. As long as you can write JS code, you can maintain a Node service without learning DevOps-related knowledge.

For freelance developers:

In the future, server-side deployment will be more flexible and less expensive, and deployment will be faster and less error-prone.

Front-end frameworks keep bringing back-end thinking to the front end, while Serverless brings front-end thinking into back-end operations.

Front-end developers were actually the first to enjoy the benefits of "Serverless": without owning any servers, or even any browsers, their JS code runs evenly, as if load-balanced, on every user's computer.

Every user's browser is like today's most fashionable and mature Serverless cluster: it starts by loading JS code remotely, and it even leads in cold start, using JIT acceleration to achieve millisecond-level cold starts. Beyond that, the browser is also the perfect BaaS environment: we can call any function to access the user's cookies, environment information, or local database services, without caring what computer the user is on, what network they are connected to, or even how big their hard disk is.

This is the concept of Serverless: using FaaS (Function as a Service) and BaaS (Backend as a Service) to create the kind of development environment front-end developers are already used to, which is why front-end developers should be especially able to appreciate the benefits of Serverless.

Intensive reading

FaaS (Function as a Service) + BaaS (Backend as a Service) can be called a complete implementation of Serverless. There is also the concept of PaaS (Platform as a Service). The platform environment is usually implemented with container technology, with the ultimate goal of reaching NoOps (unmanned operations), or at least DevOps (development & operations).

A brief introduction to these terms, so nobody gets dizzy:

FaaS-Function as a service

Each function is a service, and each function can be written in any language. You also don't need to care about any operational details such as computing resources or elastic scaling, and you get pay-as-you-go billing and event-driven triggering. All the major cloud vendors support FaaS, each with its own workbench or visual workflow for managing these functions.

BaaS-Backend as a service

The back end is provided as a service by integrating many middleware technologies, so services can be invoked without regard to the environment, for example Data as a Service (database services), cache services, and so on. Although there are many XaaS terms below, only FaaS + BaaS together make up the concept of Serverless.

PaaS-Platform as a service

Platform as a Service: users only need to upload source code to get automatic continuous integration and highly available services. If it is fast enough, it can be considered close to Serverless. However, with the rise of container technology represented by Docker, PaaS deployment at container granularity has gradually become the mainstream and the most common way to deploy applications, covering middleware, databases, operating systems, and so on.

DaaS-Data as a service

Data as a Service: data collection, governance, aggregation, and serving are packaged and provided as a service. DaaS services can themselves adopt a Serverless architecture.

IaaS-Infrastructure as a Service

Infrastructure as a Service: infrastructure such as compute, storage, networks, and servers is provided in the form of services.

SaaS-Software as a Service

Software as a Service: ERP, CRM, email services, and the like, providing services at the granularity of whole software products.

Container

A container is a virtualized program execution environment, isolated from the physical environment, that can be described and migrated. The most popular container technology is Docker. As the number of containers grows, technologies for managing container clusters appear; the best-known container orchestration platform is Kubernetes. Container technology is one way to implement a Serverless architecture, and also the foundation such implementations build on.

NoOps

That is, unmanned operations. This is rather idealistic; fully unmanned operations may require the help of AI.

NoOps does not equal Serverless; Serverless may still need human operations (at least for now), but developers no longer need to care about the environment.

DevOps

The author thinks it can be understood as "developers are the operators". After all, when something goes wrong, developers are held accountable, and a mature DevOps system lets developers take on more of the Ops responsibility, or cooperate more closely with Ops.

Back to Serverless: the future experience of back-end development may resemble front-end development today: no need to care which server (browser) the code runs on, no need to care about the server environment (browser version), no need to worry about load balancing (the front end never has), and middleware services can be called at any time (LocalStorage, Service Worker).

Front-end developers should be particularly excited about Serverless. Take the author's personal experience as an example.

Starting with making a game

The author is quite addicted to simulation/idle games, where the most common mechanics are constructing buildings and collecting resources, or counting down resource production second by second while idle. When developing such a game, the author initially split the client code and server code into two separate implementations:

// ... UI section: draw a countdown progress bar for the lumber mill under construction
const currentTime = await requestBuildingProcess();
const leftTime = new Date().getTime() - currentTime;
// ... keep counting down the progress bar
// when the bar finishes, wood output per hour +100, updated in the client timer
store.woodIncrement += 100;

For the game experience, users should see the lumber mill's construction progress without refreshing the browser, hear a "clang" when it finishes, and discover an extra 100 wood per second! However, if the browser is refreshed at any point before, during, or after completion, the logic must stay consistent, so the data also needs to be computed offline on the back end. That is when the back-end code gets written:

// on each login, validate the current session
const currentTime = new Date().getTime();
// get the current status of the lumber mill
if (/* under construction */) {
  // return the remaining time to the client
  const leftTime = building.startTime - currentTime;
  res.body = leftTime;
} else {
  // construction finished
  store.woodIncrement += 100;
}

Soon there will be more building types, each with different output depending on its state and level. The cost of maintaining the front end and back end separately keeps rising, and we need configuration synchronization.

Configuration synchronization

To synchronize configuration between the front end and the back end, the configuration can be hosted separately and shared by both sides. For example, create a configuration file to store the game's building information:

export const buildings = {
  wood: {
    name: "..",
    maxLevel: 100,
    increamentPerLevel: 50,
    initIncreament: 100
  }
  /* ... and so on ... */
};

Although this reuses the configuration, the front end and back end still share common logic that could also be reused, such as determining a building's state from its construction time, or calculating a building's output after N seconds. Serverless leaves room for further optimization here.

Making the game in a Serverless environment

Imagine that code can be executed on the server at function granularity; then we can abstract the game logic like this:

// determine a building's status from its construction time
export const getBuildingStatusByTime = (instanceId: number, time: number) => {
  /* ... */
};

// determine a building's production
export const getBuildingProduction = (instanceId: number, lastTime: number) => {
  const status = getBuildingStatusByTime(instanceId, new Date().getTime());
  switch (status) {
    case "building":
      return 0;
    case "finished":
      // based on (current time - last opened time) * output per second
      return; /* ... */
  }
};

// the front-end UI layer calls getBuildingProduction every second to update production data
// front-end entry function
export const frontendMain = () => {
  /* ... */
};

// the back end calls getBuildingProduction once, based on each time the game is opened
// back-end entry function
export const backendMain = () => {
  /* ... */
};

With a PaaS service, the front-end and back-end logic can be written together, and the getBuildingProduction function fragment can be uploaded to a FaaS service, so the same logic is shared by the front end and the back end at the same time!

In the folder view, the structure can be planned as follows:

.
├── client   # front-end entry
├── server   # back-end entry
└── common   # shared utility functions, which can contain 80% of the general game logic
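To make the sharing concrete, here is a minimal sketch, under assumptions of my own: the file names, the setInterval refresh, and the handler event shape are hypothetical illustrations, not taken from the original article.

// common/production.ts - shared game logic (sketch)
export const getBuildingProduction = (instanceId: number, lastTime: number): number => {
  /* shared calculation, as in the example above */
  return 0;
};

// client/index.ts - front-end entry (hypothetical)
import { getBuildingProduction } from "../common/production";
setInterval(() => {
  // refresh the displayed production once per second
  const produced = getBuildingProduction(1, Date.now());
  console.log("wood produced:", produced);
}, 1000);

// server/index.ts - back-end / FaaS entry (hypothetical)
import { getBuildingProduction } from "../common/production";
export const handler = async (event: { instanceId: number; lastOpenTime: number }) => {
  // compute offline production once, when the player reopens the game
  return { produced: getBuildingProduction(event.instanceId, event.lastOpenTime) };
};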

Some may ask: sharing code between the front end and the back end is hardly something only Serverless can do.

Indeed, if the code is abstracted well enough and backed by a mature engineering setup, a single piece of code can already be built for both the browser and the server. But Serverless, being based on function granularity, fits the front-end idea of code reuse more naturally, and its arrival may push front-end code reuse much further. That is not a new invention, but it is enough to be called a major change.

From the front-end perspective, front-end developers will find that back-end services become easier; back-end developers will find that the services they provide have become thicker, and face more challenges.

Simpler backend service

When renting a traditional ECS server, just choosing between CentOS and AliyunOS is annoying enough. It is hard for an individual developer to build a complete continuous-integration setup, and the choices we face are dazzling:

Install the database and other services on the server and, during development, connect locally straight to the server's database; or install Docker locally and connect to a local database service, then package the environment as an image and deploy it to the server as a whole; or separate the front-end and back-end code, develop the front-end code locally, and develop the server code on the server.

Even the stability of the server needs tools such as PM2 to manage. When the server suffers an attack, a restart, or a disk failure, you have to open a complicated workbench or log in to a shell and recover it step by step. How can anyone focus on what actually needs to be done?

Serverless solves this problem, because all we upload is a code snippet; we no longer have to face environment issues such as servers, system environments, or resources, and external services are provided by encapsulated BaaS systems.

In fact, before Serverless came out, many back-end teams used the concept of FaaS to simplify the development process.

To reduce the interference of environment and deployment issues when writing back-end business logic, many teams abstract business logic into Blocks, corresponding to code snippets or Blockly blocks, which can be maintained and released independently and are finally injected into the main program or loaded dynamically. If you are used to this style of development, Serverless is also easier to accept.
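As a rough illustration of that Block style, here is a sketch under assumptions of my own: the registerBlock/runBlock names and the Block shape are hypothetical, not any particular platform's API.

// a Block: an independently maintained business snippet (hypothetical shape)
type Block = (input: { userId: string }) => Promise<{ body: unknown }>;

const discountBlock: Block = async ({ userId }) => {
  // pure business logic; no environment or deployment concerns here
  return { body: { userId, discount: 0.9 } };
};

// the main program registers Blocks and invokes them dynamically
const blocks = new Map<string, Block>();
const registerBlock = (name: string, block: Block) => blocks.set(name, block);
const runBlock = (name: string, input: { userId: string }) => blocks.get(name)!(input);

registerBlock("discount", discountBlock);
runBlock("discount", { userId: "u1" }).then((res) => console.log(res.body));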

Thicker backend service

From the back-end point of view, things get more complicated. Instead of providing bare servers and containers, it is now necessary to hide the execution environment from the user, which makes the service thicker.

From various articles, the author has learned that Serverless implementations also face the following challenges:

There are many kinds of Serverless implementations; if you want your business deployed on multiple clouds, you need to smooth out the differences.
Mature PaaS services are actually pseudo-Serverless, so how should they be standardized later on?
A FaaS cold start requires reloading code and dynamically allocating resources, which makes cold starts slow; besides pre-warming, cost-efficiency optimizations are also needed. For high-concurrency scenarios (such as a Double 11 flash sale), skipping capacity evaluation is dangerous, yet if elasticity were truly complete, the tiresome capacity evaluation could be dropped.
How should existing (stock) applications be migrated? Most Serverless vendors in the industry do not solve stock-application migration.
Serverless is by nature stateless, while complex Internet applications are stateful, so the challenge is to support state without changing development habits.

Fortunately, these problems are already being actively addressed, and many of them have solutions in production.

The benefits of Serverless to the background far outweigh the challenges:

It promotes integration of the front end and back end, further lowering the threshold for writing server-side code in Node and avoiding the learning cost of application operations. The author once saw his application go down because the database service he had requested was migrated to another server room; there is no need to worry about that anymore, because a database offered as a BaaS service does not make you care where it is deployed, whether it crosses server rooms, or how a migration is carried out. It improves resource utilization: applications no longer hold exclusive resources but are loaded on demand, which necessarily cuts unnecessary resource consumption, and services are spread evenly across the machines of the cluster, leveling out the cluster's CPU usage. And it lowers the barrier to using the cloud platform: no operations work, elastic scaling, pay-for-value, and high availability not only attract more customers but also reduce users' costs, a win-win.

Trying to open up services with Serverless

The author is responsible for building a large-scale BI analysis platform in the company, and one of the platform's underlying capabilities is visual page building.

So how can the visual building capability be opened up? Opening up components is the easier part today; after all, the front end can be designed relatively decoupled from the back end, and AMD-based module loading is fairly mature.

The real challenge is opening up back-end capabilities, because when there are custom requirements for data fetching, the back-end data-processing logic may need to be customized. At present, all that can be done is to set up a local development environment with maven3 and jdk7 for testing; to go live, you still need help from back-end developers.

If the back end built a dedicated Serverless BaaS service, online coding, debugging, and even gray (canary) releases for pre-release testing would be possible, just as with front-end components. There has already been plenty of mature exploration of front-end cloud development, and Serverless can unify the cloud development experience for front-end code without having to care about the environment.

Serverless Application Architecture Design

Looking at several Serverless application architecture diagrams, it turns out that most businesses fit a diagram like this:

(Serverless application architecture diagram)

Business functions are abstracted into FaaS functions, and databases, caches, acceleration, and other services are abstracted into BaaS services; on top of them, RESTful or event-trigger mechanisms expose calls to the different ends (PC, mobile). To extend the platform's capabilities, you only need to open up the end (component access) and open up FaaS services (back-end access), as in the sketch below.
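To make the FaaS layer concrete, here is a minimal sketch of a business function behind an HTTP/RESTful trigger. The event shape and the queryOrders BaaS call are assumptions for illustration only, not any particular vendor's API.

// a FaaS function: pure business logic behind a RESTful trigger (hypothetical event shape)
interface HttpEvent {
  query: { userId?: string };
}

// queryOrders stands in for a BaaS data service; assumed here, not a real SDK call
declare function queryOrders(userId: string): Promise<Array<{ id: string; total: number }>>;

export async function handler(event: HttpEvent) {
  const userId = event.query.userId;
  if (!userId) {
    return { statusCode: 400, body: "missing userId" };
  }
  // business logic only: no server, environment, or scaling concerns
  const orders = await queryOrders(userId);
  const total = orders.reduce((sum, order) => sum + order.total, 0);
  return { statusCode: 200, body: JSON.stringify({ userId, total }) };
}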

Benefits and challenges

Serverless brings both benefits and challenges; this article looks at them from the front-end point of view.

Benefit 1: the front end can focus more on front-end and user-experience technology, without needing much application-management knowledge.

Recently the author has read many retrospective articles by front-end veterans, and the biggest takeaway is the question of "what role the front end has really played these past years." We tend to exaggerate our own importance; in fact, the front end exists to solve human-computer interaction problems, and in most scenarios it is icing on the cake rather than a necessity.

Recall your proudest work: you may have mastered the operations of Node applications, built a front-end engineering system, optimized R&D efficiency, or drafted standards and specifications, yet the part that actually moved the business is precisely the business code you consider least worth mentioning. The front end spends too much time on peripheral technology and correspondingly less time thinking about the business and the interaction.

Even large companies find it hard to recruit people who are proficient in Node.js and rich in operations knowledge while also having excellent front-end skills and a deep understanding of the business; you can rarely have it both ways.

Serverless can effectively solve this problem: front-end developers only need to be able to write JS code, with no operations knowledge at all, to quickly realize their entire set of ideas.

Admittedly, understanding the server side is valuable, but from the standpoint of a sensible division of labor, the front end should stay focused on front-end technology. The front end's core competitiveness, or the business value it brings, is not increased by piling on more operations knowledge; on the contrary, that swallows the time in which we could have created more business value.

The evolution of languages, browsers, and servers has all moved from complex to simple, from low-level to encapsulated; Serverless is the further encapsulation of the back end plus operations as a whole.

Benefit 2: logical orchestration makes code highly reusable and maintainable, and expands the capabilities of cloud + client.

Cloud + client is the next form of front-end development: it provides powerful in-cloud coding capabilities, or turns the local environment into a cloud-like development environment through plug-ins. Its biggest advantage is hiding the details of the front-end development environment, a concept similar to Serverless.

Many teams have previously tried GraphQL to make interfaces "more flexible", but Serverless is a more thorough solution.

The author's own team tried a GraphQL solution, but because the business is so complex that a standard model cannot describe the requirements of every scenario, GraphQL turned out to be a poor fit. What did stick was a visual back-end development platform based on Blockly, which achieved astonishing development efficiency. The generic abstractions of that Blockly setup could almost all be replaced by Serverless, so Serverless can indeed improve back-end R&D efficiency in complex scenarios.

Combined with cloud development, Serverless can go further and let you visually adjust the execution order and dependencies of functions through logical orchestration.

The author previously used such a platform for offline log computation on Baidu's advertising data processing team. With every MapReduce computing node visualized, you could easily see which node was blocking when a job failed, see the longest execution chain, and reassign execution weights to each node; a rough sketch of the idea follows. Even if logical orchestration does not solve every development pain point, it can certainly make a difference in specific business scenarios.
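As a purely illustrative sketch, under assumptions of my own (the node names and the runDag runner are hypothetical, not any real orchestration platform's API), a logical orchestration can be thought of as functions plus a dependency graph that a scheduler walks:

// an orchestration described as data: nodes (functions) plus their dependencies (hypothetical)
type OrchestrationNode = { name: string; deps: string[]; run: () => Promise<void> };

const nodes: OrchestrationNode[] = [
  { name: "extractLogs", deps: [], run: async () => { /* read raw logs */ } },
  { name: "aggregate", deps: ["extractLogs"], run: async () => { /* map/reduce step */ } },
  { name: "report", deps: ["aggregate"], run: async () => { /* write the report */ } },
];

// a tiny topological runner: execute each node once all of its dependencies finish
// (no cycle detection; this is only a sketch)
async function runDag(all: OrchestrationNode[]): Promise<void> {
  const done = new Set<string>();
  while (done.size < all.length) {
    const ready = all.filter(n => !done.has(n.name) && n.deps.every(d => done.has(d)));
    await Promise.all(ready.map(async n => { await n.run(); done.add(n.name); }));
  }
}

runDag(nodes);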

Challenge 1: can Serverless completely remove the threshold from front-end to back-end?

The most common mistake front-end developers make when writing Node code is memory overflow.

The browser-plus-tab model is a naturally short-lived scenario: UI components and logic are created and destroyed constantly, so very few front-end developers ever need to think about GC. Being mindful of GC, however, is a habit long established in back-end development, which is why memory overflow is such a common problem in Node.js programs.

Serverless applications are loaded dynamically and released when unused for a long time, so generally you do not need to worry much about GC; even if memory leaks, the process may be released before memory fills up, or an anomaly may be detected and the process forcibly killed.

But in the end, the loading and releasing of FaaS functions is controlled entirely by the cloud, and a frequently used function may not be unloaded for a long time, so FaaS functions should still take care to control their side effects.

So although Serverless smooths away the operations environment, basic server-side knowledge is still required, and you must stay aware of whether your code runs on the front end or the back end; the sketch below shows the kind of side effect to watch for.
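A minimal sketch of the side-effect issue, with a hypothetical handler and cache of my own invention; it assumes, as the article notes, that a warm FaaS instance can keep module-level state alive across invocations:

// module-level state survives between invocations on a warm instance,
// so an unbounded cache is a side effect that can grow until the instance dies
const cache = new Map<string, string>();

export async function handler(event: { key: string; value: string }) {
  cache.set(event.key, event.value); // grows indefinitely on a long-lived warm instance

  // safer habit: bound the state, or avoid module-level mutable state entirely
  if (cache.size > 1000) {
    cache.clear();
  }
  return { statusCode: 200, body: String(cache.size) };
}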

Challenge 2: performance issues

Serverless cold starts cause performance problems, and asking the business side to actively care about a program's invocation frequency or performance requirements, and then switch on pre-warming, drags R&D right back into the abyss of operations.

Even Amazon's Serverless offering, the most mature in the industry, cannot easily handle flash-sale scenarios without caring about call frequency.

So for now, Serverless may be better suited to the right scenarios than forced onto every application.

Although a FaaS function can be kept warm by invoking it on a schedule, the author believes this still violates the spirit of Serverless.

Challenge 3: how to ensure code portability

There is a classic Serverless positioning description:

You do not need to care about the network, storage, services, virtualization, operating system, middleware, runtime, or data; even at the application layer, you only care about the function itself, not about other parts such as startup and teardown.

This is always presented as an advantage, but it can also be seen as a disadvantage: once your code depends entirely on a public cloud environment, you lose control over the environment as a whole, and your code may only run on one specific cloud platform.

Different cloud platforms may provide different BaaS service specifications, and the entry points and implementations of FaaS may differ; to adopt multi-cloud deployment, this problem must be overcome.

Many Serverless platforms are now considering standardization, and there are also bottom-up tool libraries that smooth out some of the differences, such as the Serverless Framework.

When writing FaaS functions, we also try to keep the platform-bound entry functions as thin as possible and put the real entry point in a generic function such as main, as in the sketch below.
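A minimal sketch of that habit; the two adapter shapes below are invented placeholders for whatever event formats the platforms actually use, not real vendor APIs:

// the portable, platform-agnostic entry: all real logic lives here
export async function main(input: { userId: string }): Promise<{ greeting: string }> {
  return { greeting: `hello, ${input.userId}` };
}

// thin adapter for "cloud A" (hypothetical event/result shapes)
export async function cloudAHandler(event: { queryString: { userId: string } }) {
  const result = await main({ userId: event.queryString.userId });
  return { statusCode: 200, body: JSON.stringify(result) };
}

// thin adapter for "cloud B" (hypothetical event/result shapes)
export async function cloudBHandler(
  req: { params: { userId: string } },
  res: { send: (body: string) => void }
) {
  const result = await main({ userId: req.params.userId });
  res.send(JSON.stringify(result));
}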

Summary

The value of Serverless far outweighs its challenges, and its concepts can effectively solve many R&D efficiency problems.

But Serverless is still at an early stage of development, domestic Serverless offerings are still in trial, and the implementation environments carry many restrictions; they do not yet fully realize the beautiful idea of Serverless, so if you force everything onto it you are bound to hit pitfalls.

Perhaps in 3-5 years these pits will have been filled. So will you join the pit-filling army, or pick a suitable scenario and put Serverless to work there?

"Alibaba Yun × × icloudnative × × erverless, containers, Service Mesh and other technical fields, focusing on cloud native popular technology trends, cloud native large-scale landing practice, to do the best understanding of cloud native development × ×

Welcome to subscribe "Shulou Technology Information " to get latest news, interesting things and hot topics in the IT industry, and controls the hottest and latest Internet news, technology news and IT industry trends.

Views: 0

*The comments in the above article only represent the author's personal views and do not represent the views and positions of this website. If you have more insights, please feel free to contribute and share.

Share To

Servers

Wechat

© 2024 shulou.com SLNews company. All rights reserved.

12
Report