
An in-depth exploration of the Serverless architecture pattern


This article explores what the Serverless architecture pattern really is and how it can be applied to common business scenarios.

What exactly is Serverless Architecture?

What is Serverless architecture? According to the CNCF definition of Serverless computing, a Serverless architecture is a design that solves problems using FaaS (Function as a Service) and BaaS (Backend as a Service) offerings. This definition makes our understanding of Serverless somewhat clearer, but it can also cause confusion and controversy:

As demand and technology have developed, Serverless computing services have appeared in forms other than FaaS, such as Google Cloud Run, Alibaba Cloud's application-oriented Serverless App Engine (SAE), and Serverless Kubernetes. These services also provide auto scaling and pay-per-use billing, and they take the form of Serverless services, further expanding the Serverless computing camp;

To eliminate the impact of cold starts, FaaS services such as Alibaba Cloud Function Compute and AWS Lambda have introduced reserved capacity, making them less purely "pay-per-use";

Some server-based backend services have also launched Serverless products, such as AWS Serverless Aurora and Alibaba Cloud Serverless HBase.

Seen this way, the boundary of Serverless is somewhat blurred, and many cloud services are evolving in a Serverless direction. How can something so fuzzy guide us in solving business problems? Serverless has one fundamental idea that has not changed: let users focus as much as possible on business logic. The other characteristics, such as not caring about servers, automatic elasticity, and pay-as-you-go, all exist to serve this idea.

The well-known Serverless practitioner Ben Kehoe describes a Serverless-native mindset that is worth keeping in mind when deciding what to do for the business:

What is my business?

Does doing this make my business stand out?

If not, why should I do it myself instead of letting someone else solve the problem?

Do not solve technical problems before they become business problems.

When practicing Serverless architecture, the most important mental discipline is not choosing which popular services and technologies to use, but keeping the business logic in mind at all times. That makes it easier to choose the right technologies and services and to know how to design the application architecture. People have limited energy and organizations have limited resources; the Serverless mindset lets us spend those limited resources on the problems that genuinely need solving. It is precisely because we do fewer things ourselves, and let others do them instead, that we can accomplish more for the business.

Next, we will cover some common scenarios and explore how to support them with a Serverless architecture. We will mainly use compute, storage, and messaging technologies to design the architectures, and weigh their pros and cons from the perspectives of operability, security, reliability, scalability, and cost. To keep the discussion from becoming too abstract, we will use specific services as references, but the architectural ideas are general and can be implemented with other, similar products.

Scenario 1: Static Web site

Suppose we want to build an information display website with very simple requirements, much like the early China Yellow Pages, where information is rarely updated. There are roughly the following options:

Buy a server, host it in an IDC machine room, and run the site yourself;

Buy a cloud virtual machine from a cloud vendor to run the site, plus a Load Balancer service and multiple servers to solve the high-availability problem;

Take a static-site approach: serve the site directly from an object storage service such as OSS, with a CDN using OSS as its origin.

These three approaches move from off-cloud to cloud, and from managing servers to managing no servers, that is, Serverless. What does this series of changes bring to users? The first two options require budgeting, capacity planning, high availability, self-managed monitoring, and so on. None of that is what Jack Ma wanted back then; he only wanted to display information and let the world know about China. That was his business logic. Serverless is the idea of maximizing focus on business logic. The third option builds the static site on a Serverless architecture, with advantages the other options cannot match, such as:

Operability: no servers to manage, so operating system security patching, failure handling, and high availability are all taken care of by the cloud services (OSS, CDN);

Scalability: no need to estimate resources or plan for future expansion, because OSS itself is elastic, and using a CDN gives the system lower latency, lower cost, and higher availability;

Cost: pay only for the resources actually used, including storage and request fees; when there are no requests, there are no request fees;

Security: such a system exposes no server at all, requires no SSH logins, and DDoS attacks are left to the cloud services to handle.
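To make the third option concrete, here is a minimal sketch of publishing a static site to OSS with the oss2 Python SDK. The bucket name, endpoint, credentials, and file paths are placeholders, and the exact configuration (CDN origin, custom domain) would still be done in the console or via other APIs.

```python
import oss2

# Placeholder credentials and endpoint; replace with your own values.
auth = oss2.Auth('<access-key-id>', '<access-key-secret>')
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'my-static-site')

# Enable static website hosting: index document and error document.
bucket.put_bucket_website(oss2.models.BucketWebsite('index.html', '404.html'))

# Upload a page; setting Content-Type lets browsers render it as HTML.
bucket.put_object_from_file('index.html', 'site/index.html',
                            headers={'Content-Type': 'text/html'})
```

After this, a CDN domain can be pointed at the bucket as its origin, and there is no server to operate at any point.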

Scenario 2: Monolithic and microservice applications

Static pages and sites suit scenarios with little content and a low update frequency; when results must be generated per request, a dynamic site is needed. For example, it is unrealistic to manage Taobao product pages as static pages. How do we dynamically return results based on user requests? Let's look at two common solutions:

Web monolithic application: all application logic lives in one application, combined with a database. This layered architecture can quickly implement applications of lower complexity;

Microservice application: as the business grows, features multiply, traffic rises, and the team gets larger, it generally becomes necessary to split the logic of a monolithic application into multiple execution units. For example, the review information, sales information, and delivery information on a product page can each correspond to one microservice. The benefit of this architecture is that each unit is highly autonomous and easier to develop (for example, with different technologies), deploy, and scale. However, it also introduces distributed-systems problems, such as load balancing for inter-service communication and failure handling.

Organizations of different stages and sizes can choose their own way to solve their most pressing business problems; Taobao was not initially accepted because of the technical architecture it used. But whichever architecture we choose, the Serverless-native mindset described above helps us stay focused on the business. For example:

Do we need to buy our own servers to install databases, implement high availability, manage backups, and upgrade versions, or can we hand these tasks over to managed services such as RDS? Can we use Serverless database services such as Table Store or Serverless HBase to scale elastically with usage?

Does a monolithic application need to run on servers we buy ourselves, or can it be handed over to managed services such as Function Compute or Serverless App Engine?

Can lightweight microservices be implemented directly as functions, relying on the capabilities Function Compute provides, such as load balancing, auto scaling, pay-per-use, log collection, and system monitoring?

Do microservice applications based on Spring Cloud, Dubbo, HSF, and so on need us to buy servers, deploy the applications, and manage service discovery, load balancing, auto scaling, circuit breaking, and system monitoring ourselves, or can these tasks be delegated to services such as Serverless App Engine?

The architecture on the right of the figure above introduces an API gateway plus Function Compute or Serverless App Engine to implement the compute layer, handing a great deal of work to cloud services and letting users focus on implementing business logic. The interaction of multiple microservices inside the system is shown in the next figure: a product aggregation service presents multiple internal microservices to the outside in a unified way, and the microservices here can be implemented with SAE or with functions.

Such an architecture can be extended further, for example to support access from different clients, as shown on the right of the figure above. In reality this kind of requirement is common: different clients may need different information, and mobile phones can make recommendations based on location. How can mobile clients and different browsers benefit from a Serverless architecture? This leads to another term favored by front-end engineers: Backend for Frontend (BFF). Serverless technology has made this architecture popular because front-end engineers can write the BFF directly from a business perspective, without managing the server-side concerns that are most troublesome for them. For more practice, see: BFF Architecture Based on Function Compute.
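As a rough illustration of such an aggregation/BFF layer, here is a minimal sketch of an HTTP-triggered function in the style of Function Compute's Python runtime, which exposes a WSGI-like handler. The three fetch_* helpers are hypothetical stand-ins for calls to internal microservices; only the handler shape and the aggregation idea are the point.

```python
import json
from urllib.parse import parse_qs

# Hypothetical placeholders for calls to internal product/review/delivery services.
def fetch_product(product_id):  return {"id": product_id, "title": "demo product"}
def fetch_reviews(product_id):  return [{"user": "a", "stars": 5}]
def fetch_delivery(product_id): return {"eta_days": 2}

def handler(environ, start_response):
    # Parse the product id from the query string of the incoming HTTP request.
    params = parse_qs(environ.get('QUERY_STRING', ''))
    product_id = params.get('id', ['unknown'])[0]

    # Aggregate several backend results into one response shaped for the client.
    body = json.dumps({
        "product": fetch_product(product_id),
        "reviews": fetch_reviews(product_id),
        "delivery": fetch_delivery(product_id),
    }).encode('utf-8')

    start_response('200 OK', [('Content-Type', 'application/json')])
    return [body]
```

A mobile BFF and a web BFF could be two such functions with different aggregation logic, each deployed and scaled independently without any server management.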

Scenario 3: Event Trigger

The dynamic page generation above is completed by synchronous requests, but there are also common scenarios where request processing takes a long time or a lot of resources, such as image and video content in user reviews. This involves how to upload images and videos and how to process them (thumbnails, watermarks, content review, transcoding for playback on different clients).

How do we process multimedia files in real time as they are uploaded? The technical architecture for this scenario has typically evolved as follows:

Server-based monolithic architecture: multimedia files are uploaded to the server, processed by the server, and requests to display the multimedia are also served by the server;

Server-based microservices architecture: multimedia files are uploaded to the server, which transfers them to OSS and puts the file address onto a message queue; another group of servers processes the files and saves the results back to OSS; display requests for the multimedia are served by OSS and CDN;

Serverless architecture: multimedia is uploaded directly to OSS; OSS's event-triggering capability directly triggers a function, which processes the file and saves the result back to OSS; display requests for the multimedia are served by OSS and CDN.

The server-based monolithic architecture faces the following problems:

How to handle massive numbers of files? A single server's storage is limited, so more servers have to be bought;

How to scale the web application server? Is a web application server even suitable for CPU-intensive tasks?

How to make upload requests highly available?

How to make display requests highly available?

How to handle peaks and troughs in request load?

The server-based microservices architecture solves most of the above problems well, but still faces some of its own:

Managing the high availability and elasticity of the application servers;

Managing the elasticity of the file-processing servers;

Managing the elasticity of the message queue.

The third, Serverless architecture solves all of these problems well. Load balancing, server high availability and auto scaling, and the message queue that developers previously had to run themselves are now handled by the services. As the architecture evolves, developers do less and less, the system matures, focus on the business increases, and delivery speed improves greatly.

The main value of the Serverless architecture here lies in:

Event-triggering capability: the native integration of Function Compute with event sources (OSS) lets users process uploaded multimedia files in real time without managing queue resources.

High elasticity and pay-per-use: images and videos (and videos of different sizes) need different compute resource specifications, and traffic peaks and troughs demand different amounts of resources. This elasticity is now provided by the service, which scales out and in according to actual use, so users get full utilization without paying for idle resources.
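To make the event-trigger model concrete, here is a minimal sketch of an OSS-triggered image thumbnail function in the style of Function Compute's Python runtime. The event field layout follows the commonly documented OSS event structure and should be verified against the actual trigger payload; the credentials and endpoint are placeholders (in a real function you would use the temporary credentials supplied by the runtime), and Pillow must be bundled with the deployment package.

```python
import json
from io import BytesIO

import oss2
from PIL import Image  # bundle Pillow with the function deployment package

THUMB_SIZE = (128, 128)

def handler(event, context):
    # The OSS trigger delivers a JSON payload listing the changed objects.
    records = json.loads(event)['events']

    # Placeholder credentials/endpoint for illustration only.
    auth = oss2.Auth('<access-key-id>', '<access-key-secret>')
    endpoint = 'https://oss-cn-hangzhou.aliyuncs.com'

    for record in records:
        bucket_name = record['oss']['bucket']['name']
        key = record['oss']['object']['key']
        bucket = oss2.Bucket(auth, endpoint, bucket_name)

        # Download the original, generate a thumbnail, write it back to OSS.
        # Scope the trigger to an upload prefix so writes to thumbs/ do not
        # re-trigger the function.
        data = bucket.get_object(key).read()
        img = Image.open(BytesIO(data))
        img.thumbnail(THUMB_SIZE)
        out = BytesIO()
        img.save(out, format='JPEG')
        bucket.put_object('thumbs/' + key, out.getvalue())
```

The same pattern applies to watermarking, content review, or video transcoding: the upload itself is the event, and the function only contains the processing logic.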

Event triggering is an important capability of FaaS services. The publish-subscribe, event-driven model is not a new concept, but before Serverless became popular, the event producers, the consumers, and the connecting hub in the middle were all the user's responsibility, just like the second architecture in the evolution above.

Serverless takes producing events and maintaining the connecting hub out of the user's hands, letting the user focus on the consumer logic; that is where the value of Serverless lies.

Function Compute also integrates with event sources from other cloud services, making it easier to apply common patterns in your business, such as Pub/Sub, event streaming, and Event Sourcing. For more on combining functions, see: N Ways of Function Combination.

Scenario 4: Service orchestration

Although the product page discussed earlier is complex, all of its operations are reads, and the aggregation service's API is stateless and synchronous. Now let's look at one of the core scenarios in e-commerce: the order process.

This scenario involves writes across multiple distributed services, one of the thorniest problems introduced by a microservices architecture. A monolithic application can handle this process relatively easily, because it uses a single database and can maintain data consistency through database transactions. In reality, however, you may have to call external services, and you need some mechanism to ensure the process can move forward and roll back cleanly. A classic pattern for this problem is the Saga pattern, and there are two different architectures for implementing it:

One approach is an event-driven pattern that drives the process to completion. In this architecture there is a message bus, and interested services, such as the inventory service, listen for events. The listeners can be servers or functions; with the integration of Function Compute and message topics, this architecture can be entirely server-free.

The modules in this architecture are loosely coupled with clear responsibilities. The downside is that as the process gets longer and more complex, the system becomes hard to maintain: it is difficult to grasp the business logic at a glance, and the state during execution is hard to track, so operability is relatively poor.

The other architecture is a workflow-based Saga pattern. In this architecture the services remain independent and do not pass information through events; instead, a centralized coordinator service schedules the individual business services, and the business logic and state are maintained by that coordinator. Implementing such a centralized coordinator yourself typically means facing the following problems:

Writing a great deal of code for orchestration logic, state maintenance, and error retries, code that is hard to reuse in other applications;

Maintaining the infrastructure that runs the orchestration application and ensuring its high availability and scalability;

Handling state persistence to support multi-step, long-running processes and keep the process transactional.
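The following illustrative, in-memory sketch shows the kind of coordinator and compensation boilerplate involved. The business operations are hypothetical placeholders; a real implementation would additionally need durable state, retries with backoff, and idempotency, which is exactly the undifferentiated work a workflow service takes over.

```python
class SagaStep:
    def __init__(self, name, action, compensation):
        self.name, self.action, self.compensation = name, action, compensation

def run_saga(steps, order):
    completed = []
    for step in steps:
        try:
            step.action(order)           # forward step, e.g. reserve inventory
            completed.append(step)
        except Exception as exc:
            # Roll back every completed step in reverse order.
            for done in reversed(completed):
                done.compensation(order)
            raise RuntimeError(f"saga aborted at step '{step.name}'") from exc

# Hypothetical business operations standing in for calls to real services.
def reserve_stock(o):  o['stock'] = 'reserved'
def release_stock(o):  o['stock'] = 'released'
def charge_payment(o): o['paid'] = True
def refund_payment(o): o['paid'] = False

order = {'id': 'demo-001'}
run_saga([
    SagaStep('reserve_stock', reserve_stock, release_stock),
    SagaStep('charge_payment', charge_payment, refund_payment),
], order)
```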

By relying on cloud services, such as Alibaba Cloud's Serverless Workflow service, all of this can be handed over to the platform, and users return to focusing only on business logic.

On the right of the figure below is the flow definition; it achieves the same effect as the event-based Saga pattern above, while greatly simplifying the process and improving observability.

Scenario 5: Data Pipeline

As the business develops further, more and more data accumulates, and its value can now be mined, for example by analyzing user behavior on the site and making recommendations based on it. A data pipeline includes data collection, processing, analysis, and other stages. Building such a service from scratch is feasible but complex; the business we are discussing here is e-commerce, not providing a data pipeline service. With that goal in mind, our choices become simple and clear:

Log Service (SLS) provides data collection, analysis, and shipping;

Function Compute (FC) can process Log Service data in real time and write the results to other services, such as Log Service or OSS;

Serverless Workflow can batch-process data on a schedule, with flexible processing logic defined in functions, to build ETL jobs;

Data Lake Analytics (DLA) provides a Serverless interactive query service that uses standard SQL to analyze data from multiple sources such as object storage (OSS), databases (PostgreSQL, MySQL, etc.), and NoSQL stores (Table Store, etc.).
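As a small sketch of the processing stage, the function below counts page views per path from a batch of access-log records. The input format here is hypothetical; in practice the records would arrive via a Log Service trigger or be pulled with the SLS SDK, and the aggregated result would be written to OSS, another Logstore, or a database for downstream analysis.

```python
import json
from collections import Counter

def handler(event, context):
    # Hypothetical input: a JSON payload with a list of access-log records.
    records = json.loads(event).get('records', [])

    # A minimal ETL step: count views per path.
    views = Counter(r.get('path', '/') for r in records)

    # Return the top paths; a real function would persist this result instead.
    return json.dumps(views.most_common(10))
```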

That concludes this in-depth look at the Serverless architecture pattern. Many of the ideas here, focusing on business logic and handing undifferentiated work to managed services, apply directly to everyday work.
