
BFF Architecture Based on Function Compute


What is BFF?

BFF stands for Backends For Frontends (back ends that serve the front end). The pattern originated in Sam Newman's 2015 blog post "Pattern: Backends For Frontends - Single-purpose Edge Services for UIs and external parties".

With the popularity of microservices and front-end/back-end separation, there is usually an API layer at the boundary of the back-end services. It aggregates, adapts, and trims the output of the system's internal microservices, and then exposes HTTP APIs upward to the front end.

Then, with the rise of mobile, scenarios such as H5, iOS, and Android began to coexist. Because mobile screens are small, the information they display differs considerably from the traditional web side, and mobile clients also have stricter requirements on the number of connections and the amount of data transferred. At this point a single general-purpose API layer runs into trouble: different ends need different APIs, and the design of those APIs is tightly coupled to each end's presentation logic, so it is a poor fit for the back-end or API team. The people maintaining those APIs end up sandwiched between the front end and the back end, endlessly coordinating and making trade-offs, which is exhausting.
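As a concrete illustration of per-end tailoring, here is a minimal sketch of a Node.js BFF that serves the same user profile to two ends but crops the payload for mobile. The user-service URL, route names, and fields are hypothetical, and the sketch assumes Node 18+ (for the built-in fetch) with Express.

```js
// Minimal BFF sketch: the same upstream data, tailored per end.
// Assumes Node 18+ (global fetch) and Express; user-service is hypothetical.
const express = require('express');
const app = express();

// Mobile gets a cropped payload suited to a small screen.
app.get('/bff/mobile/profile/:id', async (req, res) => {
  const user = await fetch(`http://user-service/users/${req.params.id}`).then(r => r.json());
  res.json({ id: user.id, name: user.name, avatar: user.avatarUrl });
});

// Web gets the full profile for the richer page.
app.get('/bff/web/profile/:id', async (req, res) => {
  const user = await fetch(`http://user-service/users/${req.params.id}`).then(r => r.json());
  res.json(user);
});

app.listen(3000);
```

Because each end owns its own route, the mobile team can change what its payload contains without touching either the web route or the underlying microservice.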

Sam Newman went on to build dedicated back-end APIs for each front end, first at REA and then at SoundCloud, and called the pattern BFF. It exists to solve the problem of different ends needing different APIs.

Benefits of BFF

Supporting business systems left over from history

The interfaces of some old systems may follow dated conventions, for example interfaces that are not RESTful. A BFF layer can perform the interface conversion so that these systems better fit the technologies used on the client side.
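For example, a BFF might wrap a legacy RPC-style endpoint and expose it as a clean resource. The sketch below is illustrative only; the legacy envelope (retCode/retMsg/data) and its field names are assumptions, not something described in the article.

```js
// Sketch: converting a hypothetical legacy response envelope into the shape
// the front end expects. All field names here are assumptions.
function adaptLegacyOrder(raw) {
  // Legacy systems often wrap data as { retCode, retMsg, data } with
  // underscore-style field names; the BFF converts this once, centrally.
  if (raw.retCode !== '0000') {
    throw new Error(raw.retMsg);
  }
  return {
    id: raw.data.order_id,
    status: raw.data.order_status,
    amount: raw.data.total_fee,
  };
}

module.exports = { adaptLegacyOrder };
```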

Reconciling a stable middle platform with fast-changing client-side demands

Rapid change on the client side shows up mainly in two ways:

- Technology iteration: client-side technology moves quickly, and new JS frameworks appear in an endless stream. Mobile also has many options, such as H5, Java/OC, Kotlin/Swift, React Native, and Flutter.
- Business change: front-end products tend to change more frequently than back-end business logic.

Delivering differentiated behavior for different ends and regions

When a product is launched for different countries, languages, and user groups, the required variations can be made in the BFF layer. For example, back-end error messages can be translated there according to the user's language.
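As an illustration of that last point, here is a hypothetical helper a BFF might use to localize back-end error codes from the request's Accept-Language header. The error codes and translations are made up for the example.

```js
// Sketch: localizing back-end error codes in the BFF according to the user's
// language. The codes and messages below are hypothetical.
const MESSAGES = {
  ORDER_NOT_FOUND: { 'en-US': 'Order not found', 'zh-CN': '订单不存在', 'ja-JP': '注文が見つかりません' },
  OUT_OF_STOCK: { 'en-US': 'Item is out of stock', 'zh-CN': '商品已售罄', 'ja-JP': '在庫切れです' },
};

function localizeError(code, acceptLanguage = 'en-US') {
  const lang = acceptLanguage.split(',')[0];      // e.g. "zh-CN,zh;q=0.9" -> "zh-CN"
  const entry = MESSAGES[code] || {};
  return entry[lang] || entry['en-US'] || code;   // fall back to English, then the raw code
}

module.exports = { localizeError };
```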

Horizontal aggregation and aggregation-based optimization

Some product modules involve multiple middle-platform services. The BFF can act as an edge service layer that aggregates their APIs, so the client makes one request instead of several and the aggregation can be optimized in one place.
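A minimal sketch of such an aggregation, assuming Node 18+ for the built-in fetch; the three upstream services and their URLs are hypothetical:

```js
// Sketch: one BFF call fans out to several middle-platform services in
// parallel and merges the results. Service URLs are hypothetical.
async function homePage(userId) {
  const [user, orders, recommendations] = await Promise.all([
    fetch(`http://user-service/users/${userId}`).then(r => r.json()),
    fetch(`http://order-service/orders?user=${userId}&limit=5`).then(r => r.json()),
    fetch(`http://reco-service/recommend?user=${userId}`).then(r => r.json()),
  ]);
  // The client gets everything the home page needs in a single response.
  return { user, recentOrders: orders, recommendations };
}

module.exports = { homePage };
```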

Evaluating the business impact of client-side changes

Trying a new experience on the client side inevitably changes the API. Without a BFF, both the front end and the shared API must be modified to run an A/B test. And if the mobile and web teams both need to run A/B tests, one team may end up waiting for the other.

A BFF lets different teams experiment independently. You may find it more convenient to implement experimental API changes in the BFF first, run the A/B test there, and only migrate the change into the core API once the experiment has proven itself.
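A sketch of how such an experiment might live entirely inside the BFF; the bucketing rule, the 10% split, and the upstream v1/v2 endpoints are all hypothetical:

```js
// Sketch: an A/B experiment run entirely in the BFF, leaving the core API
// untouched. Bucketing rule and upstream endpoints are hypothetical.

// Stable bucketing: the same user always lands in the same variant.
function hashToPercent(id) {
  let h = 0;
  for (const c of String(id)) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100;
}

async function getFeed(userId) {
  const variant = hashToPercent(userId) < 10 ? 'B' : 'A'; // 10% see the experiment
  const url = variant === 'B'
    ? `http://feed-service/v2/feed?user=${userId}`  // experimental aggregation, BFF-only
    : `http://feed-service/v1/feed?user=${userId}`; // existing behavior, unchanged
  const feed = await fetch(url).then(r => r.json());
  return { variant, items: feed.items };
}

module.exports = { getFeed };
```

Once variant B wins, its behavior can be promoted into the core API and the branch in the BFF deleted.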

Some problems with BFF

High cost of resources

No matter how simple a BFF is, it needs servers to run on, and strictly speaking it needs several deployment environments. Inside some large companies, for example, even the simplest application requires four servers, and the approval process for obtaining them can be slow.

Availability under high concurrency is hard to guarantee

The BFF layer is generally developed by front-end engineers, but keeping it highly available is often a challenge for them. When traffic spikes, the BFF layer may be the first thing to be overwhelmed, dragging down the availability of the entire system.

Difficulties in operation and maintenance

Under a "whoever develops it operates it" policy, the front-end engineers who build the BFF also have to run it, yet they often lack experience operating online applications, so operating the BFF becomes another big problem.

Serverless For Backend

With Serverless, and Function Compute in particular, a deployed application consumes no computing resources, and therefore incurs no cost, when it receives no traffic. When traffic rises, the platform scales the application out within roughly 100 milliseconds; when traffic falls, the computing resources (function instances) behind it shrink again. The platform also gives users out-of-the-box monitoring, alerting, and log retrieval.

Function Compute's advantages of auto scaling, pay-as-you-go billing, and freedom from operations correspond exactly to the weaknesses of a traditional BFF. Deploying the BFF onto a Function Compute platform therefore neatly solves the problems described above.

With deployment costs lowered, it also becomes practical to split the BFF more finely. Each client-side team can then organize BFF modules around its own business modules: for example, the front-end developers of the operations platform build and own the BFF module for that platform, while the front end of the device center owns its own BFF, so the teams step on each other less.

A solution based on Function Compute

The BFF architecture on a Function Compute platform has four layers: the client side, the gateway layer, the BFF layer, and the middle-platform services.

Each client side keeps the technology stack it is familiar with: the web side can choose React or Vue.js, the mobile side can choose Java/Kotlin or Objective-C/Swift, and either can choose a cross-platform option such as React Native or Flutter.

There are two options for the gateway layer: API Gateway and the HTTP trigger. API Gateway is feature-rich and supports rate limiting, but it incurs additional cost. The HTTP trigger supports simple route mapping and custom domain binding; it does not support rate limiting, but it is free and is well suited to lightweight applications.
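For orientation, here is a minimal sketch of what a BFF function behind an HTTP trigger might look like. It follows the (request, response, context) handler shape of Function Compute's Node.js HTTP triggers; the route and the user-service URL are hypothetical.

```js
// Sketch of a BFF function behind a Function Compute HTTP trigger.
// Assumes a Node.js runtime with global fetch; user-service is hypothetical.
exports.handler = async (request, response, context) => {
  // e.g. GET https://<domain>/profile?id=42 routed here by the trigger
  const id = request.queries && request.queries.id;
  if (!id) {
    response.setStatusCode(400);
    return response.send(JSON.stringify({ error: 'missing id' }));
  }
  const user = await fetch(`http://user-service/users/${id}`).then(r => r.json());
  response.setStatusCode(200);
  response.setHeader('Content-Type', 'application/json');
  response.send(JSON.stringify({ id: user.id, name: user.name }));
};
```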

In the BFF layer, it is recommended to split by business module: different business modules become different functions, and if a module's interface differs between ends, those variants can also be split into separate functions. The functions are then organized into projects with the Fun tool. How the projects are divided can follow the maintenance teams, with each team maintaining its own project to reduce overlap and conflict.

The SFF development process

Let's look at the SFF development process from three angles: local development, the release process, and service monitoring.

Local development

The local project is divided into three parts:

- The client side, e.g. the APP/H5 code, built with React Native or Vue.js.
- The SFF layer: FC functions, commonly built with express or egg.
- The middle-platform API interface: either an API mock or a direct connection to a test environment.

For local debugging, developers who prefer the command line can use the Funcraft tool and start the service locally with fun local start; developers who prefer a desktop GUI can use the VSCode plugin provided by Function Compute.

For unit testing, choose your favorite testing framework, such as Mocha or Jest.
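For instance, a Jest test for the hypothetical localizeError helper sketched earlier might look like the following; the require path assumes the function/ directory of the project layout shown below.

```js
// Sketch: a Jest unit test for a BFF helper. localizeError is the
// hypothetical helper sketched earlier, not part of the original project.
const { localizeError } = require('../function/localizeError');

describe('localizeError', () => {
  test('returns the message matching the Accept-Language header', () => {
    expect(localizeError('ORDER_NOT_FOUND', 'zh-CN,zh;q=0.9')).toBe('订单不存在');
  });

  test('falls back to English, then to the raw code', () => {
    expect(localizeError('ORDER_NOT_FOUND', 'fr-FR')).toBe('Order not found');
    expect(localizeError('UNKNOWN_CODE', 'en-US')).toBe('UNKNOWN_CODE');
  });
});
```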

Here is a proposed project structure

sffdemo
├── README.md
├── function
│   ├── package.json
│   ├── template.yml
│   └── user.js
├── package.json
└── src
    ├── component
    ├── layout
    ├── model
    ├── page
    └── service

The src directory holds the APP or H5 code. The function directory holds the BFF code, which is described by the ROS template template.yml and published with the fun tool.

Release process

For day-to-day development, publishing from the command line is recommended: install and configure the fun tool, place a template.yml ROS description file in the BFF project, and then deploy quickly with the fun deploy command.

Beginners can also go to the Function Compute console and publish by uploading a ZIP package.

For more complex scenarios, CI/CD can be configured: for example, use GitLab/GitHub as the code repository and Travis CI/GitLab CI/Jenkins as the build system, so that pushing code to the repository automatically triggers a build and release. For details, see "Serverless: Funcraft + OSS + ROS for CI/CD".

Service monitoring

In terms of observability, Function Compute provides out-of-the-box monitoring, logging, and alerting.

Cost advantage

Application workloads vary widely, and so do their resource and elasticity requirements. Function Compute offers both prepaid and postpaid (pay-as-you-go) billing, which yields a significant cost advantage across different scenarios.

With prepaid billing, users estimate the application's resource needs and purchase a set amount of resources in advance. The advantage is a low unit price, roughly 70% cheaper than postpaid; the disadvantage is that the workload fluctuates, so buying for the peak leads to low resource utilization.

With postpaid billing, users pay for the resources the application actually uses. Function Compute bills postpaid resources by the time instances spend executing requests, accurate to 100 milliseconds; if there are no requests, there is nothing to pay, so the utilization of pay-as-you-go resources can be regarded as 100%. The advantage is high utilization; the disadvantage is a higher unit price.

Because Function Compute scales automatically, prepaid and postpaid resources can be combined seamlessly to obtain competitive costs in different scenarios.
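To make the trade-off concrete, here is a small back-of-the-envelope sketch. It uses only the article's "prepaid is about 70% cheaper per unit" figure; the unit prices and workload numbers are hypothetical placeholders, not real pricing.

```js
// Sketch: break-even intuition between prepaid and pay-as-you-go.
// Unit prices and the workload below are hypothetical, not real pricing.
const postpaidUnitPrice = 1.0;  // hypothetical cost per resource-unit actually consumed
const prepaidUnitPrice = 0.3;   // ~70% cheaper per unit, but billed on reserved capacity

function monthlyCost(peakUnits, avgUtilization) {
  const unitsUsed = peakUnits * avgUtilization;   // what the workload actually consumes
  const postpaid = unitsUsed * postpaidUnitPrice; // pay only for what is used
  const prepaid = peakUnits * prepaidUnitPrice;   // pay for the reserved peak capacity
  return { postpaid, prepaid };
}

// With 100 units of peak capacity at 20% average utilization, pay-as-you-go
// wins (20 vs 30); above roughly 30% utilization, prepaid becomes cheaper.
console.log(monthlyCost(100, 0.2));
```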

For more detailed cost calculations and cost-optimization schemes, refer to the Function Compute cost-optimization best practices.

Summary

Everyone may understand the definition and adoption of Serverless differently. But with the Serverless advantages that Function Compute brings, a BFF can truly achieve "whoever benefits from it owns and runs it", at low cost and with no operations burden.

"Alibaba Cloud Native focus on micro-services, Serverless, containers, Service Mesh and other technology areas, focus on cloud native popular technology trends, cloud native large-scale landing practice, to be the best understanding of cloud native developers of the technology circle."
