How to Implement Data Loading in a Microservice Frontend

This article explains how to implement data loading in the frontend layer of a microservice architecture. The ideas are simple and easy to follow; let's walk through them step by step.
The microservice architecture has gradually been adopted by many teams. For example, BFF (Backend For Frontend), which is very popular in frontend circles, is really a variation of the microservice architecture: the frontend team maintains a "glue layer / access layer / API layer" service that calls several microservices provided by the backend team, assembles their results, and packages them into the external API.
Under this architecture, services can be broadly divided into two roles:
The frontend service (Frontend) wraps the underlying microservices and directly exposes a callable API. In a BFF architecture, for example, this is typically an HTTP server written in Node.js.
The backend microservices (Microservices) are usually individual services maintained by the backend team, each carrying the functionality of a different module and providing a set of internal calling interfaces.
The simplest situation
Let's first consider the simplest case: whenever an external request comes in, the frontend service requests data from several backend microservices, processes the results, and returns a response.
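As a minimal sketch of this flow (assuming Node.js 18+ with the built-in fetch; the user-service and order-service endpoints are hypothetical), the frontend service might look like this:

    const http = require('http')

    // Naive BFF handler: every external request fans out to the backend microservices.
    async function handleProfile(userId) {
      // Call two microservices in parallel (both endpoints are hypothetical).
      const [user, orders] = await Promise.all([
        fetch(`http://user-service/users/${userId}`).then(res => res.json()),
        fetch(`http://order-service/orders?user=${userId}`).then(res => res.json()),
      ])
      // Assemble the microservice results into the external API response.
      return { ...user, recentOrders: orders }
    }

    http.createServer(async (req, res) => {
      const userId = new URL(req.url, 'http://bff').searchParams.get('user')
      res.setHeader('Content-Type', 'application/json')
      res.end(JSON.stringify(await handleProfile(userId)))
    }).listen(3000)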
There is an obvious problem with this bare-bones model: each external request triggers multiple internal service invocations, which wastes resources, because for most internal microservices the result of a request is cacheable for a certain period of time.
For example, a user's avatar may only change every few days, weeks, or even months. In this case, the frontend service can cache the avatar for a period of time, during which it can be read directly from the cache without requesting the backend, which greatly reduces the load on the backend service.
Add a local cache
So we add an in-memory local cache (Local Cache) to the frontend service so that most requests hit the local cache, reducing the load on the backend services.
The local cache usually lives in memory, and memory is limited, so we need a cache eviction mechanism to cap the maximum capacity. There are many eviction algorithms, such as FIFO, LFU, LRU, MRU, and ARC; you can choose the appropriate one according to the actual needs of the business.
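To make this concrete, here is a minimal LRU sketch (the class name and capacity handling are arbitrary choices for the example), relying on the fact that JavaScript's Map preserves insertion order:

    // Minimal LRU cache: the first key in the Map is always the least recently used.
    class LRUCache {
      constructor(capacity) {
        this.capacity = capacity
        this.map = new Map()
      }
      get(key) {
        if (!this.map.has(key)) return undefined
        // Re-insert the key so it becomes the most recently used.
        const value = this.map.get(key)
        this.map.delete(key)
        this.map.set(key, value)
        return value
      }
      set(key, value) {
        if (this.map.has(key)) this.map.delete(key)
        this.map.set(key, value)
        // Evict the least recently used entry when over capacity.
        if (this.map.size > this.capacity) {
          this.map.delete(this.map.keys().next().value)
        }
      }
    }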
After introducing a local cache, a problem remains: the cache only takes effect on a single service instance (a service instance can be understood as a server, a Kubernetes Pod, or a similar concept), while most frontend services are stateless so that they can scale horizontally, meaning a large number of instances coexist.
In other words, the local cache may be warm on one server while the other parallel servers have nothing cached; if a request lands on a server without the cached entry, it still misses.
Another problem is that caching logic and application logic are coupled: the code of every endpoint may contain logic like this:
    var cachedData = cache.get(key)
    if (cachedData !== undefined) {
      return cachedData
    } else {
      return fetch('...')
    }
Don't Repeat Yourself! We obviously need to abstract this cache logic to avoid duplicating code.
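One possible abstraction (a sketch, not a prescribed implementation) is a helper that wraps any fetcher with the get-or-fetch logic, so endpoint code never repeats it:

    // Wrap an arbitrary fetcher with get-or-fetch caching logic.
    async function cached(cache, key, fetcher, ttlMs) {
      const hit = cache.get(key)
      if (hit !== undefined && hit.expiresAt > Date.now()) {
        return hit.value
      }
      const value = await fetcher()
      cache.set(key, { value, expiresAt: Date.now() + ttlMs })
      return value
    }

    // Usage (names are illustrative):
    // const avatar = await cached(lru, `avatar:${id}`, () => fetchAvatar(id), 60_000)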
Add a cache layer and centralize caching
To solve the above two problems, we continue to improve our architecture:
Add a centralized remote cache (such as Redis or Memcached) so that the cache is shared by all instances
Abstract non-application-layer logic such as caching and RPC into a separate component (the Cache Layer), which encapsulates reads and writes to the backend microservices, the local cache, and the remote cache
With such a Cache Layer abstracted out, we can evolve our services further.
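A minimal read path through such a Cache Layer might look like the following sketch (assuming the LRUCache above and the ioredis client; all names are illustrative):

    const Redis = require('ioredis')

    // Cache Layer: local cache first, then the remote cache, then the backend microservice.
    class CacheLayer {
      constructor(localCache, redis) {
        this.local = localCache   // per-instance, e.g. the LRUCache above
        this.redis = redis        // centralized, shared by all instances
      }
      async get(key, fetcher, ttlSeconds) {
        const local = this.local.get(key)
        if (local !== undefined) return local

        const remote = await this.redis.get(key)
        if (remote !== null) {
          const value = JSON.parse(remote)
          this.local.set(key, value)
          return value
        }

        // Miss on both tiers: fall back to the backend microservice.
        const value = await fetcher()
        this.local.set(key, value)
        await this.redis.set(key, JSON.stringify(value), 'EX', ttlSeconds)
        return value
      }
    }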
Add a cache refresh mechanism
Although we now have a centralized cache, cache entries are only valid for a limited time. Once an entry expires, the data must be requested from the backend service again; at that boundary the request takes longer, producing latency spikes (every so often, a small number of requests become noticeably slower).
So is there a way to keep the cache "fresh"? This calls for a cache refresh mechanism. Generally speaking, cache refresh can be divided into active refresh and passive refresh:
Active refresh
Active refresh means that whenever the data is updated, the cache is refreshed, and the downstream service always reads only the data in the cache.
Read-heavy, write-light backend services are well suited to this mode, because read requests never reach the database; they are diverted to highly performant and scalable cache components, greatly reducing the pressure on the database.
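A write-through sketch of active refresh (updateUserInDatabase and the key scheme are hypothetical; redis is an ioredis client):

    // Active refresh (write-through): every write updates the database and then the cache,
    // so downstream readers only ever touch the cache.
    async function updateUser(redis, user) {
      await updateUserInDatabase(user)   // hypothetical persistence call
      await redis.set(`user:${user.id}`, JSON.stringify(user), 'EX', 3600)
    }

    async function readUser(redis, userId) {
      // Reads never hit the database directly.
      const raw = await redis.get(`user:${userId}`)
      return raw === null ? null : JSON.parse(raw)
    }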
Of course, active refresh is not perfect. It couples the frontend and backend services on the cache component (for example, they must agree on cache key naming and data structures), which introduces hidden risks: if the backend microservice writes to the cache incorrectly, or the cache component has availability problems, the result can be catastrophic. So this mode is better suited within a single service than across multiple services.
Passive refresh
Passive refresh means that when cached data is read, you decide whether to refresh the cache asynchronously based on its remaining validity period or similar metrics (similar to stale-while-revalidate in the HTTP protocol).
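A stale-while-revalidate-style sketch (the threshold and names are arbitrary): if an entry is close to expiry, serve it immediately and refresh it in the background:

    // Passive refresh: serve the cached value; if it is close to expiry,
    // kick off an asynchronous refresh without blocking the caller.
    async function getWithPassiveRefresh(cache, key, fetcher, ttlMs, refreshWindowMs = 5000) {
      const hit = cache.get(key)
      const now = Date.now()
      if (hit !== undefined && hit.expiresAt > now) {
        if (hit.expiresAt - now < refreshWindowMs && !hit.refreshing) {
          hit.refreshing = true
          // Fire-and-forget refresh; on failure, keep serving the stale value.
          fetcher()
            .then(value => cache.set(key, { value, expiresAt: Date.now() + ttlMs }))
            .catch(() => { hit.refreshing = false })
        }
        return hit.value
      }
      const value = await fetcher()
      cache.set(key, { value, expiresAt: now + ttlMs })
      return value
    }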
Compared with active refresh, this mode has the advantage of looser coupling between services, but two disadvantages: 1. only access hotspots get cached, rather than the full data set; 2. refreshes happen only passively, based on the relevant metrics, which reduces the freshness of the data.
If a team's frontend services (such as a BFF) and backend services are developed and maintained by two different groups of people, this caching mode is the more appropriate choice. Of course, the specific choice of mode has to be made according to the actual situation.
Caching is a very flexible and versatile component; for reasons of space, I won't go any further here. For more on caching design patterns, see donnemartin/system-design-primer on GitHub (https://github.com/donnemartin/system-design-primer).
Request convergence
For high-traffic businesses, hundreds of requests may arrive at the same frontend service instance at the same time, each triggering reads against the cache and the backend services. In most cases, these concurrent read requests can be coalesced into just a few.
This idea is very similar to Facebook's open-source DataLoader, which coalesces parallel requests with the same parameters, reducing the pressure on backend services (this problem arises easily in GraphQL scenarios).
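Independently of DataLoader itself, the core idea can be sketched in a few lines: concurrent calls with the same key share a single in-flight promise:

    // Request coalescing: concurrent calls for the same key share one in-flight promise.
    const inflight = new Map()

    function dedupe(key, fetcher) {
      const pending = inflight.get(key)
      if (pending !== undefined) return pending

      const promise = fetcher().finally(() => inflight.delete(key))
      inflight.set(key, promise)
      return promise
    }

    // Usage: 100 concurrent dedupe('user:1', ...) calls trigger only one backend request.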
Disaster recovery cache
Let's consider an extreme situation: if the backend services all go down, can the frontend service use the cache to "hold out" for a period of time? This is the idea of a disaster recovery cache: when a service is abnormal, degrade to serving cached data in response to external requests, preserving a degree of availability. This disaster recovery logic can also be abstracted into the Cache Layer.
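One way to sketch this fallback is to keep expired entries around instead of deleting them, so they can still be served when the backend fails (names and structure are illustrative):

    // Disaster recovery: on backend failure, degrade to stale cached data instead of erroring.
    async function getWithFallback(cache, key, fetcher, ttlMs) {
      const hit = cache.get(key)
      if (hit !== undefined && hit.expiresAt > Date.now()) return hit.value
      try {
        const value = await fetcher()
        cache.set(key, { value, expiresAt: Date.now() + ttlMs })
        return value
      } catch (err) {
        // Backend is down: serve stale data if we have any, even past its TTL.
        if (hit !== undefined) return hit.value
        throw err
      }
    }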
Thank you for reading. That covers how to implement data loading in a microservice frontend; hopefully it has deepened your understanding. As always, the specifics need to be verified in practice.