
Analysis and Solutions of Three Common Caching Problems in the Web Front End

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Many newcomers are not clear about the three common caching problems in the web front end or how to solve them. To help with this, the following explains each problem in detail; readers who need it are welcome to learn, and hopefully you will gain something.

Generally speaking, the three common cache problems are cache penetration, cache breakdown, and cache avalanche. What the three have in common is high concurrency combined with cache updates and cache invalidation, and each can worsen the others, making the problem more serious. Once a problem appears, it needs to be solved immediately, before it snowballs into a full avalanche. This article analyzes the three major problems of cache penetration, cache breakdown, and cache avalanche, and proposes the corresponding solutions.

First, cache penetration

1. Definition: under normal circumstances, the data we query exists. Cache penetration occurs when a request queries a piece of data that does not exist at all, so that neither the cache nor the database can return it, yet every such request still falls through to the database. This phenomenon of repeatedly querying data that is guaranteed to be absent is called cache penetration.

2. Solution:

(1) Cache null values: penetration happens because there is no key in the cache for this nonexistent data, so every query goes to the database. We can set the value of such a key to null and put it into the cache; when a request queries this key, null is returned directly from the cache without touching the database. Do not forget to set an expiration time on these null entries, so the cache can recover once the data actually comes into existence.
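The null-value scheme above can be sketched as follows. This is a minimal illustration, not production code: a plain dict stands in for Redis, db_query is a hypothetical database lookup, and the 60-second TTL is an assumed value.

```python
import time

CACHE = {}      # key -> (value, expires_at); stand-in for a real cache like Redis
NULL_TTL = 60   # short expiration for cached "empty" results (assumed value)

def db_query(key):
    # Hypothetical database lookup; returns None when the row does not exist.
    return None

def get(key):
    entry = CACHE.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value          # may be None: a cached "miss"
        del CACHE[key]            # entry expired; fall through to the database
    value = db_query(key)
    # Cache even a missing result, with a short TTL, so that repeated
    # queries for a nonexistent key stop hammering the database.
    CACHE[key] = (value, time.time() + NULL_TTL)
    return value
```

The key point is that the second lookup for the same nonexistent key is answered from the cache, not the database, until the short TTL elapses.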

(2) BloomFilter: a Bloom filter is similar to a set, and is used to determine whether an element (key) may exist in a collection. It is widely used in big-data scenarios; for example, HBase uses it to decide whether data is on disk, and crawlers use it to decide whether a URL has already been crawled. This scheme can be combined with the first one: add a BloomFilter layer in front of the cache, query the BloomFilter first to see whether the key exists, return directly if it does not, and otherwise continue on to the cache and then the database.
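A Bloom filter answers "definitely absent" or "probably present" using a bit array and several hash functions. Below is a minimal sketch, assuming SHA-256 with different prefixes as the hash family and arbitrarily chosen sizing; real deployments would size the filter from the expected item count and target false-positive rate.

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)  # all bits start at 0

    def _positions(self, key):
        # Derive num_hashes bit positions by hashing the key with a prefix.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False means definitely absent; True means probably present.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))
```

In the combined scheme, a request first calls might_contain(key); a False result is returned immediately without touching the cache or the database.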

Second, cache breakdown

1. Definition: cache breakdown is the second problem we may encounter when using a cache. In a high-concurrency system, when a large number of requests query the same key at the exact moment that key expires, all of those requests go straight to the database. This phenomenon is called cache breakdown.

2. Solution: the phenomenon above is many threads querying the same data from the database at the same time, so we can use a mutex. The first request to find the key missing takes the lock and queries the database; other threads wait until the first thread has fetched the data and written it back to the cache. When the waiting threads proceed, they find the cache already populated and read from it directly.

Third, cache avalanche

1. Definition: a cache avalanche occurs when a large portion of the cache becomes invalid at the same moment, for example when the cache service goes down. A flood of requests then hits the database directly, and the database cannot hold up and crashes.

2. Solution:

(1) Before the avalanche: use a cache cluster to ensure the high availability of the cache service. With Redis, master-slave replication plus Sentinel, or Redis Cluster, can prevent Redis from collapsing completely.

(2) During the avalanche: combine an Ehcache local cache with Hystrix rate limiting & degradation so that MySQL is not overwhelmed. The Ehcache local cache can carry part of the load for a while when the Redis cluster is completely unavailable. Hystrix limits the flow: for example, if 5,000 requests arrive in one second and the component is configured to let only 2,000 requests per second through, the remaining 3,000 follow the limiting logic and are handed to a self-developed degradation component that returns, say, preset default values. This protects MySQL from being killed by the flood of requests.

(3) After the avalanche: enable the Redis persistence mechanism so the cache can be restored as soon as possible. Once the cluster restarts, data is automatically loaded from disk and the in-memory data is recovered.
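The rate-limit-and-degrade idea in step (2) can be sketched with a concurrency cap: requests that cannot get a database slot immediately receive a preset default value instead of queuing up. This is a simplified stand-in for what Hystrix-style limiting does; the slot count, query_db, and DEFAULT_VALUE are all illustrative assumptions.

```python
import threading

MAX_CONCURRENT_DB = 2  # assumed cap, the analogue of the 2,000-requests/s limit
db_slots = threading.BoundedSemaphore(MAX_CONCURRENT_DB)
DEFAULT_VALUE = "default"  # the degraded response protecting the database

def query_db(key):
    # Hypothetical database read.
    return f"value-for-{key}"

def get_with_limit(key):
    # Try to take a database slot without blocking; over-limit
    # requests take the degradation path instead of hitting MySQL.
    if not db_slots.acquire(blocking=False):
        return DEFAULT_VALUE
    try:
        return query_db(key)
    finally:
        db_slots.release()
```

When both slots are occupied, callers get DEFAULT_VALUE immediately; once a slot frees up, normal queries resume. The degraded answer is deliberately cheap and approximate, which is the trade-off this scheme accepts.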
