
Solutions to cache breakdown, cache penetration and cache avalanche in Redis

2025-04-02 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces the three major cache problems in Redis: cache breakdown, cache penetration, and cache avalanche, together with practical solutions to each. Many people run into these problems in daily operation; the material below is organized into simple, usable methods. I hope it helps you answer these questions. Please follow along and study each problem in turn!

This article focuses on three problems that commonly occur with Redis: cache breakdown, cache penetration, and cache avalanche. Before introducing them, we first need to understand Redis's expiration mechanism for keys. As we all know, Redis can set an expiration time on cached data. For example, an SMS verification code usually expires in 10 minutes, so when we store the code in Redis we attach a 10-minute expiration time to its key. One point needs special attention, however: a key is not deleted by Redis the moment it expires. So how does Redis delete expired keys? Redis uses two strategies: periodic deletion and lazy deletion.

Periodic deletion: by default, every 100ms Redis randomly samples some keys that have an expiration time set, checks whether they have expired, and deletes the expired ones. Why sample randomly instead of checking every key? Because if hundreds of thousands of keys were set, scanning all of them every 100ms would put significant pressure on the CPU.

Lazy deletion: because periodic deletion relies on random sampling, many expired keys may survive past their expiration time. So when a client fetches data from the cache, Redis checks whether the key has expired and deletes it if so. In other words, an expired key is removed from the cache at the moment it is queried.
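The two strategies above can be modeled in a few lines. The following is a minimal Python sketch, not Redis's actual implementation: a dict stands in for the keyspace, `periodic_delete` mimics the random-sampling cycle, and `get` performs the lazy check on read.

```python
import random
import time

class ExpiringCache:
    """Toy model of Redis's two expired-key deletion strategies:
    periodic random sampling and lazy deletion on read."""

    def __init__(self):
        self._data = {}  # key -> (value, expire_at or None)

    def set(self, key, value, ttl=None):
        expire_at = time.time() + ttl if ttl is not None else None
        self._data[key] = (value, expire_at)

    def _expired(self, key):
        _, expire_at = self._data[key]
        return expire_at is not None and expire_at <= time.time()

    def periodic_delete(self, sample_size=20):
        """Mimic the ~100ms cycle: randomly sample keys that have a TTL
        and delete the expired ones, instead of scanning every key."""
        with_ttl = [k for k, (_, exp) in self._data.items() if exp is not None]
        for key in random.sample(with_ttl, min(sample_size, len(with_ttl))):
            if self._expired(key):
                del self._data[key]

    def get(self, key):
        """Lazy deletion: check expiry only when the key is read."""
        if key not in self._data:
            return None
        if self._expired(key):
            del self._data[key]
            return None
        return self._data[key][0]
```

Note that a key deleted by neither path simply lingers, which is exactly the hidden danger the next paragraph describes.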

However, periodic deletion plus lazy deletion alone still leaves a serious hidden danger: if periodic deletion misses many expired keys, and clients never query those keys again, lazy deletion never triggers either, so the expired keys accumulate in memory and steadily consume Redis's RAM. How is this solved? This is where the Redis memory eviction mechanism comes in. It provides six eviction policies:

volatile-lru: from the set of keys with an expiration time, evict the least recently used.

volatile-ttl: from the set of keys with an expiration time, evict the one closest to expiring.

volatile-random: from the set of keys with an expiration time, evict one at random.

allkeys-lru: when memory is insufficient to hold newly written data, evict the least recently used key from the whole keyspace.

allkeys-random: evict a key at random from the whole keyspace.

noeviction: when memory is insufficient to hold newly written data, the write operation returns an error.
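To make the recommended volatile-lru policy concrete, here is a minimal Python sketch (illustrative only, not Redis internals) using an `OrderedDict` to track recency. Keys stored without a TTL, such as configuration data, are never candidates for eviction, which is exactly why the next paragraph recommends leaving the expiration time off important data.

```python
from collections import OrderedDict

class VolatileLRU:
    """Toy sketch of the volatile-lru policy: when the cache is full,
    evict the least recently used key among those with a TTL set;
    keys without a TTL (e.g. config data) are never evicted."""

    def __init__(self, max_keys):
        self.max_keys = max_keys
        self._data = OrderedDict()  # key -> (value, has_ttl); order = recency

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key][0]

    def set(self, key, value, has_ttl=True):
        if key in self._data:
            self._data.move_to_end(key)
        elif len(self._data) >= self.max_keys:
            # Evict the least recently used key that has a TTL.
            for k, (_, ttl_flag) in self._data.items():
                if ttl_flag:
                    del self._data[k]
                    break
            else:
                # No evictable key: behave like noeviction and refuse the write.
                raise MemoryError("no evictable key")
        self._data[key] = (value, has_ttl)
```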

In general, the volatile-lru policy is recommended. Important data such as configuration information should be stored without an expiration time, so that Redis never evicts it under this policy. Ordinary data can be cached with an expiration time; when it expires, the next request fetches it from the DB and re-stores it in Redis.

Cache breakdown

Having covered the expiration mechanism of Redis's keys, let's get to the point: how do cache breakdown, cache penetration, and cache avalanche arise? First, look at how a request fetches data: when a user request arrives, we first try to get the data from the Redis cache. If the data is in the cache, we return it directly. If the cache has no data, we query the DB; if the DB returns data, we update Redis and then return the data. If the DB has no data either, we return an empty result. Under what circumstances, then, do the three major problems appear? Let's first look at cache breakdown.
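The read path just described is the classic cache-aside pattern. A minimal sketch, with plain dicts standing in for Redis and the DB (the function name and parameters are illustrative):

```python
def get_with_cache(key, cache, db):
    """Cache-aside read path: try the cache first, fall back to the DB,
    and backfill the cache on a DB hit."""
    value = cache.get(key)
    if value is not None:
        return value            # cache hit
    value = db.get(key)
    if value is not None:
        cache[key] = value      # backfill so the next read hits the cache
        return value
    return None                 # neither cache nor DB has the data
```

Each of the three problems below is a different way this happy path goes wrong at scale.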

Definition: under high concurrency, a hot key suddenly expires, so a large number of requests miss the cache in Redis at the same moment and all go to the DB for the data, causing an instantaneous spike in DB pressure.

Solution: cache breakdown rarely takes the DB down outright; it usually just applies periodic bursts of pressure. A common remedy works like this: store the data in Redis without a Redis-level expiration time, and instead add an expiration-time attribute to the cached object itself. On every read, check that attribute; if the data is about to expire, asynchronously start a thread to refresh the cache while the current request is served the old value. The drawback is that some requests may receive stale values, which depends on whether the business can accept it. If the data must always be fresh, the best approach is to make the hot data never expire and add a mutex to ensure that only one thread at a time rebuilds the cache.
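The logical-expiration-plus-mutex idea above can be sketched as follows. This is a simplified single-process illustration: `rebuild_from_db` is a hypothetical callable that loads fresh data, a dict stands in for Redis, and one module-level lock guards the rebuild (a real system would use a per-key or distributed lock).

```python
import threading
import time

_rebuild_lock = threading.Lock()

def get_hot_key(key, cache, rebuild_from_db, ttl=60):
    """Logical expiration: the Redis key never expires; the cached object
    carries its own expire_at field. Stale reads are served while a single
    background thread refreshes the entry."""
    entry = cache.get(key)
    if entry is None:
        # First load: the mutex ensures only one thread hits the DB.
        with _rebuild_lock:
            entry = cache.get(key)   # double-check after acquiring the lock
            if entry is None:
                entry = {"value": rebuild_from_db(key),
                         "expire_at": time.time() + ttl}
                cache[key] = entry
        return entry["value"]
    if entry["expire_at"] <= time.time() and _rebuild_lock.acquire(blocking=False):
        # Logically expired: refresh asynchronously, serve the stale value now.
        def refresh():
            try:
                cache[key] = {"value": rebuild_from_db(key),
                              "expire_at": time.time() + ttl}
            finally:
                _rebuild_lock.release()
        threading.Thread(target=refresh).start()
    return entry["value"]
```

Callers that lose the non-blocking lock race simply return the stale value, which is exactly the trade-off described above.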

Cache penetration

Definition: cache penetration refers to querying data that exists in neither the cache nor the DB. For example, when querying product information by id, valid ids are generally greater than 0, but an attacker deliberately sends id = -1. The cache always misses, so the data is fetched from the DB every time; since the DB has no such row, nothing is ever cached, and every request hits the DB. This is cache penetration.

Solution: the defense against cache penetration has two parts. First, add basic validation at the API layer: authenticate the user and validate the id, directly rejecting requests that fail authentication or carry an invalid id (such as id < 0). Second, when neither the cache nor the DB returns data, store the key with a null value in Redis and give it a short expiration time, such as 60 seconds, so that an attacker repeatedly sending the same request in a short period cannot bring the database down.
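Both defenses fit in one small function. The sketch below is illustrative (the function name, key format, and dict-based cache are assumptions, not a real API): invalid ids are rejected up front, and a DB miss is cached as a null marker with a short TTL.

```python
import time

def get_product(product_id, cache, db, null_ttl=60):
    """Penetration defenses: API-layer id validation, plus caching a
    null marker for ids that exist in neither cache nor DB.
    `cache` maps key -> (value, expire_at); `db` maps id -> value."""
    if not isinstance(product_id, int) or product_id <= 0:
        raise ValueError("invalid product id")   # API-layer validation
    key = f"product:{product_id}"
    entry = cache.get(key)
    if entry is not None:
        value, expire_at = entry
        if expire_at is None or expire_at > time.time():
            return value         # may be None: the cached null marker
    value = db.get(product_id)
    # Cache the miss too, with a short TTL, so repeated attacks stop at Redis.
    expire_at = time.time() + null_ttl if value is None else None
    cache[key] = (value, expire_at)
    return value
```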

Cache avalanche

Definition: if a large portion of the cached keys expire within a short window, massive cache misses occur and all those requests fall on the DB. The sheer volume of queries puts the DB under excessive pressure and can even bring it down.

Solution: there is generally no perfect cure for cache avalanches, but we can do our best: analyze user behavior so that key expiration times are spread out fairly evenly, preventing large batches of cached data from expiring at the same time, and set hot data to never expire. In a distributed environment, use distributed locks to ensure single-threaded cache rebuilding, so that a wave of simultaneous invalidations does not translate into a wave of DB queries. In my opinion, if it is acceptable for some requests to receive stale values, the most reasonable approach is the same one used for cache breakdown: store data in Redis without a Redis-level expiration time, add an expiration-time attribute to the cached object, check it on every read, and refresh the cache asynchronously when the data is about to expire.
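A simple, widely used way to keep expiration times "relatively average," as advised above, is to add random jitter to every TTL when writing the cache. A minimal sketch (the function name and default ratio are assumptions):

```python
import random

def jittered_ttl(base_ttl, jitter_ratio=0.2):
    """Spread expirations out: add random jitter to each key's TTL so a
    batch of keys cached at the same moment does not all expire at once.
    jitter_ratio=0.2 means the TTL varies within +/-20% of the base."""
    jitter = base_ttl * jitter_ratio
    return base_ttl + random.uniform(-jitter, jitter)
```

For example, caching 10,000 keys with `jittered_ttl(600)` spreads their expirations across roughly 480 to 720 seconds instead of one instant.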

This concludes the study of solutions to the three major cache phenomena in Redis: cache breakdown, cache penetration, and cache avalanche. I hope it has resolved your doubts. Pairing theory with practice is the best way to learn, so go and try it! If you want to continue learning more related knowledge, please keep following this site; more practical articles are on the way!
