Source: SLTechnology News&Howtos > Database, updated 2025-04-03
This article presents a sample analysis of common Redis caching problems. It is intended as a practical reference.
I. The Application of the Redis Cache
In real business scenarios, Redis is generally used alongside another database to relieve pressure on the back end, most commonly the relational database MySQL.
Redis caches data that is queried frequently in MySQL, such as hot data, so that users can fetch it directly from the Redis cache instead of querying MySQL, reducing the read pressure on the back-end database.
If the data a user queries is not in Redis, the request falls through to MySQL. When MySQL returns the data to the client, the data is also written into Redis, so subsequent reads can be served directly from Redis. The flow is as follows:
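The read path described above is the classic cache-aside pattern. Below is a minimal sketch of it; the `cache` and `db` dicts are stand-ins for a Redis client and a MySQL query, and the key name `user:1` is purely illustrative.

```python
cache = {}                       # simulated Redis
db = {"user:1": "Alice"}         # simulated MySQL table

def get_user(key):
    """Cache-aside read: try the cache first, fall back to the database."""
    value = cache.get(key)
    if value is not None:
        return value             # cache hit: no database round trip
    value = db.get(key)          # cache miss: query the backing store
    if value is not None:
        cache[key] = value       # backfill the cache for later readers
    return value

print(get_user("user:1"))        # first call reads "MySQL" and populates the cache
print(get_user("user:1"))        # second call is served from the cache
```

With a real client, the backfill step would also set an expiration time, e.g. `r.set(key, value, ex=3600)` in redis-py.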
Of course, using Redis as a caching database is not always plain sailing. There are three common caching problems:
Cache penetration
Cache breakdown
Cache avalanche
II. Cache Penetration
2.1 Introduction
Cache penetration occurs when a user queries data that does not exist in Redis (a cache miss), so the request is forwarded to the persistence-layer database, MySQL, which also does not contain the data and can only return an empty object, indicating that the query failed. If there are many such requests, or if users issue them as a deliberate attack, they put heavy pressure on the MySQL database and can even bring it down. This phenomenon is called cache penetration.
2.2 Solutions
Cache empty objects
When MySQL returns an empty object, Redis caches that object and sets an expiration time on it. When the user issues the same request again, the empty object is served from the cache, so the request is stopped at the cache layer and the back-end database is protected. This approach has a drawback, however: although requests no longer reach MySQL, the strategy consumes Redis cache space.
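The empty-object strategy can be sketched as follows. This is an illustrative in-memory version: the `cache` dict stores `(value, expires_at)` pairs in place of Redis TTLs, and the TTL values are assumed, not prescribed.

```python
import time

cache = {}                       # simulated Redis: key -> (value, expires_at)
db = {"user:1": "Alice"}         # simulated MySQL

NULL_TTL = 5                     # short, assumed TTL for cached empty objects
DATA_TTL = 60                    # assumed TTL for real data

def get_with_null_caching(key):
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value         # may be None: a cached "empty object"
        del cache[key]           # entry expired; fall through to the database
    value = db.get(key)          # hit MySQL on a miss
    # Cache the result either way: a missing row is stored as None, so
    # repeated queries for the same absent key stop at the cache layer.
    ttl = NULL_TTL if value is None else DATA_TTL
    cache[key] = (value, time.time() + ttl)
    return value
```

Giving empty objects a shorter TTL than real data limits how much cache space the strategy wastes.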
Bloom filter
First, the keys of all hot data that users might access are loaded into a Bloom filter (this is also a form of cache prefetching). When a request arrives, it passes through the Bloom filter first, which determines whether the requested key can exist. If it cannot, the request is rejected outright; otherwise the lookup proceeds, first to the cache and then, on a cache miss, to the database. Compared with caching empty objects, the Bloom filter approach is more efficient and practical. The flow is as follows:
Cache warm-up: when the system starts, the relevant data is loaded into the Redis cache in advance. This avoids having to load it on the first user request.
2.3 Comparison of Solutions
Both solutions address cache penetration, but they suit different scenarios:
Caching empty objects: suitable when the number of distinct keys for missing data is limited and the same keys are requested repeatedly.
Bloom filter: suitable when the keys for missing data vary widely and repeated requests for the same key are unlikely.
III. Cache Breakdown
3.1 Introduction
Cache breakdown occurs when the data a user queries is absent from the cache but present in the back-end database, typically because the key has expired from the cache. For example, if a hot key that receives a large volume of concurrent access suddenly expires, a flood of concurrent requests hits the back-end database and its load spikes instantly. This phenomenon is called cache breakdown.
3.2 Solutions
Change the expiration time
Set hotspot data to never expire.
Distributed lock
Redesign cache access around a distributed lock. The process is as follows:
Locking: when querying data by key, check the cache first. On a miss, acquire the distributed lock; the first process to acquire it queries the back-end database and writes the result back to Redis.
Unlocking: processes that find the lock held wait until it is released, after which they read the now-populated key from the cache in turn.
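The locking flow above can be sketched with a mutex. This is a single-process illustration: `threading.Lock` stands in for a real distributed lock (such as one built on Redis `SET key value NX EX ttl`), and the key and value names are invented for the example.

```python
import threading

cache = {}
db = {"hot:item": "value-from-mysql"}   # simulated MySQL
rebuild_lock = threading.Lock()          # stand-in for a distributed lock
db_hits = 0                              # counts actual "database" queries

def get_hot(key):
    global db_hits
    value = cache.get(key)
    if value is not None:
        return value
    with rebuild_lock:                   # only one thread rebuilds the entry
        value = cache.get(key)           # re-check: another thread may have
        if value is None:                # filled the cache while we waited
            db_hits += 1
            value = db[key]              # the single expensive database query
            cache[key] = value
    return value

# Simulate a burst of concurrent requests for an expired hot key.
threads = [threading.Thread(target=get_hot, args=("hot:item",)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The double-check inside the lock is what guarantees the database is queried only once even though twenty requests arrive at the same time.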
3.3 Comparison of Solutions
Never expire: because no real expiration time is set, this scheme avoids the whole class of problems caused by a hot key expiring, but it introduces data inconsistency and increases code complexity.
Mutex: the idea is simple, but it carries hidden risks. If building the cache fails or takes a long time, there is a risk of deadlock and of blocking the thread pool. On the other hand, it reduces the load on back-end storage effectively and offers better consistency.
IV. Cache Avalanche
4.1 Introduction
A cache avalanche occurs when a large number of cached keys expire at the same time while the data access volume is very high, causing a sudden surge of pressure on the back-end database that can even bring it down. It differs from cache breakdown: breakdown is a single hot key expiring under very high concurrency, while an avalanche is a large number of keys expiring simultaneously, so the two are on entirely different scales.
4.2 Solutions
Handle expiration
A cache avalanche is similar to cache breakdown, so the never-expire approach for hot data also helps here by reducing how many keys expire at once. In addition, setting a random expiration time per key prevents keys from expiring in a concentrated burst.
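Randomized expiration can be as simple as adding jitter to a base TTL. The base and jitter values below are assumptions for illustration, not recommendations.

```python
import random

BASE_TTL = 3600    # one-hour base expiration (assumed value)
JITTER = 600       # up to ten extra minutes, spreading expirations out

def ttl_with_jitter():
    """Return a per-key TTL so many keys set together don't all expire at once."""
    return BASE_TTL + random.randint(0, JITTER)

# With a real client this would be: r.set(key, value, ex=ttl_with_jitter())
ttls = [ttl_with_jitter() for _ in range(1000)]
```

Even a modest jitter window smears a batch of simultaneous writes across minutes of expiration times instead of one instant.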
Redis high availability
A single Redis instance can be overwhelmed and go down during an avalanche, so you can add more Redis instances and set up a cluster; if one instance fails, the others keep serving requests.
Thank you for reading! This concludes the sample analysis of Redis caching problems. I hope the content above is helpful.