2025-03-26 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 Report --
How do you solve a cache avalanche caused by Redis? This problem comes up often in day-to-day study and work. The reference material below walks through the causes and solutions; I hope you find it useful. Let's take a look!
The cause of the avalanche:
A cache avalanche, simply put, happens when the original cache entries expire (or the data was never loaded into the cache) and, until the cache is repopulated, every request that should have been served from the cache (normally read from Redis) falls through to the database. This puts enormous CPU and memory pressure on the database and, in severe cases, brings the database down and crashes the whole system.
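The failure mode above can be made concrete with a minimal cache-aside sketch. Plain dicts stand in for Redis and the database here (hypothetical stand-ins, not real clients), and a counter records how often the database is actually queried:

```python
# Naive cache-aside read: on a miss, every caller falls through to the database.
def naive_get(key, cache, db, stats):
    value = cache.get(key)
    if value is None:            # cache miss: this is where the stampede starts
        stats["db_hits"] += 1    # every concurrent miss lands on the DB
        value = db[key]          # the expensive query
        cache[key] = value       # repopulate the cache
    return value

# Simulate an avalanche: the cache is empty (as if the entry just expired),
# so 100 "requests" for the same key all query the database.
db = {"user:1": "alice"}
stats = {"db_hits": 0}
for _ in range(100):
    naive_get("user:1", {}, db, stats)   # fresh empty cache = expired entry
print(stats["db_hits"])  # → 100
```

Every solution below is a different way of collapsing those 100 database hits down toward one.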
The basic solutions are as follows:
First, most system designers use locking or queuing to ensure that a large number of threads never read from and write to the database at the same moment, which avoids a surge of database load when the cache expires. This relieves pressure on the database to some extent, but it also reduces the throughput of the system.
Second, analyze user behavior and spread cache expiration times as evenly as possible.
Third, if the avalanche is caused by a cache server going down, consider a primary/replica setup, for example Redis master/slave replication. Note that a double-write cache involves update transactions: reads may observe dirty data during an update, and this has to be handled.
Solutions to the Redis avalanche effect:
1. Use a distributed lock, or a single-machine local lock.
2. Use message middleware.
3. Use a two-level cache, e.g. Redis + Ehcache.
4. Spread the expiration times of Redis keys evenly.
Explanation:
1. When a flood of requests suddenly hits the database server, throttle them. Use a locking mechanism to guarantee that only one thread (request) performs the operation while the rest wait in a queue (a distributed lock for a cluster, a local lock on a single machine). The trade-off is reduced server throughput and lower efficiency.
Add the lock so that only one thread can enter, and only one request actually executes the query.
A rate-limiting policy can also be applied here.
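A minimal single-machine sketch of this idea, assuming a local `threading.Lock` (a cluster would use a distributed lock, e.g. via Redis `SET NX`, instead). The double-check inside the lock ensures only the first thread rebuilds the cache:

```python
import threading

def make_locked_get(cache, db, stats):
    lock = threading.Lock()
    def get(key):
        value = cache.get(key)
        if value is not None:
            return value
        with lock:                    # only one thread rebuilds the cache
            value = cache.get(key)    # double-check: another thread may have filled it
            if value is None:
                stats["db_hits"] += 1
                value = db[key]       # the single expensive query
                cache[key] = value
        return value
    return get

# 50 concurrent requests for a missing key result in a single database query.
cache, db, stats = {}, {"user:1": "alice"}, {"db_hits": 0}
get = make_locked_get(cache, db, stats)
threads = [threading.Thread(target=get, args=("user:1",)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(stats["db_hits"])  # → 1
```

The cost is exactly what the article notes: the 49 waiting threads are serialized behind the lock, so throughput drops while the cache is being rebuilt.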
2. Solve the problem with message middleware.
This is the most reliable of these approaches: message middleware is built to absorb high concurrency.
When a large number of requests arrive and Redis has no value for the key, the rebuild work is routed through the message middleware and the query result is written back to the cache asynchronously (taking advantage of MQ's asynchronous nature).
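A hedged sketch of the pattern, with `queue.Queue` standing in for a real broker (RabbitMQ, Kafka, etc.) and dicts standing in for Redis and the database. On a miss, the request enqueues a rebuild message instead of querying the database; a single consumer drains the queue and hits the database only for keys not yet repopulated:

```python
import queue
import threading

rebuild_q = queue.Queue()
cache, db, stats = {}, {"user:1": "alice"}, {"db_hits": 0}

def request(key):
    value = cache.get(key)
    if value is None:
        rebuild_q.put(key)       # defer to the middleware instead of the DB
    return value

def consumer():
    while True:
        key = rebuild_q.get()
        if key is None:          # sentinel: shut down the worker
            break
        if key not in cache:     # only the first queued miss reaches the DB
            stats["db_hits"] += 1
            cache[key] = db[key]
        rebuild_q.task_done()

worker = threading.Thread(target=consumer)
worker.start()
for _ in range(100):             # 100 misses, but the DB is queried once
    request("user:1")
rebuild_q.put(None)
worker.join()
print(stats["db_hits"])  # → 1
```

In a real deployment the callers would retry or wait for the cache to be filled rather than receive `None`, and the consumer would be a separate service; the point here is only that the queue serializes the rebuilds.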
3. Use a second-level cache. A1 is the primary cache and A2 is the copy. A1 is given a short expiration time and A2 a long one; when A1 expires, requests can still be served from A2 as a backstop while A1 is rebuilt.
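A minimal sketch of the two-tier idea, with dicts of `(value, expires_at)` pairs standing in for Redis and Ehcache (the TTL values are illustrative assumptions):

```python
# Two-tier cache: A1 (primary) gets a short TTL, A2 (copy) a long one.
# While A1 is expired, reads are served from A2, shielding the database.
def put(a1, a2, key, value, now, short_ttl=60, long_ttl=3600):
    a1[key] = (value, now + short_ttl)   # primary: expires quickly
    a2[key] = (value, now + long_ttl)    # copy: kept around much longer

def get(a1, a2, key, now):
    for tier in (a1, a2):                # try A1 first, fall back to A2
        entry = tier.get(key)
        if entry is not None and entry[1] > now:
            return entry[0]
    return None                          # both tiers expired: rebuild from DB

a1, a2 = {}, {}
put(a1, a2, "user:1", "alice", now=0)
print(get(a1, a2, "user:1", now=120))   # A1 expired at t=60; served from A2
```

As the article warns, the price of keeping two copies is consistency: an update must touch both tiers, and a read from A2 can return stale data until both are refreshed.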
4. Set a different expiration time for each key, so that cache expirations are spread as evenly as possible instead of all landing at the same instant.
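The usual way to do this is to add random jitter to a base TTL, a small sketch of which is below (the base and jitter values are illustrative, and `r` in the comment is a hypothetical redis-py client):

```python
import random

# Add random jitter to a base TTL so keys written at the same moment
# do not all expire together and trigger a mass cache miss.
def ttl_with_jitter(base_seconds, jitter_seconds=300):
    return base_seconds + random.randint(0, jitter_seconds)

# With redis-py this would be used as, e.g.:
#   r.set(key, value, ex=ttl_with_jitter(3600))
ttls = [ttl_with_jitter(3600) for _ in range(1000)]
print(min(ttls) >= 3600 and max(ttls) <= 3900)  # True: spread over a 5-minute window
```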
Thank you for reading! Hopefully you now have a general understanding of how to handle a Redis cache avalanche. If you would like to read more related articles, you are welcome to follow the industry information channel.
© 2024 shulou.com SLNews company. All rights reserved.