2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article introduces how to solve the Redis cache avalanche problem. Many people run into this situation in real projects, so let the editor walk you through how to handle it. I hope you read it carefully and come away with something useful!
The cache layer absorbs a large number of requests and thereby protects the storage layer. However, if a large number of cache entries expire at once, or the cache service fails entirely, all of those requests reach the storage layer, sharply increasing its load (a flood of queries hits the database). This is the cache avalanche scenario.
Cache avalanches can be addressed from the following angles:
1. Keep the cache layer highly available
Use Redis Sentinel or Redis Cluster deployment, so that even if individual Redis nodes go offline, the cache layer as a whole remains usable. In addition, Redis can be deployed across multiple data centers, so that even if one data center goes down, the cache layer stays highly available.
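As a concrete illustration, a minimal Sentinel setup might look like the following configuration sketch. The master name `mymaster`, the address `127.0.0.1:6379`, and the timeouts are placeholders, not values from this article:

```
# sentinel.conf (sketch): watch a master named "mymaster"; a quorum of 2
# Sentinels must agree before the master is considered down
sentinel monitor mymaster 127.0.0.1 6379 2
# Mark the master as down after 5 s without a valid reply
sentinel down-after-milliseconds mymaster 5000
# Abort a failover attempt that takes longer than 60 s
sentinel failover-timeout mymaster 60000
```

With three or more Sentinel processes running this configuration, a failed master is detected and a replica is promoted automatically, keeping the cache layer available.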
2. Rate-limiting and degradation components
Both the cache layer and the storage layer can fail, and each can be treated as a resource. In a distributed system under heavy concurrency, if one resource becomes unavailable, every thread waiting on it may raise an exception and render the whole system unusable. Degradation is routine in high-concurrency systems: for example, if the personalized recommendation service is unavailable, the system can fall back to serving popular (hot) data, so the recommendation feature as a whole stays up. Common rate-limiting and degradation components include Hystrix and Sentinel.
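The fallback idea can be sketched without any particular framework. The helper below is a hypothetical illustration (the names `withFallback` and `Degrade` are ours, not from Hystrix or Sentinel): try the primary call, and serve a precomputed hot-data list when it fails.

```java
import java.util.List;
import java.util.function.Supplier;

public class Degrade {
    // Generic degradation helper: try the primary supplier,
    // and fall back to the secondary one when it throws.
    static <T> T withFallback(Supplier<T> primary, Supplier<T> fallback) {
        try {
            return primary.get();
        } catch (RuntimeException e) {
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        // Simulate the personalized recommendation service being down;
        // degrade to a static list of hot items instead of failing.
        List<String> recs = withFallback(
            () -> { throw new RuntimeException("recommendation service unavailable"); },
            () -> List.of("hot-item-1", "hot-item-2"));
        System.out.println(recs); // prints [hot-item-1, hot-item-2]
    }
}
```

Real components like Hystrix add circuit breaking and metrics on top of this basic pattern, so repeated failures stop hitting the broken resource at all.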
3. Caches that never expire
If the keys saved in Redis never expire, large numbers of cache entries can never become invalid at the same time. The trade-off is that Redis needs more storage space.
4. Optimize cache expiration time
When designing the cache, choose an appropriate expiration time for each key, so that large numbers of keys do not expire at the same moment and trigger a cache avalanche.
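One common way to spread out expirations is to add random jitter to a base TTL, so keys written in the same batch do not all expire together. A minimal sketch (the method name `jitteredTtlSeconds` and the 30-minute/5-minute values are illustrative, not from the article):

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    // Base TTL plus a random jitter in [0, maxJitterSeconds],
    // so keys cached at the same moment expire at different times.
    static int jitteredTtlSeconds(int baseSeconds, int maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextInt(maxJitterSeconds + 1);
    }

    public static void main(String[] args) {
        // e.g. base TTL of 30 min with up to 5 min of jitter
        int ttl = jitteredTtlSeconds(30 * 60, 5 * 60);
        System.out.println(ttl >= 1800 && ttl <= 2100); // prints true
    }
}
```

The jittered value would then be passed to the cache write, e.g. `SETEX key <ttl> value` in Redis, instead of a fixed expiration.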
5. Rebuild the cache using mutexes
In high-concurrency scenarios, to prevent a flood of requests from reaching the storage layer and rebuilding the cache at the same time, a mutex can be used: query the cache layer by key; on a cache miss, lock the key, query the data from the storage layer, write it to the cache layer, and finally release the lock. A thread that fails to acquire the lock sleeps for a short period and then retries. As for the lock itself, a Lock from the Java concurrency package works in a single-machine environment, and a distributed lock (for example, Redis's SETNX command) works in a distributed environment.
Pseudo code for rebuilding the cache with a mutex in a distributed environment:
/** Rebuild the cache under a mutex (distributed lock via SETNX). */
public String get(String key) throws InterruptedException {
    // Look up the value for key in Redis
    String value = redis.get(key);
    // Cache miss
    if (value == null) {
        // Key of the mutex lock
        String keyMutexLock = "mutex:lock:" + key;
        // setnx returns true only if the lock was acquired
        if (redis.setnx(keyMutexLock, "1")) {
            try {
                // Set a timeout on the mutex itself
                // (this is the lock's expiry, not the data key's expiry)
                redis.expire(keyMutexLock, 3 * 60);
                // Query the value from the database
                value = db.get(key);
                // Write the data back to the cache
                redis.set(key, value);
            } finally {
                // Release the lock
                if (redis.exists(keyMutexLock)) {
                    redis.delete(keyMutexLock);
                }
            }
        } else {
            // Failed to acquire the lock: sleep 50 ms, then retry
            Thread.sleep(50);
            return get(key);
        }
    }
    // Return the cached (or freshly rebuilt) result
    return value;
}
Using a Redis distributed lock to rebuild the cache in a distributed environment has the advantage of a simple design that preserves data consistency. The disadvantage is added code complexity and possible user-visible waiting: under high concurrency, while a key is locked for rebuilding, if 1,000 requests arrive concurrently, 999 of them block and wait.
6. Rebuild cache asynchronously
In this scheme, the cache is rebuilt asynchronously: threads are taken from a thread pool to rebuild the cache in the background, so requests never all reach the storage layer directly. Each Redis key keeps a logical expiration time inside its value. When the logical expiration time is earlier than the current time, the cache entry is considered expired and should be refreshed; otherwise the cached value is returned directly. For example, set the key's expiration time in Redis to 60 minutes, and set the logical expiration time stored in the value to 30 minutes. When the key passes its 30-minute logical expiration, its cache can be refreshed asynchronously, while the old value remains available for the duration of the rebuild. Rebuilding asynchronously in this way effectively avoids large numbers of keys becoming unusable at the same time.
/** Rebuild the cache asynchronously; ValueObject is the wrapped entity model. */
public String get(String key) {
    // Look up the ValueObject for key in the cache
    ValueObject valueObject = redis.get(key);
    // The cached value itself
    String value = valueObject.getValue();
    // Logical expiration stored in the entity:
    // time of caching + TTL (e.g. 30 seconds, 60 seconds)
    long logicTimeOut = valueObject.getTimeOut();
    // Logically expired: refresh asynchronously, but keep serving the old value
    if (logicTimeOut <= System.currentTimeMillis()) {
        threadPool.execute(() -> {
            String keyMutexLock = "mutex:lock:" + key;
            // Only one thread rebuilds; the others keep returning the old value
            if (redis.setnx(keyMutexLock, "1")) {
                try {
                    redis.expire(keyMutexLock, 3 * 60);
                    // Reload from the database and write back with a new logical expiry
                    redis.set(key, db.get(key));
                } finally {
                    redis.delete(keyMutexLock);
                }
            }
        });
    }
    // Return the (possibly stale) cached value immediately
    return value;
}