2025-03-01 Update. From: SLTechnology News & Howtos > Internet Technology. Shulou (Shulou.com) 06/02 Report.
I. Preface
When designing a cache system, we have to guard against three classic failure modes: cache penetration, cache breakdown, and the avalanche effect of mass cache expiry.
II. Cache penetration
Cache penetration refers to querying data that is certain not to exist. The cache is only written passively on a miss, and for fault-tolerance reasons a value that cannot be found in the storage layer is not written to the cache. As a result, every query for this non-existent data falls through to the storage layer, defeating the purpose of the cache. Under heavy traffic, the DB may be brought down. If someone deliberately and repeatedly queries non-existent keys to attack the application, this becomes a real vulnerability.
III. Solutions
There are several effective ways to solve cache penetration. The most common is a Bloom filter: hash all possibly existing keys into a sufficiently large bitmap, so that a query for a key that definitely does not exist is intercepted by the bitmap and never reaches the underlying storage system. There is also a simpler, cruder method (the one we use): when a query returns an empty result (whether because the data does not exist or because of a system failure), cache the empty result anyway, but with a very short expiration time, no more than five minutes.
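The empty-result caching approach can be sketched as follows. This is a minimal, runnable illustration in Python, using an in-memory dict and a sentinel object as stand-ins for Redis and its nil replies; all names here (NULL_SENTINEL, NULL_TTL, db_hits) are illustrative assumptions, not code from the original.

```python
import time

NULL_SENTINEL = object()   # marks "known to be absent" in the cache
NULL_TTL = 5 * 60          # short TTL for empty results, per the text: <= 5 min
NORMAL_TTL = 60 * 60       # normal TTL for real values (assumed)

cache = {}                 # key -> (value, expires_at); stand-in for Redis
db = {"user:1": "alice"}   # stand-in storage layer
db_hits = 0                # counts how often the storage layer is queried

def get(key, now=None):
    global db_hits
    now = time.time() if now is None else now
    entry = cache.get(key)
    if entry is not None and entry[1] > now:
        value = entry[0]
        return None if value is NULL_SENTINEL else value
    db_hits += 1
    value = db.get(key)    # may be None: data genuinely absent
    if value is None:
        # cache the miss too, but only briefly, so repeated queries
        # for a non-existent key stop hammering the storage layer
        cache[key] = (NULL_SENTINEL, now + NULL_TTL)
    else:
        cache[key] = (value, now + NORMAL_TTL)
    return value
```

With this, repeated lookups of a missing key hit the storage layer at most once per NULL_TTL window instead of on every request.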
IV. Cache avalanche
Cache avalanche occurs when many cache entries are set with the same expiration time, so they all expire at the same moment and every request is forwarded to the DB; the instantaneous surge of load overwhelms the DB like an avalanche.
V. Solution
The avalanche effect of mass cache expiry can be devastating for the underlying system. Most system designers consider locking or queuing to ensure single-threaded (single-process) cache writes, so that a flood of concurrent requests does not fall on the underlying storage when entries expire. A simpler remedy is to spread out the cache expiration times: add a random offset, say 1-5 minutes, to the base expiration time. This lowers the chance that many expiry times coincide, making collective failure events unlikely.
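The randomized expiration can be sketched as a tiny helper. This is a minimal illustration; BASE_TTL and the 1-5 minute jitter range are assumed values, not from the original.

```python
import random

BASE_TTL = 30 * 60          # base expiration: 30 minutes (illustrative)
JITTER = (60, 5 * 60)       # add a random 1-5 minute offset, per the text

def ttl_with_jitter():
    # base expiry plus random jitter, so keys set together
    # do not all expire at the same instant
    return BASE_TTL + random.randint(*JITTER)
```

The helper would be used wherever a TTL is set, e.g. something like redis.set(key, value, ex=ttl_with_jitter()).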
VI. Cache breakdown
Some keys with an expiration time set may be accessed with very high concurrency at certain points in time; these are very "hot" keys. For them, we must consider the problem of the cache being "broken down". The difference from a cache avalanche is that breakdown concerns massive concurrency on a single key, whereas an avalanche involves many keys expiring together.
When such a key expires at some point in time, the many concurrent requests for it all find the cache empty, load the data from the back-end DB, and set it back into the cache. This burst of concurrent DB loads may overwhelm the back-end DB.
VII. Solution
1. Use mutex (mutex key)
A common industry practice is to use a mutex. Simply put, when the cache misses (the value obtained is judged to be empty), do not load from the DB immediately. First use one of the cache tool's operations that returns a success/failure result (such as Redis's SETNX or Memcache's ADD) to set a mutex key; if that succeeds, perform the DB load and reset the cache; otherwise, retry the whole get-from-cache method.
SETNX stands for "SET if Not eXists": the value is set only if the key does not exist, which can be used to implement a lock. Redis versions before 2.6.1 could not set an expiration time together with SETNX, so two versions of the code are given for reference:
    // Before Redis 2.6.1: SETNX has no built-in expiry, so EXPIRE is set separately
    String get(String key) {
        String value = redis.get(key);
        if (value == null) {
            if (redis.setnx(key_mutex, "1")) {
                // 3 min timeout to avoid the mutex holder crashing and leaving the lock
                redis.expire(key_mutex, 3 * 60);
                value = db.get(key);
                redis.set(key, value);
                redis.delete(key_mutex);
            } else {
                // another thread holds the mutex: rest 50 ms, then retry
                Thread.sleep(50);
                value = get(key);
            }
        }
        return value;
    }
The latest version code:
    public String get(String key) {
        String value = redis.get(key);
        if (value == null) {  // the cached value has expired
            // 3 min timeout on the mutex: if the del below fails,
            // the lock still expires and the next miss can load from db
            if (redis.setnx(key_mutex, 1, 3 * 60) == 1) {  // mutex set successfully
                value = db.get(key);
                redis.set(key, value, expire_secs);
                redis.del(key_mutex);
            } else {
                // another thread is already loading from db and resetting the cache:
                // sleep 50 ms, then retry reading the cached value
                Thread.sleep(50);
                value = get(key);  // retry
            }
        }
        return value;
    }
Memcache Code:
    if (memcache.get(key) == null) {
        // 3 min timeout to avoid the mutex holder crashing and leaving the lock
        if (memcache.add(key_mutex, 3 * 60 * 1000) == true) {
            value = db.get(key);
            memcache.set(key, value);
            memcache.delete(key_mutex);
        } else {
            sleep(50);
            retry();
        }
    }
2. Use mutex (mutex key) "in advance":
Embed a timeout value (timeout1) inside the cached value, where timeout1 is earlier than the actual memcache timeout (timeout2). When a read finds that timeout1 has passed, immediately extend timeout1 and set the value back into the cache, then load the data from the database and set that into the cache as well. The pseudocode is as follows:
    v = memcache.get(key);
    if (v == null) {
        if (memcache.add(key_mutex, 3 * 60 * 1000) == true) {
            value = db.get(key);
            memcache.set(key, value);
            memcache.delete(key_mutex);
        } else {
            sleep(50);
            retry();
        }
    } else {
        if (v.timeout <= now()) {
            if (memcache.add(key_mutex, 3 * 60 * 1000) == true) {
                // extend timeout1 and set the old value back, so other
                // threads keep reading it while this one refreshes from db
                v.timeout += 3 * 60 * 1000;
                memcache.set(key, v);
                // load the latest value from db and reset the cache
                v = db.get(key);
                memcache.set(key, v);
                memcache.delete(key_mutex);
            }
        }
    }
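The embedded-timeout idea can also be sketched as runnable Python, using a dict and a threading lock as stand-ins for memcache and the mutex key; LOGICAL_TTL and all helper names are illustrative assumptions, not the article's code.

```python
import time
import threading

cache = {}                  # key -> {"value": ..., "timeout1": ...}
mutex = threading.Lock()    # stand-in for the mutex key
db = {"k": "v1"}            # stand-in storage layer

LOGICAL_TTL = 10            # timeout1 in seconds (assumed, shorter than the
                            # physical cache TTL, which the dict never enforces)

def get(key, now=None):
    now = time.time() if now is None else now
    entry = cache.get(key)
    if entry is None or entry["timeout1"] <= now:
        # logical expiry: only the thread that wins the mutex refreshes
        if mutex.acquire(blocking=False):
            try:
                value = db[key]
                cache[key] = {"value": value, "timeout1": now + LOGICAL_TTL}
                return value
            finally:
                mutex.release()
        if entry is not None:
            return entry["value"]   # others serve the slightly stale value
        time.sleep(0.05)            # no value at all yet: wait and retry
        return get(key, now)
    return entry["value"]
```

The design choice here is the same as in the pseudocode above: because the physical entry outlives timeout1, readers never see an empty cache for a hot key, so the DB is shielded from concurrent reloads.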