2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
### I. Preface
In day-to-day development we almost always use a database for data storage. Ordinary workloads rarely involve high concurrency, so this looks fine at first. But once large traffic volumes are involved, such as flash sales or a sudden spike of visits to the home page, a system that relies on the database alone hits a serious performance bottleneck: the database is disk-bound, and disk reads and writes are slow. When tens of thousands of requests arrive in an instant, the system must complete tens of thousands of reads and writes within a very short window, which is often more than the database can bear. It is then very easy for the database to be overwhelmed, ultimately taking the whole service down.
To overcome this, projects typically introduce NoSQL technology: in-memory data stores that also offer some degree of persistence. Redis is one such NoSQL technology, but introducing Redis brings its own potential problems: cache penetration, cache breakdown, and cache avalanche. This article analyzes each of the three in some depth.
### II. First Look at the Three Problems
**Cache penetration:** the requested key has no corresponding data in the data source, so every request for it misses the cache and falls through to the data source, which can eventually be crushed. For example, querying user information with a nonexistent user id finds nothing in either the cache or the database; an attacker who exploits this hole can overload the database.

**Cache breakdown:** the key's data exists, but its entry in Redis has expired. If a large number of concurrent requests arrive at that moment, they all see the expired cache, load the data from the backend DB, and write it back to the cache; that burst of concurrent loads can crush the backend DB in an instant.

**Cache avalanche:** the cache server restarts, or a large number of cache entries expire within the same short time window; when they all fail together, the backend systems (such as the DB) come under heavy pressure.
### III. Cache Penetration Solutions
Consider data that is certain to be absent from both the cache and the database. Because the cache is only written on a miss, and, for fault-tolerance reasons, nothing is written back when the storage layer returns no data, every request for this nonexistent data goes all the way to the storage layer, defeating the purpose of the cache.
Several approaches effectively solve cache penetration. The most common is a Bloom filter: hash every key that could legitimately exist into a sufficiently large bitmap, so that a key which definitely does not exist is intercepted by the bitmap, sparing the underlying storage from the query. There is also a simpler, cruder method (the one we use): if a query returns empty (whether the data does not exist or the system failed), cache the empty result anyway, but with a very short expiration time, no longer than five minutes.
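The Bloom-filter idea can be sketched as follows. This is a minimal illustrative Python version, not the author's implementation; production systems typically use something like the RedisBloom module or Guava's `BloomFilter`, and the `m_bits` / `k_hashes` parameters here are demo choices.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions set in an m-bit array."""
    def __init__(self, m_bits=1 << 20, k_hashes=5):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(self.m // 8)

    def _positions(self, key):
        # Derive k independent positions from k salted hashes of the key.
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False means "definitely absent": the request can be rejected
        # without touching the cache or the database.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))
```

At startup you would load every legitimate key (for example, all existing user ids) into the filter; requests for keys the filter rejects never reach the storage layer. False positives are possible, false negatives are not.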
Pseudocode for the crude approach (the duplicated cache read in the original has been removed):

```
// Pseudocode: cache empty results with a short expiration
public object GetProductListNew() {
    int cacheTime = 30;
    String cacheKey = "product_list";
    String cacheValue = CacheHelper.Get(cacheKey);
    if (cacheValue != null) {
        return cacheValue;
    }
    // Not in the cache: query the database
    cacheValue = GetProductListFromDB();
    if (cacheValue == null) {
        // Nothing in the database either: cache a default empty value
        cacheValue = string.Empty;
    }
    CacheHelper.Add(cacheKey, cacheValue, cacheTime);
    return cacheValue;
}
```
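For illustration, here is a runnable Python equivalent of the same idea, using an in-memory dict in place of the real cache and a stub `DB` dict in place of the database (both names are stand-ins, not from the original):

```python
import time

CACHE = {}                 # key -> (value, expires_at); stands in for Redis
DB = {"p1": "Widget"}      # stands in for the product table

def cache_get(key):
    entry = CACHE.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() > expires_at:
        del CACHE[key]     # lazily drop expired entries
        return None
    return value

def get_product(key):
    value = cache_get(key)
    if value is not None:
        return value
    row = DB.get(key)      # database lookup on a cache miss
    if row is None:
        # Cache the miss with a short TTL (60 s here) so repeated
        # requests for a nonexistent key stop hitting the database.
        CACHE[key] = ("", time.time() + 60)
        return ""
    CACHE[key] = (row, time.time() + 30 * 60)
    return row
```

Note that the empty marker must expire quickly; otherwise a key that is created later would keep serving the cached "empty" result for too long.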
### IV. Cache Breakdown Solutions
Some keys are accessed with extremely high concurrency at particular moments; they are very "hot" data. For such keys you must consider the problem of the cache being "broken down".
Use a mutex (mutex key)
The common industry practice is a mutex. In short, when the cache expires (the fetched value is empty), do not load the DB immediately. Instead, first use a cache operation that returns a success flag, such as Redis's SETNX or Memcache's ADD, to set a mutex key. If that operation succeeds, perform the DB load and write the result back to the cache; otherwise, retry the whole cache-get method.
SETNX stands for "SET if Not eXists": the key is set only when it does not already exist, which makes it usable as a lock.
```
public String get(key) {
    String value = redis.get(key);
    if (value == null) { // the cached value has expired
        // Set a 3-minute timeout so that if the del fails,
        // the next expiration can still load the DB
        if (redis.setnx(key_mutex, 1, 3 * 60) == 1) { // lock acquired
            value = db.get(key);
            redis.set(key, value, expire_secs);
            redis.del(key_mutex);
            return value;
        } else {
            // Another thread is already loading the DB and refreshing
            // the cache; wait briefly, then retry the cache read
            sleep(50);
            return get(key); // retry
        }
    } else {
        return value;
    }
}
```
Memcache version:
```
if (memcache.get(key) == null) {
    // 3-minute timeout to survive a crashed mutex holder
    if (memcache.add(key_mutex, 3 * 60 * 1000) == true) {
        value = db.get(key);
        memcache.set(key, value);
        memcache.delete(key_mutex);
    } else {
        sleep(50);
        retry();
    }
}
```
Other approaches: contributions welcome.
### V. Cache Avalanche Solutions
The difference from cache breakdown is that an avalanche involves many keys, whereas breakdown concerns a single key.
[Diagrams of normal cache reads from Redis, and of the instant of mass cache invalidation, are omitted here.]
The avalanche effect of mass cache failure is devastating for the underlying system. Most system designers consider locking or queueing to guarantee that a large number of threads do not read from and write to the database at once, so that massive concurrency never lands on the underlying storage when the cache fails. There is also a simpler fix: spread out the cache expiration times. For example, add a random offset to the base expiration, say 1 to 5 minutes, so that expiration times rarely coincide and collective invalidation events become unlikely.
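The randomized-expiration idea from the paragraph above can be sketched in a few lines of Python (the function name and the exact jitter window are illustrative choices, not from the original):

```python
import random

def jittered_ttl(base_seconds=30 * 60, max_jitter_seconds=5 * 60):
    # Add a random 0..max_jitter offset to the base TTL so that keys
    # written at the same moment do not all expire at the same moment.
    return base_seconds + random.randint(0, max_jitter_seconds)
```

Each key would then be stored with something like `redis.set(key, value, ex=jittered_ttl())` instead of a fixed TTL.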
Lock-queue pseudocode:

```
// Pseudocode: serialize cache rebuilds behind a lock
public object GetProductListNew() {
    int cacheTime = 30;
    String cacheKey = "product_list";
    String lockKey = cacheKey;
    String cacheValue = CacheHelper.get(cacheKey);
    if (cacheValue != null) {
        return cacheValue;
    }
    synchronized (lockKey) {
        cacheValue = CacheHelper.get(cacheKey);   // re-check under the lock
        if (cacheValue == null) {
            // Usually a SQL query
            cacheValue = GetProductListFromDB();
            CacheHelper.Add(cacheKey, cacheValue, cacheTime);
        }
    }
    return cacheValue;
}
```
Locking and queueing relieve pressure on the database but do not improve system throughput. Suppose the key is locked during a cache rebuild under high concurrency: 999 out of 1,000 requests are blocked, users hit timeouts, and the fix is only a stopgap.

Note: in a distributed environment, the lock-queue approach must also solve distributed concurrency, possibly with distributed locks; threads still block and the user experience is poor. It is therefore rarely used in real high-concurrency scenarios.
Cache-mark (expiration flag) pseudocode:

```
// Pseudocode: refresh the cache in the background via an expiration flag
public object GetProductListNew() {
    int cacheTime = 30;
    String cacheKey = "product_list";
    // Cache mark (expiration flag)
    String cacheSign = cacheKey + "_sign";
    String sign = CacheHelper.Get(cacheSign);
    // Get the cached value
    String cacheValue = CacheHelper.Get(cacheKey);
    if (sign != null) {
        return cacheValue; // not expired, return directly
    } else {
        CacheHelper.Add(cacheSign, "1", cacheTime);
        ThreadPool.QueueUserWorkItem((arg) -> {
            // Usually a SQL query
            cacheValue = GetProductListFromDB();
            // TTL is twice the mark's lifetime, accepting a dirty read
            CacheHelper.Add(cacheKey, cacheValue, cacheTime * 2);
        });
        return cacheValue; // possibly stale (empty on the very first request)
    }
}
```
Explanation:

- Cache mark: records whether the cached data has expired. When the mark expires, it triggers a background thread to refresh the cache entry for the actual key.
- Cache data: its expiration time is twice that of the cache mark; for example, the mark lives 30 minutes and the data 60 minutes. When the mark key expires, the actual cache entry can still return the old data to the caller, and the new value is served only after the background thread finishes the update.
To sum up, there are several solutions to a cache avalanche: locking or queueing, updating the cache via an expiration flag, spreading keys across different cache expiration times, and the so-called "second-level cache".
### VI. Summary
For business systems, the answer is always a concrete analysis of the concrete situation: there is no best solution, only the most appropriate one.
Other cache problems, such as a full cache and data loss, are left for self-study. Three keywords to close with: LRU, RDB, and AOF. We usually use an LRU eviction policy to handle memory overflow, and Redis's RDB and AOF persistence mechanisms to keep data safe where circumstances require it.
Related links:
https://blog.csdn.net/zeb_perfect/article/details/54135506
https://blog.csdn.net/fanrenxiang/article/details/80542580
https://baijiahao.baidu.com/s?id=1619572269435584821&wfr=spider&for=pc
https://blog.csdn.net/xlgen157387/article/details/79530877
Original article from the Mi Dou Java WeChat public account: https://mp.weixin.qq.com/s/ksVC1049wZgPIOy2gGziNA