How to solve the three problems of caching process in Redis

2025-02-24 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 report --

This article explains how to solve the three major problems that arise when caching with Redis: cache penetration, cache breakdown, and cache avalanche. The methods introduced here are simple and practical, so let's walk through them.

I. Cache penetration

Cache penetration occurs when a user queries for data that does not exist in the database: the query returns null, and the null result is not stored in the cache. If a user keeps issuing such requests, they will never hit the cache, so every query falls through to the database, which can overwhelm it.

public Object getGoods(Long goodsId) {
    // Get goods information from Redis
    Object goodsInfo = redisTemplate.opsForValue()
            .get(String.valueOf(goodsId));
    if (goodsInfo != null) {
        return goodsInfo;
    }
    // Query goods information from the database and store it in Redis
    goodsInfo = goodsDao.selectByGoodsId(goodsId);
    if (goodsInfo != null) {
        redisTemplate.opsForValue()
                .set(String.valueOf(goodsId), goodsInfo);
    }
    return goodsInfo;
}

Assuming valid goodsId values are never negative, a request with goodsId = -1 will definitely not be in the cache. Every such request goes to the database, the query returns null, and nothing is stored in Redis, so the next identical request repeats the whole cycle.

Solution:

1) Intercept such unreasonable requests upstream through user authentication, parameter validation, and so on;

2) When the database query result is empty, cache a placeholder as well, but with a short expiration time so that it does not crowd out normal cached data.

public Object getGoods(Long goodsId) {
    // Parameter validation: reject ids that cannot exist (solution 1)
    if (goodsId == null || goodsId < 0) {
        return null;
    }
    // Get goods information from Redis (an empty-string placeholder
    // means "known miss" and is returned without touching the database)
    Object goodsInfo = redisTemplate.opsForValue()
            .get(String.valueOf(goodsId));
    if (goodsInfo != null) {
        return goodsInfo;
    }
    // Query goods information from the database and store it in Redis
    goodsInfo = goodsDao.selectByGoodsId(goodsId);
    if (goodsInfo != null) {
        redisTemplate.opsForValue()
                .set(String.valueOf(goodsId), goodsInfo,
                        60, TimeUnit.MINUTES);
    } else {
        // Cache a short-lived empty placeholder for misses (solution 2);
        // RedisTemplate cannot store a literal null value
        redisTemplate.opsForValue()
                .set(String.valueOf(goodsId), "",
                        60, TimeUnit.SECONDS);
    }
    return goodsInfo;
}

II. Cache breakdown

Cache breakdown occurs when a hot key expires while many threads are requesting it at the same time. Because the cached entry has just expired, all of those concurrent requests query the database simultaneously.

Solution:

In most real-world business scenarios, cache breakdown does happen, but it does not put fatal pressure on the database, because typical workloads do not have that level of concurrency. If you are unlucky enough to hit this situation, one option is to set the hot keys to never expire. Another is to use a mutex lock so that only one thread queries the database while the others wait; this reduces system throughput, so weigh it against your actual workload.
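The mutex approach can be sketched as follows. This is a minimal, illustrative example: a ConcurrentHashMap stands in for Redis, and loadFromDb is a hypothetical stand-in for the real database query. In production the lock would typically be a distributed lock (for example, Redis SET with NX and PX options) rather than a JVM-local one.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class BreakdownDemo {
    // In-memory stand-ins for Redis and the database (illustration only)
    static final Map<Long, Object> cache = new ConcurrentHashMap<>();
    static final AtomicInteger dbHits = new AtomicInteger();
    static final Object lock = new Object();

    static Object loadFromDb(Long goodsId) {   // hypothetical DB query
        dbHits.incrementAndGet();
        return "goods-" + goodsId;
    }

    static Object getGoods(Long goodsId) {
        Object goodsInfo = cache.get(goodsId);
        if (goodsInfo != null) {
            return goodsInfo;
        }
        synchronized (lock) {                  // only one thread rebuilds the entry
            goodsInfo = cache.get(goodsId);    // double-check: another thread may
            if (goodsInfo != null) {           // have repopulated it while we waited
                return goodsInfo;
            }
            goodsInfo = loadFromDb(goodsId);
            cache.put(goodsId, goodsInfo);
            return goodsInfo;
        }
    }
}
```

Even if many threads arrive just after the key expires, only the first one reaches the database; the rest find the rebuilt entry on the double-check inside the lock.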

III. Cache avalanche

A cache avalanche happens when data is never loaded into the cache, or a large portion of the cache expires or fails at the same time, so that all requests go to the database. The database's CPU and memory are overloaded, and it may even go down entirely.

A simple avalanche process:

1) Large-scale failure of the Redis cluster;

2) the cache is gone, but a large number of requests still hit the Redis cache service;

3) after those Redis requests fail, the requests are redirected to the database;

4) database load spikes, and the database is overwhelmed;

5) since most application services depend on the database and Redis, the failure quickly cascades through the server cluster, and eventually the entire system collapses.

Solution:

Ex ante: Highly available cache

A highly available cache prevents a full cache outage. Even if individual nodes, machines, or entire data centers go down, the system can still serve requests. Redis Sentinel and Redis Cluster both provide this kind of high availability.
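As one illustration, a minimal Redis Sentinel configuration might look like the sketch below; the master name, address, and quorum are placeholders to adapt to your deployment.

```conf
# Monitor a master named "mymaster" at 127.0.0.1:6379;
# at least 2 sentinels must agree before a failover starts.
sentinel monitor mymaster 127.0.0.1 6379 2
# Consider the master down after 5s without a valid reply.
sentinel down-after-milliseconds mymaster 5000
# Abort a failover attempt that takes longer than 60s.
sentinel failover-timeout mymaster 60000
```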

In-process: cache degradation (temporary support)

How do we keep the service available when a surge in traffic causes problems? Circuit-breaker libraries such as Hystrix are widely used (particularly in China): they limit the damage of an avalanche through circuit breaking, degradation, and rate limiting. As long as the database is not down, the system can still respond to some requests. Isn't that how we get through Spring Festival ticket sales on 12306 every year? As long as the site can still respond, you at least have a chance to get a ticket.
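The degradation idea can be sketched without any particular framework: bound the risky call with a timeout and fall back to a default (or stale) value when it fails, instead of letting callers pile up on a struggling database. Here queryDb is a hypothetical stand-in for the real data source.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class DegradeDemo {
    // Daemon threads so the pool never blocks JVM shutdown
    static final ExecutorService pool = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    static String queryDb(long goodsId) {      // hypothetical slow/fragile call
        return "goods-" + goodsId;
    }

    // Degrade: if the call does not finish within timeoutMs,
    // return a default response rather than waiting indefinitely.
    static String getGoodsOrFallback(long goodsId, long timeoutMs) {
        Future<String> f = pool.submit(() -> queryDb(goodsId));
        try {
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            f.cancel(true);
            return "goods-unavailable";        // degraded default response
        }
    }
}
```

A real circuit breaker adds one more step on top of this: after repeated failures it stops calling the database at all for a cool-down period, returning the fallback immediately.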

After the fact: Redis backup and fast warm-up

1) Redis data backup and recovery

2) Quick cache warm-up
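Cache warm-up simply means reloading known hot keys into the cache before traffic arrives, so the first wave of requests does not all hit the database. A minimal sketch, again with a Map standing in for Redis and loadFromDb as a hypothetical database query:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class WarmUpDemo {
    // In-memory stand-in for Redis (illustration only)
    static final Map<Long, Object> cache = new ConcurrentHashMap<>();

    static Object loadFromDb(Long goodsId) {   // hypothetical DB query
        return "goods-" + goodsId;
    }

    // Preload a known list of hot keys (e.g. best sellers) at startup
    // or right after the cache has been restored from a backup.
    static void warmUp(List<Long> hotGoodsIds) {
        for (Long id : hotGoodsIds) {
            cache.put(id, loadFromDb(id));
        }
    }
}
```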

At this point you should have a deeper understanding of how to solve the three major problems of caching with Redis, so go try it out in practice! For more related content, follow us and keep learning!





© 2024 shulou.com SLNews company. All rights reserved.
