
How to solve the problem of Redis cache exception


This article explains in detail how to solve common Redis cache exceptions. The editor finds it very practical and shares it with you as a reference; I hope you get something out of it after reading.

Cache avalanche

Cache avalanche refers to a large number of cache entries expiring at the same time, so that subsequent requests all fall on the database, which then has to absorb a huge number of requests in a short period of time and may collapse under the load.

Solution

1. Set the expiration time of cached data randomly to prevent a large amount of data from expiring at the same time (a minimal sketch appears after this list).

2. When concurrency is not very high, the most frequently used solution is locking and queuing.

3. Add a cache mark to each piece of cached data to record whether the entry is still valid; if the mark shows the entry has expired, refresh the data cache.
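
Below is a minimal sketch of the randomized-expiration idea from point 1, assuming a local Redis instance and the redis-py client; the key names and TTL values are illustrative placeholders rather than anything prescribed by the article.

```python
# A minimal sketch (assumed setup): randomize each key's TTL so that keys
# written at the same time do not all expire at the same moment.
import random
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

BASE_TTL = 600      # 10 minutes of "real" cache lifetime
JITTER_MAX = 300    # up to 5 extra minutes, chosen independently per key

def cache_with_jitter(key: str, value: str) -> None:
    ttl = BASE_TTL + random.randint(0, JITTER_MAX)
    r.setex(key, ttl, value)  # SETEX key ttl value

# Even when many keys are warmed in one batch, their expirations are spread out.
for product_id in range(100):
    cache_with_jitter(f"product:{product_id}", f"details-for-{product_id}")
```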

Cache penetration

Cache penetration refers to requests for data that exists in neither the cache nor the database, so every such request falls through to the database, which then has to absorb a large number of requests in a short period of time and may collapse.

Solution

1. Add validation at the interface layer, such as user authentication checks and basic checks on the id (for example, directly intercepting requests whose id is obviously invalid).

2. Use a Bloom filter (Bloom-Filter), which uses k (k > 1) independent hash functions to complete the membership check of an element within a given space budget and false-positive rate.

Its advantage is that its space efficiency and query time are far better than those of ordinary algorithms; its disadvantages are a certain false-positive rate and the difficulty of deleting elements.

The core idea of the Bloom-Filter algorithm is to use several different hash functions to resolve "conflicts".

Hashing has a conflict (collision) problem: two different URLs may produce the same hash value. To reduce conflicts, we can introduce several more hash functions. If any one of the hash values indicates that an element is not in the set, then the element is definitely not in the set; only when all of the hash functions say the element is in the set can we conclude that it is probably there. This is the basic idea of Bloom-Filter.

Bloom-Filter is generally used to determine whether an element exists in a very large data set.
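
As a rough illustration of the idea described above, here is a minimal, self-contained Bloom filter sketch in Python; the bit-array size, number of hash functions, and the salted-SHA-256 trick are assumptions made for demonstration, not values tuned for a real false-positive target.

```python
# Minimal Bloom filter sketch: k hash functions set/test k bits in a bit array.
import hashlib

class BloomFilter:
    def __init__(self, m: int = 1 << 20, k: int = 5):
        self.m = m                        # number of bits in the array
        self.k = k                        # number of hash functions
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item: str):
        # Derive k "independent" hash values by salting one strong hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        # False means "definitely not present"; True means "probably present".
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True: probably present
print(bf.might_contain("user:99"))   # False: definitely absent (with high probability)
```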

Cache breakdown

Cache breakdown means that the data is not in the cache but does exist in the database (usually because the cache entry has just expired). Because there are many concurrent requests at that moment, they all miss the cache and go to the database for the data at the same time, which instantly increases the pressure on the database. Unlike a cache avalanche, cache breakdown is about many requests concurrently querying the same piece of data, whereas a cache avalanche is about many different keys expiring at once so that many lookups miss the cache and hit the database.

Solution

1. Set hotspot data to never expire

2. Add a mutex lock (the lock-and-rebuild flow is sketched under "Cache hotspot key" below).

Cache warm-up

Cache preheating (warm-up) means loading the relevant cache data into the cache system right after the system goes online. This avoids the situation where a user's request first has to query the database and only then cache the data; instead, users directly query the pre-warmed cache data.

Solution

1. Write a cache-refresh page and trigger it manually when the system goes live.

2. If the amount of data is small, it can be loaded automatically when the project starts (see the sketch after this list).

3. Refresh the cache regularly
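
A minimal sketch of point 2, assuming a local Redis instance and the redis-py client; fetch_hot_products() is a hypothetical stand-in for the real database query that selects the hot data.

```python
# Minimal cache warm-up sketch: preload hot records into Redis at startup.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def fetch_hot_products():
    # Placeholder for a real query such as the 100 best-selling products.
    return [(i, f"details-for-{i}") for i in range(100)]

def warm_up_cache() -> None:
    pipe = r.pipeline()
    for product_id, detail in fetch_hot_products():
        pipe.setex(f"product:{product_id}", 3600, detail)
    pipe.execute()  # one round trip for the whole batch

if __name__ == "__main__":
    warm_up_cache()  # call once at startup, or from a scheduled refresh job
```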

Cache degradation

When traffic increases sharply, when a service has problems (such as slow responses or no response at all), or when non-core services affect the performance of the core flow, it is still necessary to keep the service available, even in a degraded (lossy) form. The system can degrade automatically based on some key data, or switches can be configured to allow manual degradation.

The ultimate goal of cache degradation is to ensure that core services remain available, even if they are lossy. Note that some services cannot be degraded (such as adding items to the shopping cart or checkout).

Before degrading, the system should be reviewed to decide what can be sacrificed in order to protect what matters most, sorting out which services must be preserved at all costs and which can be degraded. For example, a plan can be set up by analogy with log levels:

1. General: for example, some services occasionally time out because of network jitter or because a service is being brought online; these can be degraded automatically.

2. Warning: if the success rate of a service fluctuates over a period of time (for example, between 95% and 100%), it can be degraded automatically or manually, and an alarm should be sent.

3. Error: for example, if the availability rate drops below 90%, or the database connection pool is exhausted, or the number of visits suddenly soars to the maximum threshold the system can withstand, the service can be degraded automatically or manually depending on the situation.

4. Serious error: for example, if the data is wrong for special reasons, an emergency manual degradation is required.

The purpose of service degradation is to prevent a Redis failure from causing an avalanche of requests onto the database. Therefore, a service degradation strategy can be adopted for unimportant cached data. For example, a common practice is that when there is a problem with Redis, a default value is returned to the user directly instead of querying the database.
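
A minimal sketch of this fallback idea, assuming the redis-py client; the key name, default value, and timeout are hypothetical choices made only for the example.

```python
# Minimal degradation sketch: if Redis is unavailable, return a safe default
# instead of forwarding the request to the database.
import redis

r = redis.Redis(host="localhost", port=6379, db=0, socket_timeout=0.2)

DEFAULT_RECOMMENDATIONS = b"[]"   # lossy but safe fallback for a non-core feature

def get_recommendations(user_id: int) -> bytes:
    try:
        cached = r.get(f"recommend:{user_id}")
        if cached is not None:
            return cached
    except redis.RedisError:
        # Redis is down or too slow: degrade instead of hitting the database.
        return DEFAULT_RECOMMENDATIONS
    # Cache miss with a healthy Redis: the normal path would query the DB here.
    return DEFAULT_RECOMMENDATIONS
```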

Cache hotspot key

When a key in the cache (such as the key for a promotional product) expires at a certain point in time while there is a large number of concurrent requests for that key, those requests all find that the cache has expired and generally load the data from the back-end DB and set it back into the cache. The large number of concurrent requests may instantly overwhelm the back-end DB.

Solution

Put a lock around the cache rebuild: if the key does not exist in the cache, acquire the lock, query the DB, write the result into the cache, and then release the lock. Other processes that find the lock held simply wait; after the lock is released, they either return the now-cached data or go on to query the DB themselves.
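
Here is a minimal sketch of this lock-and-rebuild flow, using Redis SET with NX and EX as the mutex and assuming the redis-py client; load_from_db() and the key and lock names are hypothetical, and a production version would also need to handle lock ownership safely (for example, a random token released via a Lua script).

```python
# Minimal mutex sketch for rebuilding a hot key: only one process queries the
# DB and refills the cache; the others wait and then re-read the cache.
import time
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def load_from_db(key: str) -> str:
    return f"value-for-{key}"      # placeholder for the real database query

def get_with_mutex(key: str, ttl: int = 300) -> str:
    value = r.get(key)
    if value is not None:
        return value.decode()

    lock_key = f"lock:{key}"
    while True:
        # SET lock_key 1 NX EX 10: only one process wins the right to rebuild.
        if r.set(lock_key, "1", nx=True, ex=10):
            try:
                value = load_from_db(key)
                r.setex(key, ttl, value)
                return value
            finally:
                r.delete(lock_key)
        # Lost the race: wait briefly, then re-check the cache.
        time.sleep(0.05)
        value = r.get(key)
        if value is not None:
            return value.decode()
```

The short lock TTL (ex=10) is there so that a crashed lock holder cannot leave the lock stuck forever, which is also why the waiting branch re-checks the cache in a loop rather than blocking indefinitely.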

This is the end of this article on "how to solve the problem of Redis cache exception". I hope the above content is of some help to you and that you have learned something new. If you think the article is good, please share it so more people can see it.
