Redis is a distributed cache: a storage system that supports multiple Key-Value data structures and serves a variety of scenarios, such as caching, event publish/subscribe, and high-speed queues. Redis is written in ANSI C and provides direct access to data types such as strings (String), hashes (Hash), lists (List), sets (Set, Sorted Set), and streams (Stream). Reads and writes operate on memory, and data can be persisted to disk.
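As a quick illustration of those data types, here is a minimal redis-py sketch (a hypothetical example assuming a local Redis server on the default port; the key names are invented):

import redis

# Assumes a Redis server on localhost:6379; decode_responses returns str instead of bytes
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("page:home", "<html>...</html>")                   # String
r.hset("user:1", mapping={"name": "Ann", "age": "30"})   # Hash
r.rpush("queue:jobs", "job-1", "job-2")                  # List (works as a queue)
r.sadd("tags:redis", "cache", "nosql")                   # Set
r.zadd("leaderboard", {"ann": 100, "bob": 90})           # Sorted Set
print(r.get("page:home"), r.hgetall("user:1"))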
We use Redis constantly during development, so this article walks through a few problems commonly encountered when using it.
1. Redis's expiration strategy and memory eviction mechanism
This question is very important: the answer reveals whether you really know Redis inside and out.
For example, your Redis instance can only hold 5 GB of data, but you write 10 GB into it, so 5 GB must be deleted. How is it deleted? Have you thought about that?
Also: your data has an expiration time set, yet when the time is up, memory usage is still high. Have you thought about why?
Answer: Redis adopts the strategy of periodic deletion + lazy deletion.
Why not use a timed deletion policy?
Timed deletion means using a timer to watch each Key and delete it automatically the moment it expires. Although memory is freed promptly, this consumes CPU resources.
Under heavy concurrent requests, CPU time should go to processing requests rather than deleting Keys, so Redis does not adopt this strategy.
How does periodic deletion + lazy deletion work?
Periodic deletion: by default, Redis checks every 100 ms for expired Keys and deletes any it finds.
Note that Redis does not scan every Key each 100 ms; it randomly samples Keys to check (if it scanned every Key every 100 ms, Redis would grind to a halt).
So if you relied on the periodic deletion policy alone, many expired Keys would never be deleted. This is where lazy deletion comes in.
In other words, when you fetch a Key that has an expiration time set, Redis checks whether it has expired; if it has, Redis deletes it then and there.
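To make the two strategies concrete, here is a minimal Python sketch of the idea using a toy in-memory store (an illustration, not Redis source code; the sample size of 20 is an assumption that mirrors Redis's documented default):

import random
import time

store = {}        # key -> value
expire_at = {}    # key -> absolute expiration timestamp

def periodic_delete(sample_size=20):
    """Runs roughly every 100 ms: randomly sample keys that have a TTL
    and delete the ones that are already expired."""
    keys = random.sample(list(expire_at), min(sample_size, len(expire_at)))
    now = time.time()
    for k in keys:
        if expire_at[k] <= now:
            store.pop(k, None)
            expire_at.pop(k, None)

def lazy_get(key):
    """On read: if the key has expired, delete it and treat it as a miss."""
    if key in expire_at and expire_at[key] <= time.time():
        store.pop(key, None)
        expire_at.pop(key, None)
        return None
    return store.get(key)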
Are there no remaining problems with periodic + lazy deletion?
There are. Suppose periodic deletion never happens to sample a certain Key, and you never request that Key afterwards, so lazy deletion never fires either. Redis's memory will climb higher and higher. That is why a memory eviction mechanism is also needed.
There is a line of configuration in redis.conf:
# maxmemory-policy volatile-lru
This line configures the memory eviction policy (what, you haven't set it? Time to reflect):
◆ noeviction: when memory cannot hold newly written data, new writes report an error. I doubt anyone uses this.
◆ allkeys-lru: when memory cannot hold newly written data, remove the least recently used Key from the whole key space. Recommended, and what our project currently uses.
◆ allkeys-random: when memory cannot hold newly written data, remove a random Key from the whole key space. Hardly anyone should use this; if you must evict, at least evict the least recently used Key rather than a random one.
◆ volatile-lru: when memory cannot hold newly written data, remove the least recently used Key from the set of Keys with an expiration time. This suits setups where Redis serves as both cache and persistent storage. Not recommended.
◆ volatile-random: when memory cannot hold newly written data, remove a random Key from the set of Keys with an expiration time. Still not recommended.
◆ volatile-ttl: when memory cannot hold newly written data, remove the Key with the nearest expiration time from the set of Keys with an expiration time. Not recommended.
PS: if no Key has an expiration time set, the precondition is not met, and the volatile-lru, volatile-random, and volatile-ttl policies behave essentially like noeviction (nothing gets deleted).
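For reference, these settings typically appear together in redis.conf like this (100mb is an arbitrary example limit):

# Cap memory usage; when the cap is hit, evict the least recently used Key
maxmemory 100mb
maxmemory-policy allkeys-lru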
2. Consistency between Redis and the database
Consistency is a classic distributed-systems problem, further divided into eventual consistency and strong consistency. Once you double-write to the database and the cache, inconsistencies are bound to appear.
To answer this question, first accept a premise: data with strong consistency requirements cannot go into the cache. Everything we do can only guarantee eventual consistency.
Moreover, fundamentally, any scheme we adopt can only reduce the probability of inconsistency; it cannot eliminate it entirely. So, again, data with strong consistency requirements should not be cached.
Answer: first, adopt the correct update strategy: update the database first, then delete the cache. Second, since deleting the cache may fail, provide a compensating measure, such as retrying the delete via a message queue.
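A minimal Python sketch of that strategy, assuming redis-py and two hypothetical helpers, update_db() for the database write and mq_send() for the message queue:

import redis

r = redis.Redis()

def update_user(user_id, data):
    update_db(user_id, data)          # 1. update the database first (hypothetical helper)
    try:
        r.delete(f"user:{user_id}")   # 2. then delete the cache entry
    except redis.RedisError:
        # 3. compensation: the delete failed, so hand the key to a message
        # queue so a consumer can retry it later (mq_send is hypothetical)
        mq_send("cache-delete-retry", f"user:{user_id}")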
3. How to deal with cache penetration and cache avalanche
Frankly, small and medium-sized traditional software shops rarely run into these two problems. But on high-concurrency projects with traffic on the order of millions of requests, both issues must be thought through carefully.
Cache penetration: an attacker deliberately requests data that does not exist in the cache, so every request falls through to the database, exhausting the database connections.
Cache penetration solutions:
◆ Use a mutex lock. When the cache misses, acquire the lock first; whoever gets it queries the database and rebuilds the cache, while whoever fails to get it sleeps briefly and retries (see the sketch after this list).
◆ Use an asynchronous update strategy: return immediately whether or not the Key yields a value. Keep an expiration timestamp inside the cached value; when it has passed, spawn a thread asynchronously to read the database and refresh the cache. This requires cache warm-up (loading the cache before the project starts serving traffic).
◆ Provide an interception mechanism that quickly judges whether a request is valid, for example a Bloom filter that maintains the set of legitimate Keys. Check whether the Key carried by a request is legitimate; if not, return immediately.
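A minimal sketch of the mutex approach from the first point, assuming redis-py and a hypothetical load_from_db() helper:

import time
import redis

r = redis.Redis(decode_responses=True)

def get_with_mutex(key, ttl=300, lock_ttl=10):
    while True:
        value = r.get(key)
        if value is not None:
            return value
        # SET NX EX: take the lock only if nobody holds it, with a safety TTL
        # so a crashed holder cannot block everyone forever
        if r.set(f"lock:{key}", "1", nx=True, ex=lock_ttl):
            try:
                value = load_from_db(key)   # hypothetical database read
                r.set(key, value, ex=ttl)   # rebuild the cache
                return value
            finally:
                r.delete(f"lock:{key}")
        time.sleep(0.05)  # lost the race: back off briefly and retry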
Cache avalanche: a large swath of cache entries expires at the same moment, so the next wave of requests all lands on the database, exhausting the database connections.
Cache avalanche solutions:
◆ Add a random offset to each cache expiration time to avoid mass simultaneous expiry (see the jitter sketch after this list).
◆ Use mutex locks, though this scheme noticeably reduces throughput.
◆ Double caching: keep two caches, cache A and cache B. Cache A has a 20-minute expiration time; cache B has no expiration time. Do your own cache warm-up.
Then the read path breaks down as follows: read from cache A and, on a hit, return directly; if A has no data, read from B, return that directly, and asynchronously start an update thread that refreshes both cache A and cache B.
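The first point is the cheapest to apply. A sketch of adding jitter when writing cache entries (the 1200-second base and 300-second spread are arbitrary example values):

import random
import redis

r = redis.Redis()

def set_with_jitter(key, value, base_ttl=1200):
    # Spread expirations out so keys written in the same batch
    # do not all expire at the same instant
    r.set(key, value, ex=base_ttl + random.randint(0, 300))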
4. How to solve concurrent competition for a Key in Redis
The problem: multiple subsystems Set the same Key at the same time. Have you thought about what to watch out for here?
A note first: I searched Baidu in advance and found that the answers basically all recommend Redis's transaction mechanism.
I do not recommend Redis transactions here, because our production environment is basically a Redis cluster, where the data has been sharded.
When a transaction involves operations on multiple Keys, those Keys may not be stored on the same redis-server, so Redis's transaction mechanism is of little practical use.
If operations on this Key do not require any ordering:
In that case, prepare a distributed lock; everyone competes for the lock, and whoever grabs it performs the set operation. Fairly simple, as sketched below.
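A minimal sketch of such a lock with redis-py. The unique token plus the Lua release script guard against deleting a lock that has already expired and been taken over by someone else (the key names are invented for the example):

import uuid
import redis

r = redis.Redis()

# Delete the lock only if we still own it: compare the token atomically
RELEASE = r.register_script(
    "if redis.call('get', KEYS[1]) == ARGV[1] then "
    "return redis.call('del', KEYS[1]) else return 0 end")

def set_under_lock(key, value, lock_ttl=10):
    token = str(uuid.uuid4())
    if r.set(f"lock:{key}", token, nx=True, ex=lock_ttl):  # grab the lock
        try:
            r.set(key, value)
        finally:
            RELEASE(keys=[f"lock:{key}"], args=[token])    # release only our own lock
        return True
    return False  # someone else holds the lock; the caller can retry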
If operations on this Key do require ordering:
Suppose there is a key1: system A needs to set key1 to valueA, system B to valueB, and system C to valueC.
We expect key1's value to change in the order valueA -> valueB -> valueC. In that case, save a timestamp whenever the data is written to the database.
Suppose the timestamps are as follows:
System A key1 {valueA 3:00}
System B key1 {valueB 3:05}
System C key1 {valueC 3:10}
Now suppose system B grabs the lock first and sets key1 to {valueB 3:05}. System A then grabs the lock, finds that valueA's timestamp is earlier than the timestamp already in the cache, and skips its set operation, and so on. A sketch of this check follows.
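A sketch of that timestamp check, assuming each cached value is stored together with its write timestamp (here as a JSON blob, an arbitrary encoding choice) and that the caller already holds the distributed lock from the previous sketch, so the read-compare-write sequence is not raced:

import json
import redis

r = redis.Redis(decode_responses=True)

def set_if_newer(key, value, ts):
    current = r.get(key)
    if current is not None and json.loads(current)["ts"] >= ts:
        return False  # the cached value is at least as new: skip the write
    r.set(key, json.dumps({"value": value, "ts": ts}))
    return True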
Other approaches, such as funneling the set operations through a queue to serialize them, also work. In short, stay flexible.
Summary
This article has covered common Redis questions; most are the ones I like to ask in interviews and when talking shop with other developers. I hope it helps. For a deeper treatment of these topics, you can consult the Huawei Cloud Help Center.