When Redis is used as a cache, it can automatically evict old data as new data is added once memory is full. Memcached behaves this way by default, and most developers are familiar with it.
LRU is the only eviction algorithm supported by Redis. This article details the maxmemory directive used to limit memory usage and takes a closer look at the approximate LRU algorithm that Redis uses.
maxmemory configuration directive
maxmemory is used to specify the maximum amount of memory Redis may use. It can be set in the redis.conf file or changed dynamically at runtime with the CONFIG SET command.
For example, to set a memory limit of 100MB, you can configure it in the redis.conf file as follows:
maxmemory 100mb
Setting maxmemory to 0 means there is no memory limit. On 32-bit systems, however, there is an implicit limit of 3GB.
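The same limit can also be applied at runtime. Below is a minimal sketch using the redis-py client (the client library, host, and port here are illustrative assumptions, not part of the original article):

import redis

# Connect to a local Redis instance (assumed to be listening on the default port).
r = redis.Redis(host='localhost', port=6379)

# Equivalent to "maxmemory 100mb" in redis.conf, but applied at runtime.
r.config_set('maxmemory', '100mb')

# Read the setting back; CONFIG GET reports the value in bytes.
print(r.config_get('maxmemory'))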
When memory usage reaches the limit and new data needs to be stored, Redis either returns an error or deletes some old data, depending on the configured policy.
eviction policy
When the maximum memory limit (maxmemory) is reached, Redis decides what to do according to the maxmemory-policy configuration directive.
The current version, Redis 3.0, supports the following policies:
noeviction: Do not evict. When the maximum memory limit is reached, commands that would consume more memory simply return an error. Most write commands consume more memory (with a few exceptions, such as DEL).
allkeys-lru: Applies to all keys; evict the least recently used (LRU) keys first.
volatile-lru: Applies only to keys with an expiration set; evict the least recently used (LRU) keys first.
allkeys-random: Applies to all keys; evict keys at random.
volatile-random: Applies only to keys with an expiration set; evict keys at random.
volatile-ttl: Applies only to keys with an expiration set; evict the keys with the shortest time to live (TTL) first.
If no keys satisfy the prerequisite of having an expiration set, the volatile-lru, volatile-random, and volatile-ttl policies behave essentially the same as noeviction.
You need to choose the appropriate eviction policy based on the characteristics of the system. The policy can also be changed dynamically at runtime with the CONFIG SET command, and cache misses and hits can be monitored with the INFO command.
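For example, the following sketch (again using redis-py, with the same assumed connection details) switches the policy at runtime and reads the hit and miss counters from the stats section of INFO:

import redis

r = redis.Redis(host='localhost', port=6379)

# Equivalent to "maxmemory-policy allkeys-lru" in redis.conf, applied at runtime.
r.config_set('maxmemory-policy', 'allkeys-lru')

# The stats section of INFO exposes counters useful for judging how well the policy fits.
stats = r.info('stats')
hits, misses = stats['keyspace_hits'], stats['keyspace_misses']
print('hit ratio:', hits / (hits + misses) if hits + misses else None)
print('evicted keys:', stats['evicted_keys'])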
Generally speaking:
If hot and cold data are clearly separated, that is, a subset of keys is read and written much more frequently than the rest, the allkeys-lru policy is recommended. allkeys-lru is also a good choice if you are unsure of the specific access pattern of your workload.
If all keys are read and written cyclically, or each key is accessed with roughly the same frequency, you can use the allkeys-random policy, since the probability of accessing any element is similar.
If you want Redis to pick keys for deletion based on TTL, use the volatile-ttl policy.
The volatile-lru and volatile-random policies are mainly useful for instances that hold both cached and persistent keys. In general, it is better to run two separate Redis instances for such scenarios.
It is worth mentioning that setting an expiration consumes extra memory, so the allkeys-lru policy uses memory more efficiently, because there is no need to set expiration times just to drive eviction.
Internal implementation of eviction
The eviction process can be understood as follows:
A client executes a command that adds data and causes Redis to consume more memory.
Redis checks memory usage and, if the maxmemory limit is exceeded, evicts some keys according to the configured policy.
The next command is processed, and so on.
In this process, memory usage repeatedly approaches the limit, exceeds it, and then falls back below it as keys are evicted.
If a single command causes a large amount of memory to be allocated (for example, saving a big set under a new key), memory usage may noticeably exceed the maxmemory limit for some time.
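Purely as a conceptual illustration (a toy model, not Redis's actual implementation), the cycle can be sketched like this, with a Python dict standing in for the keyspace and the number of entries standing in for memory usage:

# Toy model: a dict stands in for the keyspace, the entry count for memory usage.
MAXMEMORY = 100          # pretend limit, measured in stored entries
store = {}

def set_key(key, value):
    # 1. The write command adds data and consumes more "memory".
    store[key] = value
    # 2. After the command, evict keys until usage is back under the limit.
    while len(store) > MAXMEMORY:
        victim = next(iter(store))   # placeholder policy: evict the oldest inserted key
        del store[victim]
    # 3. Processing then continues with the next command.

for i in range(150):
    set_key('key:%d' % i, 'x')
print(len(store))    # stays at the limit once it has been reached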
approximate LRU algorithm
Redis does not implement a complete LRU algorithm, so the key it evicts automatically is not necessarily the one that best satisfies the LRU property. Instead, the approximate LRU algorithm samples a small number of keys and evicts the one with the oldest access time among them.
The eviction algorithm has been significantly improved since Redis 3.0, which keeps a pool of candidate keys for eviction. This improves the efficiency of the algorithm and makes its behavior much closer to real LRU.
In Redis's LRU algorithm, the accuracy can be tuned by setting the number of keys sampled per eviction. It is configured with the following directive:
maxmemory-samples 5
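To make the sampling idea concrete, here is a small self-contained Python sketch (an illustration of the approach only, not Redis's internal code); last_access maps each key to its most recent access time and SAMPLES plays the role of maxmemory-samples:

import random
import time

SAMPLES = 5                  # plays the role of maxmemory-samples

last_access = {}             # key -> timestamp of the most recent access

def touch(key):
    # Record an access; in Redis this corresponds to updating the key's LRU clock.
    last_access[key] = time.monotonic()

def evict_one():
    # Approximate LRU: inspect a small random sample of keys and evict
    # the one with the oldest access time among that sample.
    sample = random.sample(list(last_access), min(SAMPLES, len(last_access)))
    victim = min(sample, key=last_access.get)
    del last_access[victim]
    return victim

for i in range(20):
    touch('key:%d' % i)
print('evicted:', evict_one())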
Why not use a full LRU implementation? The reason is to save memory, yet the behavior of Redis remains basically equivalent to LRU. Below is a graph comparing the behavior of Redis's LRU approximation with that of full LRU.
In the test, keys are accessed sequentially from the first to the last, so the first keys are the best candidates for eviction.
Three kinds of dots can be seen in the graph, forming three distinct bands.
The light gray band represents objects that were evicted.
The gray band represents objects that were not evicted.
The green band represents objects that were added later.
In a pure LRU implementation, the entire older half of the keys would be evicted. Redis's LRU algorithm instead evicts older keys with a higher probability.
As you can see, with 5 samples Redis 3.0 performs much better than Redis 2.8. Redis 2.8 is still reasonable, and the most recently accessed keys largely remain in memory. With 10 samples, Redis 3.0 comes very close to pure LRU.
Note that LRU is only a probabilistic model for predicting how likely a key is to be accessed in the future. Moreover, if data access follows a power-law distribution, LRU performs well for most requests.
In simulations with power-law access, the difference between pure LRU and the Redis approximation was minimal, or not visible at all.
Of course, you can increase the sample size to 10 at the cost of a little extra CPU, so that the result comes even closer to true LRU, and use the cache miss statistics to judge the difference.
Setting the sample size at runtime is easy, with the command CONFIG SET maxmemory-samples <count>.
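The equivalent call from redis-py (connection details again assumed) looks like this:

import redis

r = redis.Redis(host='localhost', port=6379)

# Raise the sample size from the default of 5 to 10 for higher LRU accuracy,
# at the cost of a little extra CPU per eviction.
r.config_set('maxmemory-samples', 10)
print(r.config_get('maxmemory-samples'))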
That covers the details of how Redis's maxmemory limit and cache eviction policies are configured.