1. Setting Redis to use the LRU algorithm
LRU (Least Recently Used) is one of many cache replacement (eviction) algorithms.
Redis has a maxmemory setting whose purpose is to cap the amount of memory Redis uses. The LRU algorithm Redis applies when that cap is reached is an approximate LRU.
(1) set maxmemory
As mentioned above, maxmemory limits the maximum memory Redis may use. It can be set in several ways; one is the CONFIG SET command, as follows:
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "0"
127.0.0.1:6379> CONFIG SET maxmemory 100MB
OK
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "104857600"
Another way is to modify the configuration file redis.conf:
maxmemory 100mb
Note that on 64-bit systems, setting maxmemory to 0 means Redis memory usage is unlimited, while on 32-bit systems there is an implicit maxmemory limit of 3GB.
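The same setting can also be changed from application code. Below is a minimal sketch using the third-party redis-py client (not part of the article's original example); the host, port, and 100mb value are assumptions for illustration only.

import redis

# Connect to a local Redis instance (host/port are assumptions for this sketch).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Equivalent of: CONFIG SET maxmemory 100mb
r.config_set("maxmemory", "100mb")

# Equivalent of: CONFIG GET maxmemory -> {'maxmemory': '104857600'}
print(r.config_get("maxmemory"))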
When Redis memory usage reaches the specified limit, you need to choose a replacement strategy.
(2) replacement strategy
When Redis memory usage reaches maxmemory, Redis uses the configured maxmemory-policy to decide which old data to evict.
Here are the replacement strategies you can choose:
noeviction: no eviction. Even when memory reaches the limit, nothing is evicted; every command that would increase memory usage returns an error.
allkeys-lru: evict the least recently used keys first, considering all keys, to make room for new data.
allkeys-random: evict random keys, considering all keys, to make room for new data.
volatile-lru: evict the least recently used keys, but only among keys that have an expire set.
volatile-random: evict random keys, but only among keys that have an expire set.
volatile-ttl: evict the keys with the shortest remaining time to live (TTL), only among keys that have an expire set.
It is important to note that:
(1) maxmemory-policy is set the same way as maxmemory: in redis.conf or dynamically via CONFIG SET (see the sketch after these notes).
(2) if no key matches the policy (for example, no keys have an expire set), then the volatile-lru, volatile-random, and volatile-ttl policies behave like noeviction: no key is evicted.
(3) choosing an appropriate eviction policy is important, and the right choice depends mainly on your application's access pattern. You can also change the policy dynamically and use the INFO command to monitor the cache hit rate, then tune the policy accordingly.
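As a rough illustration of notes (1) and (3), the sketch below sets maxmemory-policy via CONFIG SET and derives a hit rate from the keyspace_hits and keyspace_misses counters reported by INFO stats. It assumes the redis-py client from the earlier sketch; the chosen policy value is just an example.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Switch the eviction policy at runtime (allkeys-lru chosen as an example).
r.config_set("maxmemory-policy", "allkeys-lru")

# INFO stats exposes keyspace_hits / keyspace_misses, from which a hit rate follows.
stats = r.info("stats")
hits, misses = stats["keyspace_hits"], stats["keyspace_misses"]
if hits + misses:
    print("hit rate: %.2f%%" % (100.0 * hits / (hits + misses)))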
Generally speaking, some common rules of thumb are:
If some keys are accessed much more often than the rest, choose allkeys-lru so that the least recently used keys are evicted first. If you are not sure which policy to use, allkeys-lru is the recommended default.
If all keys are accessed with roughly equal probability, the allkeys-random policy is a reasonable choice.
If you know the data well enough to give Redis a hint (via expire/TTL) about which keys should be evicted first, choose volatile-ttl.
volatile-lru and volatile-random are often used when a single Redis instance serves both as a cache and as persistent storage; however, it is usually better to run two separate Redis instances for that.
Setting an expire on a key costs some extra memory, so with allkeys-lru you can skip expires altogether and use memory more efficiently (the sketch below shows how expires interact with the volatile-* policies).
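To make the volatile-* caveat concrete, here is a small hedged sketch: only the key written with an expire is a candidate for eviction under volatile-lru, while the key without one can never be evicted by that policy (and, per note (2) above, errors follow once memory is exhausted). Key names and TTL values are arbitrary examples.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
r.config_set("maxmemory-policy", "volatile-lru")

# Candidate for eviction: it carries an expire (60 seconds here, arbitrary).
r.set("session:1", "cached value", ex=60)

# Never evicted by volatile-lru: it has no expire set.
r.set("config:site", "persistent value")

print(r.ttl("session:1"))    # remaining TTL in seconds
print(r.ttl("config:site"))  # -1 means no expire is set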
(3) how the replacement strategy works
It is important to understand how eviction is executed. For example:
A client executes a new command that adds data (such as SET key value).
Redis checks memory usage; if it exceeds maxmemory, some keys are evicted according to the configured policy.
The new command then executes successfully.
Writing data continuously makes memory usage reach or briefly exceed the maxmemory limit, after which the eviction policy brings usage back below the limit.
If a single operation needs a lot of memory at once (such as writing a large set in one go), Redis memory usage may exceed the limit for a noticeable period of time.
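The following sketch illustrates that flow: writes push used_memory past a deliberately tiny maxmemory, and the evicted_keys counter from INFO shows the policy deleting keys to bring usage back under the limit. The 3mb limit and key sizes are arbitrary test values; do not run this against a Redis instance holding real data.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Deliberately tiny limit so eviction triggers quickly (test value only).
r.config_set("maxmemory", "3mb")
r.config_set("maxmemory-policy", "allkeys-lru")

before = r.info("stats")["evicted_keys"]
for i in range(10000):
    r.set("key:%d" % i, "x" * 1024)  # ~1 KB per value

mem = r.info("memory")
print("used_memory:", mem["used_memory"], "maxmemory:", mem["maxmemory"])
print("keys evicted during the loop:", r.info("stats")["evicted_keys"] - before)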
(4) approximate LRU algorithm
LRU in Redis is not a strict LRU implementation but an approximate one, mainly to save memory and improve performance. Redis has a configuration option, maxmemory-samples: when a key must be evicted, Redis takes a sample of that many keys and evicts the least recently used key among them. The default is 5, as follows:
maxmemory-samples 5
By adjusting the number of samples you can trade eviction speed against LRU accuracy.
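To show the idea behind sampling, here is a conceptual Python sketch (not Redis's actual C implementation): the eviction step picks maxmemory-samples random keys and removes the one with the oldest access time. A larger sample makes the choice closer to true LRU at the cost of more work per eviction.

import random
import time

access_time = {}       # key -> last access timestamp (stand-in for Redis's per-key LRU clock)
MAXMEMORY_SAMPLES = 5  # analogous to the maxmemory-samples setting

def touch(key):
    """Record an access, as Redis does when a key is read or written."""
    access_time[key] = time.monotonic()

def evict_one():
    """Approximate LRU: sample a few keys, evict the least recently used of the sample."""
    sample = random.sample(list(access_time), min(MAXMEMORY_SAMPLES, len(access_time)))
    victim = min(sample, key=access_time.get)
    del access_time[victim]
    return victim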
The reason Redis does not use a true LRU implementation is to save memory. Although approximate, it is almost equivalent in practice. The following figure compares Redis's approximate LRU implementation with a theoretical LRU implementation:
The test first imports a given number of keys into Redis and then accesses them in order, from the first key to the last, so under LRU the keys accessed first should be the first ones evicted. It then adds 50% more keys, forcing 50% of the old keys to be evicted.
In the figure, three kinds of points form three regions:
The light gray points are the keys that were evicted.
The gray points are the keys that were not evicted.
The green points are the newly added keys.
As expected, the theoretical LRU implementation evicts exactly the oldest 50% of the keys, while Redis's approximate LRU evicts older keys only with a certain probability.
You can also see that with a sample size of 5, Redis 3.0 does much better than Redis 2.8: Redis 2.8 retains many keys that should have been evicted. With a sample size of 10, Redis 3.0 comes very close to a true LRU implementation.
LRU is essentially a model for predicting which data will be accessed in the future. If the actual access pattern is close to that expectation, for example a power-law distribution, the approximate LRU implementation handles it well.
In the simulation, under a power-law access pattern, the gap between theoretical LRU and Redis's approximate LRU is very small or nonexistent.
If you set maxmemory-samples to 10, Redis comes very close to true LRU behavior at the cost of extra CPU; you can check the cache hit ratio to see whether the difference matters for you.
You can adjust the number of samples dynamically with CONFIG SET maxmemory-samples and run your own tests to verify this, as in the sketch below.
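A minimal way to run such a test, again assuming the redis-py client and a throwaway instance: change maxmemory-samples, reset the hit/miss counters, replay your workload, and compare the resulting hit rates from INFO stats. The workload itself is not shown and is up to you.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def hit_rate():
    s = r.info("stats")
    total = s["keyspace_hits"] + s["keyspace_misses"]
    return s["keyspace_hits"] / total if total else 0.0

for samples in (5, 10):
    r.config_set("maxmemory-samples", samples)
    r.config_resetstat()  # reset hit/miss counters so each run is measured separately
    # ... replay your real access pattern here, then compare:
    print("maxmemory-samples=%d hit rate=%.2f%%" % (samples, 100 * hit_rate()))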
2. The implementation of LRU