2025-01-30 Update From: SLTechnology News&Howtos
This article explains what happens when Redis runs out of memory. Many people run into this situation in practice, so I hope you read it carefully and come away with something useful.
A server's memory is not unlimited, so it can always be exhausted. When the Redis server runs out of memory, what does Redis do if clients keep sending commands?
Memory recovery
When using Redis, key-value pairs are often only needed for a certain period of time. To prevent this kind of data from occupying memory indefinitely, we can set a validity period on key-value pairs. Redis provides four separate commands for setting a key's expiration time:
expire key ttl: sets the key to expire after ttl seconds.
pexpire key ttl: sets the key to expire after ttl milliseconds.
expireat key timestamp: sets the key to expire at the given Unix timestamp, in seconds.
pexpireat key timestamp: sets the key to expire at the given Unix timestamp, in milliseconds.
PS: whichever command is used, Redis implements them all on top of pexpireat internally. In addition, commands such as set can write the key and add the expiration time in one step (for example, set key value ex 60), which makes setting the value and the expiry atomic.
After the validity period is set, you can query the remaining time to live with the ttl and pttl commands (both commands return -1 if the key exists but has no expiration time, and -2 if the key does not exist):
ttl key: returns the number of seconds until key expires.
pttl key: returns the number of milliseconds until key expires.
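As a rough illustration of these return values, here is a minimal in-memory model of lazy expiration with expire/ttl semantics. This is plain Python, not Redis itself; the TinyStore name and its methods are invented for this sketch:

```python
import time

class TinyStore:
    """A toy key-value store mimicking Redis ttl/pttl return values."""
    def __init__(self):
        self.data = {}     # key -> value
        self.expires = {}  # key -> absolute expiry time in seconds

    def set(self, key, value, ex=None):
        self.data[key] = value
        if ex is not None:
            # like SET ... EX: value and expiry are written together
            self.expires[key] = time.time() + ex
        else:
            self.expires.pop(key, None)

    def _expired(self, key):
        return key in self.expires and time.time() >= self.expires[key]

    def ttl(self, key):
        if key not in self.data or self._expired(key):
            return -2              # key does not exist
        if key not in self.expires:
            return -1              # key exists but has no expiry
        return int(round(self.expires[key] - time.time()))

store = TinyStore()
store.set("a", 1, ex=100)
store.set("b", 2)
print(store.ttl("a"))  # roughly 100
print(store.ttl("b"))  # -1: no expiry set
print(store.ttl("c"))  # -2: key does not exist
```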
Expiration policy
For deleting expired keys, there are generally three strategies:
Timed deletion: set a timer for each key and delete the key as soon as it expires. This strategy is memory-friendly but not CPU-friendly, because every timer consumes CPU resources.
Lazy deletion: never delete a key proactively; each time the key is read, check whether it has expired. If it has, delete it; otherwise return its value. This strategy is not memory-friendly and can waste a lot of memory.
Periodic scanning: scan at regular intervals and delete any expired keys found. This is a compromise between the two strategies above; note that the scan frequency must be tuned to the actual workload. One drawback of this strategy alone is that an expired key may still be returned between scans.
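To make the trade-off concrete, here is a toy sketch in plain Python (names invented, not Redis source) combining lazy deletion with a periodic scan that, like Redis, only examines keys that have an expiry set:

```python
import random
import time

data = {"a": 1, "b": 2, "c": 3, "d": 4}
# Only keys with an expiry live here, so the scan never touches the rest.
expires = {"a": time.time() - 1, "c": time.time() - 1}  # already expired

def get(key):
    """Lazy deletion: check the expiry only when the key is read."""
    if key in expires and time.time() >= expires[key]:
        del data[key], expires[key]
        return None
    return data.get(key)

def periodic_scan(sample_size=20):
    """Periodic scan: sample keys from the expires dict and purge stale ones."""
    sample = random.sample(list(expires), min(sample_size, len(expires)))
    for key in sample:
        if time.time() >= expires[key]:
            del data[key], expires[key]

periodic_scan()
print(sorted(data))  # expired "a" and "c" are gone -> ['b', 'd']
```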
Redis chooses a combination of strategy 2 and strategy 3. However, Redis's periodic scan only examines keys that have an expiration time set, because Redis stores those keys in a separate dict, so it never has to scan all keys:

typedef struct redisDb {
    dict *dict;           /* all keys */
    dict *expires;        /* keys with an expiration time set */
    dict *blocking_keys;  /* keys clients are blocked on, e.g. by BLPOP */
    dict *watched_keys;   /* WATCHed keys */
    int id;               /* database ID */
    /* other fields omitted */
} redisDb;

Elimination strategies
If none of the keys in Redis expires and memory is already full, what does Redis do when clients keep executing commands such as set? Redis provides several elimination strategies to handle this scenario.
First, Redis provides a parameter, maxmemory, to configure its maximum memory usage in redis.conf:

maxmemory

Alternatively, you can change it at runtime with the command config set maxmemory 1GB.
If this parameter is not set, Redis uses at most 3GB of memory on 32-bit operating systems; on 64-bit operating systems there is no limit.
Eight elimination policies are provided in Redis, which can be configured with the parameter maxmemory-policy:
volatile-lru: delete keys that have an expiration time set, according to the LRU algorithm, until enough space is freed. If there is no key left to delete and memory is still insufficient, report an error.
allkeys-lru: delete any key according to the LRU algorithm until enough space is freed. If memory is still insufficient, report an error.
volatile-lfu: delete keys that have an expiration time set, according to the LFU algorithm, until enough space is freed. If memory is still insufficient, report an error.
allkeys-lfu: delete any key according to the LFU algorithm until enough space is freed. If memory is still insufficient, report an error.
volatile-random: randomly delete keys that have an expiration time set until enough space is freed. If memory is still insufficient, report an error.
allkeys-random: randomly delete any key until enough space is freed. If memory is still insufficient, report an error.
volatile-ttl: delete the keys closest to expiring, according to the key object's ttl attribute. If there are none, report an error.
noeviction: the default policy; evict nothing and report an error directly.
PS: the elimination policy can also be changed at runtime with the command config set maxmemory-policy.
LRU algorithm
LRU stands for Least Recently Used: the key that has gone the longest without being accessed is evicted first. This policy is based on access time.
LRU algorithm improved by Redis
In Redis, the traditional LRU algorithm is not used, because the traditional LRU algorithm has two problems:
Additional space is needed for storage.
Some keys may be frequently used overall but not accessed recently, and would therefore be deleted by the LRU algorithm.
To avoid these two problems, Redis modifies the traditional LRU algorithm and deletes keys by sampling.
The configuration file provides an attribute, maxmemory-samples 5, whose default value is 5: five keys are randomly sampled and the least recently used of them is deleted according to the LRU rule. Obviously, the larger the sample size, the more accurate the eviction, but the more CPU it costs.
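The sampling idea can be sketched as follows. This is a plain-Python approximation, not the Redis source; the function name is invented, and the 24-bit clock is omitted for clarity:

```python
import random

def evict_one_sampled_lru(store, samples=5):
    """Pick `samples` random keys and evict the one with the oldest access time.

    `store` maps key -> last-access timestamp (a stand-in for robj->lru).
    """
    candidates = random.sample(list(store), min(samples, len(store)))
    victim = min(candidates, key=lambda k: store[k])  # least recently used
    del store[victim]
    return victim

store = {"k1": 100, "k2": 50, "k3": 900, "k4": 10, "k5": 400, "k6": 700}
# Sampling every key degenerates to exact LRU, so the oldest key is evicted.
victim = evict_one_sampled_lru(store, samples=len(store))
print(victim)  # "k4" has the smallest timestamp
```

With a smaller sample (the default of 5), the evicted key is only probably the least recently used one, which is exactly the accuracy trade-off described above.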
There is a comparison between the sampled LRU algorithm and the traditional LRU algorithm on the Redis official website:
The light gray belt is the deleted object.
Gray bands are objects that have not been deleted.
Green is the added object.
The first image, in the upper left corner, shows the traditional LRU algorithm; you can see that with a sample size of 10 (upper right corner), the result is very close to the traditional LRU algorithm.
How does Redis manage hot data
When we talked about the string object earlier, we mentioned that there is a lru property in the redisObject object:
typedef struct redisObject {
    unsigned type:4;        /* object type (4 bits = 0.5 bytes) */
    unsigned encoding:4;    /* encoding (4 bits = 0.5 bytes) */
    unsigned lru:LRU_BITS;  /* time the object was last accessed (24 bits = 3 bytes) */
    int refcount;           /* reference count; 0 means it can be reclaimed (32 bits = 4 bytes) */
    void *ptr;              /* points to the actual underlying data structure, e.g. SDS (8 bytes) */
} robj;
The lru property is written when the object is created and updated each time the object is accessed. The intuitive approach to deciding whether to delete a key would be to subtract lru from the current timestamp and evict the key with the largest difference first. Redis, however, does not do that. Instead, it maintains a global property, lru_clock, which is updated by the global serverCron function every 100 ms with the current Unix timestamp.
The idle time used to decide eviction is obtained by subtracting the object's lru attribute from lru_clock. Why does Redis do this? Wouldn't reading the system time directly be more accurate?
Because reading the cached global property avoids calling a system function to fetch the time on every update of an object's lru field, which improves efficiency (Redis is full of details like this that push performance as close to the limit as possible).
However, there is another problem. The lru attribute in redisObject has only 24 bits, which can only hold about 194 days' worth of seconds. Once that is exceeded, the value wraps around and counts from 0 again, so the lru attribute of an object may end up larger than the global lru_clock attribute.
Because of this, the calculation also needs to be divided into two situations:
When the global lru_clock > lru, the idle time is lru_clock - lru.
When the global lru_clock < lru, the idle time is lruclock_max (i.e. 194 days) - lru + lru_clock.
Note that this calculation cannot guarantee that the sampled key with the longest idle time is always the one deleted. First, a key going unused for more than 194 days is rare; second, the calculation only goes wrong once lru_clock has wrapped around a second time past the lru attribute. For example, if object A's lru records day 1 and lru_clock's second cycle has already reached day 10, the computed idle time is 10 - 1 = 9 days, when it should really be 194 + 10 - 1 = 203 days. But this situation is rarer still, so while this approach can occasionally evict the wrong key, the algorithm is approximate by nature and the impact is small.
LFU algorithm
LFU stands for Least Frequently Used: the least frequently used key is evicted first, so this policy is based on access frequency. This information is also recorded in the lru property of redisObject.
When the LFU eviction policy is used, the high 16 bits of the lru property record the access time (last decrement time: ldt, in minutes), and the low 8 bits record the access frequency (logistic counter: logc), referred to as counter.
Incrementing the access frequency
The LFU counter is only 8 bits per key, so its maximum value is 255. Redis therefore uses a probability-based logarithmic counter to increment it. Given the old access frequency, when a key is accessed, counter is incremented as follows:
Draw a random number R between 0 and 1.
Subtract the initial value (5 by default) from counter to get a base difference; if this difference is less than 0, use 0. For convenience, call this difference baseval.
Compute the probability P = 1 / (baseval * lfu_log_factor + 1).
If R < P, increment the counter (counter++).
The lfu_log_factor in the formula is called the logarithmic factor. It defaults to 10 and can be controlled with a parameter:
lfu-log-factor 10
The figure below shows the relationship between the logarithmic factor lfu_log_factor and the growth of counter. As you can see, when the logarithmic factor lfu_log_factor is 100, it takes about 10M (10 million) accesses to raise the counter to 255, while the default of 10 lets counter reach its upper limit of 255 after about 1M (1 million) accesses, which is sufficient in most scenarios.
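The probabilistic increment can be sketched in plain Python. The constants mirror the Redis defaults (initial value 5, lfu-log-factor 10), but the function shape is an approximation for illustration, not the Redis source:

```python
import random

LFU_INIT_VAL = 5      # initial counter for new keys (Redis default)
LFU_LOG_FACTOR = 10   # lfu-log-factor (Redis default)

def lfu_log_incr(counter):
    """Increment an 8-bit LFU counter with logarithmically decaying probability."""
    if counter == 255:
        return counter                        # saturated at the 8-bit maximum
    r = random.random()                       # R in [0, 1)
    baseval = max(counter - LFU_INIT_VAL, 0)  # base difference, floored at 0
    p = 1.0 / (baseval * LFU_LOG_FACTOR + 1)  # P = 1 / (baseval * factor + 1)
    return counter + 1 if r < p else counter

# Simulate 1000 accesses: the counter grows far more slowly than the hit count.
counter = LFU_INIT_VAL
for _ in range(1000):
    counter = lfu_log_incr(counter)
print(counter)  # a small value, nowhere near 1000
```

Because P shrinks as counter grows, each extra increment takes roughly lfu_log_factor times baseval accesses on average, which is what makes the 8-bit counter usable across millions of hits.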
Decreasing access frequency
If counter only ever increases, sooner or later every key will reach 255. A counter that only grows cannot fully reflect how hot a key is, so when a key is not accessed for some time, its counter needs to be decreased accordingly.
The decay rate of counter is controlled by the parameter lfu-decay-time, in minutes, which defaults to 1. A default of 1 means that if a key is not accessed for N minutes, its counter is decremented by N.
lfu-decay-time 1
The specific algorithm is as follows:
Get the current timestamp, convert it to minutes, and take its low 16 bits (call this value now for the calculation that follows).
Extract the high 16 bits of the object's lru property (call this value ldt).
When ldt > now, assume one clock cycle (16 bits, maximum 65535) has elapsed and take the difference 65535 - ldt + now; when ldt <= now, take the difference now - ldt. Either way, the result is the number of idle minutes, and counter is then decremented by idle minutes / lfu-decay-time.
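This decay step can be sketched in plain Python (function names invented; the wraparound logic mirrors the 16-bit description above):

```python
LFU_DECAY_TIME = 1  # lfu-decay-time (Redis default, in minutes)

def ldt_elapsed_minutes(now, ldt):
    """Minutes since the last decrement, handling 16-bit wraparound."""
    if ldt > now:                  # the 16-bit minute clock wrapped past 65535
        return 65535 - ldt + now
    return now - ldt

def lfu_decay(counter, now, ldt):
    """Decrement counter once per elapsed period of lfu-decay-time minutes."""
    periods = ldt_elapsed_minutes(now, ldt) // LFU_DECAY_TIME
    return max(counter - periods, 0)  # the counter never goes below 0

print(lfu_decay(100, now=30, ldt=20))    # 10 idle minutes -> 90
print(lfu_decay(100, now=5, ldt=65530))  # wrapped: 65535 - 65530 + 5 = 10 -> 90
```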