How does Redis delete expired keys? This question comes up often in everyday study and work. The following reference material walks through the answer; hopefully you will get something useful out of it.
There are three possible answers to the question of how Redis deletes expired keys, each representing a different deletion strategy:
Timed deletion: when setting a key's expiration time, also create a timer so that the key is deleted as soon as its expiration time arrives.
Lazy deletion: let expired keys sit untouched, but every time a key is fetched from the key space, check whether it has expired; delete it if it has, and return it as usual if it has not.
Periodic deletion: every so often, the program scans the databases and deletes the expired keys it finds. The algorithm decides how many expired keys to delete and how many databases to check.
Of the three strategies, the first and the third are active deletion policies, while the second is a passive deletion policy.
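Before looking at each strategy in detail, it helps to see what expiration looks like from a client's point of view: whichever strategy the server uses internally, the client just sets a TTL and later finds the key gone. A minimal sketch using the third-party redis-py client (the client library and the locally running server are assumptions for illustration, not something this article depends on):

import time

import redis  # third-party redis-py client, assumed to be installed

r = redis.Redis(host="localhost", port=6379, db=0)

# Store a key with a 2-second expiration time.
r.set("session:42", "some-value", ex=2)
print(r.ttl("session:42"))     # remaining time to live in seconds, e.g. 2

time.sleep(3)

# Once the TTL has passed, the key is reported as gone, regardless of
# whether the server removed it with a timer, lazily, or in a periodic sweep.
print(r.exists("session:42"))  # 0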
Timed deletion:
The timed deletion policy is the most memory-friendly: by using a timer, it ensures that expired keys are deleted as quickly as possible, promptly freeing the memory they occupy.
On the other hand, the timed deletion policy is the least friendly to CPU time: when there are many expired keys, deleting them can consume a considerable share of CPU time. When memory is not tight but CPU time is, spending CPU time deleting expired keys that have nothing to do with the current task inevitably hurts the server's response time and throughput.
For example, if a large number of command requests are waiting to be processed and the server is not short of memory, the server should prioritize spending CPU time on the clients' command requests rather than on deleting expired keys.
In addition, creating a timer requires the time-event mechanism of the Redis server, and the current implementation of time events, an unordered linked list in which finding an event takes O(N) time, cannot efficiently handle a large number of time events.
Therefore, it is not realistic at this stage for the server to create a large number of timers in order to implement the timed deletion policy.
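To make this concrete, here is a rough sketch of the timer-per-key idea in plain Python; it is only an illustration of the concept, not how Redis is implemented. Every key with a TTL gets its own timer object, so a large number of expiring keys means a large number of timers, which is exactly the cost described above.

import threading

store = {}
lock = threading.Lock()

def set_with_timer(key, value, ttl_seconds):
    # Timed deletion: schedule a dedicated timer that removes the key
    # as soon as its expiration time arrives.
    with lock:
        store[key] = value
    timer = threading.Timer(ttl_seconds, _expire, args=(key,))
    timer.daemon = True
    timer.start()              # one timer object per expiring key

def _expire(key):
    with lock:
        store.pop(key, None)   # memory is freed promptly, but every
                               # expiration costs scheduler and CPU work

set_with_timer("user:1", "alice", ttl_seconds=1.0)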
Lazy deletion:
The lazy deletion policy is the friendliest to CPU time: the program checks whether a key has expired only when that key is retrieved, so deletion happens only when it is unavoidable, and it affects only the key currently being processed; no CPU time is spent deleting other, unrelated expired keys.
The disadvantage of the lazy deletion strategy is that it is the least memory-friendly: once a key has expired, as long as it is not deleted, the memory it occupies is never freed while it remains in the database.
When the lazy deletion strategy is used, if the database contains many expired keys that are never accessed again, they may never be deleted (unless the user runs FLUSHDB manually). This can fairly be regarded as a memory leak: useless junk data occupies large amounts of memory that the server never releases on its own, which is bad news for a server like Redis whose operation depends heavily on memory.
For example, for time-related data such as logs, access drops sharply or stops entirely after a certain point in time. If such expired data piles up in the database, the user assumes the server has deleted it automatically, but the keys in fact still exist and the memory they occupy has never been released. The consequences can be severe.
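The lazy approach can be sketched as a small wrapper around a dictionary: nothing happens when the TTL elapses, and the expiration check is deferred until the key is read. The class below is only an illustration of the idea, not Redis code.

import time

class LazyExpiringDict:
    # Lazy deletion: expired entries are removed only when they are accessed.

    def __init__(self):
        self._data = {}      # key -> value
        self._expires = {}   # key -> absolute expiration timestamp

    def set(self, key, value, ttl_seconds):
        self._data[key] = value
        self._expires[key] = time.monotonic() + ttl_seconds

    def get(self, key):
        expire_at = self._expires.get(key)
        if expire_at is not None and time.monotonic() >= expire_at:
            # The key has expired: delete it now, on access.
            del self._data[key]
            del self._expires[key]
            return None
        return self._data.get(key)

d = LazyExpiringDict()
d.set("page:home", "<html>...</html>", ttl_seconds=0.1)
time.sleep(0.2)
print(d.get("page:home"))  # None; the entry was deleted only because we read it

Keys that are never read again simply stay in the dictionary forever, which is exactly the memory problem described above.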
Periodic deletion:
Judging from the discussion of timed deletion and lazy deletion above, each has obvious drawbacks when used alone:
Timed deletion takes up too much CPU time and hurts the server's response time and throughput.
Lazy deletion wastes too much memory and risks what amounts to a memory leak.
A periodic deletion policy is an integration and compromise of the first two policies:
The periodic deletion policy deletes expired keys at regular intervals, and it limits the duration and frequency of each deletion run to reduce the impact on CPU time.
In addition, by deleting expired keys periodically, the periodic deletion strategy effectively reduces the memory waste caused by expired keys.
The difficulty of the periodic deletion policy lies in deciding how long and how often each deletion run should be:
If deletions run too often or for too long, the periodic deletion policy degenerates into the timed deletion policy, and too much CPU time goes into deleting expired keys.
If deletions run too rarely or for too short a time, the periodic deletion policy wastes memory just like the lazy deletion policy.
The periodic deletion of expired keys is implemented by the redis.c/activeExpireCycle function. Whenever the Redis server's periodic operation function redis.c/serverCron runs, activeExpireCycle is called. Within a bounded amount of time it walks through the server's databases in several passes, randomly checks the expiration times of some keys in each database's expires dictionary, and deletes the expired keys it finds.
The whole process can be described in pseudo code as follows:
# Default number of databases checked on each run
DEFAULT_DB_NUMBERS = 16

# Default number of keys checked per database
DEFAULT_KEY_NUMBERS = 20

# Global variable recording the check progress
current_db = 0

def activeExpireCycle():

    # The check progress is kept across calls
    global current_db

    # Initialize the number of databases to check:
    # if the server has fewer databases than DEFAULT_DB_NUMBERS,
    # use the server's database count instead
    if server.dbnum < DEFAULT_DB_NUMBERS:
        db_numbers = server.dbnum
    else:
        db_numbers = DEFAULT_DB_NUMBERS

    # Walk through the databases
    for i in range(db_numbers):

        # If current_db equals the server's database count,
        # the checker has traversed every database once;
        # reset current_db to 0 and start a new round
        if current_db == server.dbnum:
            current_db = 0

        # Get the database to process
        redisDb = server.db[current_db]

        # Advance the index by 1 to point at the next database
        current_db += 1

        # Check the database's keys
        for j in range(DEFAULT_KEY_NUMBERS):

            # If no key in this database has an expiration time, skip it
            if redisDb.expires.size() == 0:
                break

            # Randomly pick a key that has an expiration time
            key_with_ttl = redisDb.expires.get_random_key()

            # Check whether the key has expired; delete it if so
            if is_expired(key_with_ttl):
                delete_key(key_with_ttl)

            # Stop processing once the time limit is reached
            if reach_time_limit():
                return
The working mode of the activeExpireCycle function can be summarized as follows:
Each time the function runs, it takes a certain number of random keys from a certain number of databases to check and delete the expired keys.
The global variable current_db records how far the current activeExpireCycle run has got, and the next call resumes from there. For example, if the current run returns while processing database 10, the next run starts looking for and deleting expired keys from database 11.
As activeExpireCycle keeps running, every database on the server eventually gets checked; the function then resets current_db to 0 and begins a new round of checks.
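On a running server, the combined effect of the lazy and periodic strategies can be observed through the expired_keys counter in the INFO stats section, which counts key expiration events however they were triggered. A short redis-py sketch (again assuming a locally running server and the third-party client):

import redis  # third-party redis-py client, assumed to be installed

r = redis.Redis()

stats = r.info("stats")
# Total number of key expiration events since the server started,
# whether the key was removed lazily on access or by the periodic cycle.
print(stats["expired_keys"])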
Thank you for reading! After going through the above, you should have a general understanding of how Redis deletes expired keys and how its periodic deletion works. I hope the content is helpful to you.