This article mainly explains what to do when Redis memory is full. The content is simple, clear, and easy to learn; follow the editor's line of thought to work through the question of what to do when Redis memory is full.
Overview
Speaking of Redis articles, I previously wrote one on the "three major problems of the Redis cache", which has accumulated nearly 800 reads. For an account with only about 3k followers, that is already a hard number to reach. It shows the article is decent and was well received by many readers; if you are interested, have a look: [After reading these three Redis caching questions, you can hold your own with the interviewer].
The "three major cache problems", however, are only a small part of Redis. If you want to master Redis, there is much more to learn.
So today I bring a question that often comes up in interviews: "What do you do when your Redis memory is full?" After all, if you keep using Redis as a cache, it is bound to fill up one day, right?
There is no need to panic about this question: Redis has a configuration parameter, maxmemory, that sets an upper limit on the memory Redis may use.
In the Redis configuration file redis.conf, the maxmemory size is configured as follows:
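The setting is a single directive; with the example value used in this article it would look roughly like this:
maxmemory 100mb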
Production environments certainly do not use a value as small as 100mb; do not be misled, it is only here to show the parameter. Small companies generally set it to around 3 GB.
Besides setting it in the configuration file, you can also configure it at runtime with the CONFIG command, as shown below:
// get the current value of maxmemory
127.0.0.1:6379> config get maxmemory
// set maxmemory to 100mb
127.0.0.1:6379> config set maxmemory 100mb
If the data actually stored exceeds the configured maxmemory, Redis falls back on its eviction policies, which evict keys that can be discarded and free up memory for new key-value pairs.
Next, let's go through the eviction policies in Redis in detail and understand the principle and application scenario of each one.
Eviction policies
Redis provides six eviction policies, of which the default is noeviction (two more, based on LFU, were added in Redis 4.0 and are covered later in this article). The six policies are as follows:
- noeviction (the default): when memory usage reaches the limit, any command that needs to allocate more memory returns an error (see the example after this list).
- allkeys-lru: evict from all keys, using the LRU algorithm.
- volatile-lru: evict only from keys that have an expiration time set, using the LRU algorithm.
- allkeys-random: evict from all keys, chosen at random.
- volatile-random: evict only from keys that have an expiration time set, chosen at random.
- volatile-ttl: evict only from keys that have an expiration time set, ordered by time to live; the sooner a key expires, the sooner it is evicted.
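As an illustration of the default noeviction policy, once used memory exceeds maxmemory any write command is rejected; a minimal redis-cli exchange (the key name is made up) would look roughly like this:
// a write that needs more memory fails once the limit is reached
127.0.0.1:6379> set somekey somevalue
(error) OOM command not allowed when used memory > 'maxmemory'.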
You can use allkeys-lru when part of the data in Redis is hot and the rest is rarely accessed, or when you do not know the access distribution of your application.
If all keys are accessed at roughly the same frequency, the allkeys-random policy can be used.
If you want to set a specific eviction policy, you can configure it in the redis.conf configuration file, as shown below:
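In the stock redis.conf the setting ships commented out with the default policy, roughly like this:
# maxmemory-policy noeviction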
You only need to uncomment the line and fill in the policy you want. The other way is the command mode; the specific commands are as follows:
// get the current maxmemory-policy
127.0.0.1:6379> config get maxmemory-policy
// set maxmemory-policy to allkeys-lru
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
The descriptions of these eviction policies mention the LRU algorithm, so what exactly is the LRU algorithm?
LRU algorithm
LRU (Least Recently Used) means exactly that: the keys that have gone the longest without being accessed are evicted first, based on each key's access history.
Its core idea is that if a key has rarely been used recently, it is unlikely to be accessed in the near future.
In fact, the LRU that Redis implements is not a true LRU: nominally keys are evicted by LRU, but the evicted key is not necessarily the one that has been idle the longest.
Redis uses an approximate LRU algorithm: it evicts keys based on random sampling, picking five keys at random each time and evicting the least recently used key among them.
Five is only the default sample size; the exact number can be configured in the configuration file, as shown below:
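The relevant redis.conf entry is maxmemory-samples; by default it looks roughly like this:
# maxmemory-samples 5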
The larger the sample size, the closer the behaviour is to a true LRU: with a larger sample, the data examined is more complete, and the evicted key is more likely to really be the least recently used one.
To implement this time-based LRU, Redis has to store an extra timestamp with every key recording when it was last accessed; the field is 3 bytes (24 bits) per key.
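This per-key clock can be inspected from redis-cli with the OBJECT IDLETIME command, which returns the number of seconds since the key was last read or written (the key name below is only an example):
// create a key, then ask how long it has been idle
127.0.0.1:6379> set mykey hello
127.0.0.1:6379> object idletime mykey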
Redis 3.0 optimized the approximate LRU further by maintaining a candidate pool of 16 entries in memory.
On the first sampling round, the sampled keys are put into the pool, and the pool is kept sorted by access time.
From the second round on, a sampled key is added to the pool only if its last access time is earlier than the smallest time currently in the pool.
Once the pool is full, the key with the most recent access time is pushed out to make room. At eviction time, the key with the earliest access time is taken straight from the pool and evicted.
The point of all this is to make the evicted key more likely to really be the least recently accessed one, because the minimum time found in a single random sample is not necessarily the true minimum.
But LRU has a drawback: a key that was never accessed before yet has just been touched once is treated as hot data and will not be evicted, while data that used to be accessed frequently but has not been touched recently is likely to be evicted. This misjudgement can end up evicting genuinely hot data.
So, in addition to the LRU algorithm, Redis 4.0 added the LFU algorithm. What is the LFU algorithm?
LFU algorithm
LFU (Least Frequently Used) evicts the keys that have been used least frequently in the recent period; it uses the access frequency over that period as its criterion.
Its core idea is to evict by recent access frequency: keys with few accesses are evicted first, and the more often a key is accessed, the safer it is.
LFU therefore reflects how hot a key really is; a single occasional access will not mark a key as hot data the way it can under LRU.
The LFU algorithm adds support for two policies: volatile-lfu and allkeys-lfu.
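To see the frequency counter at work, you can switch to an LFU policy and query a key's counter with the OBJECT FREQ command, which only works while an lfu policy is active (the key name is illustrative):
// switch the eviction policy to LFU
127.0.0.1:6379> config set maxmemory-policy allkeys-lfu
// inspect the access-frequency counter kept for the key
127.0.0.1:6379> object freq mykey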
That covers the eviction policies of Redis. They tell us what to do when memory is full, but not when keys actually get removed, so let's now look in detail at when Redis deletes expired keys.
Expired-key deletion strategies
Redis uses three deletion strategies for expired keys:
"scheduled deletion": create a timer to regularly perform the delete operation on the key. "lazy delete": the expiration time of key is checked only when you visit key again, and delete is performed if it has expired. "Delete regularly": every once in a while, check to delete expired key.
"scheduled deletion" is "memory-friendly", regularly cleaning up clean space, but not "cpu-friendly". The program needs to maintain a timer, which takes up cpu resources.
"lazy deletion" is "cpu-friendly". Cpu does not need to maintain other additional operations, but it is not "memory-friendly" because if some key has not been accessed, it will occupy memory all the time.
Periodic deletion is a compromise between the two: expired keys are deleted at regular intervals, with the interval chosen sensibly for the specific workload. By controlling the deletion interval well, the CPU cost of deleting keys stays low while memory is still reclaimed in good time.
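From the client's side, lazy and periodic deletion can be observed by giving a key a time to live; a small redis-cli example (key name and timeout are arbitrary) might look like this:
// store a key that lives for 10 seconds
127.0.0.1:6379> set session:1 abc ex 10
// check the remaining time to live; -2 means the key has already been removed
127.0.0.1:6379> ttl session:1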
Expired keys in RDB and AOF
Redis has two persistence mechanisms, RDB and AOF. For a detailed introduction to both, see [Interview rapid-fire series: can you withstand a chain of questions about Redis persistence?].
RDB captures a snapshot of the data in memory at a point in time; an RDB file is created with the save or bgsave command.
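Both commands are issued from redis-cli; the difference is that save blocks the server while bgsave forks a child process to write the snapshot in the background:
// synchronous snapshot (blocks the server until the RDB file is written)
127.0.0.1:6379> save
// background snapshot written by a forked child process
127.0.0.1:6379> bgsave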
"neither of these commands will save the expired key to the RDB file", which also has the effect of deleting the expired key.
When Redis loads an RDB file at startup, a master does not load expired keys, whereas a slave loads them anyway (this does no harm, because the slave's data set is replaced when it synchronizes with the master).
In AOF mode, Redis provides the rewrite optimization, triggered with the BGREWRITEAOF command. The rewrite does not copy expired keys into the new AOF file, so it too removes expired keys.
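The rewrite can be triggered manually from redis-cli, for example:
// rewrite the AOF file in the background; expired keys are not carried over
127.0.0.1:6379> bgrewriteaof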
Thank you for reading. That is all for "what to do when the Redis memory is full". After studying this article you should have a deeper understanding of the question; the specifics still need to be verified in practice.