2025-01-18 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 05/31 Report
This article explains how to deal with Redis running out of memory and the downtime that can cause: how to configure a memory limit, which eviction policies are available, and how the LRU and LFU algorithms behind them work.
How much memory Redis uses
Redis is an in-memory key-value database. Since system memory is limited, you can configure the maximum amount of memory Redis is allowed to use.
1. Via the configuration file
Set the memory limit by adding the following line to the redis.conf file under the Redis installation directory:
# limit Redis to at most 100 MB of memory
maxmemory 100mb
Note that the configuration file is not necessarily the redis.conf under the installation directory; when starting the Redis server you can pass an argument specifying the path to a different configuration file.
2. Via a command at runtime
Redis also supports changing the memory limit dynamically at runtime:
# set the maximum memory Redis may use to 100 MB
127.0.0.1:6379> config set maxmemory 100mb
# read back the current maximum
127.0.0.1:6379> config get maxmemory
If maxmemory is not set, or is set to 0, memory usage is unlimited on 64-bit operating systems; on 32-bit operating systems Redis uses at most 3 GB.
Redis memory eviction
Now that a maximum memory can be configured, that limit will eventually be reached. What happens then: if you keep adding data to Redis once memory is full, is there simply no memory left to use?
In fact, Redis defines several eviction policies to handle this situation:
noeviction (default): write requests are not served and an error is returned (except for DEL and a few other special commands)
allkeys-lru: evict using the LRU algorithm across all keys
volatile-lru: evict using the LRU algorithm among keys that have an expiration set
allkeys-random: evict random keys across all keys
volatile-random: evict random keys among keys that have an expiration set
volatile-ttl: among keys with an expiration set, evict according to expiration time; the sooner a key expires, the sooner it is evicted
With the volatile-lru, volatile-random, and volatile-ttl policies, if no key is eligible for eviction, an error is returned just as with noeviction.
Getting and setting the eviction policy
Get the current eviction policy:
127.0.0.1:6379> config get maxmemory-policy
Set the eviction policy via the configuration file (edit redis.conf):
maxmemory-policy allkeys-lru
Change the eviction policy by command:
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
LRU algorithm
What is LRU?
As mentioned above, when the memory available to Redis has been used up, the LRU algorithm can be used for eviction. So what is the LRU algorithm?
LRU (Least Recently Used) is a cache replacement algorithm. When memory is used as a cache, the cache size is generally fixed. When the cache is full and new data keeps arriving, some old data must be evicted to free space for the new data. The LRU algorithm handles this; its core idea is that if a piece of data has not been used recently, it is unlikely to be used in the future, so it can be evicted.
Implementing a simple LRU cache in Java
import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // maximum number of entries the cache can hold
    private int capacity;
    // current number of nodes
    private int count;
    // map from key to its node in the doubly linked list
    private Map<K, Node> nodeMap;
    private Node head;
    private Node tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        // sentinel head and tail nodes avoid null checks at the list ends
        Node headNode = new Node(null, null);
        Node tailNode = new Node(null, null);
        headNode.next = tailNode;
        tailNode.pre = headNode;
        this.head = headNode;
        this.tail = tailNode;
    }

    public void put(K key, V value) {
        Node node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // cache is full: remove the least recently used node first
                removeNode();
            }
            node = new Node(key, value);
            addNode(node);
        } else {
            node.value = value;
            // move the node to the head (most recently used position)
            moveNodeToHead(node);
        }
    }

    public Node get(K key) {
        Node node = nodeMap.get(key);
        if (node != null) {
            moveNodeToHead(node);
        }
        return node;
    }

    private void removeNode() {
        Node node = tail.pre;
        // unlink from the list and drop from the map
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node node) {
        Node pre = node.pre;
        Node next = node.next;
        pre.next = next;
        next.pre = pre;
        node.next = null;
        node.pre = null;
    }

    private void addNode(Node node) {
        // insert the new node at the head of the list
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node node) {
        Node next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    public void moveNodeToHead(Node node) {
        // unlink the node, then re-insert it at the head
        removeFromList(node);
        addToHead(node);
    }

    class Node {
        K key;
        V value;
        Node pre;
        Node next;

        Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}
The code above implements a simple LRU cache. It is short and commented, and easy to follow if read carefully.
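As a side note, the Java standard library can express the same idea much more compactly: a LinkedHashMap constructed with accessOrder=true keeps entries in access order, and overriding removeEldestEntry turns it into an LRU cache. A minimal sketch (class and method names here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashLru {
    // LRU cache built on LinkedHashMap: accessOrder=true moves an entry to
    // the tail on every access, and removeEldestEntry evicts from the head.
    static <K, V> Map<K, V> newLruCache(int capacity) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = newLruCache(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // "a" becomes the most recently used entry
        cache.put("c", 3); // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // [a, c]
    }
}
```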
Implementation of LRU in Redis
Approximate LRU algorithm
Redis uses an approximate LRU algorithm, which is not quite the same as the conventional one. Instead of tracking exact recency for every key, it evicts by random sampling: it randomly picks 5 keys (by default) at a time and evicts the least recently used key among them.
You can change the sample size with the maxmemory-samples parameter, for example maxmemory-samples 10. The larger the sample, the closer the eviction result is to strict LRU.
To implement approximate LRU, Redis stores an extra 24-bit field with each key, recording the time the key was last accessed.
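The sampling idea above can be sketched in Java. This is a toy model for illustration only, not Redis's actual C implementation; the per-key "access clock" map and the tick values are assumptions of the sketch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class SampledLruEviction {
    // Toy model of approximate LRU: each key has a last-access tick;
    // to evict, sample a few random keys and pick the oldest one.
    static <K> K pickVictim(Map<K, Long> accessClock, int samples, Random rnd) {
        List<K> keys = new ArrayList<>(accessClock.keySet());
        K victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            K candidate = keys.get(rnd.nextInt(keys.size()));
            long tick = accessClock.get(candidate);
            if (tick < oldest) {
                oldest = tick;
                victim = candidate;
            }
        }
        return victim;
    }

    public static void main(String[] args) {
        Map<String, Long> clock = new HashMap<>();
        clock.put("old", 1L);
        clock.put("mid", 50L);
        clock.put("new", 100L);
        // the victim is the least recently used key among the sampled ones
        System.out.println("victim: " + pickVictim(clock, 5, new Random()));
    }
}
```

Because the samples are random, the result only approximates true LRU, which is exactly the trade-off Redis makes to avoid maintaining a full recency list.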
Redis 3.0's optimization of approximate LRU
Redis 3.0 optimizes the approximate LRU algorithm. The new algorithm maintains a candidate pool (of size 16) whose entries are sorted by access time. The first randomly sampled keys go straight into the pool; after that, a sampled key is added only if its access time is earlier than that of some key already in the pool. When the pool is full, inserting a better candidate pushes out the entry with the latest access time (the most recently accessed one).
When eviction is needed, Redis simply takes the key with the earliest access time (the one idle longest) out of the pool and evicts it.
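The pool logic can be sketched as follows. Again this is a simplified illustration, not the real implementation; the tick-keyed TreeMap and the tie-handling are assumptions of the sketch:

```java
import java.util.Map;
import java.util.TreeMap;

public class EvictionPoolSketch {
    // Toy model of Redis 3.0's eviction candidate pool. The pool holds up
    // to POOL_SIZE sampled keys ordered by last-access tick; a smaller
    // tick means the key has been idle longer.
    static final int POOL_SIZE = 16;
    private final TreeMap<Long, String> pool = new TreeMap<>();

    void offer(String key, long lastAccessTick) {
        if (pool.size() < POOL_SIZE) {
            pool.put(lastAccessTick, key);
        } else if (lastAccessTick < pool.lastKey()) {
            // the candidate is idler than the pool's most recently accessed
            // entry: drop that entry and keep the candidate instead
            pool.pollLastEntry();
            pool.put(lastAccessTick, key);
        }
        // note: equal ticks overwrite each other in this simplified model
    }

    String evictOne() {
        // take the entry with the smallest tick: idle the longest
        Map.Entry<Long, String> oldest = pool.pollFirstEntry();
        return oldest == null ? null : oldest.getValue();
    }

    public static void main(String[] args) {
        EvictionPoolSketch p = new EvictionPoolSketch();
        for (int i = 1; i <= 20; i++) {
            p.offer("key" + i, i); // keys 17..20 are too recent to enter
        }
        System.out.println(p.evictOne()); // key1: the oldest candidate
    }
}
```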
Comparison of LRU algorithms
We can compare the accuracy of the LRU variants with an experiment: first add n items to Redis until its available memory is used up, then add n/2 new items, forcing part of the data to be evicted. Under strict LRU, the n/2 items added first should be the ones evicted. A comparison chart of the different LRU implementations can then be generated.
You can see that there are three different colored dots in the picture:
Light gray is the eliminated data.
Gray is the old data that has not been eliminated.
Green is the new data.
As the charts show, Redis 3.0 with a sample size of 10 comes closest to strict LRU, and with the same sample size of 5, Redis 3.0 also performs better than Redis 2.8.
LFU algorithm
The LFU algorithm is a new eviction policy added in Redis 4.0. Its full name is Least Frequently Used, and its core idea is to evict keys according to how frequently they were recently accessed: rarely accessed keys are evicted first, while frequently accessed keys are kept.
LFU better captures how "hot" a key actually is. Under LRU, a key that has not been accessed for a long time but happens to be accessed once is treated as hot data and kept, while keys that are likely to be accessed again in the future may be evicted. This does not happen with LFU, because a single access is not enough to make a key hot.
LFU has two strategies:
volatile-lfu: evict using the LFU algorithm among keys that have an expiration set
allkeys-lfu: evict using the LFU algorithm across all keys
These two policies are set the same way as described above, but note that they are only available in Redis 4.0 and later; setting them on a version below 4.0 results in an error.
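The frequency-counting idea behind LFU can be sketched in Java. This is a deliberately simplified illustration: it keeps an exact counter per key, whereas real Redis uses a probabilistic 8-bit counter that grows logarithmically and decays over time, so old accesses do not keep a key "hot" forever:

```java
import java.util.HashMap;
import java.util.Map;

public class LfuSketch {
    // Toy LFU model: track an exact access count per key and evict the
    // least frequently used one (real Redis approximates this counter).
    private final Map<String, Integer> counts = new HashMap<>();

    void touch(String key) {
        counts.merge(key, 1, Integer::sum);
    }

    String pickVictim() {
        return counts.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }

    public static void main(String[] args) {
        LfuSketch lfu = new LfuSketch();
        lfu.touch("a"); lfu.touch("a"); lfu.touch("a");
        lfu.touch("b");
        lfu.touch("c"); lfu.touch("c");
        System.out.println(lfu.pickVictim()); // b: accessed least often
    }
}
```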
Thank you for reading. That covers how to handle Redis memory filling up and the downtime it can cause; the specific behavior still needs to be verified in practice.