
How to solve the problem when Redis memory is full


This article gives a detailed analysis of what happens when Redis memory is full and how to solve it, hoping to help readers facing this problem find a simple and practical approach.

1. Configure through the configuration file

Set the memory size by adding the following configuration to the redis.conf configuration file under the Redis installation directory.

# set the maximum memory used by Redis to 100mb
maxmemory 100mb

The configuration file used by Redis is not necessarily the redis.conf file under the installation directory; when starting the Redis service, you can pass a parameter to specify which configuration file to load.
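For example, if your configuration file lives somewhere else, you can start the server like this (a minimal sketch; the path below is only a placeholder):

# start Redis with an explicitly specified configuration file
redis-server /path/to/redis.conf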

2. Modify by command

Redis also supports dynamically modifying the memory limit at run time through commands:

# set the maximum memory used by Redis to 100mb
127.0.0.1:6379> config set maxmemory 100mb
# get the maximum memory that Redis is allowed to use
127.0.0.1:6379> config get maxmemory
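If you manage Redis from Java (the language used for the LRU example later in this article), the same commands can be sent through a client library. Below is a minimal sketch using the third-party Jedis client; Jedis is our own assumption here and not something the original steps require:

import redis.clients.jedis.Jedis;

public class MaxMemoryConfig {
    public static void main(String[] args) {
        // connect to a local Redis instance (address and port are assumptions)
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // equivalent to: config set maxmemory 100mb
            jedis.configSet("maxmemory", "100mb");
            // equivalent to: config get maxmemory
            System.out.println(jedis.configGet("maxmemory"));
        }
    }
}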

If you do not set the maximum memory size, or set it to 0, memory usage is not limited on 64-bit operating systems, while on 32-bit operating systems at most 3GB of memory can be used.

Memory eviction in Redis

Now that the maximum memory used by Redis can be limited, the configured memory will eventually run out. When that happens, what happens if you keep adding data to Redis?

In fact, Redis defines several strategies to deal with this situation:

noeviction (default policy): write requests are not served and an error is returned directly (except for DEL and a few other special requests); an example of the resulting error is shown after this list

allkeys-lru: evict from all keys using the LRU algorithm

volatile-lru: evict from keys with an expiration time set, using the LRU algorithm

allkeys-random: randomly evict from all keys

volatile-random: randomly evict from keys with an expiration time set

volatile-ttl: among keys with an expiration time set, evict according to the expiration time; the earlier a key expires, the higher its eviction priority

When using the volatile-lru, volatile-random, and volatile-ttl policies, if there is no key that can be evicted, an error is returned just as with noeviction.
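For example, once maxmemory has been reached under the default noeviction policy, a write command fails roughly as follows (the exact error text may vary slightly between Redis versions):

127.0.0.1:6379> set newkey "value"
(error) OOM command not allowed when used memory > 'maxmemory'.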

How to get and set the memory eviction policy

Get the current memory eviction policy:

127.0.0.1:6379> config get maxmemory-policy

Set the eviction policy through the configuration file (modify redis.conf):

maxmemory-policy allkeys-lru

Modify the eviction policy by command:

127.0.0.1:6379> config set maxmemory-policy allkeys-lru

LRU algorithm

What is LRU?

As mentioned above, when the maximum memory available to Redis is used up, the LRU algorithm can be used for eviction. So what is the LRU algorithm?

LRU (Least Recently Used) is a cache replacement algorithm. When memory is used as a cache, the cache size is generally fixed; when the cache is full and you keep adding data to it, some old data must be evicted to free up space for the new data.

This is where the LRU algorithm comes in. Its core idea is that if a piece of data has not been used recently, it is unlikely to be used in the future, so it can be evicted.

Let's implement a simple LRU algorithm in Java:

import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // capacity of the cache
    private int capacity;
    // current number of nodes in the cache
    private int count;
    // cached nodes indexed by key
    private Map<K, Node> nodeMap;
    private Node head;
    private Node tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        // initialize sentinel head and tail nodes to avoid null checks on the head and tail
        Node headNode = new Node(null, null);
        Node tailNode = new Node(null, null);
        headNode.next = tailNode;
        tailNode.pre = headNode;
        this.head = headNode;
        this.tail = tailNode;
    }

    public void put(K key, V value) {
        Node node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // the cache is full: remove the least recently used node first
                removeNode();
            }
            node = new Node(key, value);
            // add the new node at the head of the list
            addNode(node);
        } else {
            // update the value and move the node to the head of the list
            node.value = value;
            moveNodeToHead(node);
        }
    }

    public Node get(K key) {
        Node node = nodeMap.get(key);
        if (node != null) {
            moveNodeToHead(node);
        }
        return node;
    }

    private void removeNode() {
        // the node just before the tail sentinel is the least recently used one
        Node node = tail.pre;
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node node) {
        Node pre = node.pre;
        Node next = node.next;
        pre.next = next;
        next.pre = pre;
        node.next = null;
        node.pre = null;
    }

    private void addNode(Node node) {
        // add the node at the head of the list
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node node) {
        Node next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    public void moveNodeToHead(Node node) {
        // remove the node from the linked list, then add it back at the head
        removeFromList(node);
        addToHead(node);
    }

    class Node {
        K key;
        V value;
        Node pre;
        Node next;

        public Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}
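As a quick, hypothetical usage example (not part of the original article): with capacity 2, the least recently used key is evicted as soon as a third key is added.

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // "a" becomes the most recently used key
        cache.put("c", 3); // capacity exceeded, so "b" is evicted
        System.out.println(cache.get("b"));       // null
        System.out.println(cache.get("a").value); // 1
    }
}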

The above code implements a simple LRU algorithm. The code is straightforward and annotated, so it is easy to follow if you read it carefully. Other commonly used cache eviction algorithms include LFU, LRU, ARC, FIFO, and MRU.

Implementation of LRU in Redis

Approximate LRU algorithm

Redis uses an approximate LRU algorithm, which is not quite the same as the conventional LRU algorithm. The approximate LRU algorithm evicts data via random sampling: each time, 5 keys (by default) are randomly sampled and the least recently used key among them is evicted.

You can modify the number of samples through the maxmemory-samples parameter:

Example: maxmemory-samples 10

The larger the maxmemory-samples value, the closer the eviction result is to the strict LRU algorithm.

In order to implement the approximate LRU algorithm, Redis adds an extra 24-bit field to each key to store the time when the key was last accessed.
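To make the idea concrete, here is a simplified sketch in Java (an illustration of sampling-based eviction, not Redis source code; the map of last-access times is a stand-in for Redis's per-key 24-bit clock):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Simplified illustration of approximate LRU: sample a few random keys
// and evict the one that was accessed longest ago.
public class ApproximateLru {
    // key -> last access time (stand-in for Redis's 24-bit per-key clock)
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final Random random = new Random();

    // record an access to a key
    public void touch(String key) {
        lastAccess.put(key, System.currentTimeMillis());
    }

    // pick `samples` random keys and return the least recently used one
    public String pickVictim(int samples) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        if (keys.isEmpty()) {
            return null;
        }
        String victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < samples; i++) {
            String candidate = keys.get(random.nextInt(keys.size()));
            long accessTime = lastAccess.get(candidate);
            if (accessTime < oldest) {
                oldest = accessTime;
                victim = candidate;
            }
        }
        return victim;
    }

    // evict the chosen victim, mimicking what happens when maxmemory is reached
    public void evict(int samples) {
        String victim = pickVictim(samples);
        if (victim != null) {
            lastAccess.remove(victim);
        }
    }
}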

Optimization of approximate LRU in Redis 3.0

Redis 3.0 makes some optimizations to the approximate LRU algorithm. The new algorithm maintains a candidate pool (of size 16) in which the entries are sorted by access time. The first randomly sampled key is simply put into the pool; after that, each randomly sampled key is put into the pool only if its access time is earlier than the minimum access time in the pool, until the pool is full. Once the pool is full, when a new key needs to go in, the entry with the most recent access time is removed from the pool.

When eviction is needed, the key with the earliest access time (the one that has not been accessed for the longest) is simply selected from the pool and evicted.

Comparison of LRU algorithms

We can compare the accuracy of the LRU algorithms with an experiment: first add a certain amount of data, n, to Redis so that its available memory is used up, then add another n/2 of new data to Redis. At this point some data has to be evicted; under a strict LRU algorithm, the first n/2 items that were added should be evicted.

[Figure: comparison diagram of the LRU algorithms. Image source: segmentfault.com/a/1190000017555834]

You can see that there are three different colored dots in the picture:

Light gray is the eliminated data.

Gray is the old data that has not been eliminated.

Green is the new data.

We can see that the graph generated by Redis 3.0 with a sample size of 10 is closest to strict LRU. With the same sample size of 5, Redis 3.0 also performs better than Redis 2.8.

LFU algorithm

The LFU algorithm is a new eviction policy added in Redis 4.0. Its full name is Least Frequently Used, and its core idea is to evict keys according to how frequently they have been accessed recently: keys that are rarely accessed are evicted first, and keys that are accessed more often are kept.

The LFU algorithm better reflects how hot a key really is. With the LRU algorithm, a key that has not been accessed for a long time but is then accessed once, occasionally, is treated as hot data and will not be evicted, while keys that are likely to be accessed in the future may be evicted instead. This does not happen with the LFU algorithm, because a single access does not make a key hot data.

LFU comes with two policies:

volatile-lfu: evict from keys with an expiration time set, using the LFU algorithm

allkeys-lfu: evict from all keys using the LFU algorithm

These two policies are set in the same way as described earlier, but note that they are only available in Redis 4.0 and above; setting them in versions below Redis 4.0 will result in an error. A configuration example follows below.
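For example, in redis.conf this could look like the following (a sketch; the values are only examples, and the two commented-out lines are optional LFU tuning parameters that also exist since Redis 4.0):

maxmemory 100mb
maxmemory-policy allkeys-lfu
# lfu-log-factor 10
# lfu-decay-time 1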

This concludes the discussion of how to solve the problem when Redis memory is full. I hope the above content is of some help to you.
