What is the LRU algorithm used for in Redis


This article shares what the LRU algorithm is used for in Redis. The editor finds it very practical, so it is shared here as a reference; follow along and have a look.

Redis is a key-value database based on in-memory storage. We know that memory is fast but small. When physical memory reaches its upper limit, the system becomes very slow, because the swap mechanism moves part of the in-memory data to the swap partition and keeps the system running by exchanging data with swap. But swap lives on the hard disk, whose speed is nowhere near that of memory; for a service with a very high QPS like Redis, this is unacceptable. (Note that if the swap partition also fills up, the system will report an error!)

On a Linux system, the swap size can be checked with free -m.

So it is very important to understand how Redis prevents this from happening (it is almost impossible for an interviewer to ask about Redis without touching on this topic).

2. Maxmemory configuration

For the problem above, Redis provides the maxmemory configuration item, which specifies the maximum amount of data Redis is allowed to store. It is usually set in the redis.conf file, and it can also be changed at runtime with the CONFIG SET command.

Schematic diagram of the configuration item in the redis.conf file:
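As a minimal illustration (the 2gb value is only a placeholder; choose a size that fits your hardware and workload), maxmemory can be set either way:

# In redis.conf
maxmemory 2gb

# Or at runtime, without restarting the server
127.0.0.1:6379> CONFIG SET maxmemory 2gb
OK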

The maxmemory configuration item is not enabled by default. According to the Redis documentation, there is no memory limit by default on 64-bit operating systems, while 32-bit operating systems have an implicit limit of 3GB. If maxmemory is 0, memory is not limited.

Therefore, when building a cache architecture, we should set an appropriate maxmemory based on the available hardware resources and the business requirements.

3. What if memory reaches maxmemory

Obviously, once a maximum memory is configured, Redis cannot simply stop working when maxmemory is reached. So how does Redis handle this problem? This is the focus of this article: Redis provides maxmemory-policy eviction strategies (this article covers only LRU, not LFU; LFU is covered in the next article) that delete keys matching certain conditions, making way for new data.

The maxmemory-policy eviction strategies (an example of setting the policy follows the list):

noeviction: when the memory limit is reached and the client tries to execute a command that may use more memory, an error is returned. Simply put, read operations are still allowed, new data cannot be written, but DEL requests are still allowed because they free memory.

allkeys-lru: evict from all keys using the LRU (Least Recently Used) algorithm.

allkeys-random: evict randomly from all keys.

volatile-lru: evict from the keys that have an expiration time set, using the LRU (Least Recently Used) algorithm; this guarantees that data without an expiration time, which needs to be persisted, is never selected.

volatile-random: evict randomly from the keys that have an expiration time set.

volatile-ttl: among the keys that have an expiration time set, compare the remaining TTL of each key; the smaller the TTL, the sooner the key is evicted.

volatile-lfu and allkeys-lfu are left for a later article; those two algorithms work differently from the ones above!
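For example, to enable LRU eviction across all keys (illustrative only; choose the policy that fits your data):

# In redis.conf
maxmemory-policy allkeys-lru

# Or at runtime
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK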

For the random policies, Redis only needs to delete some keys at random to free memory. For volatile-ttl, Redis can compare the remaining TTL values and delete the keys with the lowest TTL to free memory.

So how does LRU work? How does Redis know which keys were used recently and which were not?

4. Implementation of the LRU algorithm

Let us first implement a simple LRU algorithm using Java containers: a ConcurrentHashMap stores the key-value mappings, and a ConcurrentLinkedDeque maintains the access order of the keys.

LRU implementation code:

package com.lizba.redis.lru;

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;

/**
 * Simple implementation of LRU
 *
 * @Author: Liziba
 * @Date: 2021-09-17 23:47
 */
public class SimpleLru {

    /** Data cache */
    private ConcurrentHashMap<String, Object> cacheData;
    /** Access order record */
    private ConcurrentLinkedDeque<String> sequence;
    /** Cache capacity */
    private int capacity;

    public SimpleLru(int capacity) {
        this.capacity = capacity;
        cacheData = new ConcurrentHashMap<>(capacity);
        sequence = new ConcurrentLinkedDeque<>();
    }

    /**
     * Set a value
     *
     * @param key
     * @param value
     * @return
     */
    public Object setValue(String key, Object value) {
        // Check whether an LRU eviction is needed first
        this.maxMemoryHandle();
        // If the key is already present, remove it so the newest access is always kept at the head of the queue
        if (sequence.contains(key)) {
            sequence.remove(key);
        }
        sequence.addFirst(key);
        cacheData.put(key, value);
        return value;
    }

    /**
     * Capacity reached: evict the least recently used key
     */
    private void maxMemoryHandle() {
        while (sequence.size() >= capacity) {
            String lruKey = sequence.removeLast();
            cacheData.remove(lruKey);
            System.out.println("key: " + lruKey + " eliminated!");
        }
    }

    /**
     * Get the keys in LRU access order
     *
     * @return
     */
    public List<String> getAll() {
        return Arrays.asList(sequence.toArray(new String[]{}));
    }
}

Test the code:

package com.lizba.redis.lru;

/**
 * Test the least recently used cache
 *
 * @Author: Liziba
 * @Date: 2021-09-18 00:00
 */
public class TestSimpleLru {

    public static void main(String[] args) {
        SimpleLru lru = new SimpleLru(8);
        for (int i = 0; i < 10; i++) {
            lru.setValue(i + "", i);
        }
        System.out.println(lru.getAll());
    }
}

Test result:
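Based on the code above, the console output should look roughly like this (key0 and key1 are evicted when the 9th and 10th entries are inserted):

key: 0 eliminated!
key: 1 eliminated!
[9, 8, 7, 6, 5, 4, 3, 2]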

From the test result we can see that key0 and key1, which were added first, have been evicted, while the most recently added keys are kept at the head of the sequence queue.

This scheme implements the LRU algorithm easily, but its drawback is also obvious: it needs an extra data structure to record the access order of the keys, which increases memory consumption. A scheme intended to save memory that itself consumes a lot of memory is clearly not acceptable.

5. Approximate LRU of Redis

For this reason, Redis uses an approximate LRU algorithm. It does not evict the least recently used key with perfect accuracy, but its overall accuracy is still guaranteed.

The approximate LRU algorithm is quite simple. Redis adds 24 bits to each key object to store the system timestamp of the last access. When a client sends a write-related command and the server finds that memory has reached maxmemory, eviction is triggered: the Redis server randomly samples five keys that match the policy (for allkeys-lru the sample is drawn from all keys, for volatile-lru only from keys with an expiration time set), compares the last-access timestamps recorded in those key objects, and evicts the oldest of the five. If memory is still insufficient, this step is repeated.
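Purely as an illustration of this sampling idea (the class and field names below are invented for the sketch; this is not the Redis source code), it can be written in Java like this: every entry keeps a last-access clock, and eviction randomly samples five keys and removes the oldest one.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

/** Illustrative sketch of sampled (approximate) LRU eviction; not the Redis implementation. */
public class ApproxLruSketch {

    /** Value plus the last access clock, mimicking the 24-bit access timestamp in the key object. */
    static class Entry {
        Object value;
        long lastAccess;
        Entry(Object value) { this.value = value; this.lastAccess = System.nanoTime(); }
    }

    private final Map<String, Entry> data = new ConcurrentHashMap<>();
    private final int maxEntries;        // stands in for maxmemory (counted in entries here)
    private final int sampleSize = 5;    // stands in for maxmemory-samples

    public ApproxLruSketch(int maxEntries) { this.maxEntries = maxEntries; }

    public Object get(String key) {
        Entry e = data.get(key);
        if (e == null) return null;
        e.lastAccess = System.nanoTime();   // refresh the access clock on every read
        return e.value;
    }

    public void put(String key, Object value) {
        while (data.size() >= maxEntries) { // "if memory is still insufficient, repeat this step"
            evictOne();
        }
        data.put(key, new Entry(value));
    }

    /** Randomly sample up to sampleSize keys and evict the one with the oldest access time. */
    private void evictOne() {
        List<String> keys = new ArrayList<>(data.keySet());
        if (keys.isEmpty()) return;
        String oldest = null;
        long oldestTime = Long.MAX_VALUE;
        for (int i = 0; i < sampleSize; i++) {
            String candidate = keys.get(ThreadLocalRandom.current().nextInt(keys.size()));
            Entry e = data.get(candidate);
            if (e != null && e.lastAccess < oldestTime) {
                oldestTime = e.lastAccess;
                oldest = candidate;
            }
        }
        if (oldest != null) data.remove(oldest);
    }
}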

Note that 5 is the default sample size for Redis; it can be configured with maxmemory-samples in redis.conf:
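A minimal example of the directive (5 is the default value):

# In redis.conf; larger values approximate real LRU more closely but cost more CPU
maxmemory-samples 5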

For the approximate LRU algorithm above, Redis officially provides a graph showing its measured accuracy:

The light gray band at the top shows the keys that have been evicted. The first figure shows the eviction behavior of the standard LRU algorithm.

The dark gray band in the middle represents the old keys that have not been evicted.

The light green band at the bottom represents the keys that have been accessed recently.

When maxmemory-samples is set to 10 in Redis 3.0, the approximate LRU algorithm is very close to the true LRU algorithm, but obviously setting maxmemory-samples to 10 consumes more CPU time than setting it to 5, because every extra sample adds computation.

Redis 3.0's LRU is also more accurate than Redis 2.8's, because Redis 3.0 adds an eviction pool of the same size as maxmemory-samples. On each eviction, the newly sampled keys are compared with the keys already waiting in the pool, and the oldest key overall is finally evicted. In effect, the sampled candidates are collected together and the oldest one among them is evicted.
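Again purely as an illustration of the pool idea (the names below are invented for this sketch and do not come from the Redis implementation):

import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of a Redis 3.0-style eviction pool; not the Redis implementation. */
public class EvictionPoolSketch {

    /** A sampled candidate: the key plus its last recorded access time. */
    static class Candidate {
        final String key;
        final long lastAccess;
        Candidate(String key, long lastAccess) { this.key = key; this.lastAccess = lastAccess; }
    }

    private final int poolSize;                        // same size as maxmemory-samples
    private final List<Candidate> pool = new ArrayList<>();

    public EvictionPoolSketch(int poolSize) { this.poolSize = poolSize; }

    /** Merge newly sampled candidates into the pool and keep only the oldest poolSize of them. */
    public void offer(List<Candidate> sampled) {
        pool.addAll(sampled);
        pool.sort((a, b) -> Long.compare(a.lastAccess, b.lastAccess)); // oldest first
        while (pool.size() > poolSize) {
            pool.remove(pool.size() - 1);                              // drop the most recently used tail
        }
    }

    /** The eviction victim is the oldest key seen across all sampling rounds so far. */
    public String pollVictim() {
        return pool.isEmpty() ? null : pool.remove(0).key;
    }
}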

6. Existing problems

The LRU algorithm seems effective, but it has some unreasonable cases. For example, suppose two keys A and B were added to Redis at the same moment, one hour before an eviction occurs. A was accessed 1000 times during the first 49 minutes of that hour but not at all in the last 11 minutes, while B was accessed only once, in the 59th minute. If the LRU algorithm is used and both A and B happen to be sampled by Redis, A will be the one evicted. That is obviously unreasonable.

For this situation, Redis 4.0 added the LFU (Least Frequently Used) algorithm, which is more reasonable than LRU in cases like this.

Thank you for reading! This is the end of the article on "What is the LRU algorithm used for in Redis". I hope the content above is of some help and lets you learn more. If you found the article useful, please share it so more people can see it!
