
Detailed explanation of redis data elimination strategy

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article gives a detailed explanation of Redis's data eviction policies, a topic many people find unclear. The main points are summarized below.

Note that the policy discussed here, which Redis applies when maxmemory is set and the cached dataset grows past the limit, is a memory-eviction policy, not the mechanism for deleting expired keys, although the two are very similar.

In Redis, users can set a maximum memory size by configuring the maxmemory value in redis.conf; doing so enables memory eviction.

Setting a maximum memory size helps Redis provide stable, predictable service.
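As a minimal sketch (the limit and policy values below are arbitrary examples, not recommendations), eviction could be enabled with a redis.conf fragment like this:

```
# redis.conf -- example values only
maxmemory 100mb
maxmemory-policy allkeys-lru
```

The same settings can also be changed at runtime with CONFIG SET maxmemory 100mb and CONFIG SET maxmemory-policy allkeys-lru.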


When the in-memory dataset grows past the configured limit, Redis applies its eviction policy. Redis provides six eviction policies, selected through maxmemory-policy:

volatile-lru: evict the least recently used key from the set of keys that have an expiration time set (server.db[i].expires).

volatile-ttl: evict the key closest to expiring from the set of keys that have an expiration time set (server.db[i].expires).

volatile-random: evict a random key from the set of keys that have an expiration time set (server.db[i].expires).

allkeys-lru: evict the least recently used key from the whole dataset (server.db[i].dict).

allkeys-random: evict a random key from the whole dataset (server.db[i].dict).

noeviction: never evict data; commands that would consume more memory fail with an error instead.
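To make the volatile-*/allkeys-* distinction concrete, here is a toy Python sketch (an illustration, not Redis source): the two families differ only in which keys are eligible eviction candidates.

```python
# Toy illustration (not Redis source): the volatile-* policies sample
# eviction candidates only from keys with an expiration time, while the
# allkeys-* policies consider every key; noeviction evicts nothing.

def eviction_pool(db, ttls, policy):
    """Return the keys the given policy is allowed to evict."""
    if policy.startswith("volatile-"):
        return [k for k in db if k in ttls]  # only keys with a TTL set
    if policy.startswith("allkeys-"):
        return list(db)                      # every key is a candidate
    return []                                # noeviction: nothing is evicted

db = {"a": 1, "b": 2, "c": 3}
ttls = {"a": 100}  # only "a" has an expiration time

print(eviction_pool(db, ttls, "volatile-random"))         # ['a']
print(sorted(eviction_pool(db, ttls, "allkeys-random")))  # ['a', 'b', 'c']
print(eviction_pool(db, ttls, "noeviction"))              # []
```

Within each pool, the policy then decides how a victim is picked: by LRU, by TTL, or at random.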

Once Redis decides to evict a key-value pair, it deletes the data and propagates the deletion both locally (to AOF persistence) and to its slaves (over the master-slave connection).

LRU data elimination mechanism

The LRU counter server.lruclock is kept in the server state and is updated periodically by the Redis timer serverCron(); its value is computed from server.unixtime.

In addition, as struct redisObject shows, every Redis object carries its own lru field. As you would expect, redisObject.lru is updated each time the object is accessed.
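As a hedged sketch in Python (mirroring the wraparound logic of older Redis versions; the function name is chosen for illustration), an object's idle time is the distance between server.lruclock and the object's lru field, accounting for the bounded clock wrapping around:

```python
LRU_CLOCK_MAX = (1 << 21) - 1  # max value of obj->lru in older Redis versions

def estimate_idle(server_lruclock, obj_lru):
    """Ticks since the object was last accessed, handling clock wraparound."""
    if server_lruclock >= obj_lru:
        return server_lruclock - obj_lru
    # the clock wrapped past LRU_CLOCK_MAX after the object was last touched
    return LRU_CLOCK_MAX - obj_lru + server_lruclock

print(estimate_idle(100, 40))               # 60
print(estimate_idle(5, LRU_CLOCK_MAX - 5))  # 10 (clock wrapped)
```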

The LRU eviction mechanism works as follows: a handful of key-value pairs are sampled at random from the dataset, and the one that has been idle the longest (the one with the oldest lru value) is evicted. Note, therefore, that Redis does not guarantee evicting the least recently used key-value pair across the entire dataset, only the least recently used among the randomly sampled candidates.
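The sampling idea can be sketched in a few lines of Python (a toy model, not Redis source; the maxmemory_samples parameter mirrors the Redis config option of the same name):

```python
import random

def sample_lru_victim(lru_clock, maxmemory_samples=3, rng=random):
    """Pick an eviction victim: the oldest key among a random sample.

    lru_clock maps key -> last-access tick (bigger means more recent).
    """
    keys = list(lru_clock)
    samples = rng.sample(keys, min(maxmemory_samples, len(keys)))
    return min(samples, key=lambda k: lru_clock[k])  # longest-idle sample

lru = {"k1": 50, "k2": 10, "k3": 90, "k4": 30}
# Sampling every key degenerates to true LRU: "k2" is the oldest.
print(sample_lru_victim(lru, maxmemory_samples=len(lru)))  # k2
# With a smaller sample, the victim may not be the global LRU key.
print(sample_lru_victim(lru, maxmemory_samples=2, rng=random.Random(0)))
```

Raising the sample size trades CPU time for a closer approximation of true LRU.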

// lru counter kept in redisServer
struct redisServer {
    ...
    unsigned lruclock:22; /* Clock incrementing every minute, for LRU */
    ...
};

// every redis object keeps its own lru field
#define REDIS_LRU_CLOCK_MAX ((1<<21)-1) /* Max value of obj->lru */

// excerpt from the end of freeMemoryIfNeeded(): the victim key has just been deleted
                decrRefCount(keyobj);
                keys_freed++;

                /* When the memory to free starts to be big enough, we may
                 * start spending so much time here that is impossible to
                 * deliver data to the slaves fast enough, so we force the
                 * transmission here inside the loop. */
                if (slaves) flushSlavesOutputBuffers();
            }
        }
        /* failed to free space: the memory used by redis is still over the limit */
        if (!keys_freed) return REDIS_ERR; /* nothing to free... */
    }
    return REDIS_OK;
}

Applicable scenarios

Let's look at the scenarios in which each policy is a good fit:

allkeys-lru: if the application's accesses to the cache follow a power-law distribution (that is, some keys are relatively hot), or if we are not sure what the access distribution looks like, allkeys-lru is the policy to choose.

allkeys-random: if the application accesses every cached key with roughly equal probability, this policy is appropriate.

volatile-ttl: this policy lets the application hint to Redis which keys are good eviction candidates, by giving them shorter TTLs.

In addition, the volatile-lru and volatile-random policies suit the case where a single Redis instance serves both as a cache and as persistent storage, although the same effect can be achieved with two separate Redis instances. It is worth noting that setting an expiration time on a key actually costs extra memory, so to use memory more efficiently we recommend the allkeys-lru policy.

The above is a detailed explanation of Redis's data eviction policies; the finer differences between them are best understood through actual use.
