How the data eviction policies are implemented in the Redis cache

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains in detail how the data eviction policies in the Redis cache work. I hope you take something useful away from it.

Redis lets you cap its memory usage by setting the maxmemory value in redis.conf, which enables memory eviction.

Setting a maximum memory size helps ensure that Redis continues to provide robust service even under memory pressure.
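For reference, a minimal redis.conf fragment that enables eviction might look like this (the values are illustrative, not recommendations):

```conf
# Cap Redis memory usage; eviction starts once this limit is reached.
maxmemory 256mb

# Which eviction policy to apply when maxmemory is hit.
maxmemory-policy allkeys-lru
```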

When the size of the Redis in-memory dataset grows to the configured limit, the eviction policy kicks in. Redis provides six eviction policies, selected through maxmemory-policy:

volatile-lru: evict the least recently used key from the set of keys with an expiration time (server.db[i].expires).

volatile-ttl: evict the key closest to expiring from the set of keys with an expiration time (server.db[i].expires).

volatile-random: evict a random key from the set of keys with an expiration time (server.db[i].expires).

allkeys-lru: evict the least recently used key from the whole keyspace (server.db[i].dict).

allkeys-random: evict a random key from the whole keyspace (server.db[i].dict).

noeviction: never evict; once the limit is reached, write commands fail with an error instead.
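To make the difference between the policy families concrete, here is a small Python sketch (my own illustration, not Redis source code) that picks an eviction victim from a toy keyspace under each policy:

```python
import random
import time

def pick_victim(keys, policy, now=None):
    """Pick a key to evict from `keys`, a dict mapping key ->
    {'lru': last_access_time, 'expire': absolute_expiry_or_None}.
    Mirrors the candidate pools of the Redis maxmemory policies."""
    now = now if now is not None else time.time()
    if policy == 'noeviction':
        return None  # never evict; writes would fail instead
    if policy.startswith('volatile'):
        # only keys that have an expiration time are candidates
        pool = {k: v for k, v in keys.items() if v['expire'] is not None}
    else:  # allkeys-*
        pool = dict(keys)
    if not pool:
        return None
    if policy.endswith('random'):
        return random.choice(list(pool))
    if policy == 'volatile-ttl':
        # the key with the smallest remaining time-to-live goes first
        return min(pool, key=lambda k: pool[k]['expire'] - now)
    # *-lru: the least recently accessed key goes first
    return min(pool, key=lambda k: pool[k]['lru'])

keys = {
    'a': {'lru': 100, 'expire': None},   # no TTL: only allkeys-* may evict it
    'b': {'lru': 200, 'expire': 1000},
    'c': {'lru': 50,  'expire': 5000},
}
print(pick_victim(keys, 'allkeys-lru', now=500))   # 'c' (oldest lru overall)
print(pick_victim(keys, 'volatile-ttl', now=500))  # 'b' (closest expiry)
print(pick_victim(keys, 'volatile-lru', now=500))  # 'c' (oldest lru with a TTL)
```

Note how key 'a' can never be chosen by a volatile-* policy, which is exactly why those policies can fail to free memory when no keys carry a TTL.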

Once Redis decides to evict a key-value pair, it deletes the data and propagates the deletion as a data-change event both locally (AOF persistence) and to its slaves (master-slave replication).

The LRU eviction mechanism

The LRU counter server.lruclock is kept in the server state and updated periodically by the Redis timer serverCron(); its value is derived from server.unixtime.

In addition, as struct redisObject shows, each Redis object carries its own lru field. As you would expect, redisObject.lru is updated every time the object is accessed.

The LRU eviction mechanism works like this: a few key-value pairs are sampled at random from the dataset, and the pair with the oldest lru value is evicted. So Redis does not guarantee that it evicts the globally least recently used key-value pair; it only evicts the least recently used pair among a small random sample.
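The sampling behaviour described above can be sketched in a few lines of Python (an approximation of what Redis does; the sample_size parameter plays the role of Redis's maxmemory-samples setting):

```python
import random

def approx_lru_victim(keys, sample_size=3, rng=random):
    """Approximate LRU: sample a few keys at random and evict the one
    with the oldest access time, instead of scanning the whole
    keyspace for the true LRU key."""
    sample = rng.sample(list(keys), min(sample_size, len(keys)))
    return min(sample, key=lambda k: keys[k])  # smallest lru = oldest access

# keys mapped to a fake lru clock value (higher = more recently used)
keys = {'k%d' % i: i for i in range(10)}
victim = approx_lru_victim(keys, sample_size=3)
print(victim)  # 'k0' only if the sample happened to include it
```

A larger sample size makes the choice closer to true LRU at the cost of more work per eviction; sampling the entire keyspace would recover exact LRU.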

The relevant pieces of the Redis source (abridged and reconstructed here; elided parts are marked with "..."):

```c
/* redisServer keeps the LRU clock */
struct redisServer {
    ...
    unsigned lruclock:22; /* Clock incrementing every minute, for LRU */
    ...
};

/* every redis object keeps its own lru field */
#define REDIS_LRU_CLOCK_MAX ((1<<21)-1) /* Max value of obj->lru */
typedef struct redisObject {
    ...
    unsigned lru:22; /* lru time (relative to server.lruclock) */
    ...
} robj;

/* abridged from freeMemoryIfNeeded(): the eviction loop */
        ...
        decrRefCount(keyobj);
        keys_freed++;
        /* When the memory to free starts to be big enough, we may
         * start spending so much time here that is impossible to
         * deliver data to the slaves fast enough, so we force the
         * transmission here inside the loop. */
        if (slaves) flushSlavesOutputBuffers();
    }
    /* no key could be freed and memory use is still excessive: fail */
    if (!keys_freed) return REDIS_ERR; /* nothing to free... */
}
return REDIS_OK;
```

Applicable scenarios

Let's look at the scenarios where each policy fits:

1. allkeys-lru: if our application's cache accesses follow a power-law distribution (that is, some data is relatively hot), or if we are not sure what the access distribution looks like, allkeys-lru is the policy to choose.

2. allkeys-random: if our application accesses all cache keys with roughly equal probability, this policy works well.

3. volatile-ttl: this policy lets us hint to Redis, through the TTLs we set, which keys are better candidates for eviction.

In addition, the volatile-lru and volatile-random policies suit the case where a single Redis instance serves as both a cache and a persistent store, although the same effect can be achieved with two separate Redis instances. It is worth noting that setting an expiration time on a key actually consumes extra memory, so to use memory more efficiently, the allkeys-lru policy is recommended.

That covers how the data eviction policies in the Redis cache work. I hope you found it useful; thank you for reading.
