

What are the Redis cache eviction policies?

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains the eviction policies of the Redis cache. The content is straightforward and easy to follow.

Redis (Remote Dictionary Server) is an open-source, in-memory key-value store written in ANSI C. It supports network access, can run purely in memory or with persistence, and provides client APIs for many languages.

It has the following characteristics:

Operates in memory, giving high performance.

Supports distribution and can, in theory, be scaled out without limit.

Uses a key-value storage structure, enabling efficient queries.

Provides client APIs in many development languages, making it easy to integrate with existing business systems.

It is commonly used in business systems as a distributed cache, for centralized session storage, for distributed locking, and in other scenarios.

Whether a cache is local or distributed, it keeps data in memory to guarantee high performance. Because memory is limited and costly, when the stored data exceeds the cache capacity, some cached data must be evicted. Common eviction strategies include FIFO (evict the oldest data), LRU (evict the least recently used data), and LFU (evict the least frequently used data).
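The LRU strategy mentioned above can be sketched in a few lines of Python. This is a toy illustration of the general idea, not how Redis itself implements it; the LRUCache class is invented for this example:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # iteration order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now most recently used
cache.put("c", 3)      # evicts "b", the least recently used key
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

A real implementation keeps this bookkeeping O(1) per access, which is why a linked-hash structure is the classic choice.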

When Redis cache eviction is triggered

Swapping is not acceptable for Redis in a production environment, so its maximum memory usage is generally capped. Redis provides the configuration parameter maxmemory to specify this limit.

The following configurations are legal:

maxmemory 1000kb
maxmemory 100mb
maxmemory 1gb
maxmemory 0   # means no limit is imposed; generally not used in production


The 8 Redis eviction policies

volatile-lru: evicts the least recently used keys among those with an expiration set.

allkeys-lru: evicts the least recently used keys across all keys; this is the most widely used policy.

volatile-random: randomly evicts keys among those with an expiration set.

allkeys-random: randomly evicts keys across all keys.

volatile-ttl: among keys with an expiration set, evicts those with the shortest remaining time to live first.

noeviction (the default): nothing is evicted; when the memory limit is reached, write commands return an error.

volatile-lfu: evicts the least frequently used keys among those with an expiration set.

allkeys-lfu: evicts the least frequently used keys across all keys.
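Putting the two settings together, a policy is selected alongside the memory cap in redis.conf; the 100mb value here is only an illustration:

```
maxmemory 100mb
maxmemory-policy allkeys-lru
```

The policy can also be changed at runtime with CONFIG SET maxmemory-policy allkeys-lru.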

The LRU and LFU algorithms in Redis

LRU algorithm

The Redis LRU algorithm is not an exact implementation. This means Redis cannot always pick the best eviction candidate, that is, the key that has gone unaccessed the longest. Instead, it runs an approximation of LRU: it samples a small number of keys and evicts the best candidate among them (the one with the oldest access time).

However, since Redis 3.0 the algorithm has been improved to keep a pool of good eviction candidates across samplings. This improved its performance, bringing it closer to the behavior of a true LRU algorithm.

An important property of the Redis LRU algorithm is that you can tune its precision by changing the number of keys sampled on each eviction. This parameter is controlled by the following configuration directive:

maxmemory-samples 5

Redis does not use a true LRU implementation because it would cost more memory. For applications using Redis, however, the approximation is virtually equivalent. What follows describes a comparison, shown as a chart in the original Redis documentation, of the LRU approximation used by Redis against true LRU.
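The sampling idea can be sketched as follows. This is a toy model, not Redis's actual C implementation; the store, last_access, and approx_lru_evict names are invented for this example:

```python
import random

def approx_lru_evict(store, last_access, samples=5):
    """Pick `samples` random keys and evict the one least recently
    accessed among them (an approximation of true LRU)."""
    candidates = random.sample(list(store), min(samples, len(store)))
    victim = min(candidates, key=lambda k: last_access[k])
    del store[victim]
    del last_access[victim]
    return victim

# Toy usage: a logical clock stands in for real access timestamps.
store = {"a": 1, "b": 2, "c": 3, "d": 4}
last_access = {"a": 10, "b": 40, "c": 30, "d": 20}
# With samples >= number of keys, the approximation becomes exact LRU:
evicted = approx_lru_evict(store, last_access, samples=10)
print(evicted)  # "a" (the oldest access time)
```

With a small `samples` value the victim is only probably the oldest key, which is exactly the trade-off the maxmemory-samples directive controls.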

The test that generated the chart filled a Redis server with a given number of keys, accessed in order from first to last, so the first key was the best eviction candidate under LRU. Then 50% more keys were added, forcing half of the old keys to be evicted.

Three kinds of points are visible in the chart, forming three distinct bands:

The light gray band is the objects that were evicted.

The gray band is the objects that were not evicted.

The green band is the objects that were added.

Under a theoretical LRU implementation, we would expect exactly the first half of the old keys to be evicted; the Redis LRU algorithm only evicts the older keys probabilistically.

LRU is in any case only a model for predicting how likely a given key is to be accessed in the future. Moreover, if your data access pattern closely follows a power law, most accesses will fall within the set of keys that the LRU approximation handles well.

Disadvantage: a burst of accesses to a large amount of cold data within a short period makes that data look "hot" to LRU, which can crowd out genuinely hot keys.

LFU algorithm

Starting with Redis 4.0, a new least frequently used (LFU) eviction mode is available. This mode may work better in some cases (yielding a better hit/miss ratio), because with LFU Redis tries to track how often items are accessed, so rarely used items are evicted while frequently used items have a better chance of staying in memory.

Consider that under LRU, an item that was recently accessed but will in fact almost never be requested again does not expire, so the risk is evicting keys that have a higher chance of being requested in the future. LFU does not have this problem, and should generally adapt better to different access patterns.

To use LFU mode, configure one of the following policies:

volatile-lfu: uses approximate LFU eviction among keys with an expiration set.

allkeys-lfu: uses approximate LFU eviction across all keys.

LFU works much like LRU: it uses a probabilistic counter, called a Morris counter, to estimate object access frequency with only a few bits per object, combined with a decay period so that counters decrease over time. At some point we no longer want to consider a key frequently accessed, even if it was in the past, so the algorithm can adapt to shifts in the access pattern.

Candidate keys are sampled in the same way as in LRU (as described in the previous section) in order to select a key for eviction.

Unlike LRU, however, LFU has tunable parameters: for example, how fast should an item's rank decay if it is no longer accessed? The range of the Morris counter can also be adjusted to better adapt the algorithm to specific use cases.
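The probabilistic counter and decay described above can be sketched as follows. This is a simplified model of the scheme documented in the example redis.conf, not the exact Redis source; lfu_incr and lfu_decay are hypothetical names:

```python
import random

LFU_INIT_VAL = 5  # new keys start above zero so they are not evicted at once

def lfu_incr(counter, log_factor=10):
    """Morris-style probabilistic increment: the higher the counter already
    is, the less likely it is to grow, so 8 bits (0-255) can represent
    millions of hits."""
    if counter >= 255:
        return 255
    base = max(counter - LFU_INIT_VAL, 0)
    if random.random() < 1.0 / (base * log_factor + 1):
        counter += 1
    return counter

def lfu_decay(counter, idle_minutes, decay_time=1):
    """Subtract one from the counter for every `decay_time` minutes the
    key has been idle, so stale 'hot' keys cool down over time."""
    if decay_time <= 0:
        return 0  # decay whenever scanned
    periods = idle_minutes // decay_time
    return max(counter - periods, 0)
```

Because a fresh counter has base 0, the first increment always succeeds; as the counter climbs, increments become exponentially rarer, which is what lets the counter saturate only around a million hits at the default log factor.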

By default, Redis 4.0 is configured to:

Saturate the counter at around one million requests.

Decay the counter once every minute.

These should be reasonable, experimentally tested values, but users may want to play with these settings to find the best values for their workload.

Instructions on how to tune these parameters can be found in the example redis.conf file shipped with the source distribution; in brief, they are:

lfu-log-factor 10
lfu-decay-time 1

The decay time is straightforward: it is the number of minutes that must elapse for the counter to be decayed, when a key is sampled and found to have been idle longer than that value. A special value of 0 means the counter is decayed every time it happens to be scanned, which is rarely useful.

The counter's logarithm factor changes how many hits are required to saturate the frequency counter, whose range is 0-255. The higher the factor, the more accesses are needed to reach the maximum; the lower the factor, the better the counter's resolution for low access counts, as the following table shows:

+--------+----------+-----------+-----------+---------+----------+
| factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
+--------+----------+-----------+-----------+---------+----------+
| 0      | 104      | 255       | 255       | 255     | 255      |
| 1      | 18       | 49        | 255       | 255     | 255      |
| 10     | 10       | 18        | 142       | 255     | 255      |
| 100    | 8        | 11        | 49        | 143     | 255      |
+--------+----------+-----------+-----------+---------+----------+

In short, LFU evicts the data accessed least often over the recent period, using the access count as its reference.

Disadvantages:

1. Recently added data is easily evicted, because its initial counter value is low.

2. If frequency is measured over, say, one hour, hot data steadily accessed 1000 times per hour every day could be evicted in favor of data accessed 1001 times during a single two-hour burst, and the evicted data may be critical.

Cache policy setting recommendation

Suggestion: once you understand the Redis eviction policies, set or update an expire time on keys wherever possible so that inactive old data is actively evicted; this helps improve query performance.
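As a toy illustration of this suggestion (pure Python, not Redis; the TTLCache class and its injectable clock are invented for this sketch), every entry carries an expire time and stale data is dropped on access rather than waiting for capacity-driven eviction:

```python
import time

class TTLCache:
    """Toy cache where every entry gets an expire time, so inactive old
    data is evicted lazily on access."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock       # injectable for deterministic testing
        self.data = {}           # key -> (value, expire_at)

    def set(self, key, value, ttl_seconds):
        self.data[key] = (value, self.clock() + ttl_seconds)

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, expire_at = entry
        if self.clock() >= expire_at:
            del self.data[key]   # lazily evict the expired entry
            return None
        return value

# Usage with a fake clock so the behavior is deterministic:
t = [0.0]
cache = TTLCache(clock=lambda: t[0])
cache.set("session", "abc", ttl_seconds=30)
print(cache.get("session"))  # abc
t[0] = 31.0
print(cache.get("session"))  # None
```

In Redis the same effect is achieved with an expire time on the key (e.g. the EXPIRE command or the EX option of SET).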

That covers the Redis cache eviction policies; the specific settings still need to be validated in practice against your own workload.
