A Detailed Explanation of the MEMORY Command in Redis 4.0

This article explains the MEMORY command introduced in Redis 4.0 and walks through each of its subcommands with practical examples.

Preface

Before 4.0, the only way to inspect redis memory usage was the info memory command, which exposes just a handful of basic metrics and makes it hard to get a global picture. Since 4.0, redis provides the MEMORY command, which makes everything much easier.
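For comparison, a trimmed info memory reply looks roughly like this; the field values below are purely illustrative and are not taken from the instance used later in this article:

127.0.0.1:6379> info memory
# Memory
used_memory:1048576
used_memory_human:1.00M
used_memory_rss:1153024
used_memory_peak:2097152
used_memory_peak_human:2.00M
mem_fragmentation_ratio:1.10
mem_allocator:jemalloc-4.0.3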

MEMORY command

The MEMORY command has a total of 5 subcommands, which can be viewed through MEMORY HELP:

127.0.0.1:6379> memory help
1) "MEMORY DOCTOR - Outputs memory problems report"
2) "MEMORY USAGE <key> [SAMPLES <count>] - Estimate memory usage of key"
3) "MEMORY STATS - Show memory usage details"
4) "MEMORY PURGE - Ask the allocator to release memory"
5) "MEMORY MALLOC-STATS - Show allocator internal stats"

Next, let's start with MEMORY STATS and introduce the functions of each subcommand one by one.

1. MEMORY STATS

First of all, we need to make it clear that the memory usage of redis contains not only all the key-value data, but also the meta-information that describes the key-value, as well as the consumption of many management functions, such as persistence and master-slave replication. Through MEMORY STATS, we can better understand the memory usage of redis.

To make the breakdown easier to follow, we start a redis with a slave attached, write some random data into it (part of it with expiration times), and then execute the MEMORY STATS command:

127.0.0.1:6379> memory stats
1) "peak.allocated"
2) (integer) 423995952
3) "total.allocated"
4) (integer) 11130320
5) "startup.allocated"
6) (integer) 9942928
7) "replication.backlog"
8) (integer) 1048576
9) "clients.slaves"
10) (integer) 16858
11) "clients.normal"
12) (integer) 49630
13) "aof.buffer"
14) (integer) 3253
15) "db.0"
16) 1) "overhead.hashtable.main"
    2) (integer) 5808
    3) "overhead.hashtable.expires"
    4) (integer) 104
17) "overhead.total"
18) (integer) 11063904
19) "keys.count"
20) (integer) 94
21) "keys.bytes-per-key"
22) (integer) 12631
23) "dataset.bytes"
24) (integer) 66416
25) "dataset.percentage"
26) "5.5934348106384277"
27) "peak.percentage"
28) "2.6251003742218018"
29) "fragmentation"
30) "1.1039986610412598"

There are 15 items in total, and all memory values are in bytes. Let's go through them one by one:

1. peak.allocated

The peak amount of memory redis has used since it started.

2. total.allocated

The total amount of memory currently used.

3. startup.allocated

The memory consumed by redis at initialization. Many readers wonder why redis already occupies dozens of MB of memory right after startup, before any data has been written.

This is because, besides key-value data, redis itself consumes memory for things such as shared variables, master-slave replication, persistence, and db meta-information, which are described in more detail below.

4. replication.backlog

The memory used by the master-slave replication backlog (1MB by default, as seen in the output above; configurable via repl-backlog-size). The backlog only comes into play when a disconnected slave reconnects and asks for a partial resynchronization; ordinary replication does not depend on it.
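A minimal sketch of the relevant redis.conf settings (the values shown are the stock defaults, not something taken from the article's test instance):

# redis.conf (illustrative)
repl-backlog-size 1mb    # size of the replication backlog buffer
repl-backlog-ttl 3600    # seconds to keep the backlog after the last slave disconnects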

5. clients.slaves

The read and write buffers of every slave connection, i.e. the memory used by each slave client's output-buffer (output buffer) and querybuf (input buffer). A brief recap of master-slave replication:

Within an event loop, redis appends every change made to the database to each slave's output-buffer and sends it to the slave when the event loop ends.

Some replication delay between master and slave is therefore inevitable. If the connection between them drops, doing a full resynchronization on reconnect just to guarantee consistency would clearly be inefficient; the backlog exists for exactly this purpose. The master keeps a window of recent incremental replication data in the backlog, and if the slave's offset still falls inside it, only the increments after that offset need to be sent, avoiding the cost of a full sync.

6. clients.normal

The read and write buffers of all clients other than slaves.

Sometimes clients do not read their replies in time, so the output-buffer piles up and consumes too much memory. This can be capped with the client-output-buffer-limit setting: once the threshold is exceeded, redis actively disconnects the client to release the memory, and the same applies to slaves.
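For reference, the stock redis.conf limits look like this (the three numbers are hard limit, soft limit, and soft-limit seconds); treat this as a sketch of the defaults rather than a tuning recommendation:

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60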

7. aof.buffer

This item is the sum of the buffer used by AOF persistence and the buffer that accumulates during an AOF rewrite. If appendonly is turned off, it is always 0:

Redis does not persist each write immediately; it buffers all writes made within an event loop and flushes them to disk after the event loop ends.

The buffer that caches incremental data during an AOF rewrite is only used while a rewrite is in progress. For the rewrite mechanism, see the earlier article "redis4.0 uses pipes to optimize aofrewrite".

As you can see, the size of this item is proportional to the write traffic.
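The behaviour described above is controlled by these redis.conf directives (shown with commonly used values as a sketch):

appendonly yes          # enable AOF persistence (aof.buffer stays 0 when this is no)
appendfsync everysec    # how often the AOF buffer is flushed to disk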

8. db.0

The memory used by the meta-information of each db. Only db0 is in use here, so only db0 is printed; other dbs would show corresponding entries when used.

The meta-information of a db consists of the following three parts:

A) Each redis db is a hash table, so the first part is the memory used by the hash table itself (redis resolves collisions by chaining, and the table stores the head pointer of each bucket's linked list).

B) Each key-value pair has a dictEntry that records the mapping between them, so the meta-information includes the memory used by all dictEntry structures in the db.

C) redis uses a redisObject to describe each value's data type (string, list, hash, set, zset), so the space occupied by redisObject is also counted in the meta-information.

overhead.hashtable.main:

The main dictionary's meta-information is the sum of the three parts above:

hashtable + dictEntry + redisObject

overhead.hashtable.expires:

Key expiration times are not stored together with the values; redis keeps them in a separate hash table. Since the expires table only records key-expire pairs, no redisObject is needed to describe a value, so that part of the meta-information is absent:

hashtable + dictEntry

9. overhead.total

The sum of items 3 through 8: startup.allocated + replication.backlog + clients.slaves + clients.normal + aof.buffer + db.x

10. dataset.bytes

The memory used by the data itself, i.e. total.allocated - overhead.total: the current total memory usage minus the management overhead.

11. dataset.percentage

The share of memory taken up by the data. Note that the denominator is not total.allocated; the memory allocated at startup is excluded first:

dataset.percentage = dataset.bytes / (total.allocated - startup.allocated)
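Plugging the sample output above into the last two formulas as a sanity check:

dataset.bytes      = 11130320 - 11063904 = 66416
dataset.percentage = 66416 / (11130320 - 9942928) ≈ 5.59%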

12. keys.count

The total number of keys currently stored in redis.

13. keys.bytes-per-key

Intuitively, the average memory per key should be dataset.bytes divided by keys.count, but redis does not compute it that way; instead it spreads the management overhead evenly across the keys as well:

keys.bytes-per-key = (total.allocated - startup.allocated) / keys.count
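Checking against the sample output again:

keys.bytes-per-key = (11130320 - 9942928) / 94 = 1187392 / 94 ≈ 12631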

14. peak.percentage

The ratio of current memory usage (total.allocated) to the historical peak (peak.allocated).
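With the sample numbers:

peak.percentage = 11130320 / 423995952 ≈ 2.63%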

15. fragmentation

The memory fragmentation ratio, i.e. the resident set size (RSS) of the redis process relative to the memory redis has actually allocated.

2. MEMORY USAGE

Every redis user would love to know the memory footprint of each key-value pair like the back of their hand, but before 4.0 redis offered no straightforward way to estimate it. Since 4.0, the MEMORY USAGE subcommand provides exactly that.

First, the syntax: MEMORY USAGE key [SAMPLES count]

There are few parameters, and the purpose is clear from the name: estimate the memory usage of the specified key. SAMPLES is optional and defaults to 5. Take a hash as an example to see how it works:

First, much like overhead.hashtable.main in the previous section, the meta-information memory of the hash is calculated, including the size of its hash table and the footprint of all its dictEntry structures.

Unlike overhead.hashtable.main, the field and value in each dictEntry are plain strings, so there is no extra redisObject overhead. When estimating the size of the actual data, redis does not traverse every element; it samples instead: it randomly picks SAMPLES field-value pairs, computes their average memory footprint, and multiplies it by the total number of pairs. Computing the exact footprint would require traversing all elements, which can block redis when there are many of them, so choose SAMPLES sensibly.

Other data structures are calculated in a similar way to hash, so I won't repeat them here.
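For reference, some typical invocations look like this; the key name myhash is hypothetical, and the command simply returns the estimated number of bytes:

memory usage myhash                # default: 5 sampled elements
memory usage myhash samples 0      # SAMPLES 0 means sample every element (exact, but may be slow on big keys)
memory usage myhash samples 100    # a larger sample gives a better estimate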

3. MEMORY DOCTOR

This subcommand reports the redis author's advice on memory usage, and the diagnosis differs depending on the state the instance is running in:

First, the cases where nothing is wrong.

Running in good condition:

Hi Sam, I can't find any memory issue in your instance. I can only account for what occurs on this base.

Redis has a small amount of data, so there are no recommendations for the time being:

Hi Sam, this instance is empty or is using very little memory, my issues detector can't be used in these conditions. Please, leave for your mission on Earth and fill it with some data. The new Sam and I will be back to our programming as soon as I finished rebooting.

The following results deserve attention.

Peak memory usage has exceeded 1.5 times the current usage, so the reported fragmentation ratio may look high:

Peak memory: In the past this instance used more than 150% the memory that is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Redis instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Redis instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shutdown and restart the instance.

The memory fragmentation ratio exceeds 1.4:

High fragmentation: This instance has a memory fragmentation greater than 1.4 (this means that the Resident Set Size of the Redis process is much larger than the sum of the logical allocations Redis performed). This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc.

The average memory per slave buffer exceeds 10MB, which may be due to high master write traffic, insufficient network bandwidth for master-slave synchronization, or slow slave processing:

Big slave buffers: The slave output buffers in this instance are greater than 10MB for each slave (on average). This likely means that there is some slave instance that is struggling receiving data, either because it is too slow or because of networking issues. As a result, data piles on the master output buffers. Please try to identify what slave is not receiving data correctly and why. You can use the INFO output in order to check the slaves delays and the CLIENT LIST command to check the output buffers of each slave.

The average output buffer of normal clients exceeds 200KB, which may be due to improper use of pipelining or Pub/Sub clients failing to consume messages in time:

Big client buffers: The clients output buffers in this instance are greater than 200K per client (on average). This may result from different causes, like Pub/Sub clients subscribed to channels bot not receiving data fast enough, so that data piles on the Redis instance output buffer, or clients sending commands with large replies or very large sequences of commands in the same pipeline. Please use the CLIENT LIST command in order to investigate the issue if it causes problems in your instance, or to understand better why certain clients are using a big amount of memory.

4. MEMORY MALLOC-STATS

Prints the memory allocator's internal statistics; it is only useful when jemalloc is in use.

5. MEMORY PURGE

Asks the allocator to release unused memory back to the operating system; this, too, only has an effect with jemalloc.
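A quick check from redis-cli; OK is the normal reply when jemalloc is the allocator:

127.0.0.1:6379> memory purge
OK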

That concludes this detailed look at the MEMORY command in Redis 4.0. Pairing the theory above with hands-on practice is the best way to learn, so go and try these commands on your own instance.
