
Memcache memory allocation policy and performance/usage status checks


I have been using Memcache for a long time, but I have never been completely clear about its internals, for example how it uses memory and what its status looks like after it has been running for a while. I look these things up and then forget them, so this article organizes them for my own future reference. Installation and basic operation are not covered here; interested readers can check the earlier articles or search for them.

1: Parameters

memcached -h (memcached 1.4.14)

-p    TCP port, default 11211
-U    UDP port, default 11211; 0 disables UDP
-s    UNIX socket
-a    access mask for the UNIX socket, in octal (default: 0700)
-l    listening IP address; can be left unset when only the local machine connects
-d    run as a daemon
-u    run as the specified user; required when starting as root
-m    maximum memory usage in MB, default 64 MB
-M    forbid the LRU policy: return an error when memory is exhausted instead of deleting items
-c    maximum simultaneous connections, default 1024
-v    verbose (print errors/warnings while in event loop)
-vv   very verbose (also print client commands/responses)
-vvv  extremely verbose (also print internal state transitions)
-h    help information
-i    print the memcached and libevent licenses
-P    save the PID to the specified file
-f    growth factor, default 1.25
-n    initial chunk size = key + suffix + value + 32-byte item structure, default 48 bytes
-L    enable large memory pages; can reduce memory waste and improve performance
-t    number of threads, default 4; since memcached uses NIO, more threads do not help much
-R    maximum requests per event for each connection, default 20
-C    disable the CAS command (disables version counting, reducing overhead)
-b    set the backlog queue limit (default: 1024)
-B    binding protocol: one of ascii, binary or auto (default)
-I    size of each slab page, default 1 MB, minimum 1 KB, maximum 128 MB

Pay particular attention to the parameters highlighted above. Here is an example of a normal startup:

Start:

/usr/bin/memcached -m 64 -p 11212 -u nobody -c 2048 -f 1024 -d -l 10.211.55.9

Connect:

telnet 10.211.55.9 11212
Trying 10.211.55.9...
Connected to 10.211.55.9.
Escape character is '^]'.

All of the effective parameters can be viewed with the command: stats settings
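As a small illustration (not part of the original article), the following Python sketch sends stats settings over memcached's plain-text protocol and prints every setting it reports. The host and port are the ones from the startup example above, and the helper name memcached_stats is invented for this sketch.

import socket

def memcached_stats(host="10.211.55.9", port=11212, command="stats settings"):
    # open a plain TCP connection and send one stats command
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(command.encode() + b"\r\n")
        buf = b""
        while not buf.endswith(b"END\r\n"):   # stats replies are terminated by END
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
    stats = {}
    for line in buf.decode().splitlines():
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats

if __name__ == "__main__":
    for name, value in memcached_stats().items():
        print(name, value)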

2: Understanding the memory storage mechanism of memcached

By default, Memcached uses a mechanism called Slab Allocator to allocate and manage memory. Before this mechanism existed, memory was allocated by simply calling malloc and free for every record. That approach leads to memory fragmentation, increases the burden on the operating system's memory manager and, in the worst case, makes the operating system slower than the memcached process itself. Slab Allocator was created to solve this problem.

The basic principle of Slab Allocator is to allocate memory in units of pages of a predetermined size (1 MB by default, configurable with the -I parameter at startup), split each page into blocks of a fixed size (chunks), and group blocks of the same size into slab classes. When memcached needs memory, it carves out a new page and assigns it to the slab class that needs it. Once a page has been assigned, it is never reclaimed or reassigned until the process is restarted, which is how the memory fragmentation problem is avoided.

Page

The memory space allocated to a slab, 1 MB by default. After being assigned to a slab, it is split into chunks according to that slab's chunk size.

Chunk

The memory space used to cache records.

Slab Class

A group of chunks of a specific size.

Memcached does not put data of all sizes together. Instead, it pre-divides the data space into a series of slabs, each of which is only responsible for storing data within a certain size range. Based on the size of the incoming data, memcached chooses the slab that fits it best. Each slab keeps a list of its free chunks; memcached selects a chunk from this list and caches the data in it.

As shown in the figure, each slab only stores data that is larger than the maximum size of the previous slab and less than or equal to its own maximum size. For example, a 100-byte string is stored in slab2 (which holds items between 88 and 112 bytes). Each slab is responsible for a different range. By default, the maximum size of each slab in memcached is 1.25 times that of the previous one; this ratio can be changed with the -f parameter.

Slab Allocator solves the memory fragmentation problem, but the new mechanism brings new problems of its own. A chunk is where memcached actually stores cached data, and its size is the maximum storage size of the slab that manages it; all chunks in a slab are the same size. As shown in the figure above, the chunks of slab1 are 88 bytes and those of slab2 are 112 bytes. Because memory is handed out in these fixed lengths, the allocated memory cannot be used completely: caching 100 bytes of data in a 128-byte chunk wastes the remaining 28 bytes. Note also that a chunk stores not only the value of the cached object but also its key, expire time, flags and other details, so setting a 1-byte item takes considerably more than 1 byte of space.

Memcached can control the size differences between slabs to some extent by specifying the growth factor at startup (the -f option). The default value is 1.25.
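To make the growth factor concrete, here is a rough Python sketch, an illustration under assumptions only: the 96-byte smallest chunk is borrowed from the memcached-tool output later in this article, and real chunk sizes also depend on -n, item overhead and alignment inside memcached, so the numbers will not match a live instance exactly. It lists the first slab-class sizes for a 1.25 factor and shows which class a 100-byte item would land in.

def chunk_sizes(base=96, factor=1.25, page_size=1024 * 1024):
    # grow the chunk size by the factor until it would exceed one page
    sizes, size = [], float(base)
    while size <= page_size:
        sizes.append(int(size))
        size *= factor
    return sizes

def slab_for(item_size, sizes):
    # the item goes into the smallest chunk that can hold it
    for class_id, size in enumerate(sizes, start=1):
        if item_size <= size:
            return class_id, size
    raise ValueError("item is larger than a page")

sizes = chunk_sizes()
print("first slab classes:", sizes[:6])
cls, chunk = slab_for(100, sizes)
print("a 100-byte item lands in class", cls, "with chunk size", chunk,
      "bytes, wasting", chunk - 100, "bytes")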

The process of allocating memory to a slab is as follows:

Memcached specifies its maximum memory usage with the -m parameter at startup, but this memory is not all used immediately; it is allocated to the slabs gradually. When a new piece of data is to be stored, memcached first selects an appropriate slab and checks whether that slab still has a free chunk. If it does, the data is stored directly; if not, memory must be requested. Slabs request memory in units of pages: regardless of the data size, a 1 MB page is assigned to the slab (the page is never reclaimed or reassigned and always belongs to that slab). After receiving the page, the slab splits its memory into chunks of the slab's chunk size, producing an array of chunks, and one of them is chosen to store the data. If no free page is available, LRU eviction takes place within that slab, not across the whole memcached instance.
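The flow just described can be summarized with a toy model. The code below is purely illustrative Python, not memcached internals (the Slab class, the PAGE constant and the two-page memory limit are all invented for the sketch): a store reuses a free chunk when one exists, otherwise claims a whole page while the -m limit still allows it, and otherwise evicts through LRU inside that same slab only.

PAGE = 1024 * 1024

class Slab:
    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.free_chunks = 0
        self.lru = []                          # stored keys, oldest first

    def store(self, key, mem):
        if self.free_chunks == 0:
            if mem["used"] + PAGE <= mem["limit"]:
                mem["used"] += PAGE            # the page belongs to this slab from now on
                self.free_chunks += PAGE // self.chunk_size
            else:
                evicted = self.lru.pop(0)      # LRU happens inside this slab only
                print("evicted", evicted)
                self.free_chunks += 1
        self.free_chunks -= 1
        self.lru.append(key)

mem = {"limit": 2 * PAGE, "used": 0}           # pretend -m allowed only two pages
slab = Slab(chunk_size=120)
for i in range(2 * (PAGE // 120) + 3):         # store a few more items than two pages hold
    slab.store("key%d" % i, mem)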

The above is an overview of memcache's memory allocation strategy; the rest of this article looks at how to inspect memcache usage.

3: Viewing memcache status and performance

① Hit ratio: the stats command

Interpret and analyze according to the following diagram

get_hits is the number of cache read hits, and get_misses is the number of read misses, that is, attempts to read cached data that does not exist. So:

Hit ratio = get_hits / (get_hits + get_misses)

The higher the hit ratio, the more effective the cache is. In practice, however, this is not necessarily the hit ratio of valid data: sometimes a get is issued only to check whether a key exists, and a miss is then the correct result. This ratio is also the cumulative value of all requests since memcached was started, so it cannot reflect what happened within a particular time window; more detailed values are needed to troubleshoot memcached performance problems. Still, a high hit ratio generally indicates that memcached is being used well, and a sudden drop in the hit ratio can reveal a large amount of cache loss.
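A quick, hedged sketch of computing that hit ratio from the raw stats output; the host, port and the fetch_stats helper name are assumptions, matching the example instance used throughout this article.

import re
import socket

def fetch_stats(host="10.211.55.9", port=11212):
    # run "stats" and return the STAT lines as a name -> value dict
    with socket.create_connection((host, port), timeout=3) as s:
        s.sendall(b"stats\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return dict(re.findall(r"STAT (\S+) (\S+)", data.decode()))

stats = fetch_stats()
hits, misses = int(stats["get_hits"]), int(stats["get_misses"])
total = hits + misses
print("hit ratio: {:.2%}".format(hits / total if total else 0.0))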

② Observing the items of each slab: the stats items command

Description of main parameters:

number            the total number of items currently stored in this slab class
age               the age, in seconds, of the oldest item stored in this slab class
evicted           the number of times an unexpired item had to be removed from the LRU
evicted_time      the number of seconds since the last eviction, i.e. the last time a cached item was removed; 0 means items are being evicted right now. Use this to judge how recently data was removed.
evicted_nonzero   the number of times an unexpired item whose expiration time was not set (effectively 30 days) had to be removed from the LRU
outofmemory       the number of times this slab class failed to allocate space for a new item; this means you are running with -M, or a store failed

Because of memcached's memory allocation policy, once its total memory reaches the configured maximum, all the pages the slabs can use are fixed; putting in any more data causes memcached to evict data using the LRU policy. The LRU policy is not applied across all slabs, only to the slab the new data should go into. For example, if a new piece of data should go into slab 3, LRU is applied only to slab 3. These eviction situations can be observed with stats items.

Pay attention to evicted_time: the occurrence of LRU eviction does not necessarily mean memcached is overloaded. Sometimes the expiration time is set to 0, so items stay in the cache for up to 30 days; if memory is full and data keeps coming in, such stale data that has not been used for a long time may be deleted. Convert evicted_time to a human-readable duration and judge whether it is acceptable. For example, if you expect data to be cached for 2 days and the last evicted item had been stored for more than 3 days, the pressure on that slab can be considered acceptable; but if the evicted item had only been cached for 20 seconds, the slab is already overloaded.

From the output above, you can see the status of slab1 on the current memcache instance:

There are 305816 items; the oldest has been stored for 21529 seconds; 95336839 unexpired items have been removed through LRU, of which 95312220 had no expiration time set; items are currently being evicted; and the instance was started without the -M parameter.
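Putting the note about evicted_time into practice, here is a hedged Python sketch that walks stats items and flags slab classes whose last eviction looks too recent. The two-day threshold and the reading of evicted_time follow the example in the note above; the host and port are assumptions.

import re
import socket
from collections import defaultdict

ACCEPTABLE_AGE = 2 * 24 * 3600          # example threshold: two days, as in the text

def stats_items(host="10.211.55.9", port=11212):
    # run "stats items" and return {slab_id: {field: value}}
    with socket.create_connection((host, port), timeout=3) as s:
        s.sendall(b"stats items\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    slabs = defaultdict(dict)
    for slab_id, name, value in re.findall(r"STAT items:(\d+):(\S+) (\d+)", data.decode()):
        slabs[int(slab_id)][name] = int(value)
    return slabs

for slab_id, item in sorted(stats_items().items()):
    if item.get("evicted", 0) and item.get("evicted_time", 0) < ACCEPTABLE_AGE:
        print("slab %d looks overloaded: %d evictions, evicted_time only %d s"
              % (slab_id, item["evicted"], item["evicted_time"]))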

③ Observing each slab: the stats slabs command

If stats items reveals an abnormal slab, you can use stats slabs to check whether there is a problem with that slab's memory allocation.

Description of main parameters:

chunk_size        the size of each chunk in the current slab
chunks_per_page   the number of chunks that fit into one page
total_pages       the total number of pages allocated to the current slab; a page is 1 MB by default, so this also gives the total size of the slab
total_chunks      the maximum number of chunks the current slab can store; should equal chunks_per_page * total_pages
used_chunks       the number of chunks already occupied
free_chunks       the number of chunks freed by expired data but not yet reused
free_chunks_end   the number of newly allocated chunks that have not yet been used

Note: total_pages is the total number of pages allocated to the current slab. If the default page size has not been changed, this value is also the total size, in MB, of the data the slab can cache. If eviction in this slab is severe, check whether its number of pages is too small. There is also the following identity:

total_chunks = used_chunks + free_chunks + free_chunks_end

In addition, stats slabs has two attributes:

active_slabs      the total number of active slabs
total_malloced    the actual total amount of memory allocated, in bytes; this determines how much memory memcached has really claimed. Once this value reaches the configured upper limit (compare it with maxbytes in stats settings), no new pages will be allocated.
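The identity given above (total_chunks = used_chunks + free_chunks + free_chunks_end) and the total_malloced ceiling can both be checked directly from the command output. Below is a hedged sketch; the host, port and the helper name query are assumptions, while the field names are the standard ones shown in this section.

import re
import socket
from collections import defaultdict

def query(cmd, host="10.211.55.9", port=11212):
    # send one command and return the raw reply text
    with socket.create_connection((host, port), timeout=3) as s:
        s.sendall(cmd.encode() + b"\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data.decode()

per_slab, totals = defaultdict(dict), {}
for line in query("stats slabs").splitlines():
    m = re.match(r"STAT (\d+):(\S+) (\d+)", line)       # per-slab lines like "STAT 1:chunk_size 96"
    if m:
        per_slab[int(m.group(1))][m.group(2)] = int(m.group(3))
        continue
    m = re.match(r"STAT (\S+) (\d+)", line)              # global lines like "STAT total_malloced ..."
    if m:
        totals[m.group(1)] = int(m.group(2))

for slab_id, s in sorted(per_slab.items()):
    expected = s["used_chunks"] + s["free_chunks"] + s["free_chunks_end"]
    if s["total_chunks"] != expected:
        print("slab %d: total_chunks %d != %d" % (slab_id, s["total_chunks"], expected))

maxbytes = int(dict(re.findall(r"STAT (\S+) (\S+)", query("stats settings")))["maxbytes"])
print("total_malloced %d of maxbytes %d" % (totals["total_malloced"], maxbytes))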

④ Statistics on object counts: the stats sizes command

Note: this command locks the service and pauses request processing while it runs. It shows the number of items of each fixed chunk size; for example, you can see how many items sit in slab1's 96-byte chunks.

⑤ Viewing and exporting keys: the stats cachedump command

When working with memcache, everyone wants to list the keys in the cache, similar to the keys * command in redis. You can do this in memcache too, but it takes two steps.

First, list the items:

stats items    -- command
...
STAT items:29:number 228
STAT items:29:age 34935
...
END

Second, fetch the keys by slab id. The id above is 29; add one more parameter for the number of entries to list, where 0 means list everything.

stats cachedump 29 0    -- command
ITEM 26457202 [49440 b; 1467262309 s]
...
ITEM 30017977 [45992 b; 1467425702 s]
ITEM 26634739 [48405 b; 1467437677 s]
END    -- 228 keys in total

get 26634739    -- fetch the value

How do you export the keys? This can be done with echo and nc:

Echo "stats cachedump 29 0" | nc 10.211.55.9 11212 > / home/zhoujy/memcache.log

Note when exporting: the cachedump command returns at most about 2 MB of data per call; this limit is hard-coded in the memcached source and cannot be changed without modifying the code before compiling.
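For completeness, here is a hedged Python sketch that performs the same export without nc: it collects the slab ids from stats items and dumps each one with stats cachedump into memcache_keys.log (the file name, host and port are assumptions; the roughly 2 MB per-call cap mentioned above still applies).

import re
import socket

HOST, PORT = "10.211.55.9", 11212        # the example instance used in this article

def command(text):
    # send one command and return the raw reply text
    with socket.create_connection((HOST, PORT), timeout=3) as s:
        s.sendall(text.encode() + b"\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data.decode()

slab_ids = sorted({int(i) for i in re.findall(r"STAT items:(\d+):", command("stats items"))})
with open("memcache_keys.log", "w") as out:
    for slab_id in slab_ids:
        dump = command("stats cachedump %d 0" % slab_id)
        for key, size, expire in re.findall(r"ITEM (\S+) \[(\d+) b; (\d+) s\]", dump):
            out.write("%d\t%s\t%s\t%s\n" % (slab_id, key, size, expire))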

⑥ Another monitoring tool: memcached-tool, a Perl script (memcache_tool.pl).


./memcached-tool 10.211.55.9:11212    -- command

# Item_Size Max_age Pages Count Full? Evicted Evict_Time OOM

1 96B 20157s 28 305816 yes 95431913 0 0

2 120B 16049s 40 349520 yes 117041737 0 0

3 152B 17574s 39 269022 yes 92679465 0 0

4 192B 18157s 43 234823 yes 78892650 0 0

5 240B 18722s 52 227188 yes 72908841 0 0

6 304B 17971s 73 251777 yes 85556469 0 0

7 384B 17881s 81 221130 yes 75596858 0 0

8 480B 17760s 70 152880 yes 53553607 0 0

9 600B 18167s 58 101326 yes 34647962 0 0

10 752B 18518s 52 72488 yes 24813707 0 0

11 944B 18903s 52 57720 yes 16707430 0 0

12 1.2K 20475s 44 38940 yes 11592923 0 0

13 1.4K 21220s 36 25488 yes 8232326 0 0

14 1.8K 22710s 35 19740 yes 6232766 0 0

15 2.3K 22027s 33 14883 yes 4952017 0 0

16 2.8K 23139s 33 11913 yes 3822663 0 0

17 3.5K 23495s 31 8928 yes 2817520 0 0

18 4.4K 22611s 29 6670 yes 2168871 0 0

19 5.5K 23652s 29 5336 yes 1636656 0 0

20 6.9K 21245s 26 3822 yes 1334189 0 0

21 8.7K 22794s 22 2596 yes 783620 0 0

22 10.8K 22443s 19 1786 yes 514953 0 0

23 13.6K 21385s 18 1350 yes 368016 0 0

24 16.9K 23782s 16 960 yes 254782 0 0

25 21.2K 23897s 14 672 yes 183793 0 0

26 26.5K 27847s 13 494 yes 117535 0 0

27 33.1K 27497s 14 420 yes 83966 0 0

28 41.4K 28246s 14 336 yes 63703 0 0

29 51.7K 33636s 12 228 yes 24239 0 0

Explanation:

Column        Meaning
#             slab class number
Item_Size     chunk size
Max_age       the lifetime of the oldest record in the LRU
Pages         the number of pages assigned to the slab
Count         the number of records (items/keys) stored in the slab
Full?         whether the slab has no free chunks left
Evicted       the number of times an unexpired item was removed from the LRU
Evict_Time    how long ago the last item was removed from the LRU; 0 means one was just removed
OOM           the number of out-of-memory failures (relevant when running with the -M parameter)
