In this issue, the editor looks at which processes eat memory (Cache) in Linux. The article analyzes the problem from a practical point of view; I hope you get something out of it after reading.
A frequently asked Linux question: why does a Linux system that is not running many programs still show so little free memory?
In fact, Linux manages memory differently from Windows: it uses as much memory as possible for caching in order to improve read and write performance, and this memory shows up as what is commonly called Cache Memory.
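Before digging deeper, it is worth confirming that the "missing" memory really is cache. A minimal check, assuming a distribution with procps-ng and a reasonably recent kernel (column names may differ slightly on very old systems):

$ free -h                 # "buff/cache" is memory the kernel hands back to applications on demand;
                          # "available" is the realistic amount still usable by new programs
$ grep -E '^(MemFree|MemAvailable|Buffers|Cached):' /proc/meminfo   # the same figures straight from the kernel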
Older articles explain that Linux's Cache grows large because Linux uses memory for caching whenever it can, but reclaiming that cache also costs resources. A good article on this topic is "Can the Cache in Linux memory really be reclaimed?" by Poor Zorro.
Although a very high Cache is usually not a problem in itself, we still want to find out which program is pushing the Cache so high, and that is not an easy task.
When allocating resources, the kernel allocates its objects through the Slab allocator in order to improve efficiency and resource utilization. Slab memory is used to cache kernel data structures, and this item often takes up a lot of memory. With the help of the slabtop tool, we can easily display the kernel slab cache information; the tool presents the contents of /proc/slabinfo in a more readable form.
# show how much cache each kind of object occupies on this machine
$ slabtop -s c

 Active / Total Objects (% used)    : 856448 / 873737 (98.0%)
 Active / Total Slabs (% used)      : 19737 / 19737 (100.0%)
 Active / Total Caches (% used)     : 67 / 89 (75.3%)
 Active / Total Size (% used)       : 141806.80K / 145931.33K (97.2%)
 Minimum / Average / Maximum Object : 0.01K / 0.17K / 8.00K

   OBJS  ACTIVE  USE  OBJ SIZE  SLABS  OBJ/SLAB  CACHE SIZE  NAME
 416949  416949 100%    0.10K  10691        39      42764K  buffer_head
   5616    5545  98%    2.00K    351        16      11232K  kmalloc-2048
   9114    8990  98%    1.02K    294        31       9408K  ext4_inode_cache
  12404   12404 100%    0.57K    443        28       7088K  radix_tree_node
  10800   10731  99%    0.58K    400        27       6400K  inode_cache
  31290   29649  94%    0.19K    745        42       5960K  dentry
   3552    3362  94%    1.00K    111        32       3552K  kmalloc-1024
   1100    1055  95%    2.84K    100        11       3200K  task_struct
   1649    1481  89%    1.88K     97        17       3104K  TCP
  27000   27000 100%    0.11K    750        36       3000K  sysfs_dir_cache
   1380    1269  91%    2.06K     92        15       2944K  sighand_cache
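slabtop is just a friendlier view of the kernel's own counters. If the tool is not installed, a rough sketch of the same check using only /proc (the paths are standard, but /proc/slabinfo usually requires root):

$ grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo   # total slab usage, split into reclaimable and unreclaimable
$ sudo head -20 /proc/slabinfo                               # raw per-cache object counts that slabtop summarizes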
Although the command above breaks down the Slab part of the cache, it still does not tell us which program's files occupy the Cache.
Solution 1: use pcstat
After some searching, I found that the linux-ftools toolset can show the Cache occupied by a given file; fincore is one of the tools it provides.
$ fincore [options] files...

  --pages=false     Do not print pages
  --summarize       When comparing multiple files, print a summary report
  --only-cached     Only print stats for files that are actually in cache.

(example taken from https://colobu.com/2017/03/07/what-is-in-linux-cached/)

root@xxxxxx:/var/lib/mysql/blogindex# fincore --pages=false --summarize --only-cached *
stats for CLUSTER_LOG_2010_05_21.MYI: file size=93840384, total pages=22910, cached pages=1, cached size=4096, cached perc=0.004365
stats for CLUSTER_LOG_2010_05_22.MYI: file size=417792, total pages=102, cached pages=1, cached size=4096, cached perc=0.980392
stats for CLUSTER_LOG_2010_05_23.MYI: file size=826368, total pages=201, cached pages=1, cached size=4096, cached perc=0.497512
stats for CLUSTER_LOG_2010_05_24.MYI: file size=192512, total pages=47, cached pages=1, cached size=4096, cached perc=2.127660
stats for CLUSTER_LOG_2010_06_03.MYI: file size=345088, total pages=84, cached pages=43, cached size=176128, cached perc=51.190476
stats for CLUSTER_LOG_2010_06_04.MYD: file size=1478552, total pages=360, cached pages=97, cached size=397312, cached perc=26.944444
stats for CLUSTER_LOG_2010_06_04.MYI: file size=205824, total pages=50, cached pages=29, cached size=118784, cached perc=58.000000
stats for COMMENT_CONTENT_2010_06_03.MYI: file size=100051968, total pages=24426, cached pages=10253, cached size=41996288, cached perc=41.975764
stats for COMMENT_CONTENT_2010_06_04.MYD: file size=716369644, total pages=174894, cached pages=79821, cached size=326946816, cached perc=45.639645
stats for COMMENT_CONTENT_2010_06_04.MYI: file size=56832000, total pages=13875, cached pages=5365, cached size=21975040, cached perc=38.666667
stats for FEED_CONTENT_2010_06_03.MYI: file size=1001518080, total pages=244511, cached pages=98975, cached size=405401600, cached perc=40.478751
stats for FEED_CONTENT_2010_06_04.MYD: file size=9206385684, total pages=2247652, cached pages=1018661, cached size=4172435456, cached perc=45.321117
stats for FEED_CONTENT_2010_06_04.MYI: file size=638005248, total pages=155763, cached pages=52912, cached size=216727552, cached perc=33.969556
stats for FEED_CONTENT_2010_06_04.frm: file size=9840, total pages=2, cached pages=3, cached size=12288, cached perc=150.000000
stats for PERMALINK_CONTENT_2010_06_03.MYI: file size=1035290624, total pages=252756, cached pages=108563, cached size=444674048, cached perc=42.951700
stats for PERMALINK_CONTENT_2010_06_04.MYD: file size=55619712720, total pages=13579031, cached pages=6590322, cached size=26993958912, cached perc=48.533080
stats for PERMALINK_CONTENT_2010_06_04.MYI: file size=659397632, total pages=160985, cached pages=54304, cached size=222429184, cached perc=33.732335
stats for PERMALINK_CONTENT_2010_06_04.frm: file size=10156, total pages=2, cached pages=3, cached size=12288, cached perc=150.000000
---
total cached size: 32847278080
fincore works by comparing the inode data of the specified file against the kernel's Page Cache table: if the Page Cache table contains entries for that inode, it adds up the size of the data blocks cached for it.
Because the kernel's Page Cache table stores only references to data blocks, keyed by the file's inode, and not file names, no tool can run once and report how all files use the cache. With linux-fincore you can only pass in a file name and learn whether that file is cached and, if so, how much of it. The problem is that you cannot guess which files are cached and which are not.
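To make the inode point concrete, here is a rough demonstration, not from the original article; it uses vmtouch (introduced in Solution 2 below) and a throwaway file path chosen only for this example. Because the cache is keyed by the inode, renaming the file does not change what is cached:

$ dd if=/dev/zero of=/tmp/pagecache-demo bs=1M count=8 2> /dev/null
$ cat /tmp/pagecache-demo > /dev/null             # read the file so its pages sit in the page cache
$ mv /tmp/pagecache-demo /tmp/pagecache-demo.new  # same inode, new name
$ vmtouch /tmp/pagecache-demo.new                 # still reported as fully resident
$ rm -f /tmp/pagecache-demo.new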
Shanker provides a script that works around this: find the processes using the most physical memory, list the files those processes have open, and then run fincore on those files to see their cache usage.
In most cases this method can find the programs and processes that take up a lot of Cache. The script reads as follows:
#!/bin/bash
# Author: Shanker
# Time: 2016-06-08

# set -e
# set -u

# you have to install linux-fincore
if [ ! -f /usr/local/bin/linux-fincore ]
then
    echo "You haven't installed linux-fincore yet"
    exit
fi

# find the top 10 processes' cache files
ps -e -o pid,rss | sort -nk2 -r | head -10 | awk '{print $1}' > /tmp/cache.pids

# find all the processes' cache files
# ps -e -o pid > /tmp/cache.pids

if [ -f /tmp/cache.files ]
then
    echo "the cache.files already exists, removing now"
    rm -f /tmp/cache.files
fi

# collect the files opened by each of those processes
while read line
do
    lsof -p $line 2> /dev/null | awk '{print $9}' >> /tmp/cache.files
done < /tmp/cache.pids

# keep only regular files and feed them to linux-fincore
for i in `cat /tmp/cache.files`
do
    if [ -f "$i" ]
    then
        echo "$i" >> /tmp/cache.fincore
    fi
done

linux-fincore -s `cat /tmp/cache.fincore`

rm -f /tmp/cache.{pids,files,fincore}
Unfortunately, linux-ftools is no longer maintained and no longer compiles on newer operating systems, so this method no longer works.
Searching again on Google, I found pcstat, a tool written in Go with basically the same functionality as linux-ftools.
Project address: https://github.com/tobert/pcstat
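A quick way to get a feel for pcstat before wiring it into a script, assuming the binary is on $PATH and using /var/log/messages purely as an example file (reading it may require root):

$ pcstat /var/log/messages           # how many of the file's pages are currently resident
$ cat /var/log/messages > /dev/null  # read the whole file once
$ pcstat /var/log/messages           # the Cached/Percent columns should now be at or close to 100%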
I then modified Shanker's script to use pcstat instead, which makes it easy to find out what is occupying the Cache. The modified script is as follows:
#!/bin/bash
# you have to install pcstat
if [ ! -f /data0/brokerproxy/pcstat ]
then
    echo "You haven't installed pcstat yet"
    echo "run \"go get github.com/tobert/pcstat\" to install"
    exit
fi

# find the top 10 processes' cache files
ps -e -o pid,rss | sort -nk2 -r | head -10 | awk '{print $1}' > /tmp/cache.pids

# find all the processes' cache files
# ps -e -o pid > /tmp/cache.pids

if [ -f /tmp/cache.files ]
then
    echo "the cache.files already exists, removing now"
    rm -f /tmp/cache.files
fi

# collect the files opened by each of those processes
while read line
do
    lsof -p $line 2> /dev/null | awk '{print $9}' >> /tmp/cache.files
done < /tmp/cache.pids

# keep only regular files and feed them to pcstat
for i in `cat /tmp/cache.files`
do
    if [ -f "$i" ]
    then
        echo "$i" >> /tmp/cache.pcstat
    fi
done

/data0/brokerproxy/pcstat `cat /tmp/cache.pcstat`

rm -f /tmp/cache.{pids,files,pcstat}
The result of the successful execution of the script is as follows:
|----------------------------------------+----------------+------------+-----------+---------|
| Name                                   | Size (bytes)   | Pages      | Cached    | Percent |
|----------------------------------------+----------------+------------+-----------+---------|
| /data0/abcasyouknow/0307/abc           | 10060771       | 2457       | 2457      | 100.000 |
| /data0/abcasyouknow/0307/logs/abc.log  | 1860           | 1          | 1         | 100.000 |
| /data0/abcasyouknow/0307/logs/uuid.log | 326326364      | 79670      | 79670     | 100.000 |
| /usr/bin/bash                          | 960384         |            |           | 082.553 |
| /usr/lib/locale/locale-archive         | 106065056      | 25895      |           | 000.815 |
| /usr/lib64/libnss_files-2.17.so        | 58288          | 15         | 15        | 100.000 |
| /usr/lib64/libc-2.17.so                | 2107760        | 515        | 336       | 065.243 |
| /usr/lib64/libdl-2.17.so               | 19512          | 5          | 5         | 100.000 |
| /usr/lib64/libtinfo.so.5.9             | 174520         | 43         | 42        | 097.674 |
| /usr/lib64/ld-2.17.so                  | 164336         | 41         | 41        | 100.000 |
| /usr/lib64/gconv/gconv-modules.cache   | 26254          | 7          | 7         | 100.000 |
|----------------------------------------+----------------+------------+-----------+---------|
From the results we can see that uuid.log takes up the most Cache. The file is open and the program keeps writing log entries to it, so Linux has cached it.
Solution 2: use vmtouch
In addition to pcstat, you can also use vmtouch to achieve the same goal. vmtouch is a tool that can query which files and directories are cached, and it can also bring files into the cache or evict them from it.
Project address: https://github.com/hoytech/vmtouch
Install Vmtouch
$ git clone https://github.com/hoytech/vmtouch
$ cd vmtouch
$ make
$ sudo make install
Use Vmtouch
1. vmtouch command syntax
$ vmtouch
vmtouch: no files or directories specified

vmtouch v1.0.2 - the Virtual Memory Toucher by Doug Hoyte
Portable file system cache diagnostics and control

Usage: vmtouch [OPTIONS] ... FILES OR DIRECTORIES ...

Options:
  -t touch pages into memory
  -e evict pages from memory
  -l lock pages in physical memory with mlock(2)
  -L lock pages in physical memory with mlockall(2)
  -d daemon mode
  -m max file size to touch
  -p use the specified portion instead of the entire file
  -f follow symbolic links
  -h also count hardlinked copies
  -w wait until all pages are locked (only useful together with -d)
  -v verbose
  -q quiet
2. Some examples of use
Because vmtouch directly supports directory-level queries, it is much easier to use.
View the in-memory cache of the /tmp directory
$ vmtouch /tmp/
vmtouch: WARNING: skipping non-regular file: /tmp/ssh-GgJnCEkWMQC2/agent.1068
           Files: 17
     Directories: 7
  Resident Pages: 4780/4780  18M/18M  100%
         Elapsed: 0.001006 seconds
If you need to see more details, you can use the -v parameter.
$ vmtouch -v /tmp/
Check how much a file is cached
$ vmtouch -v ~/Downloads/phoronix-test-suite_6.0.1_all.deb
/home/neo/Downloads/phoronix-test-suite_6.0.1_all.deb
[                                            ] 0/132
           Files: 1
     Directories: 0
  Resident Pages: 0/132  0/528K  0%
         Elapsed: 0.000117 seconds
Cache the specified file
$ vmtouch -vt ~/Downloads/phoronix-test-suite_6.0.1_all.deb
/home/neo/Downloads/phoronix-test-suite_6.0.1_all.deb
[OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO] 132/132
           Files: 1
     Directories: 0
   Touched Pages: 132 (528K)
         Elapsed: 0.007935 seconds
Evict the specified file from the cache
$ vmtouch -ve ~/Downloads/phoronix-test-suite_6.0.1_all.deb
Evicting /home/neo/Downloads/phoronix-test-suite_6.0.1_all.deb
           Files: 1
     Directories: 0
   Evicted Pages: 132 (528K)
         Elapsed: 0.000109 seconds
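Since vmtouch, like pcstat, accepts a plain list of files, the per-process idea from Solution 1 can be reused with it. A minimal sketch, not from the original article, assuming GNU ps, lsof and xargs are available; it checks the cache footprint of the files opened by the process with the largest RSS:

#!/bin/bash
# Check how much of the page cache is held by files opened by the process
# with the largest RSS (same heuristic as the scripts above, but with vmtouch).
pid=$(ps -e -o pid,rss --sort=-rss | awk 'NR==2 {print $1}')
lsof -p "$pid" 2> /dev/null | awk '$NF ~ /^\// {print $NF}' | sort -u | xargs -r vmtouch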
These are the ways to find out which processes eat memory (Cache) in Linux. If you have run into similar doubts, the analysis above should help you work through them.