This article walks through how to track down a memory leak in a Java application running under high concurrency and multithreading, based on a real troubleshooting case.
Phenomenon: the application's memory keeps growing under high concurrency. Around 7:00 in the morning, while handling about 20,000 requests per second, memory grows very quickly. The maximum memory allocated to the application is 4 GB, but the memory utilization of the container node has already exceeded that value.
The memory usage reported by the top command does not look unreasonable, only slightly above 4 GB, so the first suspicion was that the container's memory statistics were wrong. Check the cgroup memory statistics:
cat /sys/fs/cgroup/memory/memory.stat
Note: for confidentiality, the following is example data, not real data.

cache 148365312
rss 5496782848
rss_huge 0
mapped_file 1605632
swap 0
pgpgin 3524638
pgpgout 2146428
pgfault 9691132
pgmajfault 44
inactive_anon 1142784
active_anon 5496709120
inactive_file 104824832
active_file 42397696
unevictable 0
hierarchical_memory_limit 8523934592
hierarchical_memsw_limit 8523934592
total_cache 148365312
total_rss 5492382848
total_rss_huge 0
total_mapped_file 1605632
total_swap 0
total_pgpgin 3524638
total_pgpgout 2146428
total_pgfault 9691132
total_pgmajfault 44
total_inactive_anon 1142784
total_active_anon 5423709120
total_inactive_file 104823832
total_active_file 42397696
total_unevictable 0
Also look at the memory status in /proc/meminfo:
MemTotal:        8382308 kB
MemFree:         2874740 kB
Buffers:               0 kB
Cached:           145000 kB
SwapCached:            0 kB
Active:          5412308 kB
Inactive:         103516 kB
Active(anon):    5323724 kB
Inactive(anon):     1116 kB
Active(file):      41484 kB
Inactive(file):   102400 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:       5362300 kB
Mapped:             1568 kB
Shmem:                 0 kB
Slab:                  0 kB
SReclaimable:          0 kB
SUnreclaim:            0 kB
KernelStack:           0 kB
PageTables:            0 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:           0 kB
Committed_AS:          0 kB
VmallocTotal:          0 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
The RSS is already suspicious: about 5.5 GB, well above the 4 GB allocated to the application. Confirm it at the process level:
ps -p PID -o rss,vsz
The process RSS is indeed very high. The first guess had been that page cache was skewing the memory accounting, but with the RSS itself this high it is clearly not a statistical artifact caused by cache. The only thing running in the container is the Java application, so the high RSS must come from it.
Take a heap dump with jmap:
jmap -dump:format=b,file=20210508heapdump.hprof pid
The heap dump did not reveal a specific cause; it only showed that a very large number of thread objects had been loaded. Each object is small, but there are a lot of them.
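As a cross-check (an addition, not part of the original investigation), the live thread count can be read from inside the JVM through the standard ThreadMXBean. A minimal sketch, assuming it is run inside the affected application, for example behind a debug endpoint; the class name is illustrative:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Minimal sketch: report JVM thread counts to cross-check the
// "many thread objects" seen in the heap dump. Must run inside the
// affected application; the class name is illustrative.
public class ThreadCountCheck {
    public static void report() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("Live threads:          " + threads.getThreadCount());
        System.out.println("Peak threads:          " + threads.getPeakThreadCount());
        System.out.println("Total threads started: " + threads.getTotalStartedThreadCount());
    }
}

If the live thread count keeps climbing while the load stays flat, the leak is in threads rather than ordinary heap objects.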
Since the heap itself did not explain the growth, the next suspicion was memory outside the heap. Use pmap to view the memory allocation of the process:
pmap -x PID > 20210508pmap.txt

Address           Kbytes     RSS   Dirty Mode  Mapping
0000000700000000 4194304 4167772 4167772 rw--- [ anon ]
0000000800000000    7180    5284       0 r---- classes.jsa
0000000800703000    9204       0       0 ----- [ anon ]
0000000801000000   10996    5556    5196 rw--- classes.jsa
0000000801abd000    5388       0       0 ----- [ anon ]
0000000802000000    1552    1540     176 rw--- classes.jsa
0000000802184000    2544       0       0 ----- [ anon ]
0000000802400000      36      20       0 r-x-- classes.jsa
0000000802409000      84       0       0 ----- [ anon ]
000000080241e000   10240   10172   10172 rw--- [ anon ]
0000000802e1e000 1038336       0       0 ----- [ anon ]
00007fdebc000000     132      12      12 rw--- [ anon ]
00007fdebc021000   65404       0       0 ----- [ anon ]
00007fdec4000000     132       4       4 rw--- [ anon ]
00007fdec4021000   65404       0       0 ----- [ anon ]
00007fdec8000000     132       4       4 rw--- [ anon ]
00007fdec8021000   65404       0       0 ----- [ anon ]
00007fdecc000000     132       4       4 rw--- [ anon ]
The output contains a huge number of small memory segments with an RSS of roughly 40-160 kB, at least 20,000 of them, which is abnormal; there should not be anywhere near that many.
At this point, you can use smaps to output details of the memory blocks used by the process, including address ranges and sources.
cat /proc/PID/smaps > 20210508smaps.txt

Note: for confidentiality, the following is example data, not real data.

802400000-802409000 r-xp 01345000 fc:10 13514
Size:                 36 kB
Rss:                  20 kB
Pss:                  20 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:        20 kB
Private_Dirty:         0 kB
Referenced:           20 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
VmFlags: rd ex mr mw me

802409000-80241e000 ---p 00000000 00:00 0
Size:                 84 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
VmFlags: mr mw me nr
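As a side check (an addition, not part of the original write-up), the scale of the problem can be confirmed by counting how many mappings fall into the suspicious RSS range. A minimal sketch that parses /proc/<pid>/smaps, assuming the 40-160 kB range observed in the pmap output above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal sketch: count mappings whose RSS falls in a suspicious range by
// parsing /proc/<pid>/smaps. The pid is passed as the first argument;
// the 40-160 kB range is taken from the pmap observation above.
public class SmapsScan {
    public static void main(String[] args) throws IOException {
        String pid = args.length > 0 ? args[0] : "self";
        long suspicious = Files.readAllLines(Paths.get("/proc/" + pid + "/smaps")).stream()
                .filter(line -> line.startsWith("Rss:"))                       // one Rss line per mapping
                .mapToLong(line -> Long.parseLong(line.replaceAll("\\D", ""))) // keep the kB number
                .filter(kb -> kb >= 40 && kb <= 160)
                .count();
        System.out.println("Mappings with RSS between 40 and 160 kB: " + suspicious);
    }
}

Tens of thousands of mappings in that range is far more than a JVM of this size should need, which matches the pmap observation.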
Search smaps.txt for the memory block in question. First pick the suspect address from the pmap output, for example Address 000000080241e000, then search smaps for 80241e000 to find the corresponding address range. There will usually be several contiguous ranges around it; since neighbouring ranges may hold other data, match the one whose Size corresponds to the problematic segment, as that is most likely the one holding the suspect data.
Suppose we now find the problematic address segment as:
80241e000-802420000 ---p 00000000 00:00 0
Size:               1540 kB
Rss:                1540 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
VmFlags: mr mw me nr
Now that the memory range has been located, the next step is to extract the data it contains, and gdb is the tool for that.
gdb attach PID
gdb prints a long stream of output while attaching; once attached, dump the memory range found above from the gdb prompt:
dump memory /tmp/20210508.dump 0x80241e000 0x802420000
The contents of that memory range are now in /tmp/20210508.dump. You can browse the file slowly with vim or less, or take a shortcut and let the strings command pull out readable keywords. Display strings of at least 8 characters:
strings -8 /tmp/20210508.dump
Once a meaningful keyword turns up, open the dump with less and search for that keyword to see its context. You will probably see a string of characters like this:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@insert xxx value(123,456,789) @@@@https://qq.com,@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
So the data in this memory range comes from a method that writes to a table, i.e. from one of our own code paths. Tracing the calling context of that method revealed the problem: the call site created a thread pool locally and never shut it down. The submitted tasks completed fine, but the pool's threads lived on, so the number of threads, and with it the memory, kept growing.
The fix follows directly: pull the thread pool out of the method so that one shared pool is created and reused, instead of creating a new pool locally on every call. After rebuilding and releasing, a high-concurrency simulation test showed that memory no longer grows.
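The class and method names below are illustrative rather than the actual code from the incident, but the pattern is the same: a minimal sketch of the leaky version (a new pool per call, never shut down) next to the fixed version (one shared, long-lived pool).

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TableWriter {

    // Leaky pattern: a new pool is created on every call and never shut down.
    // The tasks finish, but the pool's worker threads (and their stacks)
    // stay alive, so memory grows with every call.
    void insertRowLeaky(String sql) {
        ExecutorService pool = Executors.newFixedThreadPool(4); // pool size is illustrative
        pool.submit(() -> execute(sql));
        // missing pool.shutdown(): the worker threads are never released
    }

    // Fixed pattern: one shared, long-lived pool created once and reused.
    private static final ExecutorService SHARED_POOL = Executors.newFixedThreadPool(4);

    void insertRowFixed(String sql) {
        SHARED_POOL.submit(() -> execute(sql));
    }

    private void execute(String sql) {
        // placeholder for the real table write, e.g. "insert xxx value(123,456,789)"
    }
}

If a short-lived local pool is genuinely needed, it must be shut down in a finally block; under high concurrency, though, a single shared pool is usually the better design.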
"How to check the memory leak problem under high concurrency multithreading" is introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!