This article explains how to troubleshoot off-heap memory leaks in Spring Boot applications. The method described here is simple, fast, and practical.
To improve project management, we migrated one of our team's projects to the MDP framework (based on Spring Boot), after which the system began to frequently report alerts about excessive Swap usage. I was asked to help investigate and found that although 4G of heap memory was configured, the process was actually using as much as 7G of physical memory, which is abnormal. The JVM parameters were configured as "-XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=256M -XX:+AlwaysPreTouch -XX:ReservedCodeCacheSize=128m -XX:InitialCodeCacheSize=128m -Xss512k -Xmx4g -Xms4g -XX:+UseG1GC -XX:G1HeapRegionSize=4M". The physical memory actually used is shown in the following figure:
Memory displayed by the top command
Investigation process
1. Use Java-level tools to locate the memory region (heap memory, the code area, or off-heap memory requested via unsafe.allocateMemory and DirectByteBuffer)
I added the -XX:NativeMemoryTracking=detail JVM parameter to the project, restarted it, and used the command jcmd pid VM.native_memory detail to check the memory distribution, as follows:
Memory status displayed by jcmd
The committed memory shown by the command is less than the physical memory in use, because the memory reported by jcmd includes heap memory, the code area, and off-heap memory requested through unsafe.allocateMemory and DirectByteBuffer, but does not include off-heap memory requested by other native code (C code). So the guess was that the problem was caused by native code requesting memory.
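For context, here is a minimal sketch (my own illustration, not from the original investigation) of the distinction Native Memory Tracking draws: off-heap memory requested through DirectByteBuffer is visible to jcmd, while memory malloc'd inside native libraries such as zlib is not.

import java.nio.ByteBuffer;

public class DirectMemoryDemo {
    public static void main(String[] args) throws Exception {
        // Off-heap memory requested through DirectByteBuffer is accounted for by
        // Native Memory Tracking and shows up in `jcmd <pid> VM.native_memory detail`.
        ByteBuffer buffer = ByteBuffer.allocateDirect(256 * 1024 * 1024); // 256 MB
        System.out.println("Allocated a direct buffer of " + buffer.capacity() + " bytes");
        // Memory that native libraries (for example zlib, used by java.util.zip)
        // allocate with malloc is invisible to NMT, which is why the committed
        // size reported by jcmd can be smaller than the RSS shown by top.
        Thread.sleep(Long.MAX_VALUE); // keep the process alive for inspection
    }
}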
To rule out a miscalculation, I used pmap to examine the memory layout and found a large number of 64MB address ranges; since these address ranges did not appear in the output of the jcmd command, it was basically concluded that these 64MB regions were the cause.
Memory status displayed by pmap
2. Use system-level tools to locate the off-heap memory
Since it had basically been determined that the memory was requested by native code, and Java-level tools are not well suited to troubleshooting such problems, only system-level tools could be used to locate the problem.
First, use gperftools to locate the problem
For details on how to use gperftools, refer to its documentation. The gperftools monitoring results are as follows:
Gperftools monitoring
As can be seen from the figure above, the memory requested through malloc is released after reaching 3G, and afterwards stays at 700M-800M. My first reaction was: does the native code skip malloc altogether and call mmap/brk directly? (gperftools works by using dynamic linking to replace the operating system's default memory allocator (glibc), so it only observes allocations that go through malloc.)
Then, use strace to trace system calls
Since gperftools did not track the memory, I directly used the command strace -f -e "brk,mmap,munmap" -p pid to trace memory requests made to the OS, but no suspicious memory requests were found. The strace monitoring is shown in the following figure:
Strace monitoring
Next, use GDB to dump suspicious memory
Since strace did not catch any suspicious memory requests, I decided to look at what was actually in that memory. I used the command gdb --pid pid to attach GDB, then used the command dump memory mem.bin startAddress endAddress to dump the memory, where startAddress and endAddress can be found in /proc/pid/smaps. Finally, I used strings mem.bin to inspect the dumped contents, as follows:
Contents of the dumped memory
Judging from the contents, it looks like decompressed JAR package information. Reading JAR packages happens at project startup, so running strace after startup is not very useful; strace should be run while the project is starting, not after startup has completed.
After that, use strace to trace system calls during project startup
Starting the project under strace, I found that a large number of 64MB memory regions were indeed requested. A screenshot follows:
Strace monitoring
The address range requested by this mmap call corresponds to the following entry in pmap:
pmap entries corresponding to the address range requested in strace
Finally, use jstack to view the corresponding thread
The strace output already shows the ID of the thread requesting the memory. I then used the command jstack pid to dump the thread stacks and found the corresponding thread stack (note the conversion between decimal and hexadecimal thread IDs), as follows:
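As a small aside (an illustration of my own, not part of the original write-up): strace reports the thread's LWP ID in decimal, while jstack prints it as a hexadecimal nid, so a quick conversion looks like this:

public class TidToNid {
    public static void main(String[] args) {
        // e.g. the decimal LWP id 12345 reported by strace appears as nid=0x3039 in jstack output
        long tid = Long.parseLong(args[0]);
        System.out.println("nid=0x" + Long.toHexString(tid));
    }
}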
Thread stack for strace application space
At this point the problem is basically visible: MCC (Meituan's unified configuration center) uses Reflections to scan packages, and underneath it the Spring Boot loader is used to load the JARs. Because the Inflater class is used to decompress the JARs, off-heap memory is required. I then used Btrace to trace this class; the stack is as follows:
Btrace trace stack
I then checked where MCC was used and found that no package scanning path was configured, so all packages were being scanned by default. The fix was to configure the package scanning path in the code; after release, the memory problem was solved.
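Purely as an illustration (the MCC configuration itself is internal, and the package names below are hypothetical), the difference when using the Reflections library directly looks roughly like this: restricting the scan to a package prefix avoids forcing the Spring Boot loader to inflate entries from every nested JAR on the classpath.

import org.reflections.Reflections;

public class ScanExample {
    public static void main(String[] args) {
        // No prefix: the whole classpath is scanned. Inside a Spring Boot fat JAR
        // this means decompressing entries from every nested JAR.
        Reflections scanEverything = new Reflections("");

        // With a prefix: only classes under com.example.myapp are scanned,
        // so far fewer JAR entries need to be inflated.
        Reflections scanOnlyApp = new Reflections("com.example.myapp");

        System.out.println(scanOnlyApp.getTypesAnnotatedWith(Deprecated.class).size());
    }
}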
3. Why is the off-heap memory not released?
Although the problem has been solved, several questions remain:
Why was there no problem with the old framework?
Why is the off-heap memory not released?
Why are the memory regions all 64MB? JAR files cannot be that large, so why are they all exactly the same size?
Why does gperftools show that only about 700M of memory was ultimately in use? Did the decompression code really not use malloc to request memory?
With these questions in mind, I looked directly at the source code of the Spring Boot Loader. I found that Spring Boot wraps the JDK's InflaterInputStream and uses Inflater, and Inflater itself uses off-heap memory to decompress JAR packages; the wrapper class ZipInflaterInputStream did not release the off-heap memory held by the Inflater. Thinking I had found the cause, I immediately reported this bug to the Spring Boot community. After reporting it, however, I noticed that Inflater itself implements the finalize method, which contains the logic to release the off-heap memory. In other words, Spring Boot relies on GC to release the off-heap memory.
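To make the lifecycle concrete, here is a minimal sketch (my own illustration, not code from the article) of how java.util.zip.Inflater holds native zlib memory until end() is called, and why relying on finalize ties that release to garbage collection:

import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterLifecycle {
    public static void main(String[] args) throws Exception {
        // Compress some data first so we have something to inflate.
        byte[] input = "hello spring boot loader".getBytes("UTF-8");
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[256];
        int compressedLen = deflater.deflate(compressed);
        deflater.end(); // release the Deflater's native zlib state

        // An Inflater allocates native zlib buffers off-heap when it is created.
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(compressed, 0, compressedLen);
            byte[] out = new byte[256];
            int n = inflater.inflate(out);
            System.out.println(new String(out, 0, n, "UTF-8"));
        } finally {
            // Without this call the native memory is only reclaimed when the
            // Inflater object is garbage collected and its finalizer runs.
            inflater.end();
        }
    }
}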
When I used jmap to look at the objects in the heap, I found that there were basically no Inflater objects left, so I then suspected that finalize was not being called during GC. With this doubt, I replaced the Inflater used by the Spring Boot Loader with my own wrapped Inflater, adding monitoring in finalize, and found that the finalize method was indeed being called. I then looked at the C code behind Inflater and found that initialization uses malloc to request memory and that end also calls free to release it.
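The instrumented wrapper could look roughly like the sketch below (class name and logging are illustrative, not the author's actual code; it assumes a JDK, such as JDK 8, where Inflater still defines a finalize method):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.zip.Inflater;

// An Inflater subclass that records when its native zlib memory is released,
// either explicitly via end() or lazily via the finalizer during GC.
public class TrackedInflater extends Inflater {
    private static final AtomicInteger LIVE = new AtomicInteger();
    private volatile boolean ended;

    public TrackedInflater(boolean nowrap) {
        super(nowrap);
        LIVE.incrementAndGet();
    }

    @Override
    public void end() {
        if (!ended) {
            ended = true;
            LIVE.decrementAndGet();
            System.out.println("native memory released, live inflaters=" + LIVE.get());
        }
        super.end();
    }

    @Override
    protected void finalize() {
        System.out.println("finalize() invoked by GC");
        super.finalize(); // Inflater.finalize() calls end() on older JDKs
    }
}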
At that point I could only suspect that free was not actually returning the memory, so I replaced the InflaterInputStream wrapped by Spring Boot with the one that ships with the JDK, and after the replacement the memory problem was indeed solved.
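The ownership difference that matters here, sketched under the assumption of the plain JDK classes: when java.util.zip.InflaterInputStream creates its own Inflater, close() also ends it, whereas an Inflater supplied from outside must be ended by the caller (or else its release waits for GC).

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

public class StreamOwnership {
    public static void main(String[] args) throws Exception {
        // Prepare some deflate-compressed bytes.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DeflaterOutputStream dos = new DeflaterOutputStream(bos)) {
            dos.write("nested jar entry".getBytes("UTF-8"));
        }
        byte[] compressed = bos.toByteArray();

        // Case 1: the stream creates its own Inflater; close() also ends it,
        // so the native memory is released deterministically.
        try (InflaterInputStream owned =
                 new InflaterInputStream(new ByteArrayInputStream(compressed))) {
            while (owned.read() != -1) { /* drain */ }
        }

        // Case 2: an externally supplied Inflater; the stream's close() does
        // not end it, so the caller must call end() (or rely on GC/finalize).
        Inflater shared = new Inflater();
        try (InflaterInputStream wrapped =
                 new InflaterInputStream(new ByteArrayInputStream(compressed), shared)) {
            while (wrapped.read() != -1) { /* drain */ }
        } finally {
            shared.end(); // explicit release of the native zlib state
        }
    }
}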
Looking back at the gperftools memory profile, I noticed that with Spring Boot the memory usage kept increasing and then suddenly dropped a great deal at a certain point (from about 3G straight down to about 700M). That drop should have been caused by GC, and the memory should have been released, yet nothing changed at the operating-system level. Was the memory not being returned to the operating system, but held by the memory allocator instead?
Digging further, I found that the memory address distribution under the system's default memory allocator (glibc 2.12) was very different from that under gperftools, and that the suspicious 2.5G of addresses, when checked in smaps, belonged to the Native Stack. The memory address distribution is as follows:
Memory address distribution displayed by gperftools
At this point it is basically certain that the memory allocator was playing tricks. Searching for "glibc 64m" revealed that, since version 2.11, glibc introduces a per-thread memory pool (arena), 64MB in size on 64-bit machines. The original description reads as follows:
Glibc memory pool description
Modifying the MALLOC_ARENA_MAX environment variable as that article suggests turned out to have no effect. I also found that tcmalloc (the memory allocator used by gperftools) uses memory pooling as well.
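To illustrate why per-thread arenas matter here (an experiment sketch of my own, not from the original article): when many threads each perform zlib decompression through Inflater, each thread's native malloc calls may be served from a different arena, so pmap can show many 64MB regions even though the Java-visible memory stays small.

import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Each worker thread repeatedly inflates a small payload. The native zlib
// buffers are allocated with malloc on that thread, so under the default glibc
// settings different threads can be served from different 64MB arenas, which
// then show up as many 64MB regions in pmap.
public class ArenaDemo {
    public static void main(String[] args) throws Exception {
        byte[] input = new byte[64 * 1024];
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[128 * 1024];
        int len = deflater.deflate(compressed);
        deflater.end();

        for (int i = 0; i < 64; i++) {
            new Thread(() -> {
                try {
                    for (int j = 0; j < 100; j++) {
                        Inflater inflater = new Inflater();
                        inflater.setInput(compressed, 0, len);
                        inflater.inflate(new byte[64 * 1024]);
                        inflater.end();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
        Thread.sleep(Long.MAX_VALUE); // keep the process alive so pmap can be inspected
    }
}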
To verify that the memory pool was responsible, I wrote a simple memory allocator with no pooling at all. The dynamic library is built with the command gcc zjbmalloc.c -fPIC -shared -o zjbmalloc.so, and glibc's memory allocator is then replaced by running export LD_PRELOAD=zjbmalloc.so. The demo code is as follows:
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
// On the author's 64-bit machine, sizeof(size_t) equals sizeof(long).

void* malloc(size_t size) {
    long* ptr = mmap(0, size + sizeof(long), PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED) {
        return NULL;
    }
    *ptr = size;                 // First 8 bytes contain length.
    return (void*)(&ptr[1]);     // Memory that is after length variable
}

void* calloc(size_t n, size_t size) {
    void* ptr = malloc(n * size);
    if (ptr == NULL) {
        return NULL;
    }
    memset(ptr, 0, n * size);
    return ptr;
}

void free(void* ptr) {
    if (ptr == NULL) {
        return;
    }
    long* plen = (long*)ptr;
    plen--;                      // Reach top of memory
    long len = *plen;            // Read length
    munmap((void*)plen, len + sizeof(long));
}

void* realloc(void* ptr, size_t size) {
    if (size == 0) {
        free(ptr);
        return NULL;
    }
    if (ptr == NULL) {
        return malloc(size);
    }
    long* plen = (long*)ptr;
    plen--;                      // Reach top of memory
    long len = *plen;
    if (size <= len) {
        return ptr;
    }
    void* rptr = malloc(size);
    if (rptr == NULL) {
        free(ptr);
        return NULL;
    }
    memcpy(rptr, ptr, len);
    free(ptr);
    return rptr;
}