
How to Parse Native Memory Tracking in the JVM


Many newcomers are not quite sure how to parse the JVM's Native Memory Tracking output. To help with that, this article explains it in detail; if this is what you need, read on, and hopefully you will gain something from it.

1. Overview

Have you ever wondered why Java applications consume much more memory than the amount specified via the well-known -Xms and -Xmx tuning flags? The JVM can allocate additional native memory for a variety of reasons and possible optimizations. These additional allocations can eventually push memory consumption beyond the -Xmx limit.

In this tutorial, we will list a few common sources of native memory allocation in the JVM, along with their sizing flags, and then learn how to use Native Memory Tracking to monitor them.

2. Native Allocations

The heap is usually the largest memory consumer in Java applications, but there are others. In addition to the heap, JVM allocates a sizeable block from native memory to maintain class metadata, application code, JIT-generated code, internal data structures, and so on. In the following sections, we will explore some of these allocations.

2.1. Metaspace

To maintain some metadata about loaded classes, JVM uses a dedicated non-heap area called Metaspace. Before Java 8, it was called PermGen or Permanent Generation. Metaspace or PermGen contains metadata about loaded classes, not instances of them, which are kept in the heap.

What is important here is that the heap size configuration does not affect the Metaspace size, because Metaspace is an off-heap data area. To limit the Metaspace size, we use other tuning flags:

-XX:MetaspaceSize and -XX:MaxMetaspaceSize set the minimum and maximum Metaspace size. Before Java 8, -XX:PermSize and -XX:MaxPermSize set the minimum and maximum PermGen size.
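
For example, a hypothetical invocation capping Metaspace at 256 MB might look like the following; the sizes are arbitrary and app.jar is just a placeholder name:

$ java -XX:MetaspaceSize=64m -XX:MaxMetaspaceSize=256m -jar app.jar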

2.2. Threads

One of the most memory-intensive data areas in JVM is the stack, which is created at the same time as each thread. The stack stores local variables and partial results and plays an important role in method calls.

The default thread stack size depends on the platform, but on most modern 64-bit operating systems it is about 1 MB. This size can be configured with the -Xss tuning flag.
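
As a minimal sketch (again with app.jar as a placeholder), lowering the per-thread stack size to 512 KB would look like this:

$ java -Xss512k -jar app.jar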

Unlike the other data areas, the total memory allocated to stacks is virtually unbounded when there is no limit on the number of threads. It is also worth mentioning that the JVM itself needs threads to perform its internal operations, such as GC or just-in-time compilation.

2.3. Code Cache

To run JVM bytecode on different platforms, it needs to be converted into machine instructions. The JIT compiler is responsible for this compilation as the program executes.

When the JVM compiles bytecode into assembly instructions, it stores those instructions in a special non-heap data area called the Code Cache. The code cache can be managed just like the other data areas in the JVM: the -XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize tuning flags determine its initial size and its maximum possible size.
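
For illustration, a hypothetical invocation using these two flags (the sizes are arbitrary) might be:

$ java -XX:InitialCodeCacheSize=32m -XX:ReservedCodeCacheSize=128m -jar app.jar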

2.4. Garbage Collection

JVM comes with some GC algorithms, each of which is suitable for different use cases. All of these GC algorithms have one thing in common: they need to use some out-of-heap data structures to perform their tasks. These internal data structures consume more native memory.

2.5. Symbols

Let's start with Strings, one of the most common data types in application and library code. Because they are ubiquitous, they usually occupy a large part of the heap. If a large number of these strings contain the same content, a significant part of the heap is wasted.

To save some heap space, we can store one version of each String and have the others reference the stored version. This process is called String interning. Since the JVM can only intern compile-time String constants automatically, we can manually call the intern() method on strings we want interned.
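
As a small illustration (a minimal sketch, not part of the original article), two runtime-constructed strings with identical content remain distinct objects until we intern them:

public class InternDemo {
    public static void main(String[] args) {
        // Two distinct String objects with identical content
        String a = new String("hello");
        String b = new String("hello");

        System.out.println(a == b);                   // false: different heap objects
        System.out.println(a.intern() == b.intern()); // true: both resolve to the same pooled instance
    }
}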

The JVM stores interned strings in a special fixed-size native hash table called the String Table, also known as the String Pool. We can configure the table size (that is, the number of buckets) with the -XX:StringTableSize tuning flag.
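
For instance, an invocation setting the number of buckets could look like this; the value is just an illustrative prime and app.jar remains a placeholder:

$ java -XX:StringTableSize=1000003 -jar app.jar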

In addition to the String Table, there is another native data area called the Runtime Constant Pool. The JVM uses this pool to store constants, such as compile-time numeric literals, or method and field references that must be resolved at runtime.

2.6. Native Byte Buffers

The JVM is the usual suspect for significant native allocations, but sometimes developers can also allocate native memory directly. The most common approaches are malloc calls made through JNI and NIO's direct ByteBuffers.
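
As a minimal sketch of the direct-buffer case (the size is arbitrary), allocating memory off-heap from Java looks like this:

import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 10 MB allocated outside the Java heap; it does not count against the -Xmx heap limit
        ByteBuffer direct = ByteBuffer.allocateDirect(10 * 1024 * 1024);
        System.out.println("Direct buffer capacity: " + direct.capacity());
    }
}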

2.7. Additional Tuning Flags

In the sections above, we used a handful of JVM tuning flags for different optimization scenarios. Using the following tip, we can find almost all tuning flags related to a particular concept:

$ java -XX:+PrintFlagsFinal -version | grep <concept>

PrintFlagsFinal prints all the -XX options in the JVM. For example, to find all the flags related to Metaspace:

$ java -XX:+PrintFlagsFinal -version | grep Metaspace
// truncated
uintx MaxMetaspaceSize    = 18446744073709547520    {product}
uintx MetaspaceSize       = 21807104                {pd product}
// truncated

3. Native Memory Tracking (NMT)

Now that we know the common sources of native memory allocation in the JVM, it's time to figure out how to monitor them. First, we must enable Native Memory Tracking with another JVM tuning flag: -XX:NativeMemoryTracking=off|summary|detail. By default, NMT is off, but we can enable it to produce either a summary or a detailed view of its observations.

Suppose we want to track the native allocation of a typical Spring Boot application:

$ java -XX:NativeMemoryTracking=summary -Xms300m -Xmx300m -XX:+UseG1GC -jar app.jar

Here, we enable NMT and allocate 300 MB of heap space, with G1 as our GC algorithm.

3.1. Instance Snapshots

After NMT is enabled, we can use the jcmd command to obtain the native memory information at any time:

$ jcmd <pid> VM.native_memory

To find the PID of the JVM application, we can use the jps command:

$ jps -l
7858 app.jar // This is our app
7899 sun.tools.jps.Jps

Now, if we use jcmd with the appropriate pid, the VM.native_memory command makes the JVM print out information about its native allocations:

$ jcmd 7858 VM.native_memory

Let's analyze the NMT output section by section.

3.2. Total Allocations

NMT reports the total reserved and committed memory as follows:

Native Memory Tracking:

Total: reserved=1731124KB, committed=448152KB

Reserved memory represents the total amount of memory our application could potentially use. Conversely, committed memory represents the amount of memory our application is using right now.

Despite allocating 300 MB of heap, our application's total reserved memory is almost 1.7 GB, much more than that. Similarly, the committed memory is around 440 MB, which again is well over 300 MB.

After this overall picture, NMT reports the memory allocation per allocation source. So let's dig into each source.

3.3. Heap

NMT reports heap allocation as we expect:

Java Heap (reserved=307200KB, committed=307200KB)

(mmap: reserved=307200KB, committed=307200KB)

300 MB of reserved and committed memory, which matches our heap size setting.

3.4. Metaspace

This is what NMT reports for the metadata of the loaded classes:

Class (reserved=1091407KB, committed=45815KB)
      (classes #6566)
      (malloc=10063KB #8519)
      (mmap: reserved=1081344KB, committed=35752KB)

Almost 1 GB is reserved and 45 MB committed for loading 6,566 classes.

3.5. Thread

This is what NMT reports about thread allocations:

Thread (reserved=37018KB, committed=37018KB)
       (thread #37)
       (stack: reserved=36864KB, committed=36864KB)
       (malloc=112KB #190)
       (arena=42KB #72)

A total of 36 MB of memory is allocated to the stacks of 37 threads, about 1 MB each. The JVM allocates the memory for a thread at creation time, so the reserved and committed allocations are equal.

3.6. Code Cache

Let's take a look at what NMT reports about the assembly instructions generated and cached by the JIT:

Code (reserved=251549KB, committed=14169KB)
     (malloc=1949KB #3424)
     (mmap: reserved=249600KB, committed=12220KB)

Currently, about 13 MB of code is being cached, which could reach 245 MB.

3.7. GC

The following is what NMT reports about the memory usage of G1 GC:

GC (reserved=61771KB, committed=61771KB)
   (malloc=17603KB #4501)
   (mmap: reserved=44168KB, committed=44168KB)

We can see that almost 60 MB is both reserved and committed just to help G1 do its work.

Let's take a look at the memory usage of a simpler GC, such as Serial GC:

$ java -XX:NativeMemoryTracking=summary -Xms300m -Xmx300m -XX:+UseSerialGC -jar app.jar

Serial GC uses less than 1 MB:

GC (reserved=1034KB, committed=1034KB)
   (malloc=26KB #158)
   (mmap: reserved=1008KB, committed=1008KB)

Obviously, we shouldn't pick a GC algorithm just because of its memory usage, since the stop-the-world nature of Serial GC can cause performance degradation. However, there are several GCs to choose from, and each balances memory and performance differently.

3.8. Symbol

The following is what NMT reports about symbol allocations, such as the String Table and constant pools:

Symbol (reserved=10148KB, committed=10148KB)
       (malloc=7295KB #66194)
       (arena=2853KB #1)

Nearly 10 MB is allocated to symbols.

3.9. NMT over time

NMT allows us to track how memory allocation changes over time. First, we should mark the current state of the application as the baseline:

$ jcmd <pid> VM.native_memory baseline

Baseline succeeded

Then, after a while, we can compare the current memory usage with that baseline (baseline):

$ jcmd <pid> VM.native_memory summary.diff

NMT uses "+" and "-" signs to tell us how the memory usage changed over that period:

Total: reserved=1771487KB +3373KB, committed=491491KB +6873KB

-  Java Heap (reserved=307200KB, committed=307200KB)
             (mmap: reserved=307200KB, committed=307200KB)

-  Class (reserved=1084300KB +2103KB, committed=39356KB +2871KB)
// Truncated

The total memory reserved and committed increased by 3 MB and 6 MB, respectively. Other fluctuations in memory allocation can be easily detected.

3.10. Detailed NMT

NMT can provide very detailed information about the entire memory space map. To enable this detailed report, we should use the -XX:NativeMemoryTracking=detail tuning flag.
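
For example, carrying over the placeholder app.jar and heap sizes from the earlier examples, the detailed report could be requested like this:

$ java -XX:NativeMemoryTracking=detail -Xms300m -Xmx300m -jar app.jar
$ jcmd <pid> VM.native_memory detail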

In this article, we listed the different contributors to native memory allocation in the JVM. Then we learned how to inspect a running application to monitor its native allocations. With these points in mind, we can size our applications and their runtime environment more effectively.

