This article introduces how to use the JVM GC tuning tools, walking through each one with a practical case. The steps are simple, quick, and practical, and should help you diagnose GC problems with these tools.
While a program is running, the JVM exposes raw data about GC behavior, and various reports can be generated from it. The raw data includes:
Current usage of each memory pool
Total capacity of each memory pool
Duration of each GC pause
The duration of each phase within a GC pause.
Various indicators can be calculated from these data, such as program memory allocation rate, promotion rate and so on.
1. JMX API
The easiest way to get GC behavior data from a running JVM is through the standard JMX API. JMX is a standard API for obtaining runtime state information from inside the JVM. You can write program code that queries the JVM it runs in through the JMX API, or you can access the JVM (including remotely) through a JMX client.
The most common JMX clients are JConsole and JVisualVM (which can be extended with various plug-ins and is very powerful). Both tools ship with the standard JDK and are easy to use. If you are using JDK 7u40 or later, a third tool is also available: Java Mission Control (jmc).
To install the MBeans plug-in in JVisualVM: Tools > Plugins > Available Plugins, check VisualVM-MBeans, click Install, click Next, and wait for the installation to complete. Other plug-ins are installed in much the same way.
All JMX clients are stand-alone programs that connect to a target JVM, which can be either local or remote. To connect to a remote JVM, the target JVM must be started with specific system properties that open a remote JMX connection on a given port. For example:
java -Dcom.sun.management.jmxremote.port=5432 com.yourcompany.YourApp
Here, JVM opens port 5432 to support JMX connections.
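With only the port property set, remote connections are normally still refused, because JMX enables authentication and SSL by default. For a quick test inside a trusted network you would typically also disable both explicitly, roughly like this (never use such an unsecured setup in production):
java -Dcom.sun.management.jmxremote.port=5432 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false com.yourcompany.YourApp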
After connecting to a JVM through JVisualVM, switch to the MBeans tab and expand java.lang/GarbageCollector to see the GC behavior information. Below is a screenshot from JVisualVM:
The following is a screenshot from Java Mission Control:
From the screenshots above, you can see two garbage collectors: one cleans the young generation (PS Scavenge) and the other cleans the old generation (PS MarkSweep); the collector names are shown in the list. As you can see, Java Mission Control is more powerful both in functionality and in the way it presents the data.
For all garbage collectors, the information obtained through JMX API includes:
CollectionCount: the total number of GC executed by the garbage collector
CollectionTime: the cumulative time spent on the collector. This value is equal to the sum of the duration of all GC events
LastGcInfo: details of the most recent GC event. Includes the duration of GC events (duration), start time (startTime) and end time (endTime), as well as the usage of each memory pool before and after the last GC
MemoryPoolNames: the name of each memory pool
Name: name of the garbage collector
ObjectName: the name of the MBean defined by the JMX specification
Valid: whether this collector is valid. In practice, this always seems to be "true".
In our experience, this information alone is not enough to draw conclusions about GC performance; at most, you can write programs that collect this GC-related JMX data for further statistics and analysis. As you will see below, we generally do not pay much attention to these MBeans, but they are useful for understanding how GC works.
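For completeness, here is a minimal in-process sketch that reads the attributes listed above through the standard java.lang.management API; the class name GcMxBeanDump is only illustrative, and LastGcInfo is omitted because it is exposed only through the com.sun.management extension of this interface.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Print the JMX attributes listed above for every garbage collector of the current JVM.
public class GcMxBeanDump {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("Name:            " + gc.getName());
            System.out.println("CollectionCount: " + gc.getCollectionCount());
            System.out.println("CollectionTime:  " + gc.getCollectionTime() + " ms");
            System.out.println("MemoryPoolNames: " + String.join(", ", gc.getMemoryPoolNames()));
            System.out.println("Valid:           " + gc.isValid());
            System.out.println();
        }
    }
}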
2. JVisualVM
The "VisualGC" plug-in of the JVisualVM tool provides basic JMX client functionality and displays GC events and the usage of each memory space in real time.
The Visual GC plug-in is most often used by developers and performance tuning specialists to monitor a locally running Java program and quickly see its GC information while it runs.
The chart on the left shows the usage of each memory pool: Metaspace (or the permanent generation), the old generation, Eden, and the two survivor spaces.
On the right, the top two charts are unrelated to GC; they show JIT compilation time and class loading time. The six charts below show the history of each memory pool: the number of GCs and the total GC time for each pool, plus its maximum, peak, and current usage.
Then there is the histogram, which shows the age distribution of objects in the young generation. Object tenuring monitoring itself is not explained in this chapter.
Compared with pure JMX tools, the Visual GC plug-in offers a friendlier interface; if no other tool is at hand, choose Visual GC. Later in this chapter we introduce tools that provide more information and a better perspective, and the "Profilers" section also covers the scenarios where JVisualVM fits best, such as allocation profiling. We never dismiss any particular tool; it all depends on the actual situation.
3. jstat
jstat (the Java Virtual Machine statistics monitoring tool) is another monitoring tool provided with the standard JDK and reports a variety of metrics. It can connect to either a local or a remote JVM. To see the supported metrics and their corresponding options, run "jstat -options". For example:
Option             Displays...
class              Statistics on the behavior of the class loader
compiler           Statistics on the behavior of the HotSpot Just-In-Time compiler
gc                 Statistics on the behavior of the garbage collected heap
gccapacity         Statistics of the capacities of the generations and their corresponding spaces
gccause            Summary of garbage collection statistics (same as -gcutil), with the cause of the last and current (if applicable) garbage collection events
gcnew              Statistics of the behavior of the new generation
gcnewcapacity      Statistics of the sizes of the new generation and its corresponding spaces
gcold              Statistics of the behavior of the old and permanent generations
gcoldcapacity      Statistics of the sizes of the old generation
gcpermcapacity     Statistics of the sizes of the permanent generation
gcutil             Summary of garbage collection statistics
printcompilation   HotSpot compilation method statistics
jstat is useful for quickly determining whether GC behavior is healthy. It is started with "jstat -gc -t PID 1s", where PID is the ID of the Java process to monitor. You can list the running Java processes with the jps command:
jps
jstat -gc -t 2428 1s
As a result of the above command, jstat outputs one line of new content per second to standard output, such as:
Timestamp  S0C     S1C     S0U     S1U  EC       EU       OC        OU        MC       MU       CCSC    CCSU    YGC  YGCT   FGC  FGCT     GCT
200.0      8448.0  8448.0  8448.0  0.0  67712.0  67712.0  169344.0  169344.0  21248.0  20534.3  3072.0  2807.7  34   0.720  658  133.684  134.404
201.0      8448.0  8448.0  8448.0  0.0  67712.0  67712.0  169344.0  169343.2  21248.0  20534.3  3072.0  2807.7  34   0.720  662  134.712  135.432
202.0      8448.0  8448.0  8102.5  0.0  67712.0  67598.5  169344.0  169343.6  21248.0  20534.3  3072.0  2807.7  34   0.720  667  135.840  136.559
203.0      8448.0  8448.0  8126.3  0.0  67712.0  67702.2  169344.0  169343.6  21248.0  20547.2  3072.0  2807.7  34   0.720  669  136.178  136.898
204.0      8448.0  8448.0  8126.3  0.0  67712.0  67702.2  169344.0  169343.6  21248.0  20547.2  3072.0  2807.7  34   0.720  669  136.178  136.898
205.0      8448.0  8448.0  8134.6  0.0  67712.0  67712.0  169344.0  169343.5  21248.0  20547.2  3072.0  2807.7  34   0.720  671  136.234  136.954
206.0      8448.0  8448.0  8134.6  0.0  67712.0  67712.0  169344.0  169343.5  21248.0  20547.2  3072.0  2807.7  34   0.720  671  136.234  136.954
207.0      8448.0  8448.0  8154.8  0.0  67712.0  67712.0  169344.0  169343.5  21248.0  20547.2  3072.0  2807.7  34   0.720  673  136.289  137.009
208.0      8448.0  8448.0  8154.8  0.0  67712.0  67712.0  169344.0  169343.5  21248.0  20547.2  3072.0  2807.7  34   0.720  673  136.289  137.009
Let's interpret the output above. Referring to the jstat man page, we can see that:
jstat attached to the JVM 200 seconds after the JVM started; this is shown by the "Timestamp" column in the first row. From then on, jstat receives information from the JVM once per second, as requested by the "1s" command-line argument.
From the "YGC" column in the first row, we know that the younger generation has executed 34 times of GC, and from the "FGC" column, we know that the entire heap memory has executed 658 times of full GC.
The GC of the younger generation takes a total of 0.720 seconds, shown in the "YGCT" column.
The total time taken by full GC is 133.684 seconds, shown in the "FGCT" column. This should immediately catch our attention: the JVM has only been running for about 200 seconds, yet roughly 66% of that time has been consumed by full GC.
If you look at the next line, the problem will be even more obvious.
Four more full GCs were executed during the next second; the "FGC" column goes from 658 to 662.
These four full GC pauses took up almost the entire second (based on the increase in the "FGCT" column): compared with the first line, full GC consumed 928 ms, i.e. 92.8% of the time.
According to the "OC" and "OU" columns, the space of the whole old era was 169344.0 KB ("OC"), and it still took up 169344.2 KB ("OU") after four Full GC. It takes 928ms time to release only 800 bytes of memory, which is not normal.
Just from these two lines, it is clear that the program has a very serious problem. Reading on to the following lines confirms that the problem persists and is getting worse.
The JVM is almost completely stalled, since GC is consuming more than 90% of the available computing resources, and after every collection the old generation remains almost full. In fact, the program died about a minute later, throwing a "java.lang.OutOfMemoryError: GC overhead limit exceeded" error.
It can be seen that jstat can quickly detect GC behaviors that are extremely harmful to JVM health. In general, the following problems can be quickly identified by just looking at the output of jstat:
The ratio of the last column, "GCT", to the JVM's total elapsed time, "Timestamp", is the GC overhead. If "GCT" grows significantly with every one-second sample relative to the total running time, excessive GC overhead is exposed. How much GC overhead a system can tolerate differs and is determined by its performance requirements, but as a rule of thumb anything above 10% is problematic. (A programmatic version of this check is sketched right after this list.)
Rapid changes in the "YGC" and "FGC" columns are often a sign of problems. Frequent GC pauses accumulate and cause more thread pauses (stop-the-world pauses), which in turn affect throughput.
If the "OU" column shows old-generation usage staying close to the old-generation capacity ("OC") and never decreasing, the old-generation collections are running but are essentially ineffective.
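As a rough programmatic counterpart to the GCT / Timestamp rule of thumb above, the following sketch (the class name GcOverhead is illustrative, not from the original article) computes the share of JVM uptime spent in GC using the same MXBeans introduced in the JMX section:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Rough in-process counterpart of jstat's GCT / Timestamp ratio:
// cumulative GC time divided by JVM uptime, both reported in milliseconds.
public class GcOverhead {
    public static void main(String[] args) {
        long gcTimeMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();   // -1 if the collector does not report it
            if (t > 0) {
                gcTimeMs += t;
            }
        }
        long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("GC overhead so far: %.2f%%%n", 100.0 * gcTimeMs / uptimeMs);
    }
}

Run inside the monitored application, the printed percentage should roughly track the GCT / Timestamp ratio reported by jstat.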
4. GC logs
GC-related information can also be obtained from log output. Because the GC logging module is built into the JVM, GC logs contain the most comprehensive description of GC activity; they are the de facto standard and the most truthful data source for GC performance evaluation and tuning.
GC logs are plain text, usually written to a file, but they can also be printed to the console. Several JVM parameters control what is logged; for example, you can print the duration of each GC and the time the application was stopped (-XX:+PrintGCApplicationStoppedTime), or how many references of each type were cleaned up by GC (-XX:+PrintReferenceGC).
To print the GC log, you need to specify the following parameters in the startup script:
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintGCDetails -Xloggc:<filename>
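As a side note, JDK 9 and later removed these PrintGC* flags in favor of unified logging; a roughly equivalent setting there looks like the following (exact decorator syntax may vary between versions):
java -Xlog:gc*:file=gc.log:time,uptime,level,tags com.yourcompany.YourApp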
These PrintGC* parameters instruct the JVM to write every GC event to the log file, together with the date and timestamp of each GC. The output differs slightly between GC algorithms; with the Parallel GC it looks like this:
199.879: [Full GC (Ergonomics) [PSYoungGen: 64000K->63998K(74240K)] [ParOldGen: 169318K->169318K(169472K)] 233318K->233317K(243712K), [Metaspace: 20427K->20427K(1067008K)], 0.1473386 secs] [Times: user=0.43 sys=0.01, real=0.15 secs]
200.027: [Full GC (Ergonomics) [PSYoungGen: 64000K->63998K(74240K)] [ParOldGen: 169318K->169318K(169472K)] 233318K->233317K(243712K), [Metaspace: 20427K->20427K(1067008K)], 0.1567794 secs] [Times: user=0.41 sys=0.00, real=0.16 secs]
200.184: [Full GC (Ergonomics) [PSYoungGen: 64000K->63998K(74240K)] [ParOldGen: 169318K->169318K(169472K)] 233318K->233317K(243712K), [Metaspace: 20427K->20427K(1067008K)], 0.1621946 secs] [Times: user=0.43 sys=0.00, real=0.16 secs]
200.346: [Full GC (Ergonomics) [PSYoungGen: 64000K->63998K(74240K)] [ParOldGen: 169318K->169318K(169472K)] 233318K->233317K(243712K), [Metaspace: 20427K->20427K(1067008K)], 0.1547695 secs] [Times: user=0.41 sys=0.00, real=0.15 secs]
200.502: [Full GC (Ergonomics) [PSYoungGen: 64000K->63999K(74240K)] [ParOldGen: 169318K->169318K(169472K)] 233318K->233317K(243712K), [Metaspace: 20427K->20427K(1067008K)], 0.1563071 secs] [Times: user=0.42 sys=0.01, real=0.16 secs]
200.659: [Full GC (Ergonomics) [PSYoungGen: 64000K->63999K(74240K)] [ParOldGen: 169318K->169318K(169472K)] 233318K->233317K(243712K), [Metaspace: 20427K->20427K(1067008K)], 0.1538778 secs] [Times: user=0.42 sys=0.00, real=0.16 secs]
These formats are described in detail in "GC Algorithms: Implementations"; if you are not familiar with them, read that chapter first.
By analyzing the contents of the above log, we can see that:
This part of the log was captured roughly 200 seconds after the JVM started.
Within a period of 780 ms, the log shows five full GC pauses (the sixth event starts right at the end of this window and is excluded).
The combined duration of these five pauses is 777 milliseconds (0.147 + 0.157 + 0.162 + 0.155 + 0.156 seconds), which accounts for 99.6% of that period.
After each collection, almost all of the old generation (169318 KB out of 169472 KB) is still occupied.
From this log we can conclude that the application's GC situation is very bad: the JVM is almost completely stalled because GC consumes more than 99% of the CPU time, and the old generation remains full after every collection, which further confirms the conclusion. The sample program is the same one used in the jstat section; it hangs a few minutes later, throwing a "java.lang.OutOfMemoryError: GC overhead limit exceeded" error. Needless to say, the problem is very serious.
As you can see from this example, the GC log is useful for monitoring GC behavior and whether JVM is in a healthy state. In general, you can quickly identify the following symptoms by viewing the GC log:
GC is too expensive. If the total GC pause time is long, it will damage the throughput of the system. Different systems allow different proportions of GC overhead, but it is generally believed that the normal range is within 10%.
Individual GC pauses that last too long. A single overly long GC pause hurts the latency of the system: if the latency requirement states that a transaction must complete within 1000 ms, then no GC pause longer than 1000 milliseconds can be tolerated.
Old-generation usage at its limit. If the old generation remains nearly full even after full GC, GC becomes a performance bottleneck, either because the heap is too small or because there is a memory leak. This symptom also drives GC overhead up sharply.
As you can see, GC logs are very detailed. But beyond simple demo programs like this one, production systems generate huge volumes of GC logs, which are difficult to read and analyze by hand.
5. GCViewer
We could write our own parser to turn large GC logs into intuitive, easy-to-read graphics, but in many cases that is not a good idea: the various GC algorithms are complex and their log formats are not compatible with one another. This is where GCViewer comes in.
GCViewer is an open source GC log analysis tool. The project's GitHub page gives a complete description of the available metrics; below we introduce the most commonly used ones.
The first step is to obtain a GC log file, and it should reflect the specific scenario you are tuning for. If the operations team reports that the system slows down every Friday afternoon, then, whether or not GC turns out to be the main cause, there is little point in analyzing logs from Monday morning.
After you get the log file, you can analyze it with GCViewer, and you will roughly see a graphical interface similar to the following:
The command line used is roughly as follows:
java -jar gcviewer_1.3.4.jar gc.log
Of course, if you do not want to open the GUI, you can add extra arguments to write the analysis results directly to files.
The command is roughly as follows:
java -jar gcviewer_1.3.4.jar gc.log summary.csv chart.png
The above command writes a summary to the CSV file summary.csv in the current directory and saves the chart as chart.png.
In the figure above, the Chart area is a graphical representation of GC events, including the size of each memory pool. Only two visual indicators are shown here: the blue line marks the used heap size, and the black bars mark the length of each GC pause.
As the chart shows, memory usage grows rapidly and the maximum heap size is reached in about a minute. With almost all heap memory consumed, new objects cannot be allocated smoothly, which triggers frequent full GC events. This indicates that the program either has a memory leak or was started with too little memory.
You can also see the frequency and duration of GC pauses from the figure. After 30 seconds, GC runs almost non-stop, with a maximum pause time of more than 1.4 seconds.
There are three tabs on the right. The most useful values on the "Summary" tab are "Throughput", "Number of GC pauses" and "Number of full GC pauses". Throughput shows the percentage of time the application spent doing useful work; the rest was consumed by GC.
The throughput in this example is 6.28%, which means 93.72% of the CPU time was spent on GC. The system is clearly in a bad state: precious CPU time is spent not on actual work but on trying to clean up garbage.
The next interesting thing is the "Pause" tab:
"Pause" shows the total time, average, minimum and maximum of GC pauses, and counts total and minor/major pauses separately. If you want to optimize the latency indicator of the program, these statistics can quickly determine whether the pause time is too long. In addition, we can get a clear message: the cumulative pause time is 634.59 seconds, and the total number of GC pauses is 3938, which is unusually high in the total elapsed time of 11 minutes / 660s.
For more detailed summary information about GC pauses, please check the "Event details" label in the main interface:
From the "Event details" tab, you can see a summary of all the important GC events in the log: the number of normal GC and Full GC pauses, the number of concurrent executions, and so on. In this example, you can see that Full GC pauses severely affect throughput and latency, based on 3928 Full GC pauses for 634 seconds.
As you can see, GCViewer can quickly show abnormal GC behavior with a graphical interface. In general, pictorial information can quickly reveal the following symptoms:
Low throughput. When the application throughput drops to an intolerable level, the total time for useful work is greatly reduced. Exactly how much "tolerable" depends on the specific scenario. As a rule of thumb, an effective time of less than 90% is a cause for alarm, and GC may need to be optimized.
The pause time of single GC is too long. As long as a GC pause time is too long, it will affect the delay indicator of the program. For example, if the delay requirement states that a transaction must be completed within 1000 ms, no GC pause of more than 1000 milliseconds can be tolerated.
Heap memory usage is too high. If the old space is still nearly full after the Full GC, the program performance will degrade significantly, possibly due to insufficient resources or memory leaks. This symptom can have a serious impact on throughput.
We definitely recommend this graphical view of GC log information: instead of reading long and complex GC logs, you get the same information from easy-to-understand charts.
6. Profilers
Next we introduce profilers. Compared with the tools above, profilers cover only a part of the GC picture, and in this section we focus only on their GC-related functionality.
First, a warning: do not assume that a profiler is the right tool for every scenario. Profilers are genuinely useful at times, for example when hunting for CPU hotspots in your code, but for other problems they are not necessarily a good solution.
The same applies to GC tuning. You do not need a profiler to detect GC-induced latency or throughput problems; the tools mentioned earlier (jstat, and raw or visualized GC logs) detect them better and faster. In particular, it is best to avoid profilers when collecting performance data in a production environment, because of their high overhead.
If you do need to optimize GC, however, a profiler comes in handy because it shows object creation at a glance. Take a step back: GC pauses are caused either by objects not fitting into a memory pool or by too many objects being created, and in both cases the cure is to create fewer objects. All profilers can track object allocation (allocation profiling) and, based on the allocation traces, tell you which objects actually reside in memory.
Allocation profiling pinpoints the places where large numbers of objects are created. The benefit of using a profiler for GC tuning is that it can determine which types of objects consume the most memory and which threads create the most objects.
Let's introduce three allocation profilers through examples: hprof, JVisualVM and AProf. There are many profilers to choose from, both commercial and free, but their functionality and usage are broadly similar.
hprof
The hprof profiler is bundled with the JDK. It can be used in all kinds of environments, so it is usually the first tool to try.
To get hprof and the program to run together, you need to modify the startup script, like this:
java -agentlib:hprof=heap=sites com.yourcompany.YourApplication
When the program exits, it dumps the allocation information to the file java.hprof.txt in the working directory. Open it in a text editor and search for "SITES BEGIN"; you will see something like:
SITES BEGIN (ordered by live bytes) Tue Dec 8 11:16:15 2015
          percent          live        alloc'ed   stack  class
 rank   self   accum     bytes  objs     bytes   objs  trace  name
    1  64.43%  4.43%   8370336 20121  27513408  66138  302116 int[]
    2   3.26% 88.49%    482976 20124   1587696  66154  302104 java.util.ArrayList
    3   1.76% 88.74%    241704 20121   1587312  66138  3021   eu.plumbr.demo.largeheap.ClonableClass0006
... partially omitted ...
SITES END
As the snippet shows, the allocation sites are ordered by live bytes. The first line shows that 64.43% of all objects are integer arrays (int[]), created at the allocation site identified by trace 302116. Searching for "TRACE 302116" shows:
TRACE 302116:
    eu.plumbr.demo.largeheap.ClonableClass0006.<init>(GeneratorClass.java:11)
    sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    java.lang.reflect.Constructor.newInstance(Constructor.java:422)
Now that we know that 64.43% of the objects are integer arrays created at line 11 of the ClonableClass0006 constructor, we can optimize that code to reduce the pressure on GC.
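What the fix looks like depends entirely on the code in question. Purely as a hypothetical illustration (this is not the demo application's actual code), one common pattern is to stop allocating a fresh array on every call and reuse a preallocated buffer instead:

// Hypothetical illustration only; not code from the demo application.
// computeFresh() allocates a new int[] on every call, the kind of hot allocation
// site hprof points at; computeReusing() reuses one preallocated buffer instead.
public class BufferReuseExample {
    private static final int MAX_SIZE = 1024;
    private final int[] reusableBuffer = new int[MAX_SIZE];

    // Before: one new array per call, so heavy young-generation allocation.
    int[] computeFresh(int size) {
        int[] buffer = new int[size];
        for (int i = 0; i < size; i++) {
            buffer[i] = i * i;               // stand-in for real work
        }
        return buffer;
    }

    // After: repeated calls reuse the same buffer (callers must not hold on to it).
    int[] computeReusing(int size) {
        for (int i = 0; i < size; i++) {
            reusableBuffer[i] = i * i;       // stand-in for real work
        }
        return reusableBuffer;
    }
}

Whether such reuse is safe depends on the calling code, so treat it as one possible direction rather than a recipe.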
Java VisualVM
The first part of this chapter already introduced JVisualVM as a tool for monitoring the JVM's GC behavior; this section describes its use for allocation profiling.
JVisualVM connects to the running JVM through GUI. After connecting to the target JVM:
Open Tools > Options, click the Profiler tab, add a configuration, select Profiler memory, and make sure "Record allocations stack traces" is checked.
Check the "Settings" check box, and under the memory Settings tab, modify the default configuration.
Click the "Memory" button to start memory analysis.
Let the program run for a while to gather enough information about object allocation.
Click the "Snapshot" button to capture a snapshot of the information collected so far.
After completing the above steps, you can get information like this:
The results above are sorted by the number of objects created for each class. The first line shows that int[] arrays are created most often. Right-clicking the row shows where these objects were created:
JVisualVM is easier to use than hprof: for example, in the screenshot above, all the allocation information for int[] is in one place, so it is easy to see whether the same code allocates it in multiple places.
AProf
The last profiler to introduce is AProf, developed by Devexperts. AProf is a memory allocation profiler that is also packaged as a Java agent.
To analyze an application with AProf, you need to modify the JVM startup script, like this:
java -javaagent:/path-to/aprof.jar com.yourcompany.YourApplication
After restarting the application, an aprof.txt file will be generated in the working directory. This file is updated every minute and contains information like this:
========================================================================================
TOTAL allocation dump for 91289 ms (0h01m31s)
Allocated 1769670584 bytes in 24868088 objects of 425 classes in 2127 locations
========================================================================================

Top allocation-inducing locations with the data types allocated from them
----------------------------------------------------------------------------------------
eu.plumbr.demo.largeheap.ManyTargetsGarbageProducer.newRandomClassObject: 1423675776 (80.44%) bytes in 17113721 (68.81%) objects (avg size 83 bytes)
    int[]: 711322976 (40.19%) bytes in 1709911 (6.87%) objects (avg size 416 bytes)
    char[]: 369550816 (20.88%) bytes in 5132759 (20.63%) objects (avg size 72 bytes)
    java.lang.reflect.Constructor: 136800000 (7.73%) bytes in 1710000 (6.87%) objects (avg size 80 bytes)
    java.lang.Object[]: 41079872 (2.32%) bytes in 1710712 (6.87%) objects (avg size 24 bytes)
    java.lang.String: 41063496 (2.32%) bytes in 1710979 (6.88%) objects (avg size 24 bytes)
    java.util.ArrayList: 41050680 (2.31%) bytes in 1710445 (6.87%) objects (avg size 24 bytes)
... cut for brevity ...
The output above is sorted by allocation size. It shows that 80.44% of the bytes and 68.81% of the objects were allocated in the ManyTargetsGarbageProducer.newRandomClassObject() method. Among them, int[] arrays take up the most memory, 40.19% of the total.
Looking further down the file, the allocation traces are also ordered by allocation size:
Top allocated data types with reverse location traces
----------------------------------------------------------------------------------------
int[]: 725306304 (40.98%) bytes in 1954234 (7.85%) objects (avg size 371 bytes)
    eu.plumbr.demo.largeheap.ClonableClass0006.<init>: 38357696 (2.16%) bytes in 92206 (0.37%) objects (avg size 416 bytes)
        java.lang.reflect.Constructor.newInstance: 38357696 (2.16%) bytes in 92206 (0.37%) objects (avg size 416 bytes)
            eu.plumbr.demo.largeheap.ManyTargetsGarbageProducer.newRandomClassObject: 38357280 (2.16%) bytes in 92205 (0.37%) objects (avg size 416 bytes)
            java.lang.reflect.Constructor.newInstance: 416 (0.005%) bytes in 1 (0.005%) objects (avg size 416 bytes)
... cut for brevity ...
As you can see, the int[] allocations keep growing at the ClonableClass0006 constructor.
Like the other tools, AProf reveals allocation sizes and locations, letting you quickly find the code that consumes the most memory. In our view, AProf is the most useful allocation profiler because it focuses exclusively on memory allocation and does that one job best. The tool is also open source and free, and it has the lowest overhead of the tools discussed here.
This concludes the introduction to "how to use JVM GC tuning tools". Thank you for reading.