An Example Analysis of JVM Memory Management in Depth: Garbage Collectors and Memory Allocation Strategies

This article walks through JVM memory management in depth: garbage collectors and memory allocation strategies, with worked examples. The content is quite detailed; interested readers can use it as a reference, and I hope it proves helpful.

Between Java and C++ stands a high wall built of dynamic memory allocation and garbage collection technology: the people outside the wall want to get in, while the people inside want to get out.

Overview:

When it comes to garbage collection (Garbage Collection, hereafter GC), most people regard the technology as a byproduct of the Java language. In fact, GC has a much longer history than Java. Lisp, born at MIT in 1960, was the first language to truly use dynamic memory allocation and garbage collection. When Lisp was still in its infancy, people were already thinking about the three questions GC has to answer: Which memory needs to be reclaimed? When should it be reclaimed? How should it be reclaimed?

After half a century of development, memory allocation strategies and garbage collection technology are now quite mature, and everything seems to have entered an "automated" era. So why do we still need to understand GC and memory allocation? The answer is simple: when we need to troubleshoot memory overflows and leaks, or when garbage collection becomes the bottleneck that keeps a system from reaching higher concurrency, we need the means to monitor and tune these "automated" technologies.

Turning the clock back from 1960 to the present, and back to the Java language we are familiar with: of the runtime data areas of Java memory, the program counter, the VM stack and the native method stack are created and destroyed with their thread, and the frames in the stacks are pushed and popped in an orderly fashion as methods are entered and exited. How much memory each frame uses is essentially known when the Class file is generated (JIT compilation may apply some optimizations later, but it can generally be considered known at compile time), so allocation and reclamation in these areas are highly deterministic, and there is little to think about in terms of collection there. The Java heap and the method area (including the runtime constant pool) are different: we only know which objects will be created once the program actually runs, so allocation and reclamation of this memory are dynamic. The "memory" allocation and reclamation discussed in the rest of this article refer only to this part.

Is the object dead?

Almost all objects in the Java world live on the heap. Before reclaiming, we must first determine which of these objects are still alive and which are "dead", that is, which can no longer be used in any possible way.

Reference counting algorithm (Reference Counting)

The original idea, which is also the algorithm many textbooks use to judge whether an object is alive, goes like this: attach a reference counter to the object; whenever a reference to it is created, the counter is incremented by 1, and whenever a reference expires, the counter is decremented by 1. An object whose counter is zero can no longer be used.

Objectively speaking, the reference counting algorithm is simple to implement and efficient at making the judgment, and in most cases it is a fine algorithm, but it cannot handle circular references between objects. A simple example: objects A and B each have a field (b and a respectively), and we set A.b = B and B.a = A. Apart from these two fields, there are no other references to either object, so in fact neither can ever be accessed again, yet a reference counting collector could never reclaim them. The sketch below makes this concrete.
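A minimal sketch of the circular-reference case just described, assuming a HotSpot-style tracing collector; the class name and field are made up for the example:

public class ReferenceCountingGC {
    public Object instance = null;

    public static void main(String[] args) {
        ReferenceCountingGC objA = new ReferenceCountingGC();
        ReferenceCountingGC objB = new ReferenceCountingGC();
        objA.instance = objB;   // A.b = B
        objB.instance = objA;   // B.a = A

        objA = null;            // after these two lines, neither object is reachable from any root,
        objB = null;            // but each still holds a reference to the other

        System.gc();            // a tracing collector reclaims both; pure reference counting could not
    }
}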

Root search algorithm (GC Roots Tracing)

In mainstream production languages (Java, C#, and even the Lisp mentioned earlier), a root search algorithm is used to determine whether an object is alive. The basic idea is to start from a set of objects called "GC Roots" and search downward along references; when no reference chain connects an object to any GC Root, the object is proven unreachable. In the Java language, GC Roots include:

1. References in the VM stack (local variables in stack frames)

2. Static field references in the method area

3. References in JNI (that is, in Native methods)

The sketch below illustrates the first two kinds of roots.
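A minimal illustration; the class and field names are made up for the example:

public class GcRootsDemo {
    // 2. A static field in the method area: CACHE keeps this byte[] reachable for the class's lifetime.
    private static byte[] CACHE = new byte[1024 * 1024];

    public static void main(String[] args) {
        // 1. A local variable in the current stack frame: buffer is reachable from a GC Root while main() is on the stack.
        byte[] buffer = new byte[1024 * 1024];

        byte[] garbage = new byte[1024 * 1024];
        garbage = null;          // no longer reachable from any root; eligible for collection

        System.out.println(buffer.length + CACHE.length);
        // 3. JNI references (objects pinned by native code) also act as roots, but need native code to demonstrate.
    }
}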

Live or die?

Declaring an object dead goes through at least two marking passes: if, after a root search, an object is found to have no reference chain to GC Roots, it is marked for the first time, and its finalize() method will be executed later (if it has one). "Executed" here means the virtual machine triggers the method but does not promise to wait for it to finish; this is necessary, because an object whose finalize() runs slowly, or even loops forever, would otherwise easily bring the whole collection system down. The finalize() method is the object's one chance to escape death. Later, GC performs a second, smaller-scale marking: if the object manages to save itself in finalize() (all it needs is to re-establish a connection to GC Roots, for example by assigning itself to some reachable reference), it is removed from the "about to be reclaimed" set during this second marking; if it has not escaped by then, it is essentially not far from real death. The sketch below shows this self-rescue in action.
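A small sketch modeled on the common textbook example, purely to illustrate the two-pass marking described above; as the next paragraph stresses, this is not a recommended technique:

public class FinalizeEscapeGC {
    public static FinalizeEscapeGC SAVE_HOOK = null;

    @Override
    protected void finalize() throws Throwable {
        super.finalize();
        System.out.println("finalize method executed!");
        SAVE_HOOK = this;                      // re-establish a reference chain to a GC Root
    }

    public static void main(String[] args) throws Throwable {
        SAVE_HOOK = new FinalizeEscapeGC();

        // First attempt: finalize() runs and the object rescues itself.
        SAVE_HOOK = null;
        System.gc();
        Thread.sleep(500);                     // give the low-priority Finalizer thread time to run
        System.out.println(SAVE_HOOK != null ? "yes, I am still alive :)" : "no, I am dead :(");

        // Second attempt: finalize() is only ever called once per object, so this time it dies.
        SAVE_HOOK = null;
        System.gc();
        Thread.sleep(500);
        System.out.println(SAVE_HOOK != null ? "yes, I am still alive :)" : "no, I am dead :(");
    }
}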

It should be noted that the description of finalize() above may read as slightly melodramatic; it does not mean the author encourages you to use this method to rescue objects. On the contrary, the advice is to avoid it as much as possible. It is not the destructor of C++: it is expensive and non-deterministic to run, and the calling order across objects cannot be guaranteed. Anything it can do, such as closing external resources, can be done better with try-finally.

About the method area

The method area is the permanent generation mentioned later. Many people think the permanent generation has no GC. In the heap, especially in the new generation, a conventional application can usually reclaim 70% to 95% of the space in one GC, while GC in the permanent generation recovers far less than that. Although the VM Spec does not require it, all commercial JVMs in production implement GC for the permanent generation, and it mainly reclaims two kinds of things: obsolete constants and unused classes. Both follow ideas very similar to object reclamation in the Java heap: search for remaining references. Constants are the simpler case, judged much like objects. Reclaiming a class is stricter; the following three conditions must all be met:

1. All instances of the class have been collected, that is, no instance of the class exists in the JVM.

2. The ClassLoader that loaded the class has been collected.

3. The corresponding java.lang.Class object of the class is not referenced anywhere; for example, none of the class's methods can be reached anywhere through reflection.

Whether classes may be unloaded can be controlled with the -XX:+ClassUnloading parameter, and you can use -verbose:class or -XX:+TraceClassLoading and -XX:+TraceClassUnLoading to view class loading and unloading information. A minimal sketch of the three conditions follows.
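A minimal sketch of the three conditions, assuming a hypothetical class MyClass compiled under a hypothetical directory ./ext/; run with -verbose:class to watch loading (and possibly unloading) messages:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ClassUnloadDemo {
    public static void main(String[] args) throws Exception {
        URLClassLoader loader = new URLClassLoader(new URL[] { new File("./ext/").toURI().toURL() });
        Class<?> clazz = loader.loadClass("MyClass");   // hypothetical class placed under ./ext/
        Object instance = clazz.newInstance();

        instance = null;   // condition 1: no instances of the class remain
        clazz = null;      // condition 3: no reference to the java.lang.Class object remains
        loader = null;     // condition 2: the defining ClassLoader itself becomes unreachable

        System.gc();       // the class is now eligible for unloading; whether and when is up to the JVM
    }
}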

In scenarios that make heavy use of reflection, dynamic proxies, bytecode frameworks such as CGLib, dynamically generated JSPs, or frequently customized ClassLoaders such as OSGi, the JVM needs class-unloading support to ensure the permanent generation does not overflow.

Garbage collection algorithm

In this section we do not intend to discuss implementations of the algorithms, only their basic ideas and how they developed. The most basic collection algorithm is the mark-sweep algorithm (Mark-Sweep). As its name suggests, it is divided into two stages, "mark" and "sweep": first all objects that need to be reclaimed are marked, then all marked objects are reclaimed; the marking process was essentially described in the previous section on determining object liveness. It is called the most basic algorithm because the later algorithms all build on this idea and optimize away its shortcomings. It has two main drawbacks. One is efficiency: neither marking nor sweeping is particularly efficient. The other is space: mark-sweep leaves a large number of discontinuous memory fragments, and too much fragmentation may mean that a later allocation cannot find enough contiguous memory and triggers another collection ahead of time. A toy sketch of the idea follows.
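A toy, non-JVM illustration of the mark-sweep idea over an explicit object graph; HeapObject, heap and gcRoots are made-up structures for the example, not JVM internals:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class HeapObject {
    boolean marked;
    List<HeapObject> references = new ArrayList<HeapObject>();
}

public class ToyMarkSweep {
    static List<HeapObject> heap = new ArrayList<HeapObject>();     // every "allocated" object
    static List<HeapObject> gcRoots = new ArrayList<HeapObject>();  // simulated GC Roots

    // Marking phase: traverse from the roots and mark everything reachable.
    static void mark(HeapObject obj) {
        if (obj == null || obj.marked) {
            return;
        }
        obj.marked = true;
        for (HeapObject ref : obj.references) {
            mark(ref);
        }
    }

    // Sweeping phase: reclaim every object that was never marked, then clear the marks.
    static void sweep() {
        for (Iterator<HeapObject> it = heap.iterator(); it.hasNext(); ) {
            HeapObject obj = it.next();
            if (!obj.marked) {
                it.remove();        // "free" the dead object
            } else {
                obj.marked = false; // reset for the next cycle
            }
        }
    }

    static void gc() {
        for (HeapObject root : gcRoots) {
            mark(root);
        }
        sweep();
    }

    public static void main(String[] args) {
        HeapObject a = new HeapObject();
        HeapObject b = new HeapObject();
        heap.add(a);
        heap.add(b);
        a.references.add(b);              // a -> b
        gcRoots.add(a);                   // only a is a root
        gc();
        System.out.println(heap.size());  // 2: b is reachable through a
        gcRoots.clear();
        gc();
        System.out.println(heap.size());  // 0: nothing is reachable any more
    }
}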

To solve the efficiency problem, a collection algorithm called "copying" (Copying) appeared. It divides the available memory into two equal halves and uses only one at a time. When that half is used up, the surviving objects are copied to the other half, and the first half is then cleaned up in one pass. This way, every collection reclaims a whole half-space, and allocation never has to deal with fragmentation: it is enough to move the top pointer and allocate sequentially, which is simple and efficient. But the cost of this algorithm is halving the usable memory, which is rather high. A toy sketch follows.
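A toy illustration of semi-space copying with bump-pointer allocation; the arrays and liveness flags stand in for the two halves and the marking result, and are not how HotSpot is implemented:

public class ToyCopyingCollector {
    static final int HALF = 1024;
    static Object[] fromSpace = new Object[HALF];
    static Object[] toSpace   = new Object[HALF];
    static int top = 0;                             // bump pointer into the active half

    static int allocate(Object obj) {
        if (top == HALF) {
            throw new IllegalStateException("active half full: a collection is needed");
        }
        fromSpace[top] = obj;
        return top++;                               // allocation is just "move the top pointer"
    }

    // Copy the survivors into the other half, then swap the roles of the two halves.
    static void collect(boolean[] alive) {
        int newTop = 0;
        for (int i = 0; i < top; i++) {
            if (alive[i]) {
                toSpace[newTop++] = fromSpace[i];
            }
        }
        Object[] tmp = fromSpace;
        fromSpace = toSpace;                        // the copied-to half becomes the active half
        toSpace = tmp;
        java.util.Arrays.fill(toSpace, null);       // the old half is reclaimed wholesale
        top = newTop;
    }

    public static void main(String[] args) {
        allocate("a");
        allocate("b");
        allocate("c");
        collect(new boolean[] { true, false, true });  // pretend only "a" and "c" are still alive
        System.out.println(top);                       // 2: survivors sit compacted at the start of the new half
    }
}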

Today's commercial virtual machines all use this copying algorithm to collect the new generation. Research at IBM showed that 98% of objects in the new generation are short-lived, so there is no need to split the space 1:1. Instead, memory is divided into one larger eden space and two smaller survivor spaces; each round uses eden plus one of the survivors. At collection time, the live objects in eden and the used survivor are copied in one pass into the other survivor, and eden plus the used survivor are then cleaned up. The default eden-to-survivor size ratio in the Sun HotSpot virtual machine is 8:1, which means only 10% of the new generation is "wasted" at any time. Of course, 98% reclamation holds only for typical scenarios; there is no guarantee that no more than 10% of objects survive each collection, and when survivor space is not enough, other memory (namely the old generation) must be relied upon for an allocation guarantee (handle promotion).

When object survival rates are high, the efficiency of the copying algorithm drops. More crucially, unless you are willing to waste 50% of the space, you need extra space to act as an allocation guarantee against the extreme case in which 100% of the objects in the half-space survive, so this algorithm generally cannot be used directly for the old generation. Hence another algorithm, mark-compact (Mark-Compact), was proposed: the marking phase is unchanged, but instead of sweeping directly, all live objects are then moved toward one end, and the memory beyond that boundary is cleaned up in one go.

Current commercial virtual machines all use "generational collection" (Generational Collecting), which contains no new ideas of its own; it simply divides memory into regions according to object lifetimes. The Java heap is usually divided into a new generation and an old generation, so that the most appropriate algorithm can be used for each. For example, in each new-generation GC a large number of objects die and only a few survive, so the copying algorithm completes the collection at the cost of copying only that small number of survivors.

Garbage collector

A garbage collector is a concrete implementation of the collection algorithms, and different virtual machines provide different garbage collectors, along with parameters that let users combine collectors for the different generations according to the characteristics and requirements of their own applications. The collectors discussed in this article are those of the Sun HotSpot virtual machine, version 1.6.

Figure 1. Sun JVM 1.6 garbage collectors

Figure 1 shows the six collectors that version 1.6 provides for the different generations; if two collectors are connected by a line, they can be used together. Before introducing the individual collectors, let us be clear about one thing: there is no best collector, only the most suitable collector.

1. Serial collector

A single-threaded collector that pauses all application worker threads while it collects (this pause is called Stop The World, hereafter STW). It uses the copying algorithm and is the default new-generation collector for virtual machines running in Client mode.

2. ParNew collector

The ParNew collector is the multithreaded version of Serial. Apart from using multiple collection threads, all other behaviour, including the algorithm, STW, object allocation rules and collection strategy, is exactly the same as the Serial collector's. It is the default new-generation collector for virtual machines running in Server mode. On a single-CPU machine, the ParNew collector is in no way better than the Serial collector.

3. Parallel Scavenge collector

The Parallel Scavenge collector (hereafter the PS collector) is also a multithreaded collector using the copying algorithm, but its object allocation rules and collection strategy differ from the ParNew collector's. It is implemented with maximum throughput as its goal (that is, minimizing the fraction of total running time spent in GC), and it is willing to accept longer STW pauses in exchange for maximum total throughput.

4. Serial Old collector

Serial Old is a single-threaded collector that uses the mark-compact algorithm. It is an old-generation collector, whereas the three above are new-generation collectors.

5. Parallel Old collector

The old-generation counterpart of the throughput-first collector, using multiple threads and the mark-compact algorithm, provided starting with JVM 1.6. Before that, if the new generation used the PS collector, the old generation had no choice but Serial Old, because PS cannot work together with the CMS collector.

6. CMS (Concurrent Mark Sweep) collector

CMS is a collector that aims for the shortest pause times. Using CMS does not give the best overall GC efficiency (minimum total GC time), but it minimizes how long the service is paused during GC, which matters greatly for real-time or highly interactive applications (securities trading, for example) that generally cannot tolerate long STW pauses. The CMS collector uses a mark-sweep algorithm, which means it produces memory fragmentation as it runs, so the virtual machine provides parameters to have CMS collections followed by a memory compaction. The flags commonly used to select these collector combinations are listed below.
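For reference, these are the HotSpot 1.6 command-line flags usually used to pick the collector pairings shown in Figure 1; the pairings in parentheses are the usual defaults when the flag is enabled, and your JVM's documentation remains the authoritative source:

-XX:+UseSerialGC            (Serial for the new generation + Serial Old for the old generation)
-XX:+UseParNewGC            (ParNew + Serial Old)
-XX:+UseConcMarkSweepGC     (ParNew + CMS, with Serial Old as the fallback when CMS fails)
-XX:+UseParallelGC          (Parallel Scavenge + Serial Old)
-XX:+UseParallelOldGC       (Parallel Scavenge + Parallel Old)

-XX:+UseCMSCompactAtFullCollection and -XX:CMSFullGCsBeforeCompaction control whether and how often a compaction is performed after CMS collections to counter the fragmentation mentioned above.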

Memory allocation and recovery strategy

One of the most important points in understanding GC is to understand the JVM's memory allocation strategy: where objects are allocated and when they are reclaimed. As for where objects are allocated: generally speaking they are allocated on the heap, though after escape analysis by the JIT they may be broken down by scalar replacement into primitive types and effectively allocated on the stack, and they may also be allocated in DirectMemory (discussed in a separate chapter). In more detail, objects are mostly allocated in the new generation's eden, or sometimes directly in the old generation; the specifics depend on the garbage collector currently in use and the related VM parameters. We can verify the memory allocation and collection policy of the Serial collector (the ParNew collector follows exactly the same rules) with the code below. After reading this analysis of the Serial collector, you might also write some programs against the JVM parameter documentation to explore the allocation strategies of the other collectors.

Listing 1: memory allocation test code

public class YoungGenGC {

    private static final int _1MB = 1024 * 1024;

    public static void main(String[] args) {
        // testAllocation();
        testHandlePromotion();
        // testPretenureSizeThreshold();
        // testTenuringThreshold();
        // testTenuringThreshold2();
    }

    /**
     * VM parameters: -verbose:gc -Xms20M -Xmx20M -Xmn10M -XX:+PrintGCDetails -XX:SurvivorRatio=8
     */
    @SuppressWarnings("unused")
    public static void testAllocation() {
        byte[] allocation1, allocation2, allocation3, allocation4;
        allocation1 = new byte[2 * _1MB];
        allocation2 = new byte[2 * _1MB];
        allocation3 = new byte[2 * _1MB];
        allocation4 = new byte[4 * _1MB]; // triggers one Minor GC
    }

    /**
     * VM parameters: -verbose:gc -Xms20M -Xmx20M -Xmn10M -XX:+PrintGCDetails -XX:SurvivorRatio=8
     *                -XX:PretenureSizeThreshold=3145728
     */
    @SuppressWarnings("unused")
    public static void testPretenureSizeThreshold() {
        byte[] allocation;
        allocation = new byte[4 * _1MB]; // allocated directly in the old generation
    }

    /**
     * VM parameters: -verbose:gc -Xms20M -Xmx20M -Xmn10M -XX:+PrintGCDetails -XX:SurvivorRatio=8
     *                -XX:MaxTenuringThreshold=1 -XX:+PrintTenuringDistribution
     */
    @SuppressWarnings("unused")
    public static void testTenuringThreshold() {
        byte[] allocation1, allocation2, allocation3;
        allocation1 = new byte[_1MB / 4]; // when it enters the old generation is decided by -XX:MaxTenuringThreshold
        allocation2 = new byte[4 * _1MB];
        allocation3 = new byte[4 * _1MB];
        allocation3 = null;
        allocation3 = new byte[4 * _1MB];
    }

    /**
     * VM parameters: -verbose:gc -Xms20M -Xmx20M -Xmn10M -XX:+PrintGCDetails -XX:SurvivorRatio=8
     *                -XX:MaxTenuringThreshold=15 -XX:+PrintTenuringDistribution
     */
    @SuppressWarnings("unused")
    public static void testTenuringThreshold2() {
        byte[] allocation1, allocation2, allocation3, allocation4;
        allocation1 = new byte[_1MB / 4]; // allocation1 + allocation2 together exceed half of the survivor space
        allocation2 = new byte[_1MB / 4];
        allocation3 = new byte[4 * _1MB];
        allocation4 = new byte[4 * _1MB];
        allocation4 = null;
        allocation4 = new byte[4 * _1MB];
    }

    /**
     * VM parameters: -verbose:gc -Xms20M -Xmx20M -Xmn10M -XX:+PrintGCDetails -XX:SurvivorRatio=8
     *                -XX:-HandlePromotionFailure
     */
    @SuppressWarnings("unused")
    public static void testHandlePromotion() {
        byte[] allocation1, allocation2, allocation3, allocation4, allocation5, allocation6, allocation7;
        allocation1 = new byte[2 * _1MB];
        allocation2 = new byte[2 * _1MB];
        allocation3 = new byte[2 * _1MB];
        allocation1 = null;
        allocation4 = new byte[2 * _1MB];
        allocation5 = new byte[2 * _1MB];
        allocation6 = new byte[2 * _1MB];
        allocation4 = null;
        allocation5 = null;
        allocation6 = null;
        allocation7 = new byte[2 * _1MB];
    }
}

Rule 1: in general, objects are allocated in eden. When eden cannot satisfy an allocation, a Minor GC is triggered.

Executing the testAllocation() method prints the GC log and the memory layout. The three parameters -Xms20M -Xmx20M -Xmn10M fix the Java heap at a non-expandable 20MB, of which 10MB goes to the new generation and the remaining 10MB to the old generation. -XX:SurvivorRatio=8 sets the eden-to-survivor ratio in the new generation to 8:1. The output clearly shows "eden space 8192K, from space 1024K, to space 1024K", so the total usable new-generation space is 9216K (eden plus one survivor). The arithmetic is spelled out below.
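As a quick check of the figures, using the parameters above:

new generation (-Xmn10M)         = 10240K
-XX:SurvivorRatio=8              means eden : from-survivor : to-survivor = 8 : 1 : 1
eden                             = 10240K * 8/10 = 8192K
each survivor                    = 10240K * 1/10 = 1024K
usable new generation            = eden + one survivor = 8192K + 1024K = 9216K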

We also see that a Minor GC occurs while testAllocation() runs; its result is that the new generation goes from 6651K to 148K, while total memory usage barely drops (because few objects are actually reclaimable). The GC happens because, when memory is being allocated for allocation4, eden is already 6MB full and does not have 4MB left, so a Minor GC is triggered. During this GC the virtual machine finds that the three existing 2MB objects cannot fit into a survivor space (each survivor is only 1MB), so they are moved straight into the old generation via the allocation guarantee. After the GC, the 4MB allocation4 object is allocated in eden.

Listing 2: output of the testAllocation() method

[GC [DefNew: 6651K->148K(9216K), 0.0070106 secs] 6651K->6292K(19456K), 0.0070426 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
 def new generation   total 9216K, used 4326K [0x029d0000, 0x033d0000, 0x033d0000)
  eden space 8192K,  51% used [0x029d0000, 0x02de4828, 0x031d0000)
  from space 1024K,  14% used [0x032d0000, 0x032f5370, 0x033d0000)
  to   space 1024K,   0% used [0x031d0000, 0x031d0000, 0x032d0000)
 tenured generation   total 10240K, used 6144K [0x033d0000, 0x03dd0000, 0x03dd0000)
   the space 10240K,  60% used [0x033d0000, 0x039d0030, 0x039d0200, 0x03dd0000)
 compacting perm gen  total 12288K, used 2114K [0x03dd0000, 0x049d0000, 0x07dd0000)
   the space 12288K,  17% used [0x03dd0000, 0x03fe0998, 0x03fe0a00, 0x049d0000)
No shared spaces configured.

Rule 2: when PretenureSizeThreshold is configured, objects larger than the configured value are allocated directly in the old generation.

After executing the testPretenureSizeThreshold() method, we see that eden is almost untouched, while 40% of the old generation's 10MB space is used; in other words, the 4MB allocation object was allocated directly in the old generation, because PretenureSizeThreshold was set to 3MB, so any object larger than 3MB goes straight to the old generation. The numbers line up as shown below.
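A quick check of the figures from the run above:

-XX:PretenureSizeThreshold=3145728 = 3 * 1024 * 1024 bytes = 3MB
allocation = 4MB, and 4MB > 3MB     so it is allocated directly in the old generation
old generation usage                = 4096K / 10240K = 40%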

Listing 3:

Heap
 def new generation   total 9216K, used 671K [0x029d0000, 0x033d0000, 0x033d0000)
  eden space 8192K,   8% used [0x029d0000, 0x02a77e98, 0x031d0000)
  from space 1024K,   0% used [0x031d0000, 0x031d0000, 0x032d0000)
  to   space 1024K,   0% used [0x032d0000, 0x032d0000, 0x033d0000)
 tenured generation   total 10240K, used 4096K [0x033d0000, 0x03dd0000, 0x03dd0000)
   the space 10240K,  40% used [0x033d0000, 0x037d0010, 0x037d0200, 0x03dd0000)
 compacting perm gen  total 12288K, used 2107K [0x03dd0000, 0x049d0000, 0x07dd0000)
   the space 12288K,  17% used [0x03dd0000, 0x03fdefd0, 0x03fdf000, 0x049d0000)
No shared spaces configured.

Rule 3: objects that survive a GC in eden and can be accommodated by survivor space are moved to survivor space; if an object then survives a number of further collections in survivor (15 by default), it is promoted to the old generation. The number of collections required is set by -XX:MaxTenuringThreshold.

The testTenuringThreshold() method is executed with -XX:MaxTenuringThreshold=1 and -XX:MaxTenuringThreshold=15 respectively; the allocation1 object needs 256KB of memory, which survivor space can accommodate. With MaxTenuringThreshold=1, allocation1 is promoted to the old generation at the second GC, and the new generation's used memory drops cleanly to 0KB after that GC. With MaxTenuringThreshold=15, allocation1 is still in the new generation's survivor space after the second GC, and 404KB of the new generation remains occupied.

Listing 4:

MaxTenuringThreshold=1

[GC [DefNew
Desired survivor size 524288 bytes, new threshold 1 (max 1)
- age   1:     414664 bytes,     414664 total
: 4859K->404K(9216K), 0.0065012 secs] 4859K->4500K(19456K), 0.0065283 secs] [Times: user=0.02 sys=0.00, real=0.02 secs]
[GC [DefNew
Desired survivor size 524288 bytes, new threshold 1 (max 1)
: 4500K->0K(9216K), 0.0009253 secs] 8596K->4500K(19456K), 0.0009458 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
 def new generation   total 9216K, used 4178K [0x029d0000, 0x033d0000, 0x033d0000)
  eden space 8192K,  51% used [0x029d0000, 0x02de4828, 0x031d0000)
  from space 1024K,   0% used [0x031d0000, 0x031d0000, 0x032d0000)
  to   space 1024K,   0% used [0x032d0000, 0x032d0000, 0x033d0000)
 tenured generation   total 10240K, used 4500K [0x033d0000, 0x03dd0000, 0x03dd0000)
   the space 10240K,  43% used [0x033d0000, 0x03835348, 0x03835400, 0x03dd0000)
 compacting perm gen  total 12288K, used 2114K [0x03dd0000, 0x049d0000, 0x07dd0000)
   the space 12288K,  17% used [0x03dd0000, 0x03fe0998, 0x03fe0a00, 0x049d0000)
No shared spaces configured.

MaxTenuringThreshold=15

[GC [DefNew
Desired survivor size 524288 bytes, new threshold 15 (max 15)
- age   1:     414664 bytes,     414664 total
: 4859K->404K(9216K), 0.0049637 secs] 4859K->4500K(19456K), 0.0049932 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
[GC [DefNew
Desired survivor size 524288 bytes, new threshold 15 (max 15)
- age   2:     414520 bytes,     414520 total
: 4500K->404K(9216K), 0.0008091 secs] 8596K->4500K(19456K), 0.0008305 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
 def new generation   total 9216K, used 4582K [0x029d0000, 0x033d0000, 0x033d0000)
  eden space 8192K,  51% used [0x029d0000, 0x02de4828, 0x031d0000)
  from space 1024K,  39% used [0x031d0000, 0x03235338, 0x032d0000)
  to   space 1024K,   0% used [0x032d0000, 0x032d0000, 0x033d0000)
 tenured generation   total 10240K, used 4096K [0x033d0000, 0x03dd0000, 0x03dd0000)
   the space 10240K,  40% used [0x033d0000, 0x037d0010, 0x037d0200, 0x03dd0000)
 compacting perm gen  total 12288K, used 2114K [0x03dd0000, 0x049d0000, 0x07dd0000)
   the space 12288K,  17% used [0x03dd0000, 0x03fe0998, 0x03fe0a00, 0x049d0000)
No shared spaces configured.

Rule 4: if the total size of all objects of the same age in survivor space exceeds half of the survivor space, objects of that age or older go straight into the old generation, without having to reach the age required by MaxTenuringThreshold.

Execute the testTenuringThreshold2() method with -XX:MaxTenuringThreshold=15. The run shows survivor occupancy still at 0%, while the old generation is 6% fuller than expected; in other words, the allocation1 and allocation2 objects both went straight to the old generation without waiting for the threshold age of 15. That is because the two objects together occupy 512KB and are of the same age, satisfying the rule that same-age objects exceed half of the survivor space. If we comment out the new of either object, the other one is not promoted to the old generation.

Listing 5:

[GC [DefNew
Desired survivor size 524288 bytes, new threshold 1 (max 15)
- age   1:     676824 bytes,     676824 total
: 5115K->660K(9216K), 0.0050136 secs] 5115K->4756K(19456K), 0.0050443 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
[GC [DefNew
Desired survivor size 524288 bytes, new threshold 15 (max 15)
: 4756K->0K(9216K), 0.0010571 secs] 8852K->4756K(19456K), 0.0011009 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
Heap
 def new generation   total 9216K, used 4178K [0x029d0000, 0x033d0000, 0x033d0000)
  eden space 8192K,  51% used [0x029d0000, 0x02de4828, 0x031d0000)
  from space 1024K,   0% used [0x031d0000, 0x031d0000, 0x032d0000)
  to   space 1024K,   0% used [0x032d0000, 0x032d0000, 0x033d0000)
 tenured generation   total 10240K, used 4756K [0x033d0000, 0x03dd0000, 0x03dd0000)
   the space 10240K,  46% used [0x033d0000, 0x038753e8, 0x03875400, 0x03dd0000)
 compacting perm gen  total 12288K, used 2114K [0x03dd0000, 0x049d0000, 0x07dd0000)
   the space 12288K,  17% used [0x03dd0000, 0x03fe09a0, 0x03fe0a00, 0x049d0000)
No shared spaces configured.

Rule 5: before a Minor GC is carried out, the VM checks whether the average size promoted to the old generation in past collections is greater than the old generation's remaining space. If it is greater, a Full GC is performed directly. If it is smaller, the HandlePromotionFailure setting is checked to see whether a guarantee failure is allowed: if allowed, the Minor GC still proceeds; if not, a Full GC is performed instead.

As mentioned earlier, the copying algorithm is used in the new generation, but for the sake of memory utilisation only one survivor space is kept as the rotating backup, so when a large number of objects survive a GC (in the most extreme case, every object survives), the old generation is needed to take the objects that survivor cannot hold. Much like a loan guarantee in everyday life, the old generation acts as the guarantor, on the premise that it still has room for these objects; and since the number of surviving objects is not known before the GC, the average size promoted to the old generation in previous GCs is compared with the old generation's remaining space to decide whether a Full GC should run first to free up more room there.

Using the average is, in truth, still a gamble on probabilities: if the number of objects surviving a particular Minor GC spikes well above the average, the guarantee will still fail, and a Full GC has to be done after the failure anyway. Although the detour taken when the guarantee fails is the most expensive one, in most cases HandlePromotionFailure is still enabled, to avoid running Full GC too frequently. A small sketch of this decision follows.
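An illustrative sketch of the Rule 5 decision as described above; the class, method names and sizes are stand-ins for the example, not HotSpot internals:

public class PromotionGuaranteeSketch {
    static boolean handlePromotionFailure = true;   // corresponds to -XX:+/-HandlePromotionFailure

    // Returns the kind of collection chosen before a Minor GC, following Rule 5.
    static String chooseCollection(long avgPromotedBytes, long oldGenFreeBytes) {
        if (avgPromotedBytes > oldGenFreeBytes) {
            return "Full GC";            // on average the old generation cannot absorb the promotions
        }
        if (handlePromotionFailure) {
            return "Minor GC";           // gamble on the average; a failed guarantee still ends in a Full GC
        }
        return "Full GC";                // guarantee failure not allowed, so make room first
    }

    public static void main(String[] args) {
        System.out.println(chooseCollection(2 * 1024 * 1024, 6 * 1024 * 1024)); // Minor GC
        System.out.println(chooseCollection(8 * 1024 * 1024, 6 * 1024 * 1024)); // Full GC
    }
}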

Listing 6:

HandlePromotionFailure = false

[GC [DefNew: 6651K->148K(9216K), 0.0078936 secs] 6651K->4244K(19456K), 0.0079192 secs] [Times: user=0.00 sys=0.02, real=0.02 secs]
[GC [DefNew: 6378K->6378K(9216K), 0.0000206 secs] [Tenured: 4096K->4244K(10240K), 0.0042901 secs] 10474K->4244K(19456K), [Perm: 2104K->2104K(12288K)], 0.0043613 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]

HandlePromotionFailure = true

[GC [DefNew: 6651K->148K(9216K), 0.0054913 secs] 6651K->4244K(19456K), 0.0055327 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
[GC [DefNew: 6378K->148K(9216K), 0.0006584 secs] 10474K->4244K(19456K), 0.0006857 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]

Summary

This article has introduced the garbage collection algorithms and the six main garbage collectors, and has used code examples to show how the new-generation Serial collector affects memory allocation and collection.

In many cases, GC is the decisive factor in how much concurrency a system can achieve. Virtual machines offer a variety of collectors and a large number of tuning parameters precisely because optimal performance can only be obtained by choosing the most suitable collection approach for the actual application's requirements and implementation. There is no universally best collector, no universally best parameter combination, no one-size-fits-all tuning method, and the virtual machine has no single inevitable behaviour. The author has read articles that, detached from any concrete scenario, claim things like "Full GC is triggered when the old generation reaches 92%" (92% presumably comes from the CMS collector's default trigger threshold) or "an OOM exception is thrown when garbage collection takes 98% of the time" (98% presumably comes from the parallel collector's default GC time-ratio threshold); such statements are not very meaningful. So if you want to take GC to the stage of practical tuning, you must understand each concrete collector's behaviour, strengths and weaknesses, and tuning parameters.

That is all for this example analysis of JVM memory management, garbage collectors and memory allocation strategies. I hope the above is helpful; if you found the article useful, feel free to share it so more people can see it.
