What are the classic problems of the Java virtual machine

2025-02-23 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/02 Report--

This article introduces some classic Java virtual machine (JVM) questions. Many people run into these issues in practice, so let's walk through how to handle them. I hope you read carefully and come away with something useful!

How does Java provide high execution efficiency while ensuring portability?

The most common way to execute Java programs is to precompile them into an intermediate format called Java bytecode. This format cannot run directly on a CPU; it must be executed by a JVM. In other words, any platform that provides an implementation conforming to the JVM specification can execute Java bytecode. This is the meaning of the familiar phrase "write once, run anywhere".

The JVM shipped with mainstream OpenJDK/Oracle JDK builds is called HotSpot. It uses both interpreted execution and just-in-time (JIT) compilation. Interpreted execution is like simultaneous interpretation: the JVM reads the incoming bytecode and feeds the CPU a corresponding sequence of instructions as it goes. Just-in-time compilation is like sharpening the axe before chopping wood: the JVM spends time up front compiling hot code into directly executable machine code.

This mixed execution mode rests on the assumption that programs follow the 80/20 rule: roughly 20% of the code consumes 80% of the computing resources. For rarely executed code, there is no need to spend time compiling it to machine code; interpreting it is enough. For the small fraction of hot code, on the other hand, the JVM invests the time to compile it to machine code and reach ideal running efficiency.
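As a rough illustration of warm-up (the class and method names below are made up for this sketch), a small method called many times becomes "hot"; running the program with HotSpot's diagnostic flag `-XX:+PrintCompilation` shows when the JIT compiles it:

```java
public class HotLoop {
    // Run with: java -XX:+PrintCompilation HotLoop
    // After enough iterations, the JIT log should show compute() being compiled.
    static long compute(long n) {
        long acc = 0;
        for (long i = 0; i < n; i++) {
            acc += i * i; // simple arithmetic kernel the JIT can optimize
        }
        return acc;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 20_000; i++) { // warm-up loop makes compute() hot
            total += compute(100);
        }
        System.out.println(total);
    }
}
```

The exact compilation threshold and tiering behavior vary by JDK version and flags; the point is only that repeated execution, not the first call, triggers compilation.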

How is exception catching implemented?

In compiled Java bytecode, each method carries an exception table. Each entry in the table defines one exception-handling path: the starting bytecode index of the protected range, the ending bytecode index (exclusive), the bytecode index of the handler code, and the type of exception caught.

When the program throws an exception, the JVM walks the exception-table entries from top to bottom. If the index of the bytecode that triggered the exception falls within an entry's protected range, the JVM checks whether the thrown exception matches the type that entry catches. If it matches, the JVM transfers control to the handler code the entry points to.
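A minimal sketch of how catch clauses map onto exception-table entries (class and method names here are hypothetical): javac emits one table entry per catch clause, and the JVM matches them top to bottom, so a narrower type listed first wins over a broader one listed later:

```java
public class CatchOrderDemo {
    static String classify(Object o) {
        try {
            String s = (String) o;        // may throw ClassCastException
            return s.substring(0, 1);     // may throw StringIndexOutOfBoundsException
        } catch (ClassCastException e) {  // first exception-table entry (narrow type)
            return "not a string";
        } catch (RuntimeException e) {    // second entry (broader type)
            return "other runtime error";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(42));   // prints "not a string"
        System.out.println(classify(""));   // prints "other runtime error"
        System.out.println(classify("hi")); // prints "h"
    }
}
```

Running `javap -c CatchOrderDemo` shows the generated exception table with one row per catch clause, in source order.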

This same catching mechanism is used to implement finally clauses. In general, javac, the Java compiler, copies the finally block several times, placing one copy along each execution path in the generated bytecode, and then emits additional exception-table entries to complete the finally logic.
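The duplication is observable from behavior: the finally block runs on both the normal and the exceptional path, because javac has placed a copy along each. A small sketch (names are made up for illustration):

```java
public class FinallyDemo {
    // javac duplicates the finally block along both the normal return path
    // and the exceptional path, so the counter increments either way.
    static int cleanups = 0;

    static int parse(String s) {
        try {
            return Integer.parseInt(s);       // normal path
        } catch (NumberFormatException e) {
            return -1;                        // exceptional path, caught
        } finally {
            cleanups++;                       // runs on every path
        }
    }

    public static void main(String[] args) {
        int a = parse("42");   // returns 42
        int b = parse("oops"); // returns -1
        System.out.println(a + " " + b + " " + cleanups); // prints "42 -1 2"
    }
}
```

Disassembling with `javap -c` shows the finally body appearing more than once in the bytecode of `parse`.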

Why are reflective calls slow?

By default, a reflective call is first delegated to a native method implementation, whose efficiency is predictably low. Once the number of invocations of a particular reflective call reaches 15, the JDK treats it as hot: it dynamically generates bytecode that invokes the target method directly and switches the call's delegate from the native implementation to the generated one. This approach is far faster than the native method.

The reason the JDK does not generate bytecode from the very beginning is that generation takes time. For reflective calls performed only a handful of times over a program's lifetime, the cost of dynamically generating bytecode would outweigh the benefit.

However, even the dynamically generated implementation that calls the target method directly cannot match the peak performance of a true direct call. The gap comes from virtual method inlining during just-in-time compilation.
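A minimal sketch of the reflective call path described above (class and method names are hypothetical). In HotSpot-based JDKs the 15-call threshold was historically tunable via the `sun.reflect.inflationThreshold` system property, though newer JDKs have reimplemented core reflection, so treat the exact mechanism as version-dependent:

```java
import java.lang.reflect.Method;

public class ReflectDemo {
    public static int square(int x) { return x * x; }

    public static void main(String[] args) throws Exception {
        Method m = ReflectDemo.class.getMethod("square", int.class);
        // The first invocations go through the slower native accessor;
        // once the call is hot, the JDK may switch to generated bytecode
        // that invokes square() directly.
        int sum = 0;
        for (int i = 0; i < 20; i++) {
            sum += (Integer) m.invoke(null, i); // null receiver: static method
        }
        System.out.println(sum); // prints 2470 (sum of squares 0..19)
    }
}
```

Even after the switch, each `invoke` still boxes its arguments and return value, which is part of why peak performance trails a direct call.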

Related articles:

<Method Inlining (Part 2)>

What is the basic idea of garbage collection?

The mainstream JVM garbage collectors currently use reachability analysis. The essence of the algorithm is to take a set of objects known as GC Roots as the initial live set, then explore every object reachable from that set and mark it as live. When the marking phase ends, any object left unmarked can be reclaimed.

Traditional garbage collection algorithms must pause all application threads while marking and sweeping, the so-called Stop-The-World pause. Newer collectors such as CMS, G1, and ZGC perform marking and sweeping concurrently as much as possible, so the length of Stop-The-World pauses can be kept under control.

Another basic idea of garbage collection is generational collection. The JVM classifies newly allocated objects into the young generation and objects that survive multiple collections into the old generation. Different generations use different collection algorithms, achieving frequent, fast collections in the young generation and infrequent, full collections in the old generation.
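Reachability can be observed from Java code through weak references, which do not keep an object alive during the marking phase. A small sketch (class name is made up; note that `System.gc()` is only a hint, so the final outcome is not guaranteed):

```java
import java.lang.ref.WeakReference;

public class GcRootsDemo {
    public static void main(String[] args) {
        Object strong = new Object();                       // reachable from a GC Root (the stack)
        WeakReference<Object> ref = new WeakReference<>(strong);
        System.out.println(ref.get() != null);              // true: still strongly reachable

        strong = null;   // drop the only strong reference
        System.gc();     // request a collection (a hint, not a guarantee)
        System.out.println(ref.get());                      // usually null after collection
    }
}
```

Once the last strong reference from a GC Root is gone, the object is no longer marked live, and the weak reference is cleared when it is collected.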

How to understand the Java memory model?

Most modern computers use a symmetric multiprocessor architecture: each processor has its own registers and caches (abstracted as working memory in the Java memory model), and multiple processors can simultaneously execute different threads of the same process.

In a Java program, different threads may access the same variable or object. If the compiler or processor were free to optimize these accesses arbitrarily, problems could arise that are unimaginable in a single-threaded mindset. The Java language specification therefore introduces the Java memory model, which constrains compilers and processors through a set of rules.

The most important property these rules establish is visibility: whether a write to a variable can be observed by later operations in the same thread or in other threads. The Java memory model defines a set of happens-before relationships to guarantee visibility. Take a volatile field as an example: a write to it happens-before any subsequent read of it, so we can always read the latest value of a volatile field.
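The classic use of this happens-before edge is safe publication: an ordinary write made before a volatile write becomes visible to any thread that subsequently reads the volatile field. A minimal sketch (class and field names are made up for illustration; without `volatile` on `ready`, the reader could spin forever or observe a stale `payload`):

```java
public class VolatileDemo {
    static volatile boolean ready = false; // the volatile "publication" flag
    static int payload = 0;                // ordinary field published via the flag

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the writer's volatile store is visible */ }
            // happens-before: the write payload = 42 precedes the volatile write
            // to ready, which precedes this volatile read, so 42 is visible here.
            System.out.println(payload); // prints 42
        });
        reader.start();

        payload = 42;  // ordinary write...
        ready = true;  // ...published by the volatile write
        reader.join();
    }
}
```

This ordering guarantee, not atomicity, is what `volatile` provides; compound updates still need locks or atomics.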

How does the JVM handle the various object-lock scenarios?

A heavyweight lock is the most basic and least efficient object-lock implementation: the JVM blocks threads that fail to acquire the lock and wakes them when the lock is released. Using waiting at a red light as an analogy, a blocked Java thread is like a car that has shut off its engine; restarting takes time. Before entering the blocked state, the JVM therefore spins, like idling at the light: if the lock is released within a short time, the thread can acquire it directly without ever blocking.

Heavyweight locks are designed for the scenario where multiple threads compete for the same lock at the same time. In practice, however, threads often hold the same lock at different times with no actual contention. For this contention-free case, the JVM uses lightweight locking: on acquisition, it marks the lock object and points the mark at the current thread's stack; on release, it clears the mark. If a thread requesting the lock finds it is lightweight and points at another thread's stack, it inflates the lock into a heavyweight lock.

Biased locking targets an even more optimistic scenario: a single thread requests the lock from beginning to end. On the first acquisition, the JVM records the current thread's address in the lock object and does nothing on release. If the next request comes from the same thread, the marking step is skipped entirely; otherwise, the JVM inflates the lock into a lightweight lock.
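All three states sit behind the same `synchronized` keyword; the JVM chooses and transitions between them transparently. A small sketch of contended locking (class names are made up; note that biased locking was deprecated and disabled by default starting in JDK 15, so modern JVMs typically start at the lightweight state):

```java
public class LockDemo {
    private int count = 0;
    private final Object lock = new Object();

    void increment() {
        // Uncontended acquisitions may use a lightweight (or, on older JDKs,
        // biased) lock; under real contention the JVM inflates this to a
        // heavyweight monitor with blocking and wakeup.
        synchronized (lock) {
            count++;
        }
    }

    int getCount() { return count; }

    public static void main(String[] args) throws InterruptedException {
        LockDemo demo = new LockDemo();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) demo.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(demo.getCount()); // prints 40000: no lost updates
    }
}
```

The program behaves identically regardless of which lock state the JVM is in; the states differ only in cost.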

This concludes "What are the classic problems of the Java virtual machine". Thank you for reading. If you want to learn more about the industry, follow this site; the editor will keep publishing high-quality practical articles for you!
