What are the Java memory model and threads?


This article explains what the Java memory model and threads are. The notes below are brief and practical; interested readers are encouraged to follow along.

1. Amdahl's law describes the speed-up a multiprocessor system can achieve in terms of the ratio of parallel to serial work in the system; Moore's law describes the relationship between the number of transistors in a processor and its performance. The wide adoption of concurrent processing is the fundamental reason why Amdahl's law has replaced Moore's law as the driving force behind advances in computer performance, and concurrency is also the most powerful weapon we have for squeezing out a computer's computing power.

2. To measure the performance of a service, transactions per second (Transactions Per Second, TPS) is one of the most important metrics. It represents the average number of requests the server can respond to per second, and the TPS value is closely related to the program's concurrency capability.

3. 1) Why have a cache at all? To bridge the speed gap between the processor and memory: the computing speed of the processor and the access speed of storage devices differ by several orders of magnitude. But the cache introduces a new problem: cache coherence.

2) How is cache coherence maintained? Each processor must follow a protocol when accessing the cache, such as MSI, MESI, MOSI, Synapse, Firefly, and the Dragon Protocol.

3) What is out-of-order execution optimization? The processor may execute the input code out of order and then reorganize the results so that they are consistent with the results of sequential execution; however, it does not guarantee that each statement in the program is computed in the same order as it appears in the input code. The just-in-time compiler of the Java virtual machine performs a similar optimization called instruction reordering, as the sketch below illustrates.
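Below is a minimal, hypothetical Java sketch of the hazard that reordering (and the lack of synchronization) can create; the class and field names are illustrative, and the outcome is not guaranteed to reproduce on any particular JVM or CPU.

// Hypothetical sketch: without synchronization, the writer's two assignments may be
// reordered by the compiler, the JIT, or the CPU, so the reader can legally observe
// ready == true while data is still 0. It may also spin forever if the write to ready
// never becomes visible. This demonstrates a bug, not a pattern to copy.
public class ReorderingSketch {
    private static int data = 0;
    private static boolean ready = false;   // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // busy-wait
            }
            System.out.println("data = " + data);   // may print 0
        });
        reader.start();

        data = 1;       // (1)
        ready = true;   // (2) may be observed before (1)
        reader.join();
    }
}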

4. A memory model can be understood as an abstraction of the process of reading and writing a particular memory or cache under a particular operation protocol.

5. JSR: Java Specification Request, a proposal for a Java specification.

6. The main goal of the Java memory model is to define the access rules for the variables in a program. Here "variables" include instance fields, static fields, and the elements that make up array objects, but not local variables or method parameters, because the latter are thread-private and are never shared.

7. The Java memory model stipulates that all variables are stored in main memory (Main Memory) and that each thread has its own working memory (Working Memory). A thread's working memory holds copies of the main-memory variables the thread uses, and all of a thread's operations on variables must be carried out in its working memory rather than by reading and writing main memory directly. Threads cannot access variables in each other's working memory; the transfer of variable values between threads must go through main memory. The main memory and working memory here are not the same kind of partition as the heap, stack, and method area in the Java runtime data areas; the two concepts are essentially unrelated.
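As a minimal sketch of one consequence of this abstraction (class and field names are illustrative): each thread reads its own copy of the shared counter, increments it, and writes it back, so concurrent updates can overwrite one another.

// Illustrative sketch only: count++ is a read-modify-write on a working-memory copy,
// so with two unsynchronized threads the final value is usually well below 20000.
public class LostUpdateSketch {
    private static int count = 0;   // shared variable in main memory

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++;            // not atomic and not synchronized
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("count = " + count);   // typically less than 20000
    }
}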

8. The Java memory model defines eight operations to complete the interaction between main memory and working memory, and the virtual machine must ensure that each of them is atomic and indivisible: lock, unlock, read, load, use, assign, store, and write. The memory model also specifies the rules that must be satisfied when performing these eight basic operations. Together, these operations and rules completely determine which memory accesses in a Java program are safe under concurrency. Because the definition is rigorous but cumbersome to apply, the happens-before principle can be used instead to decide whether an access is safe in a concurrent environment.

9. Special rules for volatile variables:

1) volatile is the lightest-weight synchronization mechanism provided by the Java virtual machine.

2) A variable modified with volatile has two properties:

The first is that it guarantees the visibility of the variable to all threads, but not atomicity.

The second is that it forbids instruction reordering.

3) Principle: volatile works by inserting a memory barrier, that is, an empty operation with a lock prefix, which flushes the current CPU's cache line back to memory; that write also invalidates the corresponding cache lines of other CPUs or cores, so a modification of a volatile variable becomes immediately visible to other CPUs. The volatile semantics that forbid instruction reordering were only completely fixed in JDK 1.5.

4) The special rules the Java memory model defines for volatile variables: a. the value is refreshed from main memory before each use; b. the new value is synchronized back to main memory immediately after each modification; c. accesses to the variable are not reordered.

5) Besides volatile, Java has two other keywords that can provide visibility: synchronized and final.
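A common place where both volatile properties matter is double-checked locking; the following is a minimal sketch (class and field names are illustrative, not from the article):

// The volatile field guarantees visibility of the assignment and forbids the reordering
// that could otherwise publish a partially constructed object; this idiom is reliable
// only on JDK 1.5 and later, where the volatile semantics were fixed.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();  // safe publication thanks to volatile
                }
            }
        }
        return instance;
    }
}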

10. The Java memory model is built around how atomicity, visibility, and ordering are handled during concurrent execution.

11. The happens-before principle

1) It is the main basis for judging whether a data race exists and whether code is thread-safe. Relying on this principle, a small package of rules is enough to settle every question about whether two operations might conflict in a concurrent environment.

2) Some happens-before relationships exist "naturally" under the Java memory model; they hold without the assistance of any synchronizer:

A) Program order rule. Within a thread, operations happen-before later operations in program order; strictly speaking, this is the order of the control flow rather than the textual order of the program code.

B) Monitor lock rule. For the same lock, an unlock operation happens-before a subsequent lock operation.

C) Volatile variable rule. For a volatile variable, a write happens-before a subsequent read of that variable.

D) Thread start rule. A call to Thread.start() happens-before every action of the started thread.

E) Thread termination rule. All operations of a thread happen-before the detection that the thread has terminated (for example, the return of Thread.join() or a false result from Thread.isAlive()).

F) Thread interruption rule. A call to a thread's interrupt() method happens-before the point at which the interrupted thread's code detects the interrupt (for example, via Thread.interrupted()).

G) Finalizer rule. The completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method.

H) Transitivity. If A happens-before B and B happens-before C, then A happens-before C.

3) Time order and the happens-before principle have essentially no necessary relationship, so when reasoning about concurrency safety we should not be misled by the wall-clock order of operations; everything must be judged by the happens-before principle.
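A minimal sketch of this point, with illustrative names: even if one thread calls setValue(1) earlier in wall-clock time than another thread calls getValue(), none of the rules above connects the two operations, so the reader may still see 0. Declaring the field volatile (rule C) or synchronizing both methods (rule B) would create the missing happens-before edge.

// Illustrative data race: value is neither volatile nor guarded by a lock, so there is
// no happens-before relationship between an unsynchronized write and a later read.
public class ValueHolder {
    private int value;

    public void setValue(int v) { this.value = v; }

    public int getValue() { return this.value; }   // may return a stale value
}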

12. Implementation of threads

1) A thread is a lighter-weight scheduling and execution unit than a process. Introducing threads separates a process's resource allocation from its execution scheduling: threads share the resources of their process, but each thread is scheduled independently.

2) In the Java API, a native method usually means that the method is not, or cannot be, implemented by platform-independent means.

3) There are three main ways to implement threads: using kernel threads, using user threads, and using a combination of user threads and lightweight processes.

4) Implementation with kernel threads:

A) Kernel threads are threads supported directly by the operating system kernel.

B) Programs generally do not use kernel threads directly; instead they use a higher-level interface to them, the lightweight process (Light Weight Process, LWP). The relationship between lightweight processes and kernel threads is 1:1, so this is called the one-to-one threading model.

C) All thread operations require system calls, which means switching back and forth between user mode and kernel mode, and that is expensive. In addition, each lightweight process needs the support of a kernel thread, so lightweight processes consume a certain amount of kernel resources.

5) Implementation with user threads:

A) These threads never need to switch to kernel mode, so operations on them can be very fast and cheap, and a much larger number of threads can be supported.

B) This relationship between a process and its user threads is called the one-to-many threading model (1:N).

C) The drawback is that such implementations are complicated, because thread creation, scheduling, and synchronization must all be handled by the user program itself without kernel support.

6) Hybrid implementation with user threads plus lightweight processes: the ratio of user threads to lightweight processes is variable, i.e. N:M, so this is called the many-to-many threading model.

7) Implementation of Java threads: the Windows and Linux platforms use the one-to-one threading model, while on the Solaris platform the virtual machine provides proprietary parameters to specify which threading model it should use.

13. Java thread scheduling:

1) Thread scheduling is the process by which the system assigns processor time to threads. There are two main approaches: cooperative scheduling and preemptive scheduling.

2) Java uses preemptive thread scheduling.

3) Java defines a total of 10 thread priority levels, from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10). When two threads are in the Ready state at the same time, the thread with the higher priority is more likely to be chosen for execution. However, thread priorities are not reliable, and programs should not depend on them too heavily.
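A small illustrative sketch (names are made up): priorities are only hints to the operating system's scheduler, so the output order below is not guaranteed on any particular platform.

// Setting priorities influences but does not control scheduling.
public class PrioritySketch {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println(
                Thread.currentThread().getName() + " priority=" + Thread.currentThread().getPriority());

        Thread low = new Thread(task, "low");
        Thread high = new Thread(task, "high");
        low.setPriority(Thread.MIN_PRIORITY);    // 1
        high.setPriority(Thread.MAX_PRIORITY);   // 10

        low.start();
        high.start();
    }
}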

14. State transitions. The Java language defines six thread states, and at any point in time a thread can be in one and only one of them:

New: has been created but not yet started.

Runnable: may be executing, or may be waiting for the CPU to allocate execution time.

Waiting: the thread is not allocated CPU execution time and must be explicitly woken up by another thread.

Timed Waiting: the thread is not allocated CPU execution time, but it does not need to be explicitly woken by another thread; the system wakes it automatically after a certain period of time.

Blocked: the thread is waiting for a lock to be released.

Terminated: the thread has finished execution.
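These states can be observed through Thread.getState(); the following is a small sketch (names are illustrative), keeping in mind that exact timing is scheduler-dependent, so the printed states are typical rather than guaranteed.

// Typical output: NEW, then TIMED_WAITING (while the thread sleeps), then TERMINATED.
public class ThreadStateSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500);                // Timed Waiting while sleeping
            } catch (InterruptedException ignored) { }
        });

        System.out.println(t.getState());         // NEW: created but not started
        t.start();
        Thread.sleep(100);
        System.out.println(t.getState());         // usually TIMED_WAITING
        t.join();
        System.out.println(t.getState());         // TERMINATED
    }
}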

15. In the Java language, the thread safety of the data shared among operations can be divided into five levels: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-hostile. Relative thread safety is what we usually mean by "thread safety": each individual operation is thread-safe, but when several operations must be executed consecutively in a particular order, additional synchronization is needed to guarantee correctness.
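A classic illustration of relative thread safety is java.util.Vector; the sketch below (written for this note, not taken from the article) shows that although each Vector method is synchronized on its own, an unsynchronized check-then-act sequence can still fail.

// Each call on the Vector is individually thread-safe, but the size check and the get/remove
// are two separate operations, so the reader may occasionally throw
// ArrayIndexOutOfBoundsException; wrapping each loop body in synchronized (vector) fixes it.
import java.util.Vector;

public class VectorRaceSketch {
    private static final Vector<Integer> vector = new Vector<>();

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            vector.add(i);
        }

        new Thread(() -> {
            for (int i = 0; i < vector.size(); i++) {
                vector.remove(i);
            }
        }).start();

        new Thread(() -> {
            for (int i = 0; i < vector.size(); i++) {
                Integer ignored = vector.get(i);   // may race with the remover
            }
        }).start();
    }
}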

16. Approaches to implementing thread safety:

1) Mutual-exclusion synchronization

A) Synchronization means ensuring that, when multiple threads access shared data concurrently, the shared data is used by only one thread at a time.

B) The most basic means of mutual exclusion is synchronized. Two points deserve special attention: first, synchronized is reentrant for the same thread; second, a synchronized block blocks other threads from entering until the thread inside it has finished. Java threads are mapped onto the operating system's native threads, so blocking or waking up a thread requires help from the operating system and a switch from user mode to kernel mode, and these state transitions consume a lot of processor time.

C) Synchronization can also be achieved with the reentrant lock (ReentrantLock) in java.util.concurrent. Compared with synchronized, ReentrantLock provides some advanced features; a brief sketch comparing the two appears after this list.

D) On JDK 1.6 and above, performance is no longer a reason to choose ReentrantLock, and future performance improvements in the virtual machine will tend to favor synchronized.

E) Mutual-exclusion synchronization is a pessimistic concurrency strategy.
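The sketch referred to in item C (class and field names are made up): a synchronized block and a java.util.concurrent.locks.ReentrantLock protecting two independent counters in the same class.

// The synchronized block releases its monitor automatically; ReentrantLock must be
// released explicitly, which is why unlock() sits in a finally block.
import java.util.concurrent.locks.ReentrantLock;

public class CounterSketch {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();
    private int a, b;

    public void incrementWithSynchronized() {
        synchronized (monitor) {
            a++;
        }
    }

    public void incrementWithReentrantLock() {
        lock.lock();
        try {
            b++;
        } finally {
            lock.unlock();
        }
    }
}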

2) Non-blocking synchronization: an optimistic concurrency strategy that does not need to suspend threads; a typical example is a compare-and-swap (CAS) retry loop, as the sketch below shows.
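A minimal sketch of that strategy using AtomicInteger (the class name CasSketch is made up):

// compareAndSet only succeeds if the value is still the one that was read, so the loop
// retries in user code instead of blocking the thread on a lock.
import java.util.concurrent.atomic.AtomicInteger;

public class CasSketch {
    private final AtomicInteger counter = new AtomicInteger();

    public int increment() {
        int current;
        do {
            current = counter.get();
        } while (!counter.compareAndSet(current, current + 1));
        return current + 1;
    }
}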

3) Schemes that need no synchronization:

A) Synchronization is not a prerequisite for thread safety; there is no causal relationship between the two.

B) Reentrant code is inherently thread-safe.

C) Thread-local storage: ThreadLocal. Each thread's Thread object holds a ThreadLocalMap that stores a set of key-value pairs whose keys are ThreadLocal objects (identified by ThreadLocal.threadLocalHashCode) and whose values are the thread-local variables. A ThreadLocal object is the access point into the current thread's ThreadLocalMap; each ThreadLocal instance contains a unique threadLocalHashCode value, which is used to find the corresponding thread-local variable among the key-value pairs.
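A short sketch of thread-local storage in use (names are illustrative): each thread that touches CONTEXT gets its own value, so no synchronization is needed.

// Each thread stores its value in its own ThreadLocalMap; remove() avoids leaks,
// which matters especially when threads come from a pool.
public class ThreadLocalSketch {
    private static final ThreadLocal<StringBuilder> CONTEXT =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) {
        Runnable task = () -> {
            CONTEXT.get().append(Thread.currentThread().getName());
            System.out.println(Thread.currentThread().getName() + " sees: " + CONTEXT.get());
            CONTEXT.remove();
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}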

Lock optimization techniques: adaptive spinning, lock elimination, lock coarsening, lightweight locking, and biased locking.

At this point you should have a deeper understanding of what the Java memory model and threads are; the best way to consolidate that understanding is to try these ideas out in practice.
