How to interpret shared objects in Java Multithreading and concurrency Model

2025-01-31 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

This article shows you how to interpret shared objects in the Java multithreading and concurrency model. The content is concise and easy to understand, and I hope you can get something out of the detailed introduction below.

Unless otherwise specified, the following refers to the Java environment.

Shared object

The key to writing thread-safe programs in Java lies in the correct use of shared objects and safe management of access to them. In the first chapter we talked about Java's built-in locks, which ensure thread safety by confining access to shared state within the "black box" of a synchronized block. Beyond mutual exclusion, the Java memory model has another important concern: visibility. Java's native support for visibility is the volatile keyword.

Volatile keyword

A volatile variable has two characteristics. First, it guarantees that the variable is visible to all threads: when one thread modifies its value, the new value is immediately observable by other threads. Second, volatile forbids instruction reordering around accesses to the variable.

Although a volatile variable provides visibility and disables instruction reordering, it cannot be said that volatile alone ensures concurrency safety.

public class VolatileTest {

    public static volatile int a = 0;

    public static final int THREAD_COUNT = 20;

    public static void increase() {
        a++;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[THREAD_COUNT];
        for (int i = 0; i < THREAD_COUNT; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 1000; j++) {
                        increase();
                    }
                }
            });
            threads[i].start();
        }
        // wait until only the main thread (and the IDE monitor thread) remain
        while (Thread.activeCount() > 2) {
            Thread.yield();
        }
        System.out.println(a);
    }
}

As expected, it should print 20000 (20 threads × 1000 increments each), but unfortunately the program almost always prints a smaller, varying number.

The problem lies in a++. This compound operation is not atomic. Although a is declared volatile, an increment first reads the latest value of a — say 50 — and then adds one. In the window between the read and the write, other threads may have changed a to 52 or 53, yet this thread still writes back a result based on the stale value it read. The result is therefore usually less than the expected 20000. One way to fix this is to lock the increase() method.
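Another common fix, sketched below with java.util.concurrent.atomic.AtomicInteger (the class name AtomicCounter and the run() helper are my own, not from the original article), makes the read-modify-write a single atomic operation and replaces the fragile Thread.activeCount() polling with join():

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {

    private static final AtomicInteger a = new AtomicInteger(0);
    private static final int THREAD_COUNT = 20;

    // Returns the final count; each of 20 threads increments 1000 times.
    static int run() throws InterruptedException {
        Thread[] threads = new Thread[THREAD_COUNT];
        for (int i = 0; i < THREAD_COUNT; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    a.incrementAndGet(); // atomic read-modify-write
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // deterministic wait instead of polling activeCount()
        }
        return a.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // always 20000
    }
}
```

Because incrementAndGet() performs the read, the addition, and the write as one indivisible step, no increments are lost.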

Volatile applicable scenario

volatile is suitable for operations that do not depend on the current value of the variable — in other words, not the self-increment of a in the program above, but a plain assignment such as boolean flag = true.

volatile boolean shutDown = false;

public void shutDown() {
    shutDown = true;
}

public void doWork() {
    while (!shutDown) {
        System.out.println("Do work " + Thread.currentThread().getId());
    }
}

Code 2.1: visibility of variables

In Code 2.1, if shutDown were not declared volatile, the worker thread might never observe the write performed by the thread that calls shutDown(): doWork() could keep looping and the program might never terminate. The cause is the absence of a synchronization mechanism that makes one thread's writes visible to all threads; volatile provides exactly that guarantee here.
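A runnable sketch of the Code 2.1 pattern (the class name ShutdownDemo and the runDemo() helper are my own; the worker counts iterations instead of printing to avoid flooding the console):

```java
public class ShutdownDemo {

    // volatile guarantees the worker sees the write from the other thread
    private volatile boolean shutDown = false;

    public void shutDown() {
        shutDown = true;
    }

    public void doWork() {
        long iterations = 0;
        while (!shutDown) {
            iterations++; // stand-in for real work
        }
        System.out.println("worker stopped after " + iterations + " iterations");
    }

    // Runs the demo; returns true if the worker terminated.
    static boolean runDemo() throws InterruptedException {
        ShutdownDemo demo = new ShutdownDemo();
        Thread worker = new Thread(demo::doWork);
        worker.start();
        Thread.sleep(50);   // let the worker spin for a moment
        demo.shutDown();    // volatile write: immediately visible, worker exits
        worker.join(2000);
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker terminated: " + runDemo());
    }
}
```

Without volatile on shutDown, the JIT could hoist the flag read out of the loop and the worker might spin forever.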

We generally understand volatile as a lightweight form of synchronized: it guarantees the visibility of shared variables across cores, but not atomicity. The rules governing atomicity are explained later in this section. First, let's look at the memory model of Java to understand how the JVM and the hardware coordinate shared variables and the visibility of volatile variables.

Java memory model

We all know that modern computers follow the von Neumann architecture, where instructions execute sequentially. When the CPU executes an instruction, it inevitably needs to read and write data. Most program data lives in main memory (RAM), and this raises a speed problem: the CPU is very fast and main memory is much slower (relative to the CPU). To bridge this speed gap, CPU manufacturers introduced caches inside the CPU to optimize the data exchange between main memory and the CPU.

When the CPU needs data from main memory, it copies it into the cache; during computation it then reads and writes the cached copy directly, improving throughput. When the computation finishes, the cached contents are flushed back to main memory, and only then do other CPUs see the result — so there is a time lag between the two.

Look at this example:

int counter = 0;
counter = counter + 1;

Code 2.2: self-increment inconsistency problem

When Code 2.2 runs, the CPU reads the value of counter from main memory, copies it into the current core's cache, executes the add-1 instruction, writes the result 1 to the cache, and finally flushes the cache back to main memory. In a single-threaded program this runs correctly.

But imagine two threads running this code together. During initialization, both threads read the value 0 of counter from main memory into their respective caches. Thread 1 completes the operation on CPU1 and writes 1 to cache Cache1; thread 2 does the same on CPU2 and writes 1 to cache Cache2. The value of counter is now 1 in both CPUs' caches.

At this point CPU1 flushes its value to main memory, so counter becomes 1; then CPU2 flushes its value, and counter is overwritten with 1 again. The final result is 1, whereas the correct sum of the two increments should be 2. This is the cache inconsistency problem, and it occurs when multiple threads access shared variables.
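At the Java level, the lost update just described can be prevented by making the read-increment-write one atomic unit. A minimal sketch with synchronized (the class name SafeCounter and its methods are my own, for illustration):

```java
public class SafeCounter {

    private int counter = 0;

    // synchronized makes read-increment-write one atomic unit, and the
    // lock release/acquire also guarantees visibility of the new value.
    public synchronized void increment() {
        counter = counter + 1;
    }

    public synchronized int get() {
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10000; i++) {
                c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get()); // 20000: no lost updates
    }
}
```

Because only one thread can hold the monitor at a time, no increment can interleave with another, so no update is lost.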

Solutions to cache inconsistencies:

Through a bus lock (the LOCK# signal).

Through a cache coherence protocol.

Figure 2.1: cache inconsistency issue

The two approaches mentioned in Figure 2.1 provide protection at the hardware level. With bus locking, a CPU asserts the LOCK# signal on the bus to lock access to memory, blocking the other CPUs so that only one CPU can access main memory at a time. Since every memory access must lock the entire bus, it is easy to see that this damages CPU throughput severely and is inefficient.

Technology upgrades brought cache coherence protocols. Intel CPUs, which hold a large market share, use the MESI protocol, which keeps the cached copies of shared variables consistent. Its core idea: when one CPU core modifies a shared variable in its cache, it notifies the other CPUs that also hold a copy of that variable to mark their cache lines invalid; when those CPUs next read the variable and find their copy invalid, they reload it from main memory. When the cache coherence protocol cannot apply, the CPU still degrades to bus locking.

An interlude: why volatile cannot guarantee atomicity

Let's take a look at Figure 2.2 below. The CPU reads a variable from main memory and copies it into the cache. Although the CPU respects the variable's volatility during execution, it only guarantees atomicity for the final store operation; the load and use steps are not atomic as a unit.

Figure 2.2: data loading and memory barrier

To optimize execution, the JVM does not guarantee the order in which code executes (except where the Happen-Before rules apply) — this is "instruction reordering". So how does the JVM ensure the atomicity of the store operation mentioned above? Through the "memory barrier" instruction (we'll cover it in detail later), an instruction supported by the CPU. It guarantees the atomicity of the store itself, but not the atomicity of the entire read-modify-write operation.

From this interlude we can see that although volatile has visibility semantics, it does not truly guarantee thread safety. To ensure safe access by concurrent threads, you need to comply with the access rules of concurrent program variables.

Access rules of concurrent program variables

1. Atomicity

The atomicity of a program has the same meaning as the atomicity of database transactions: either all of the operations execute successfully or none of them execute at all.

2. Visibility

Visibility is subtle because the result is often contrary to our intuition. When multiple threads modify the value of a shared variable, the cached copies of the variable may not be flushed to main memory in time, so the result of one CPU's operation is invisible to the other CPUs.

3. Order

Orderliness, intuitively, means that programs execute sequentially in the JVM; but as mentioned earlier, the JVM reorders instructions to optimize execution speed. "Instruction reordering" causes no safety problems in a single thread, but in concurrent programs, because program order cannot be guaranteed, unsafe thread accesses may occur at run time.

To sum up, atomicity, visibility, and orderliness must all hold for programs to run safely in a concurrent environment. If any one of them is not guaranteed, unpredictable errors may occur at run time. Finally, we introduce the trump card of Java concurrency, the Happens-Before rules, which govern the access rules of variables in a concurrent environment.

Happens-before semantics

The Java memory model is defined in terms of operations — reads and writes of variables, acquisition and release of monitors, and so on. The JMM uses Happens-Before semantics to describe memory visibility between these operations. If you want the thread performing operation B to see the result of operation A (regardless of whether A and B are in the same thread), then A and B must satisfy the Happens-Before relationship. If two operations lack a Happens-Before relationship, the JVM is free to reorder them arbitrarily.

Happens-Before rules:

Program order rule: every action A in a thread Happens-Before every subsequent action B in the same thread, where B appears after A in program order.

Lock rule: an unlock operation on a lock Happens-Before every subsequent lock operation on that same lock.

Volatile variable rule: a write to a volatile variable Happens-Before every subsequent read of that same variable.

Thread start rule: a call to Thread.start() Happens-Before every action in the started thread.

Thread termination rule: every action in a thread Happens-Before another thread detects that it has terminated — either by returning successfully from Thread.join() or by Thread.isAlive() returning false.

Interruption rule: one thread's call to interrupt() on another thread Happens-Before the interrupted thread detects the interrupt.

Finalizer rule: the end of an object's constructor Happens-Before the start of that object's finalizer (Java has no direct C++-style destructor).

Transitivity rule: if A Happens-Before B and B Happens-Before C, then A Happens-Before C.

When a variable is read and written under multithreaded contention without following the Happens-Before rules, a data race occurs.
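The volatile variable rule combined with program order and transitivity is what makes the common "publish via a flag" pattern safe. A minimal sketch (the class name Publication and the helper readAfterPublish() are my own, for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Publication {

    private static int data = 0;                 // ordinary, non-volatile field
    private static volatile boolean ready = false;

    // Publishes data through the volatile flag; returns what the reader saw.
    static int readAfterPublish() throws InterruptedException {
        AtomicInteger seen = new AtomicInteger(-1);
        Thread reader = new Thread(() -> {
            while (!ready) { } // spin until the volatile write is observed
            // Happens-Before chain: data = 42 happens-before ready = true
            // (program order rule), which happens-before this read of ready
            // (volatile variable rule), so by transitivity the reader is
            // guaranteed to see data == 42.
            seen.set(data);
        });
        reader.start();
        data = 42;     // (1) ordinary write
        ready = true;  // (2) volatile write publishes (1)
        reader.join();
        return seen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader saw data = " + readAfterPublish()); // 42
    }
}
```

If ready were not volatile, the reader could spin forever or observe a stale data value, because no Happens-Before edge would connect the writes to the reads.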

Summary

This concludes the content on Java's shared variables. You should now understand the meaning of Java's volatile keyword, why volatile cannot guarantee atomicity, and the Happens-Before rules that let our Java programs run more safely.


I hope this section helps you gain a deeper understanding of built-in locks and shared variables in Java concurrency. Java has many more concurrency facilities — Lock, blocking queues, synchronizers — which in some scenarios are more efficient than synchronized.

The above is how to interpret shared objects in the Java multithreading and concurrency model. If you want to learn more, you are welcome to follow the industry information channel.
