This article walks through the history of Java multithreading, from JDK 1.0 onwards, using short, practical examples along the way.
JDK 1.0
The JDK 1.0 release of January 1996 established Java's most basic threading model from the very beginning, and the many later revisions made no substantial change to it; it can fairly be called a design that has been carried forward ever since.
Preemptive and cooperative scheduling are the two common ways of scheduling processes and threads. An operating system is well suited to preemptive scheduling of its processes: it allocates time slices to different processes, and for a process that has been unresponsive for a long time it can take away its resources or even force it to stop. (With a cooperative approach, a process is expected to release resources consciously and voluntarily, and you may never know how long you will have to wait.) At the thread level, Java took a cooperative approach from the beginning, and over the course of its development it gradually abandoned the crude stop/resume/suspend methods, which run contrary to that cooperative design, in favour of mechanisms in which threads cooperate with each other, such as wait/notify/sleep.
One way to communicate between threads is to use interrupts:
public class InterruptCheck extends Thread {
    @Override
    public void run() {
        System.out.println("start");
        while (true)
            if (Thread.currentThread().isInterrupted())
                break;
        System.out.println("while exit");
    }

    public static void main(String[] args) {
        Thread thread = new InterruptCheck();
        thread.start();
        try {
            sleep(2000);
        } catch (InterruptedException e) {
        }
        thread.interrupt();
    }
}
This is one way to use interrupts: the interrupt status acts like a flag bit that thread A sets and thread B checks from time to time. There is another way to communicate with interrupts, as follows:
public class InterruptWait extends Thread {
    public static Object lock = new Object();

    @Override
    public void run() {
        System.out.println("start");
        synchronized (lock) {
            try {
                lock.wait();
            } catch (InterruptedException e) {
                System.out.println(Thread.currentThread().isInterrupted());
                Thread.currentThread().interrupt(); // set interrupt flag again
                System.out.println(Thread.currentThread().isInterrupted());
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        Thread thread = new InterruptWait();
        thread.start();
        try {
            sleep(2000);
        } catch (InterruptedException e) {
        }
        thread.interrupt();
    }
}
In this pattern, when a thread that is blocked in the wait method is interrupted by another thread, it is woken up and an InterruptedException is thrown; at the same time the interrupt flag is cleared. We therefore usually reset the interrupt status where the exception is caught, so that subsequent logic can find out, by checking the interrupt status, how the thread came to finish.
In the relatively stable JDK 1.0.2 you can already find classes such as Thread and ThreadGroup, the core classes of the threading model. The entire release contained only a handful of packages (java.io, java.util, java.net, java.awt and java.applet), so Java established a long-lived threading model in its very earliest versions.
It is also worth mentioning the atomic classes (AtomicXXX, which formally arrived later with java.util.concurrent). The following example shows that the ++ operation is not atomic, while an atomic object can guarantee the atomicity of the increment:
import java.util.concurrent.atomic.AtomicInteger;

public class Atomicity {
    private static volatile int nonAtomicCounter = 0;
    private static volatile AtomicInteger atomicCounter = new AtomicInteger(0);
    private static int times = 0;

    public static void calculate() {
        times++;
        for (int i = 0; i < 1000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    nonAtomicCounter++;
                    atomicCounter.incrementAndGet();
                }
            }).start();
        }
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
        }
    }

    public static void main(String[] args) {
        calculate();
        while (nonAtomicCounter == 1000) {
            nonAtomicCounter = 0;
            atomicCounter.set(0);
            calculate();
        }
        System.out.println("Non-atomic counter: " + times + ":" + nonAtomicCounter);
        System.out.println("Atomic counter: " + times + ":" + atomicCounter);
    }
}

You may need to run this example a few times to see the effect: with the non-atomic ++ operation, the result is often less than 1000.

As for the use of locks, you can find all kinds of explanations online, but few of them are stated clearly. Look at the following code:

public class Lock {
    private static Object o = new Object();
    static Lock lock = new Lock();

    // lock on dynamic method
    public synchronized void dynamicMethod() {
        System.out.println("dynamic method");
        sleepSilently(2000);
    }

    // lock on static method
    public static synchronized void staticMethod() {
        System.out.println("static method");
        sleepSilently(2000);
    }

    // lock on this
    public void thisBlock() {
        synchronized (this) {
            System.out.println("this block");
            sleepSilently(2000);
        }
    }

    // lock on an object
    public void objectBlock() {
        synchronized (o) {
            System.out.println("dynamic block");
            sleepSilently(2000);
        }
    }

    // lock on the class
    public static void classBlock() {
        synchronized (Lock.class) {
            System.out.println("static block");
            sleepSilently(2000);
        }
    }

    private static void sleepSilently(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        // object lock test
        new Thread() {
            @Override
            public void run() {
                lock.dynamicMethod();
            }
        }.start();
        new Thread() {
            @Override
            public void run() {
                lock.thisBlock();
            }
        }.start();
        new Thread() {
            @Override
            public void run() {
                lock.objectBlock();
            }
        }.start();
        sleepSilently(3000);
        System.out.println();

        // class lock test
        new Thread() {
            @Override
            public void run() {
                lock.staticMethod();
            }
        }.start();
        new Thread() {
            @Override
            public void run() {
                lock.classBlock();
            }
        }.start();
    }
}

The example above shows contention for a single lock. With it in mind, the following two rules make the use of the synchronized keyword easy to understand:
A non-static method marked synchronized is equivalent to synchronized(this).
A static method marked synchronized is equivalent to synchronized(Lock.class).

JDK 1.2

JDK 1.2, released at the end of 1998, officially divided Java into three directions: J2EE, J2SE and J2ME. In this version Java tried to use Swing to correct the mistakes it had made in AWT, such as using too much synchronization. Unfortunately, the nature of Java meant that neither AWT nor Swing was ever satisfactory in performance and responsiveness, which is one reason Java desktop applications never caught up with its server-side applications; IBM's later SWT was not satisfactory enough either, and after JDK 1.2 the JDK seems to have reflected on this and stopped pushing in that direction. It is worth noting that when a newer JDK version fixes problems of an older one, it usually follows these principles:
Backward compatibility. That is why you often see many redesigned new packages and classes alongside deprecated classes and methods; the deprecated ones cannot simply be removed.
Strict adherence to the JLS (Java Language Specification), with newly approved JSRs (Java Specification Requests) folded into the JLS. The specification itself is therefore also backward compatible: later versions can only add clarification and new features, and can do nothing about designs that were poorly extensible to begin with. This can also be seen in the discussion of ReentrantLock below.

In this version, three methods were formally deprecated: stop(), suspend() and resume(). Let me explain why they were deprecated:

public class Stop extends Thread {
    @Override
    public void run() {
        try {
            while (true)
                ;
        } catch (Throwable e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Thread thread = new Stop();
        thread.start();
        try {
            sleep(1000);
        } catch (InterruptedException e) {
        }
        thread.stop(new Exception("stop")); // note the stack trace
    }
}

From the code above you should be able to see two things:
Using stop to terminate a thread is unreasonable and brutal: whatever statement the target thread is executing, it is forcibly terminated, which eventually leads to half-initialized objects and unpredictable problems.
The terminated thread throws no exception; after it has been killed you cannot find the line of code it was executing, nor any stack trace (the exception printed by the code above belongs to the main thread executing the stop statement, not to the terminated thread).

It is hard to imagine such a design coming from a type-safe language that did away with pointers altogether, isn't it? Now look at the use of suspend, which carries a hidden risk of deadlock:

public class Suspend extends Thread {
    @Override
    public void run() {
        synchronized (this) {
            while (true)
                ;
        }
    }

    public static void main(String[] args) {
        Thread thread = new Suspend();
        thread.start();
        try {
            sleep(1000);
        } catch (InterruptedException e) {
        }
        thread.suspend();
        synchronized (thread) { // dead lock
            System.out.println("got the lock");
            thread.resume();
        }
    }
}

As the code shows, the Suspend thread still holds the lock while it is suspended, and when the main thread tries to acquire that lock in order to wake it up, everything grinds to a halt. Since suspend here has no time limit, this becomes a thoroughgoing deadlock.

By contrast, look at the improved replacements for these three methods, wait(), notify() and sleep(), which make interaction between threads much friendlier:

public class Wait extends Thread {
    @Override
    public void run() {
        System.out.println("start");
        synchronized (this) { // wait/notify/notifyAll use the same synchronization resource
            try {
                this.wait();
            } catch (InterruptedException e) {
                e.printStackTrace(); // notify won't throw exception
            }
        }
    }

    public static void main(String[] args) {
        Thread thread = new Wait();
        thread.start();
        try {
            sleep(2000);
        } catch (InterruptedException e) {
        }
        synchronized (thread) {
            System.out.println("Wait() will release the lock!");
            thread.notify();
        }
    }
}

When using wait and notify together, note that they must be locked on the same resource (for example, an object a):
one thread does synchronized(a) and calls a.wait() inside the synchronized block;
another thread does synchronized(a) and calls a.notify() inside the synchronized block.

Now look at the use of the sleep method and answer these two questions:
Compared with wait, why is sleep designed as a static method of Thread (that is, it can only put the current thread to sleep)?
Why must sleep take a timeout parameter, rather than allowing an indefinite sleep?
If you have understood everything above, you should be able to answer both questions.

public class Sleep extends Thread {
    @Override
    public void run() {
        System.out.println("start");
        synchronized (this) { // sleep() can use (or not) any synchronization resource
            try {
                /**
                 * Do you know:
                 * 1. Why is sleep() designed as a static method, compared with wait()?
                 * 2. Why must sleep() have a timeout parameter?
                 */
                this.sleep(10000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        Thread thread = new Sleep();
        thread.start();
        try {
            sleep(2000);
        } catch (InterruptedException e) {
        }
        synchronized (thread) {
            System.out.println("Has sleep() released the lock?");
            thread.notify();
        }
    }
}

This JDK version also introduced the thread-variable class ThreadLocal. Each thread has a ThreadLocalMap attached to it. The way ThreadLocal is used is interesting: its get method takes no key, because the key is the ThreadLocal instance you are currently calling it on. The lifetime of objects stored in a ThreadLocal can span the whole lifetime of the thread, so if you keep growing the objects held in thread variables (most commonly a poorly managed map), you can easily end up with memory leaks.
public class ThreadLocalUsage extends Thread {
    public User user = new User();

    public User getUser() {
        return user;
    }

    @Override
    public void run() {
        this.user.set("var1");
        while (true) {
            try {
                sleep(1000);
            } catch (InterruptedException e) {
            }
            System.out.println(this.user.get());
        }
    }

    public static void main(String[] args) {
        ThreadLocalUsage thread = new ThreadLocalUsage();
        thread.start();
        try {
            sleep(4000);
        } catch (InterruptedException e) {
        }
        thread.user.set("var2");
    }
}

class User {
    private static ThreadLocal enclosure = new ThreadLocal(); // does it have to be static?

    public void set(Object object) {
        enclosure.set(object);
    }

    public Object get() {
        return enclosure.get();
    }
}
The example above keeps printing var1 and never var2, because ThreadLocal values in different threads are independent of one another.
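The memory-leak risk mentioned above is usually avoided by clearing the entry once the thread no longer needs it. Here is a minimal sketch of that idiom; the class name and the handleRequest entry point are hypothetical, chosen only to illustrate a long-lived (for example, pooled) thread handling work items:

public class ThreadLocalCleanup {
    private static final ThreadLocal<StringBuilder> CONTEXT = new ThreadLocal<StringBuilder>();

    // hypothetical per-request handler, for illustration only
    public static void handleRequest(String data) {
        CONTEXT.set(new StringBuilder(data));
        try {
            // ... business logic on this thread reads CONTEXT.get() ...
            System.out.println(CONTEXT.get());
        } finally {
            CONTEXT.remove(); // clear the entry so a long-lived thread does not keep it reachable
        }
    }

    public static void main(String[] args) {
        handleRequest("var1");
    }
}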
Lock-related information can be inspected with the jstack tool. If a thread owns a lock but has temporarily released it because it is blocked inside the wait method, the "waiting on" information is printed:
Thread-0 "prio=6 tid=0x02bc4400 nid=0xef44 in Object.wait () [0x02f0f000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait (Native Method)-waiting on (a Wait) at java.lang.Object.wait (Object.java:485) at Wait.run (Wait.java:8)-locked (a Wait)
If the thread keeps holding a lock (for example, sleep does not release the lock while sleeping), the "locked" information is printed:
"Thread-0" prio=6 tid=0x02baa800 nid=0x1ea4 waiting on condition [0x02f0f000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep (Native Method) at Wait.run (Wait.java:8)-locked (a Wait)
If a thread wants to enter a synchronized block and is waiting for the lock to be released, the "waiting to lock" message is printed:
"main" prio=6 tid=0x00847400 nid=0xf984 waiting for monitor entry [0x0092f000] java.lang.Thread.State: BLOCKED (on object monitor) at Wait.main (Wait.java:23)-waiting to lock (a Wait)
JDK 1.4
In JDK 1.4, released in April 2002, NIO was officially introduced: on top of the original standard IO, the JDK now provides a set of facilities for multiplexed IO.
By registering multiple Channels with one Selector and polling them from a single thread, a listening event is triggered and dispatched whenever data arrives, instead of each channel blocking a thread that waits for its data to arrive. The advantage of NIO is therefore only obvious in high-concurrency scenarios where threads are a scarce, heavily contended resource.
Compared with the traditional stream-oriented approach, this block-oriented style loses some simplicity and flexibility. Below is a simple example of reading a file with the NIO API (for illustration purposes only):
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class NIO {
    public static void nioRead(String file) throws IOException {
        FileInputStream in = new FileInputStream(file);
        FileChannel channel = in.getChannel();
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        channel.read(buffer);
        byte[] b = buffer.array();
        System.out.println(new String(b));
        channel.close();
    }
}
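The file example above does not show the multiplexing described earlier. Here is a minimal sketch of the Selector-based pattern, not production code; the class name, the port number and the echo behaviour are arbitrary choices for illustration:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080)); // arbitrary port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // one thread waits for events on all registered channels
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) { // new connection: register it for reads
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) { // data arrived on an existing connection
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    if (client.read(buffer) == -1) {
                        client.close(); // peer closed the connection
                    } else {
                        buffer.flip();
                        client.write(buffer); // echo the bytes back
                    }
                }
            }
        }
    }
}

A single thread running this loop services every registered channel, which is exactly the "unified polling thread" described above.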
JDK 5.0
JDK 1.5 was released in September 2004, and its name was officially changed to 5.0. There is a half-joking saying in the software industry, "never use software below version 3.0", meaning that when the version number is too low the quality often cannot be trusted; by that measure, who knows when the JDK's original numbering would ever have reached 3.0, so after 1.4 the naming scheme changed and jumped straight to 5.0.
JDK 5.0 is more than just a change in version naming. For multithreaded programming there were two major events: the formal release of JSR 133 and JSR 166.
JSR 133
JSR 133 redefined the Java memory model. Before that, the common memory models were the sequentially consistent memory model and the happens-before model.
Under sequential consistency, the program executes in exactly the order shown in the code. This is hard to guarantee on modern multi-core CPUs that optimize instruction execution, and guaranteeing it would severely limit the JVM's ability to optimize code at run time.
The happens-before rules specified by JSR 133, however, leave the order of instruction execution flexible:
Within a single thread, an earlier operation happens-before a later one according to program order (that is, the order of the code's semantics).
Unlocking a monitor happens-before every subsequent lock of that same monitor.
A write to a volatile field happens-before every subsequent read of that field.
Starting a thread (calling the thread object's start() method) happens-before any operation performed in that thread.
All operations in a thread happen-before any other thread's return from a join() on that thread.
If operation A happens-before B, and B happens-before C, then A happens-before C.
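Taken together, the program-order rule, the volatile rule and transitivity explain the common idiom of publishing data through a volatile flag. Here is a minimal illustrative sketch (class and field names are my own, not from the JSR):

public class HappensBeforePublish {
    private static int data = 0;                  // plain field, not volatile
    private static volatile boolean ready = false;

    public static void main(String[] args) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (!ready)                    // volatile read
                    ;                             // spin until the writer publishes
                // The write to data happens-before the write to ready (program order),
                // the write to ready happens-before this read (volatile rule),
                // so by transitivity this thread is guaranteed to see data == 42.
                System.out.println(data);
            }
        }).start();

        data = 42;    // 1. ordinary write
        ready = true; // 2. volatile write publishes it
    }
}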
In terms of memory, each thread's working memory (which covers things like CPU caches and registers) is separated from main memory, giving the JVM plenty of room to optimize the execution of instructions within a thread: variables in main memory can be copied into the thread's working memory and operated on there, and the results are flushed back to main memory at some later time.
But then how is data consistency between threads ensured? The JLS's answer is that, by default, consistency is not guaranteed at every point in time, but it can be obtained through the keywords with strengthened semantics: synchronized, volatile and final. To see what this means, look at the classic DCL (double-checked locking) problem:
public class DoubleCheckLock {
    private volatile static DoubleCheckLock instance; // Do I need to add "volatile" here?
    private final Element element = new Element(); // Should I add "final" here? Is "final" enough? Or should I use "volatile"?

    private DoubleCheckLock() {
    }

    public static DoubleCheckLock getInstance() {
        if (null == instance)
            synchronized (DoubleCheckLock.class) {
                if (null == instance)
                    instance = new DoubleCheckLock();
                    // without "volatile", the writes that initialize the object and the write
                    // to the instance field can be reordered
            }
        return instance;
    }

    public Element getElement() {
        return element;
    }
}

class Element {
    public String name = new String("abc");
}
In the example above, if the declaration of instance is not marked volatile, the JVM does not guarantee that the object returned by getInstance is complete and correctly constructed; the volatile keyword guarantees the visibility of instance, that is, callers see the real, fully written object.
But the problem does not end there. Without volatile or final on element in the example above, the name inside element is not guaranteed to be visible the way instance itself is when it is handed out to the outside world. If element is an immutable object, marking the field final is also enough to guarantee its visibility once the constructor has completed.
As for the effect of volatile, many people want a short piece of code whose output visibly differs with and without it. Unfortunately such a snippet is not easy to find. Here is a not-very-rigorous example:
public class Volatile {
    boolean boolValue; // use volatile here to print "WTF!"

    public void check() {
        if (boolValue == !boolValue)
            System.out.println("WTF!");
    }

    public void swap() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        boolValue = !boolValue;
    }

    public static void main(String[] args) {
        final Volatile volObj = new Volatile();
        Thread t2 = new Thread() {
            public void run() {
                while (true) {
                    volObj.check();
                }
            }
        };
        t2.start();
        Thread t1 = new Thread() {
            public void run() {
                while (true) {
                    volObj.swap();
                }
            }
        };
        t1.start();
    }
}
There are two threads in this code: one flips the value of boolValue every 100 ms, while the other runs an endless loop evaluating boolValue == !boolValue. That expression reads boolValue twice, so it is conceivable that, with some probability, the two reads see different values, in which case "WTF!" is printed.
However, that situation only shows up when boolValue is marked volatile to guarantee its visibility; without volatile the program prints nothing, at least on my machine. In other words the behaviour is not what you would naively expect, and I cannot tell exactly what the JVM is doing; about the only thing I can be sure of is that without volatile, reads of boolValue do not always see the most up-to-date value.
JSR 166
The contribution of JSR 166 is the introduction of the java.util.concurrent package. I mentioned the atomic AtomicXXX classes earlier; internally they guarantee atomicity through a compareAndSet(x, y) method (CAS), which at the lowest level comes down to a single CPU instruction. What this method does is replace a variable's value with y only if its current value is x. There is no locking involved in this process, so its performance is generally higher than using synchronized.
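A tiny sketch of those semantics, calling AtomicInteger.compareAndSet directly (the class name and values are arbitrary, for illustration only):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);

        // succeeds: the current value is 10, so it is replaced by 20
        System.out.println(value.compareAndSet(10, 20)); // true
        // fails: the current value is now 20, not 10, so nothing changes
        System.out.println(value.compareAndSet(10, 30)); // false
        System.out.println(value.get());                 // 20
    }
}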
A lock-free algorithm is a way of making an update (here, "set if larger") atomic on the basis of CAS, and it usually comes in one of two forms:
import java.util.concurrent.atomic.AtomicInteger;

public class LockFree {
    private AtomicInteger max = new AtomicInteger();

    // type A
    public void setA(int value) {
        while (true) { // 1. loop
            int currentValue = max.get();
            if (value > currentValue) {
                if (max.compareAndSet(currentValue, value)) // 2. CAS
                    break; // 3. exit
            } else
                break;
        }
    }

    // type B (the original snippet is truncated here; the rest of the do-while CAS loop is reconstructed)
    public void setB(int value) {
        int currentValue;
        do { // 1. loop
            currentValue = max.get();
            if (value <= currentValue)
                break;
        } while (!max.compareAndSet(currentValue, value)); // 2. CAS
    }
}