Why Learn Java Concurrency
This article explains why you should learn Java concurrency. The approach introduced here is simple, fast, and practical. Let's dive in.
01
A first look at concurrency
What is concurrency and what is parallelism?
Take the JVM garbage collector as an example. During concurrent marking, the JVM not only marks garbage but also keeps serving the application's requests at the same time: that is concurrency. During collection, multiple JVM threads collect garbage simultaneously: that is parallelism.
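To make the distinction concrete, here is a minimal illustrative sketch (the class name and tasks are made up for this article): a single-thread executor interleaves its tasks (concurrency), while a two-thread pool can run tasks at the same instant on different cores (parallelism).

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrencyVsParallelism {
    public static void main(String[] args) {
        Runnable task = () ->
            System.out.println(Thread.currentThread().getName() + " working");

        // Concurrency: one thread makes progress on both tasks by interleaving.
        ExecutorService single = Executors.newSingleThreadExecutor();
        single.submit(task);
        single.submit(task);
        single.shutdown();

        // Parallelism: with two threads (and at least two cores),
        // both tasks can literally run at the same instant.
        ExecutorService dual = Executors.newFixedThreadPool(2);
        dual.submit(task);
        dual.submit(task);
        dual.shutdown();
    }
}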
02
Why learn concurrent programming?
Intuitive reasons
1) It is a hard requirement in job descriptions (JDs)
With the rapid development of the Internet industry, concurrent programming has become a very hot field and a required skill in server-side recruitment at major companies.
2) It is the only path from novice to expert
The architect is a key role in a software development team, and becoming an architect is the goal of many engineers. One measure of an architect's ability is whether they can design a system that handles high concurrency, which shows how important high-concurrency skills are, and concurrent programming is their foundation. Whether in games or the broader Internet industry, in software development or large websites, the demand for engineers who understand high concurrency is huge. So if you want to grow in your work, learning high-concurrency technology is urgent.
3) It is an easy place to stumble in interviews
Interviews often probe your grasp of concurrent programming through questions about concurrency safety and thread interaction, for example: how to implement a thread-safe singleton under concurrency, or how to make functions in two threads execute alternately.
The following is a thread-safe lazy singleton using double-checked locking; an enum or an eager (hungry) singleton would also work.
public class Singleton {

    private static volatile Singleton singleton;

    private Singleton() {}

    public static Singleton getSingleton() {
        if (null == singleton) {                  // first check: skip locking once initialized
            synchronized (Singleton.class) {
                if (null == singleton) {          // second check: only one thread creates the instance
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}
The first null check reduces the granularity of lock contention: threads skip the lock entirely once the instance exists. The volatile modifier is needed because the JVM may reorder the instructions inside new Singleton() (allocate, initialize, publish the reference); volatile establishes the happens-before ordering that prevents another thread from observing a non-null but not-yet-initialized instance and hitting a null-pointer-style failure. A lot can be derived from this one thread-safe singleton: the implementation principles of volatile and synchronized, the JMM, the MESI protocol, and instruction reordering; the JMM is illustrated in more detail later in this article.
Beyond thread safety, interaction between threads is also examined, for example: use two threads to alternately print A1B2C3...Z26.
The focus is not simply on getting the function to work; through this question the interviewer examines your overall grasp of the toolbox, since there are many ways to implement it: AtomicInteger, ReentrantLock, CountDownLatch, and more. Below is a sketch of using LockSupport to control two threads printing alternately (the original figure is not reproduced here). Internally, LockSupport uses Unsafe to toggle a permit between 0 and 1, which is what lets the two threads take turns.
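Here is a minimal runnable sketch, assuming the made-up class and thread names below:

import java.util.concurrent.locks.LockSupport;

public class AlternatePrint {

    private static Thread letterThread;
    private static Thread numberThread;

    public static void main(String[] args) {
        letterThread = new Thread(() -> {
            for (char c = 'A'; c <= 'Z'; c++) {
                System.out.print(c);
                LockSupport.unpark(numberThread);  // hand the permit to the number thread
                LockSupport.park();                // wait until it is handed back
            }
        });
        numberThread = new Thread(() -> {
            for (int i = 1; i <= 26; i++) {
                LockSupport.park();                // wait for the letter to be printed first
                System.out.print(i);
                LockSupport.unpark(letterThread);
            }
        });
        numberThread.start();  // start the parked side first so unpark always targets a live thread
        letterThread.start();
    }
}

Because a permit granted by unpark is remembered even if the target has not parked yet, this handshake has no lost-wakeup problem; the output is A1B2C3...Z26.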
4) Concurrency is everywhere in the frameworks we work with: Tomcat, Netty, the JVM, Disruptor
Familiarity with the basics of Java concurrent programming is the cornerstone for mastering the internals of these frameworks. Here is a brief introduction to the underlying implementation of the high-concurrency framework Disruptor.
Martin Fowler's article on the LMAX architecture describes how this high-performance asynchronous processing framework reaches a throughput of up to 6 million orders per second on a single thread.
Core concepts of Disruptor
Disruptor features
Event-driven
Built on the observer and producer-consumer patterns
Lock-free queue operations over a ring buffer
RingBuffer execution process
Disruptor's underlying components, the objects most closely tied to the RingBuffer: SequenceBarrier and Sequencer.
SequenceBarrier is the bridge between consumers and the RingBuffer. In Disruptor, consumers access the SequenceBarrier directly, and the SequenceBarrier reduces queue contention on the RingBuffer.
Through its waitFor method, the SequenceBarrier lets a consumer that is running ahead of the producer wait, giving the producer buffer time and coordinating the relative speeds of producer and consumer. (The original article illustrates when waitFor runs with a figure, not reproduced here.)
Sequencer is the bridge between producers and the RingBuffer. A producer claims a storage slot in the RingBuffer through the Sequencer, then calls publish to notify consumers via the WaitStrategy; the WaitStrategy is the waiting policy a consumer follows when there is no data to consume. Each producer or consumer thread first claims the position of the element it may operate on in the array, then writes or reads data directly at that position. The whole process stays thread-safe through CAS on an atomic sequence variable; this is Disruptor's lock-free design.
Here are four common waiting strategies (a configuration sketch showing where the strategy plugs in follows the list):
BlockingWaitStrategy: Disruptor's default. Internally it uses a lock and condition to control thread wakeup. It is the slowest strategy, but it consumes the least CPU and gives the most consistent performance across deployment environments.
SleepingWaitStrategy: performance and CPU consumption similar to BlockingWaitStrategy, but with the least impact on producer threads; its waiting loop is implemented with LockSupport.parkNanos(1).
YieldingWaitStrategy: one of the strategies usable in low-latency systems. It spins waiting for the sequence to reach the target value, calling Thread.yield() inside the loop to let other queued threads run. Recommended when extremely high performance is required and the number of event handler threads is smaller than the number of logical CPU cores, for example when hyper-threading is enabled.
BusySpinWaitStrategy: the best performance, suitable for low-latency systems. It busy-spins without yielding, so the same recommendation applies: use it only when extremely high performance is required and the number of event handler threads is smaller than the number of logical CPU cores.
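To show where the wait strategy plugs in, here is a minimal sketch assuming Disruptor 3.x; LongEvent and DisruptorSketch are made-up names for illustration:

import com.lmax.disruptor.BlockingWaitStrategy;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import java.util.concurrent.Executors;

public class DisruptorSketch {

    // a trivial event carrying one long value
    public static class LongEvent {
        long value;
    }

    public static void main(String[] args) {
        Disruptor<LongEvent> disruptor = new Disruptor<>(
                LongEvent::new,                    // event factory pre-allocates the ring
                1024,                              // ring size, must be a power of two
                Executors.defaultThreadFactory(),
                ProducerType.SINGLE,
                new BlockingWaitStrategy());       // swap in Sleeping/Yielding/BusySpin here

        // consumer: invoked as the sequence advances
        disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                System.out.println("consumed " + event.value));

        RingBuffer<LongEvent> ringBuffer = disruptor.start();

        // producer: claim a slot via the Sequencer, write, then publish
        long seq = ringBuffer.next();
        try {
            ringBuffer.get(seq).value = 42L;
        } finally {
            ringBuffer.publish(seq);
        }

        disruptor.shutdown();
    }
}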
Today many well-known projects, including Apache Storm, Camel, and Log4j 2, use Disruptor to achieve high performance.
5) JUC (java.util.concurrent) is Doug Lea's exemplary masterpiece: the first mainstream attempt to raise the level of abstraction above threads, locks, and events to something more approachable (concurrent collections, fork/join, and so on)
By studying the design thinking behind concurrent programming, we can fully exploit the advantages of multithreading:
Exploit the power of multiple processors
Simpler modeling
Simplified handling of asynchronous events
More responsive user interfaces
So what goes wrong if you don't learn the basics of concurrent programming well?
1) Multithreading is everywhere in daily development: the JVM, Tomcat, Netty. Learning Java concurrent programming is a prerequisite for understanding and mastering such tools and frameworks in depth. Because there is a gap of several orders of magnitude between the CPU's computing speed and memory I/O speed, modern computers add layers of cache running as close to processor speed as possible: the data a computation needs is first copied from memory into the cache, then synchronized back to memory when the computation finishes. (The original article illustrates this with a figure.)
Because the JVM must run across hardware platforms, it defines its own memory model; but since that model is ultimately mapped onto hardware, the JVM memory model closely mirrors the hardware model.
In the operating system's underlying data structures, each CPU's cache is organized as bucketed linked lists of cache entries: the tag identifies the address in main memory, the cache line holds the data at that offset, and the flag records the state used by the MESI cache coherence protocol.
The MESI cache coherence states are:
M: Modified, the line has been modified
E: Exclusive, the line is held exclusively
S: Shared, the line is shared
I: Invalid, the line is invalid
The following is the process of CPU0 writing data:
When CPU0 executes a load, read, and write sequence, the flag state is S before the write; CPU0 then sends an invalidate message onto the bus
The other CPUs listen on the bus, change the flag in their corresponding cache entries from S to I, and send an invalidate ack back onto the bus
Once CPU0 receives the invalidate acks from all other CPUs, it changes its flag to E, performs the write, and the state becomes M; the whole exchange resembles acquiring a lock.
For performance, waiting for every modification to be acknowledged is too slow, so store (write) buffers and invalidate queues were introduced. Modifications are first written to the store buffer, while the other CPUs, on receiving an invalidate message, append it to their invalidate queue, return an ack immediately, and consume the queue asynchronously later. Of course, this creates ordering problems: a cache entry's flag may still read S when it should already be I, so stale data can be observed. The purpose of volatile is to solve the disorder caused by this kind of reordering; volatile is a JVM-level keyword, while MESI operates at the CPU level.
2) Your performance falls short of requirements and you cannot find a solution.
3) You may write thread-unsafe methods at work.
The following is a step-by-step optimization of a multithreaded date-formatting example.
new Thread(new Runnable() {
    @Override
    public void run() {
        System.out.println(new ThreadLocalDemo01().date(10));
    }
}).start();

new Thread(new Runnable() {
    @Override
    public void run() {
        System.out.println(new ThreadLocalDemo01().date(1007));
    }
}).start();
Optimization 1: reuse threads with a thread pool
for (int i = 0; i < 1000; i++) {
    int finalI = i;
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            System.out.println(new ThreadLocalDemo01().date2(finalI));
        }
    });
}
executorService.shutdown();

public String date2(int seconds) {
    Date date = new Date(1000 * seconds);
    String s = null;
//    synchronized (ThreadLocalDemo01.class) {
//        s = simpleDateFormat.format(date);
//    }
    s = simpleDateFormat.format(date);
    return s;
}

Optimization 2: thread pool combined with ThreadLocal

public String date2(int seconds) {
    Date date = new Date(1000 * seconds);
    SimpleDateFormat simpleDateFormat = ThreadSafeFormatter.dateFormatThreadLocal.get();
    return simpleDateFormat.format(date);
}

When multiple threads share one SimpleDateFormat, thread-safety problems appear: the run prints duplicate times. Optimization 2 combines the thread pool with ThreadLocal so that each thread gets its own formatter; the resource is isolated per thread and the code is thread-safe.

4) Many problems cannot be diagnosed correctly

A pitfall: a scheduled task on the CRM simulation environment blocked and could not continue.

Problem: none of the @Scheduled tasks configured in the CRM simulation environment executed after a certain point in time.

Cause: a scheduled-task configuration problem. If @Scheduled tasks are not given a thread pool, enabling them with @EnableScheduling on the startup class uses a single thread by default. With multiple scheduled tasks configured on the backend, problems arise: given two tasks A and B, if A takes the thread first and never releases it, B waits until A releases the resource. To avoid this, configure a scheduler pool:

@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(10);
    return scheduler;
}

Because the CRM service has few scheduled tasks, and with sufficient resources they run quickly, no scheduler thread pool had been configured.

How did a method inside the scheduled task keep the thread from ever being released, causing the blockage? When locating the problem, we found a CountDownLatch that could never count down to zero, so the main thread hung there and was never released. In the API, a thread that calls await is suspended until the count reaches zero.

So attention turns to the await method: it makes the current thread wait until the latch has counted down to zero, unless the thread is interrupted or the specified waiting time elapses. If the current count is zero, the method returns immediately (true, for the timed variant). If the current count is greater than zero, the current thread is disabled for thread-scheduling purposes and lies dormant until one of three things happens: the count reaches zero because of countDown() calls, some other thread interrupts the current thread, or the specified waiting time elapses.

Executors.newFixedThreadPool creates a pool with a fixed number of active threads; when more tasks are submitted than that fixed number, the extra tasks wait in a blocking queue. To speed up this scheduled task, the CRM code used multithreading, but it set the CountDownLatch count greater than the ThreadPoolExecutor's fixed thread count, leaving tasks waiting forever. The count could never reach zero, the main thread could never be released, and the scheduled tasks on one CRM simulation server were paralyzed.

03
How to learn Java concurrent programming

To learn the basics of concurrent programming well, we need a god's-eye view, a macro picture, and then go from individual points into depth, mastering the essential knowledge points. We can study step by step through the topics listed in two mind maps (not reproduced here).

Essential knowledge points

04
Threads

All the cases listed above revolve around threads, so we need a deeper grasp of them: what a thread is, how it behaves, and how threads interact and communicate.

The following explains the difference between processes and threads in plainer terms.

Process: a process is like a program; it is the smallest unit of resource allocation. The number of processes executing at the same instant does not exceed the number of cores. Can a single-core CPU run multiple processes, then? Yes: a single-core CPU can run multiple processes, just not simultaneously; it switches between them extremely fast. A computer keeps many processes "simultaneously" open, and by switching the CPU rapidly between them it achieves "simultaneous" execution of multiple programs. Switching processes means saving the state before the switch so that work can resume when switching back. A process therefore owns its own address space, global variables, file descriptors, various hardware resources, and so on; the operating system schedules the CPU to record, restore, and switch processes.

Thread: the thread is the basic unit of independent execution and scheduling (what actually runs on the CPU); a thread is like a distinct execution path within a process.

Single thread: a single thread is like living alone in a house called a "process". You can do anything in the house at any time: watch TV, use the computer, whatever you want, whenever you want.

Multithreading: but if you are in a shared house, with tenants called "threads" (thread 1, thread 2, thread 3, thread 4), things inevitably change.

Multithreaded programming has the concept of a "lock", and your house has locks too. If your wife is in the bathroom with the door locked, she has exclusive use of the shared resource "bathroom" inside this "house (process)"; if there is only one bathroom in the home, you, as another thread, can only wait.

The most important and most troublesome part of threads is inter-thread communication; the original article shows the thread state transitions in a figure, not reproduced here.

To illustrate communication between threads, here is a simple producer-consumer model.

Producer:

CarStock carStock;

public CarProducter(CarStock carStock) {
    this.carStock = carStock;
}

@Override
public void run() {
    while (true) {
        carStock.produceCar();
    }
}

public synchronized void produceCar() {
    try {
        if (cars < 20) {
            System.out.println("producing car..." + cars);
            Thread.sleep(100);
            cars++;
            notifyAll();
        } else {
            wait();
        }
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

Consumer:

CarStock carStock;

public CarConsumer(CarStock carStock) {
    this.carStock = carStock;
}

@Override
public void run() {
    while (true) {
        carStock.consumeCar();
    }
}

public synchronized void consumeCar() {
    try {
        if (cars > 0) {
            System.out.println("selling car..." + cars);
            Thread.sleep(100);
            cars--;
            notifyAll();
        } else {
            wait();
        }
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
(Figures in the original show the consumption process and the communication process.)
For this simple producer-consumer model, we can improve the program with queues, thread pools, and other techniques: use a BlockingQueue to share data and improve the consumption process, as sketched below.
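Here is a minimal sketch of that improvement, with a made-up class name (CarStockQueueDemo) and an assumed bounded stock of 20: the BlockingQueue replaces the hand-written wait/notifyAll coordination.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CarStockQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> carStock = new ArrayBlockingQueue<>(20); // bounded stock

        Thread producer = new Thread(() -> {
            try {
                for (int car = 0; ; car++) {
                    carStock.put(car);            // blocks when the stock is full
                    System.out.println("producing car..." + car);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int car = carStock.take();    // blocks when the stock is empty
                    System.out.println("selling car..." + car);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}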
05
Three characteristics of concurrent programming
Most mechanisms in concurrent programming revolve around three properties: atomicity, visibility, and ordering.
1) Atomicity
static volatile int res = 0;                              // volatile counter
static int normal = 0;                                    // plain counter
static AtomicInteger atomicInteger = new AtomicInteger(); // atomic counter

for (int i = 0; i < 20; i++) {
    Thread thread = new Thread(() -> {
        for (int j = 0; j < 10000; j++) {
            res++;                            // volatile does not make ++ atomic
            normal++;                         // plain field: updates can be lost
            atomicInteger.incrementAndGet();  // atomic: no updates lost
        }
    });
    thread.start();
}
Running result:
Volatile: 170797
AtomicInteger: 200000
Normal: 182406
This demonstrates the atomicity problem. Atomicity means the CPU cannot pause an operation midway and schedule something else: the operation either completes entirely or does not execute at all.
If an operation is atomic, then even when multiple threads run concurrently, no update to the variable is lost.
2) Visibility
class MyThread extends Thread {
    public int index = 0;

    @Override
    public void run() {
        System.out.println("MyThread Start");
        while (true) {
            if (index == -1) {
                break;
            }
        }
        System.out.println("MyThread End");
    }
}
The main thread sets index to -1, but the change is not visible to the MyThread thread; this is a thread-safety problem caused by visibility. Visibility means that when one thread modifies the value of a shared variable, other threads learn of the change immediately. The Java memory model achieves visibility by synchronizing the new value back to main memory after a variable is modified, and refreshing the value from main memory before the variable is read; main memory is the transmission medium, for ordinary and volatile variables alike. The difference is that the special rules for volatile guarantee the new value is synchronized to main memory immediately and refreshed from main memory immediately before each use. So we can say volatile guarantees the visibility of variables across threads, while ordinary variables do not.
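Here is a minimal sketch of the fix (the one-second sleep is an illustrative assumption): declaring index volatile makes the main thread's write visible, so the loop exits.

class MyThread extends Thread {
    public volatile int index = 0;   // volatile: writes by other threads become visible here

    @Override
    public void run() {
        System.out.println("MyThread Start");
        while (index != -1) {
            // busy-wait until the main thread signals -1
        }
        System.out.println("MyThread End");
    }

    public static void main(String[] args) throws InterruptedException {
        MyThread t = new MyThread();
        t.start();
        Thread.sleep(1000);  // let the thread enter its loop
        t.index = -1;        // without volatile, t may never observe this write
        t.join();
    }
}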
3) Ordering
Recall the double-checked-locking lazy singleton shown earlier.
Ordering: the natural ordering of programs in the Java memory model can be summed up in one sentence: observed from within a thread, all operations are ordered; observed from one thread looking at another, all operations are disordered. The first half refers to within-thread as-if-serial semantics; the second half refers to instruction reordering and the delay in synchronizing working memory back to main memory. A sketch that can exhibit this follows.
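Here is a minimal sketch of the classic reordering test (an assumption-laden illustration: whether and how quickly it fires depends on the JVM and hardware, and on some machines it may never fire). With plain int fields, observing r1 == 0 and r2 == 0 together is impossible under any ordered interleaving, yet it can occur in practice:

public class ReorderDemo {

    static int x, y, r1, r2;

    public static void main(String[] args) throws InterruptedException {
        for (long run = 1; ; run++) {
            x = 0; y = 0; r1 = 0; r2 = 0;
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join(); t2.join();
            if (r1 == 0 && r2 == 0) {   // neither thread saw the other's write
                System.out.println("reordering observed on run " + run);
                break;
            }
        }
    }
}

Declaring x and y volatile forbids this outcome, which is exactly the ordering guarantee the double-checked-locking singleton relies on.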
At this point, I believe you have a deeper understanding of why to learn Java concurrency. Now go try it out in practice!