
What is the underlying implementation principle of the concurrency mechanism in Java concurrent programming?


Today, let's talk about the underlying implementation principles of the concurrency mechanism in Java concurrent programming. This topic may be unfamiliar to many readers, so the following summary is meant to help you understand it better. I hope you get something out of this article.

The concurrency mechanism in Java depends on the JVM's implementation and on CPU instructions. Let's explore how the Java concurrency mechanism is implemented at the bottom layer.

1. volatile

Java allows threads to access shared variables. To ensure that a shared variable is updated consistently, a thread would normally have to acquire an exclusive lock on it. Java also provides volatile: if a field is declared volatile, the Java memory model guarantees that all threads see a consistent value for that variable.
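
As a minimal sketch of this guarantee (the class and field names here are illustrative, not from the original article), a volatile flag lets one thread reliably signal another:

public class VolatileFlagDemo {
    // Without volatile, the reader thread might cache 'running' and never
    // observe the main thread's update.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // spin until the main thread clears the flag
            }
            System.out.println("reader observed running = false");
        });
        reader.start();

        Thread.sleep(100);  // let the reader spin briefly
        running = false;    // volatile write: visible to the reader thread
        reader.join();
    }
}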

Volatile is a lightweight synchronized: it has a lower execution cost and causes no thread context switching. On a multiprocessor, writing to a volatile-modified shared variable causes two things:

(1) the data in the current processor's cache line is written back to system memory;

(2) this write-back invalidates the copies of that data held in other processors' caches.

A cache line is typically 64 bytes wide. If a queue's head node and tail node together occupy less than 64 bytes, the processor reads them into the same cache line; when one processor modifies the head node, the entire cache line is locked, so other processors cannot use the tail node cached in their own caches. This hurts the efficiency of enqueue and dequeue operations. Appending padding up to 64 bytes can optimize volatile performance: it prevents the head node and the tail node from being loaded into the same cache line, so they no longer lock each other out.
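
As a sketch of this padding trick (roughly what early JDK versions of LinkedTransferQueue did internally; treat the exact field layout as an assumption), a reference can be padded so that two hot variables never share a cache line:

import java.util.concurrent.atomic.AtomicReference;

// Pads the referenced value with unused fields so that a head node and a
// tail node stored in two such references land in different cache lines.
class PaddedAtomicReference<T> extends AtomicReference<T> {
    Object p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, pa, pb, pc, pd, pe;

    PaddedAtomicReference(T initial) {
        super(initial);
    }
}

Since Java 8, the @sun.misc.Contended annotation (enabled for application classes with -XX:-RestrictContended) asks the JVM to do this padding itself, which sidesteps the field-elimination problem mentioned below.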

2. synchronized

Note that the byte-appending trick above may not work from Java 7 on, because the JVM may eliminate or reorder unused fields. Besides volatile, Java offers the even more widely used synchronized; let's take a look:

Synchronized is a heavyweight lock. Every object in Java can be used as a lock, in the following three forms (see the sketch after this list):

(1) For an ordinary synchronized method, the lock is the current instance object.

(2) For a static synchronized method, the lock is the Class object of the current class.

(3) For a synchronized block, the lock is the object configured in the synchronized parentheses.
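
A minimal sketch of the three forms (class and method names are illustrative):

public class SyncForms {
    private final Object lock = new Object();

    // (1) ordinary synchronized method: the lock is 'this'
    public synchronized void instanceMethod() { /* ... */ }

    // (2) static synchronized method: the lock is SyncForms.class
    public static synchronized void staticMethod() { /* ... */ }

    // (3) synchronized block: the lock is the object in parentheses
    public void blockMethod() {
        synchronized (lock) { /* ... */ }
    }
}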

The lock used by synchronized is stored in the Java object header: a field in the header (the mark word) indicates which lock the object currently holds. There are four lock states, from lowest to highest: no lock, biased lock, lightweight lock, and heavyweight lock.
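
If you want to see the mark word change, the OpenJDK JOL tool can print an object's header (this requires the external org.openjdk.jol:jol-core dependency; the example is illustrative):

import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Prints the object layout, including the mark word encoding the lock state.
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
        synchronized (o) {
            // Inside the block, the mark word shows a lightweight or heavyweight lock.
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}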

Comparison of three kinds of locks

Biased lock
Advantage: locking and unlocking require no extra overhead.
Disadvantage: if threads compete for the lock, revoking the bias costs extra.
Applicable scenario: only one thread ever accesses the synchronized block.

Lightweight lock
Advantage: competing threads do not block, which improves thread response time.
Disadvantage: a thread that fails to obtain the lock spins, consuming CPU.
Applicable scenario: response time matters and the synchronized block executes very fast.

Heavyweight lock
Advantage: competing threads do not spin, so no CPU is wasted on spinning.
Disadvantage: threads block, and response time is slow.
Applicable scenario: throughput matters and the synchronized block executes slowly.

3. Atomic operations

An atomic operation is one operation, or a series of operations, that cannot be interrupted. Processors generally use bus locking or cache locking to implement atomic operations across multiple processors.

Bus locking: the processor provides a LOCK# signal. As soon as one processor asserts this signal on the bus, requests from other processors are blocked, so that processor can monopolize shared memory.

Cache locking: if the memory area is cached in the processor's cache line and locked for the duration of the LOCK operation, then when the locked operation is written back to memory, the processor does not assert the LOCK# signal on the bus; it simply modifies the memory address internally and relies on the cache coherence mechanism. Cache coherence prevents two or more processors from simultaneously modifying data in the same cached memory area: when one processor writes back the locked cache line, the corresponding cache lines in other processors are invalidated, so those processors can no longer use their stale copies.

In Java, atomic operations are implemented with locks and with CAS (compare-and-swap), as used in the java.util.concurrent.atomic package.
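
As a minimal sketch, the classic CAS retry loop with AtomicInteger looks like this (the class name is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Retry until compareAndSet succeeds; each attempt is one atomic CAS,
    // which the processor implements via bus locking or cache locking.
    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}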

After reading the above, do you have a deeper understanding of the underlying implementation principles of the concurrency mechanism in Java concurrent programming? If you want to learn more, please follow the industry information channel. Thank you for your support.
