An Analysis of Concurrency Levels in Java High Concurrency

2025-04-09 Update From: SLTechnology News&Howtos


This article walks through the levels of concurrency in Java high concurrency, from blocking all the way to wait-free. It is shared as a practical reference.

Blocking

If a thread is blocked, it cannot continue executing until another thread releases the resource it needs. When we use the synchronized keyword or a reentrant lock (ReentrantLock), we get blocking threads.

Both the synchronized keyword and ReentrantLock try to acquire a lock before entering the critical section and executing the subsequent code; if the lock cannot be acquired, the thread is suspended and waits until it obtains the required resource.
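The two blocking styles can be sketched as follows. This is a minimal illustration (class and field names are my own); each increment blocks if another thread currently holds the corresponding lock, which is exactly the suspension described above. Note that each counter is guarded by exactly one lock: mixing two different locks over the same variable would not be safe.

```java
import java.util.concurrent.locks.ReentrantLock;

public class BlockingDemo {
    private static int counterA = 0;   // guarded by the intrinsic monitor
    private static int counterB = 0;   // guarded by the ReentrantLock
    private static final Object monitor = new Object();
    private static final ReentrantLock lock = new ReentrantLock();

    // Blocks on the intrinsic monitor lock if another thread holds it.
    static void incrementSynchronized() {
        synchronized (monitor) {
            counterA++;
        }
    }

    // Blocks inside lock() until the ReentrantLock becomes available.
    static void incrementWithLock() {
        lock.lock();
        try {
            counterB++;
        } finally {
            lock.unlock();   // always release in finally
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    incrementSynchronized();
                    incrementWithLock();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // 4 threads x 1000 iterations each; both locks serialize the writes.
        System.out.println(counterA + " " + counterB);
    }
}
```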

No starvation (Starvation-Free)

If there is a priority among threads, the scheduler always tends to satisfy high-priority threads first; in other words, allocation of the same resource is unfair. Figure 1.7 shows the unfair-lock and fair-lock cases (the pentagram represents a high-priority thread). With an unfair lock, the system allows high-priority threads to jump the queue, which can starve low-priority threads. With a fair lock, requests are served first-come-first-served: no matter how high a new thread's priority is, it must queue up for the resource, so every thread gets a chance to execute and starvation cannot occur.
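In Java this choice maps directly onto ReentrantLock's constructor flag, as a small sketch shows: passing true requests the fair, first-come-first-served policy described above, while the default is the unfair, barging policy.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        // Default constructor: unfair lock, queue-jumping ("barging") allowed.
        ReentrantLock unfair = new ReentrantLock();
        // true selects the fair policy: FIFO hand-off, starvation-free.
        ReentrantLock fair = new ReentrantLock(true);

        System.out.println(unfair.isFair());
        System.out.println(fair.isFair());
    }
}
```

Fair locks pay for the guarantee with lower throughput, which is why the unfair policy is the default.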

No obstruction (Obstruction-Free)

Obstruction freedom is the weakest non-blocking guarantee. If two threads run obstruction-free, neither is suspended because of problems in the critical section; in other words, everyone can walk straight into the critical section. So what happens if they all modify the shared data and corrupt it? An obstruction-free thread, as soon as it detects this, immediately rolls back its changes to keep the data safe. If there is no data contention, the thread completes its work successfully and leaves the critical section.

If blocking is a pessimistic strategy, in which the system assumes that unfortunate conflicts between two threads are likely and makes protecting shared data the first priority, then non-blocking scheduling is, relatively speaking, an optimistic strategy: it assumes that conflicts among threads are unlikely, or at least rare, so every thread proceeds without obstruction, but rolls back as soon as a conflict is detected.

This strategy also shows that obstruction-free multithreaded programs do not necessarily run smoothly. When contention in the critical section is severe, all threads may keep rolling back their operations and no thread ever gets out of the critical section, which harms the normal progress of the system. We therefore often want at least one thread in the group to complete its operation and exit the critical section within a bounded number of steps; at least that guarantees the system does not wait in the critical section indefinitely.

One viable obstruction-free implementation relies on a "consistency tag". A thread reads and saves the tag before the operation, then reads it again after the operation completes to check whether it has changed. If the two match, there was no conflicting access to the resource; if they differ, the resource may have conflicted with another thread during the operation, and the operation must be retried. Any thread that modifies the resource updates the consistency tag before modifying the data, indicating that the data is no longer safe.
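Java's StampedLock implements exactly this consistency-tag pattern: tryOptimisticRead returns a stamp, validate checks whether a writer intervened, and the reader retries under a real lock if the tag no longer matches. A minimal sketch (the Point class and its fields are illustrative):

```java
import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = sl.writeLock();       // updating the tag: data is changing
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();   // save the consistency tag
        double curX = x, curY = y;             // read without blocking
        if (!sl.validate(stamp)) {             // tag changed: a writer intervened
            stamp = sl.readLock();             // fall back and retry safely
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }

    public static void main(String[] args) {
        Point p = new Point();
        p.move(3, 4);
        System.out.println(p.distanceFromOrigin());
    }
}
```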

The optimistic lock in databases should be familiar: you add a version field (version number) to the table, increment version on every update, and use the version number as an update condition; success is judged by the number of rows affected by the UPDATE. The pseudocode is as follows:

1. Query the record; at this point the version number is v
2. Open a transaction
3. Do some business operations
4. UPDATE t SET version = version + 1 WHERE id = {record id} AND version = v  -- let c be the number of rows affected
5. if (c > 0) { /* commit transaction */ } else { /* rollback transaction */ }
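The same version-check logic can be simulated in memory without a real database. In this sketch the Record class, its fields, and conditionalUpdate are all illustrative; the method mirrors the conditional UPDATE above, returning the number of affected "rows" so that only the writer holding the current version succeeds.

```java
public class OptimisticUpdate {
    static class Record {
        int version = 0;
        int balance = 100;
    }

    // Mirrors: UPDATE t SET version = version + 1, balance = ?
    //          WHERE id = ? AND version = ?
    // Returns the affected row count: 1 on success, 0 on version conflict.
    static synchronized int conditionalUpdate(Record r, int expectedVersion, int newBalance) {
        if (r.version != expectedVersion) {
            return 0;                        // stale version: caller rolls back
        }
        r.balance = newBalance;
        r.version = expectedVersion + 1;     // version = version + 1
        return 1;
    }

    public static void main(String[] args) {
        Record r = new Record();
        int v = r.version;                    // 1. query, remember version v
        int c1 = conditionalUpdate(r, v, 80); // 4. first writer commits
        int c2 = conditionalUpdate(r, v, 60); // second writer still holds stale v
        System.out.println(c1 + " " + c2);    // only one update can win
        System.out.println(r.balance);
    }
}
```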

When multiple threads update the same row at the same time, the database locks the row, so only one thread executes the UPDATE statement at a time; the losers then find the version has changed and their updates affect zero rows.

No lock (Lock-Free)

Lock-free parallelism is obstruction-free: as with no obstruction, all threads may attempt to enter the critical section. The difference is that lock freedom additionally guarantees that some thread must complete its operation and leave the critical section within a finite number of steps.

A typical feature of lock-free code is an apparently endless loop in which the thread keeps trying to modify a shared variable. If there is no conflict, the modification succeeds and the loop exits; otherwise the thread tries again. In any case, lock-free parallelism guarantees that some thread always wins, so the system as a whole is never stalled. Threads that fail in the critical section must keep retrying until they succeed; a particularly unlucky thread whose attempts always fail faces a starvation-like situation and may never finish.

The following is a lock-free code sketch; if the modification never succeeds, the loop never stops.

int localVal = atomicVar.get();
while (!atomicVar.compareAndSet(localVal, localVal + 1)) {
    localVal = atomicVar.get();
}
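A runnable version of this retry loop, assuming the shared variable is a java.util.concurrent.atomic.AtomicInteger (the class and method names around it are illustrative). Each thread spins on compareAndSet until its increment wins; some thread always succeeds, so the counter is lock-free, but any individual thread may have to retry.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private static final AtomicInteger atomicVar = new AtomicInteger(0);

    static void increment() {
        int localVal = atomicVar.get();
        // CAS fails if another thread changed the value first; reread and retry.
        while (!atomicVar.compareAndSet(localVal, localVal + 1)) {
            localVal = atomicVar.get();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(atomicVar.get()); // no increment is ever lost
    }
}
```

In practice AtomicInteger.incrementAndGet() packages this same loop; the explicit form is shown only to expose the retry structure.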

No wait (Wait-Free)

Lock freedom requires only that some thread complete its operation in a finite number of steps; wait freedom extends this further by requiring that every thread complete in a finite number of steps, so starvation cannot occur. If an upper bound is placed on the number of steps, wait freedom can be further divided into bounded wait-free and wait-free independent of the number of threads; the only difference between them is the limit on the number of retries.

A typical wait-free structure is RCU (Read-Copy-Update). Its basic idea is that reads need no synchronization at all: reader threads never wait, are never locked out, and never cause conflicts. A writer, however, first takes a copy of the original data, modifies only the copy (which is why reads can go unsynchronized), and writes the data back at an appropriate time after the modification is complete.
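Java's CopyOnWriteArrayList follows a similar copy-then-publish idea, as a short sketch shows: writers copy the backing array, modify the copy, and swap it in, while readers iterate over an immutable snapshot and never block or see a half-written array.

```java
import java.util.concurrent.CopyOnWriteArrayList;

public class RcuStyleDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");   // each add copies the array, modifies the copy, swaps it in
        list.add("b");

        // The iterator is bound to the snapshot taken at creation time,
        // so writes performed during the traversal do not disturb it.
        StringBuilder seen = new StringBuilder();
        for (String s : list) {
            list.add("c");           // a writer runs "during" the read
            seen.append(s);
        }
        System.out.println(seen);    // only the original snapshot was traversed
        System.out.println(list.size());
    }
}
```

The trade-off is the same as for RCU: reads are cheap and wait-free, while every write pays the cost of a full copy, so the pattern suits read-mostly data.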

Thank you for reading! This concludes the analysis of concurrency levels in Java high concurrency; I hope it has been helpful.
