2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 report
This article walks through an example-driven analysis of locks in Java. The approach is simple, fast, and practical, so if you're interested, read on and let's learn it together!
We know a computer has a CPU, memory, and a hard disk. The hard disk is the slowest to read; memory is next; and even memory is far too slow relative to the CPU, which is why CPUs have caches: L1, L2, and L3.

It is this CPU cache, combined with today's multi-core CPUs, that gives rise to concurrency bugs.
Consider a very simple piece of code that increments a shared variable a. If thread A and thread B execute this method on CPU-A and CPU-B respectively, each first fetches a from main memory into its own CPU cache, where a is 0. Each then executes a++, so a becomes 1 in each cache; when the caches are flushed back, the value in main memory is still 1. That is the problem: after two increments the result should obviously be 2, not 1.
This problem is called the visibility problem.
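The original snippet is not reproduced here, so the following is a minimal sketch of the scenario just described; the class name LostUpdateDemo, the 100,000-iteration loops, and the LOCK object are my own illustrative choices, not the article's code:

```java
public class LostUpdateDemo {
    static int unsafeCount = 0;              // no synchronization: updates can be lost
    static int safeCount = 0;                // guarded by LOCK
    static final Object LOCK = new Object();

    static Thread repeat(Runnable body) {
        Thread t = new Thread(() -> { for (int i = 0; i < 100_000; i++) body.run(); });
        t.start();
        return t;
    }

    static void join(Thread t) {
        try { t.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        // Two threads each increment 100,000 times, yet the total is usually
        // BELOW 200,000: increments read a stale value and overwrite each other.
        Thread a = repeat(() -> unsafeCount++);
        Thread b = repeat(() -> unsafeCount++);
        join(a); join(b);
        System.out.println("unsafe: " + unsafeCount);

        // The same workload under a lock always reaches exactly 200,000.
        Thread c = repeat(() -> { synchronized (LOCK) { safeCount++; } });
        Thread d = repeat(() -> { synchronized (LOCK) { safeCount++; } });
        join(c); join(d);
        System.out.println("safe:   " + safeCount);
    }
}
```

Running it, the unsafe total usually comes out short, while the locked version always lands on exactly 200,000.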
Now look at the a++ statement itself. Today's languages are all high-level languages, which are much like syntactic sugar: they look very convenient to use, but that convenience is only on the surface; underneath are all the instructions that actually have to execute.

One high-level statement translates into more than one CPU instruction; a++, for example, becomes at least three:
Load a from memory into a register

Increment the register by 1

Write the result back to the cache or memory
So although we assume a++ cannot be interrupted, i.e. that it is atomic, in reality the CPU's time slice can expire after any single instruction. The context then switches to another thread, which also executes a++, and when the first thread is switched back in, the value of a is already wrong.

This is called the atomicity problem.
On top of that, the compiler or interpreter may change the order in which statements execute to optimize performance; this is called instruction reordering. The classic example is double-checked locking in the singleton pattern. The CPU will also execute out of order for efficiency: for example, while waiting for data to load from memory, if it finds that a later add instruction does not depend on the result of the earlier one, it executes the add first.
This is called the ordering problem.
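Since double-checked locking is named as the classic ordering example, here is its standard Java form. The key point is that the field must be volatile: without it, reordering inside new Singleton() can let another thread observe a non-null but not-yet-constructed object:

```java
public class Singleton {
    // volatile forbids reordering of "allocate memory / run constructor /
    // publish reference", so a reader can never see a half-built instance.
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock taken
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```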
So far we have traced concurrency bugs to their source: these three problems. And note that CPU caches, multi-core CPUs, high-level languages, and out-of-order execution are all here to stay, so we can only face the problems they create.

The remedies are disabling caching, forbidding compiler reordering, mutual exclusion, and so on; today's topic concerns mutual exclusion.

Mutual exclusion guarantees that modifications to a shared variable are exclusive, that is, only one thread executes at a time. And when mutual exclusion comes up, I am sure what comes to mind is the lock. Yes, our theme today is locks! Locks exist to solve the atomicity problem.
Lock
When locks come up, a Java programmer's first reaction is probably the synchronized keyword; it is, after all, supported at the language level. Let's look at synchronized first. Some people do not understand it well, and that leaves plenty of pitfalls.
Notes on synchronized
Let's first look at some code. It keeps raising our salary until, at last, the millions rain down, while another thread keeps comparing whether the two salaries are equal. A quick word on IntStream.rangeClosed(1, 1000000).forEach, in case it is unfamiliar: it is equivalent to a for loop of 1,000,000 iterations.

Take a moment to decide whether anything is wrong with it. At first glance, no: the raise runs on a single thread, and the comparing thread never changes the values, so there seems to be no contention for a shared resource; the fields are even marked volatile to guarantee visibility.
Now look at an excerpt of the results.

The very first log line printed is already wrong, and the same values keep getting printed! Surprised? Some people will subconsciously think: raiseSalary does the modifying, so it must be a thread-safety problem; just put a lock on raiseSalary!

Note, however, that only one thread ever calls the raiseSalary method, so locking raiseSalary alone achieves nothing.
This is in fact the atomicity problem mentioned above. Imagine the comparing thread evaluates yesSalary != yourSalary at the exact moment the raising thread has executed yesSalary++ but not yet yourSalary++. The comparison is then certain to be true, and that is why the log gets printed.

Moreover, because volatile guarantees visibility, by the time the log statement actually runs, yourSalary++ may already have executed, so the values printed satisfy yesSalary == yourSalary.

The simplest fix is to mark both raiseSalary() and compareSalary() as synchronized: the raising thread and the comparing thread can then never run at the same time, so the result is safe.
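The demo code itself is not shown above, so here is a rough reconstruction under the fix just described; the field names yesSalary and yourSalary come from the article, while everything else (loop sizes, the checker thread) is my own assumption:

```java
import java.util.stream.IntStream;

public class YesLockDemo {
    private volatile int yesSalary = 0;
    private volatile int yourSalary = 0;

    // Both methods lock the same instance, so a comparison can never run
    // between the two ++ operations of one raise.
    public synchronized void raiseSalary() {
        yesSalary++;
        yourSalary++;
    }

    public synchronized boolean compareSalary() {
        return yesSalary == yourSalary;
    }

    public static void main(String[] args) throws InterruptedException {
        YesLockDemo demo = new YesLockDemo();
        Thread checker = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                if (!demo.compareSalary()) {
                    System.out.println("mismatch!");   // never printed with both methods locked
                }
            }
        });
        checker.start();
        IntStream.rangeClosed(1, 1_000_000).forEach(i -> demo.raiseSalary());
        checker.join();
        System.out.println("final equal: " + demo.compareSalary());
    }
}
```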
Locks seem simple enough, but for beginners there is still a trap in using synchronized: you must pay attention to exactly what synchronized locks.

For example, suppose I switch to raising the salary from multiple threads. parallel() is used here; it runs on the ForkJoinPool thread pool, whose default thread count equals the number of CPU cores.

Because raiseSalary() is locked, the final result is correct. The synchronized here locks the yesLockDemo instance, and main creates only one instance, so the threads all compete for a single lock and the computed data is correct.
Now let me modify the code so that each thread raises the salary through its own yesLockDemo instance.

You'll wonder why the lock no longer works: the promised annual salary of a million has shrunk to 100k for me. Count yourself lucky you still got 700k.

The reason is that the lock now decorates a non-static method, making it an instance-level lock, and we created a separate instance per thread, so the threads are not competing for the same lock at all. The earlier multi-threaded code computed the right answer because every thread shared one instance and therefore one lock. To make this version correct, simply turn the instance-level lock into a class-level lock.
That's easy: make the method static; a synchronized static method is a class-level lock.

Another way is to synchronize on a static field, which is the recommended approach, since turning a non-static method into a static one amounts to changing the code's structure.
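A small sketch of the distinction, under assumed names (LockScopeDemo, classLocked, and so on are mine, not the article's):

```java
public class LockScopeDemo {
    private static long count = 0;
    private static final Object LOCK = new Object();   // recommended: class-wide lock field

    // Instance-level lock: the monitor is `this`, so threads holding
    // DIFFERENT instances do not exclude each other (the 100k-salary bug).
    public synchronized void instanceLocked() { count++; }

    // Class-level lock: the monitor is LockScopeDemo.class, shared by all
    // instances, so every caller competes for the same lock.
    public static synchronized void classLocked() { count++; }

    // Equivalent class-wide exclusion without making the method static.
    public void lockedOnStaticField() { synchronized (LOCK) { count++; } }

    public static long getCount() { return count; }

    static void join(Thread t) {
        try { t.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 100_000; j++) classLocked(); });
            ts[i].start();
        }
        for (Thread t : ts) join(t);
        System.out.println(count);   // 400000: one class-level lock covers all threads
    }
}
```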
To summarize: when using synchronized, pay attention to what is being locked. On a static field or static method it is a class-level lock; on a non-static field or non-static method it is an instance-level lock.
Lock granularity
You surely know the advice to avoid Hashtable in favor of ConcurrentHashMap: Hashtable is thread-safe, but far too crude, because all of its methods share one and the same lock! Look at its source: its methods are simply declared synchronized.

What does contains have to do with size? Why can't I call size while you call contains? The lock granularity is just too coarse. To raise concurrency while staying thread-safe, we should weigh letting different methods use different locks.
But different locks for different methods are still not enough, because within a single method some operations are actually thread-safe; only the code that touches contended resources needs the lock. This matters especially when the lock-free part is time-consuming: it holds the lock for a long time while other threads can only queue up, as in the following code.

Clearly the second version is the proper way to hold a lock. In real business code, though, it is rarely as obvious as the sleep in my example; sometimes you even need to reorder the code to make the lock granularity fine enough.
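The code referred to above is not reproduced, so here is a hedged stand-in making the same point; the 100 ms sleep plays the role of the slow work that is already thread-safe without a lock:

```java
public class FineGrainedDemo {
    private final Object lock = new Object();
    private int counter = 0;

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    // Coarse: the slow work is done while holding the lock, so every other
    // thread queues for the whole 100 ms even though only counter++ is shared.
    public void coarse() {
        synchronized (lock) {
            sleepQuietly(100);   // slow, lock-free-safe work held under the lock
            counter++;           // only this line touches shared state
        }
    }

    // Fine: do the slow work first; take the lock only around shared state.
    public void fine() {
        sleepQuietly(100);       // slow work outside the lock
        synchronized (lock) {
            counter++;
        }
    }

    public int getCounter() { return counter; }
}
```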
Sometimes, conversely, the lock needs to be coarse enough; the JVM detects such cases and optimizes them for us, as in the following code.

You can see that the logic in the method goes lock, run A, unlock, lock, run B, unlock, when obviously lock, run A, run B, unlock would suffice.

So during just-in-time compilation the JVM performs lock coarsening, widening the lock's scope, roughly as follows.
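A sketch of what coarsening amounts to; the names are illustrative, and the real transformation happens inside the JIT rather than in source code:

```java
public class CoarseningDemo {
    private final Object lock = new Object();
    private final StringBuilder log = new StringBuilder();

    // As written in source: lock - A - unlock - lock - B - unlock.
    public void beforeCoarsening() {
        synchronized (lock) { log.append("A"); }
        synchronized (lock) { log.append("B"); }
    }

    // What the JIT effectively produces: lock - A - B - unlock.
    public void afterCoarsening() {
        synchronized (lock) {
            log.append("A");
            log.append("B");
        }
    }

    public String getLog() { return log.toString(); }
}
```

Both versions produce the same result; the coarsened form simply pays the lock/unlock cost once instead of twice.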
The JVM also performs lock elimination: when escape analysis determines that an object is thread-private, it must be thread-safe, so the JVM ignores the locking operations on that object and calls the code directly.
Read-write lock
The read-write lock mentioned above reduces lock granularity by scenario, splitting one lock into a read lock and a write lock. It is particularly well suited to read-heavy, write-light situations, such as a cache implemented by ourselves.
ReentrantReadWriteLock
A read-write lock allows multiple threads to read the shared variable at the same time, but write operations are exclusive: write excludes write, and write excludes read. Put bluntly, while one thread is writing, no other thread may read or write.
Let's look at a small example, which contains one small but important detail. The code simulates reading from a cache: first take the read lock and look up the data in the cache; if the cache is empty, release the read lock, take the write lock, fetch the data from the database, stuff it into the cache, and return it.

The detail is checking data = getFromCache() for a value a second time: several threads may call getData() simultaneously while the cache is empty, and they will all compete for the write lock; only one gets it first and fills the cache.

The waiting threads then acquire the write lock one by one, and by the time each one gets it the cache is already populated, so there is no need to query the database again.
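The cache example itself is not reproduced above; the following is a sketch of the pattern just described, with the cache inlined as a plain map lookup and loadFromDb standing in for the real database call (both names are mine):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, String> cache = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String getData(String key) {
        rwLock.readLock().lock();            // many readers may hold this at once
        try {
            String data = cache.get(key);
            if (data != null) return data;
        } finally {
            rwLock.readLock().unlock();      // MUST release: the lock cannot be upgraded
        }
        rwLock.writeLock().lock();
        try {
            // Re-check: another thread may have filled the cache while we
            // were queuing for the write lock.
            String data = cache.get(key);
            if (data == null) {
                data = loadFromDb(key);      // stand-in for the real database query
                cache.put(key, data);
            }
            return data;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    private String loadFromDb(String key) {
        return "value-of-" + key;            // placeholder data
    }
}
```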
The usage paradigm for Lock is of course well known: pair it with try-finally to guarantee the unlock happens. One more important point about read-write locks: the lock cannot be upgraded. What does that mean? Suppose I change the code above to request the write lock while still holding the read lock; with ReentrantReadWriteLock, that thread blocks itself forever.
The read lock can, however, be acquired inside the write lock, achieving lock downgrade. Some may ask what use a read lock is to a thread that already holds the write lock.

It is useful: for example, a thread grabs the write lock, acquires a read lock at the end of its write operation, and then releases the write lock. Still holding the read lock, it is guaranteed to immediately see the data its write just produced, while other threads can now also read, because the write lock has been released.
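A minimal sketch of that downgrade sequence (the class and method names are mine):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int data = 0;

    public int writeThenRead() {
        rw.writeLock().lock();           // exclusive: perform the write
        int seen;
        try {
            data++;
            rw.readLock().lock();        // downgrade: take the read lock while still holding the write lock
        } finally {
            rw.writeLock().unlock();     // other readers may now proceed, but no writer can sneak in
        }
        try {
            seen = data;                 // guaranteed to observe our own write
        } finally {
            rw.readLock().unlock();
        }
        return seen;
    }
}
```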
At that point the write lock, a more domineering lock, is simply no longer needed, so downgrade it and let everyone read.

In summary: read-write locks suit read-heavy, write-light scenarios; they cannot be upgraded but can be downgraded; and Lock must be paired with try-finally to guarantee unlocking.
Incidentally, a word on how read-write locks are implemented. Those familiar with AQS will know its state field. A read-write lock splits this int in two: the high 16 bits record the read-lock state and the low 16 bits the write-lock state. What distinguishes it from an ordinary mutex is that it maintains these two states and treats them differently in the wait queue.

So in scenarios that do not suit a read-write lock, it is better to use a plain mutex directly, since the read-write lock additionally has to do bit shifts and checks on state.
StampedLock
A brief word on this one as well. Introduced in Java 8, it does not seem to show up as often as ReentrantReadWriteLock. It supports a write lock, a pessimistic read lock, and optimistic read. The write lock and the pessimistic read lock work essentially like those of ReentrantReadWriteLock; the addition is optimistic read.

From the analysis above we know that with a read-write lock, no writing can happen while reads are in progress, whereas StampedLock's optimistic read allows one thread to write. Optimistic read works like the database optimistic locking we know, where a version column is checked in SQL: the update only applies where the version still matches, and increments it on success.

StampedLock's optimistic read is similar; let's look at the basic usage.
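The usage shown in the article is not reproduced here; the sketch below follows the well-known tryOptimisticRead/validate pattern from the StampedLock documentation:

```java
import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();
        try { x += dx; y += dy; } finally { sl.unlockWrite(stamp); }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();   // no lock taken: just a stamp
        double curX = x, curY = y;             // read while writers are still allowed
        if (!sl.validate(stamp)) {             // a writer slipped in; our reads may be torn
            stamp = sl.readLock();             // fall back to the pessimistic read lock
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}
```

The optimistic path costs no locking at all; only when validate detects an intervening write does the reader retry under the pessimistic read lock.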
Compared with ReentrantReadWriteLock it is stronger in that respect, but weaker in others: StampedLock supports neither reentrancy nor condition variables. Also important: do not call interrupt on a thread blocked in StampedLock's non-interruptible acquires, as this can drive CPU usage to 100%. I ran the example given on the concurrent-programming website and reproduced it.

I won't go into the specific cause here; the link posted at the end of the article explains it in great detail.

So when something new and seemingly powerful appears, you need to truly understand it and become familiar with it before you can apply it where it fits.
CopyOnWrite
Copy-on-write is used in many places, for example the process fork() operation. It also helps at the business-code level: its reads do not block writes and its writes do not block reads, which suits read-heavy, write-light scenarios.

Take Java's CopyOnWriteArrayList. On hearing that this thing's thread-safe reads do not block writes, someone might think: great stuff, I'll use it!

But be clear about what copy-on-write means: a modification copies the data, so every change you make triggers an Arrays.copyOf inside CopyOnWriteArrayList and is then applied to the copy. If modifications are frequent and the copied data is large, that is a disaster!
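A small illustration of the snapshot behavior and the copying cost just described (the names are mine):

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static String snapshotRead() {
        List<Integer> list = new CopyOnWriteArrayList<>(new Integer[] {1, 2, 3});
        StringBuilder seen = new StringBuilder();
        Iterator<Integer> it = list.iterator();  // pins the backing array as it is NOW
        list.add(4);                             // triggers Arrays.copyOf internally;
                                                 // the iterator's snapshot is untouched
        while (it.hasNext()) seen.append(it.next());
        return seen + "/" + list.size();
    }

    public static void main(String[] args) {
        // The iterator sees the old snapshot "123" even though
        // the list now holds 4 elements: reads never block writes.
        System.out.println(snapshotRead());
    }
}
```

Every add() above copies the whole backing array before appending, which is exactly why heavy write traffic on a large list is disastrous.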
Concurrent security container
Finally, let's talk about using concurrency-safe containers, taking the relatively familiar ConcurrentHashMap as the example. New colleagues seem to think that as long as a concurrency-safe container is used, the code must be thread-safe. Not necessarily: it depends on how it is used.

Look at the following code first; it simply uses a ConcurrentHashMap to record everyone's salary, with at most 100 entries.

The final result overshoots: the map ends up recording more than 100 people. So how do we make it correct? Simple: add a lock.
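The salary-recording code is not shown above, so here is a hedged reconstruction of both the broken compound operation and the locked fix; SalaryBook and the method names are my own:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SalaryBook {
    private static final int LIMIT = 100;
    private final Map<String, Integer> salaries = new ConcurrentHashMap<>();

    // Broken: size() and put() are each thread-safe, but another thread can
    // put between our check and our put, so the map can grow past LIMIT.
    public void addUnsafe(String name, int salary) {
        if (salaries.size() < LIMIT) {
            salaries.put(name, salary);
        }
    }

    // Fixed: the whole check-then-put is one critical section.
    public synchronized void addSafe(String name, int salary) {
        if (salaries.size() < LIMIT) {
            salaries.put(name, salary);
        }
    }

    public int count() { return salaries.size(); }
}
```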
Seeing this, someone will say: if I have to add a lock anyway, why use ConcurrentHashMap? I could just lock a HashMap! And you would be right, because the usage here is a compound operation: we first check the map's size and then call put, and ConcurrentHashMap cannot guarantee that a compound operation is thread-safe!
ConcurrentHashMap is the right choice only when you stay within its exposed thread-safe methods, not for compound operations. For example, the following code.
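For instance, a sketch of staying within a single atomic ConcurrentHashMap call (WordCount and its methods are illustrative, not from the article); merge() performs the read-modify-write atomically inside the map, so no external lock is needed even under concurrent callers:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCount {
    private final Map<String, Long> counts = new ConcurrentHashMap<>();

    // One atomic map operation per update: safe without any external lock.
    public void record(String word) {
        counts.merge(word, 1L, Long::sum);
    }

    public long countOf(String word) {
        return counts.getOrDefault(word, 0L);
    }
}
```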
Admittedly my example is not ideal, because ConcurrentHashMap's performance advantage over HashMap-plus-lock comes from its finer-grained locking, which only shows with operations spread across many keys. But the point I want to highlight stands: you cannot be careless when using it, and you cannot assume that merely using it makes your code thread-safe.
At this point you should have a deeper understanding of this example-driven analysis of Java locks. Now go try it in practice!