This article introduces common Java multithreading questions. It is fairly detailed and has real reference value; interested readers are encouraged to read it through.
1. What's the use of multithreading?
A question that may sound ridiculous to many people: I already know how to use multithreading, so why ask what it is for? In my view, that attitude misses the point. As the saying goes, one should "know what it is and know why it is so": knowing how to use multithreading is only "knowing what it is"; understanding why to use it is "knowing why it is so". Only when you reach both levels can you be said to use a knowledge point with real ease. OK, here are my views on this question:
(1) Take advantage of multi-core CPUs
With the progress of the industry, laptops, desktops and even commercial application servers are at least dual-core, and 4-core, 8-core and even 16-core machines are not uncommon. A single-threaded program wastes 50% of a dual-core CPU and 75% of a 4-core CPU. The so-called "multithreading" on a single-core CPU is fake multithreading: the processor handles only one piece of logic at a time, but threads switch so quickly that it looks as if multiple threads are running "at the same time". Multithreading on a multi-core CPU is real multithreading: it lets multiple segments of your logic work simultaneously, truly exploiting the advantages of a multi-core CPU and making full use of it.
(2) Prevent blocking
From the perspective of program efficiency, a single-core CPU does not benefit from multithreading; on the contrary, running multiple threads on a single-core CPU causes thread context switching, which lowers the overall efficiency of the program. Yet we still apply multithreading on a single-core CPU, simply to prevent blocking. Imagine a single thread on a single-core CPU: if that thread blocks, say on a remote read where the peer never responds and no timeout was set, your entire program stops running until the data comes back. Multithreading prevents this problem: with multiple threads running, even if one thread blocks on a read, the execution of other tasks is not affected.
(3) Easier modeling
This is another, less obvious advantage. Suppose there is a large task A. With single-threaded programming there is a lot to consider, and building a model for the whole program is troublesome. But if you split this large task A into several small tasks, task B, task C and task D, build a model for each, and run them on separate threads, it becomes much easier.
2. How to create a thread?
A fairly common question, and there are generally two answers:
(1) inherit Thread class
(2) implement Runnable interface
As for which is better, the latter, needless to say, because implementing an interface is more flexible than extending a class and reduces coupling between components. Programming to interfaces is also at the core of the six principles of design patterns.
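As a small illustration of the two approaches (a minimal sketch; the class and variable names here are illustrative, not from the original article):

```java
// Approach (1): extend Thread and override run()
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

// Approach (2): implement Runnable and hand it to a Thread
public class CreateThreadDemo {
    public static void main(String[] args) {
        new MyThread().start();

        Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());
        new Thread(task).start();
    }
}
```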
3. What is the difference between start () method and run () method?
Only when the start() method is called does the code show multithreaded behavior, with the run() methods of different threads executing alternately. If you only call the run() method, the code executes synchronously: one thread must finish everything in its run() method before another thread can run its own.
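A minimal sketch of the difference (the thread names are illustrative):

```java
public class StartVsRunDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("executed by " + Thread.currentThread().getName());

        new Thread(task, "worker-1").run();   // synchronous: prints "executed by main"
        new Thread(task, "worker-2").start(); // asynchronous: prints "executed by worker-2"
    }
}
```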
4. What is the difference between Runnable interface and Callable interface?
A somewhat deeper question, and one that shows the breadth of a Java programmer's knowledge.
The run() method of the Runnable interface returns void; all it does is execute the code inside run(). The call() method of the Callable interface has a return value, a generic type, and can be used together with Future and FutureTask to obtain the result of asynchronous execution.
This is actually a very useful feature, because one of the important reasons multithreading is harder and more complex than single threading is that it is full of unknowns: has a thread executed? For how long? Has the data we expected been assigned by the time a thread finishes? There is no way to know; all we can do is wait for the multithreaded task to complete. Callable plus Future/FutureTask, on the other hand, lets us obtain the result of a multithreaded computation, and if the wait takes too long to return the data we need, we can cancel the thread's task, which is genuinely useful.
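A minimal sketch of Callable combined with FutureTask (the task body and values are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Callable<Integer> task = () -> {
            Thread.sleep(500);        // simulate some work
            return 42;                // the result of the asynchronous computation
        };

        FutureTask<Integer> futureTask = new FutureTask<>(task);
        new Thread(futureTask).start();

        // get() blocks until the result is ready; a timed get() or cancel(true) is also possible
        System.out.println("result = " + futureTask.get());
    }
}
```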
5. What is the difference between CyclicBarrier and CountDownLatch?
Two classes that look similar; both live under java.util.concurrent and both can be used to indicate that code has run to a certain point. The differences are:
(1) With CyclicBarrier, after a thread reaches the barrier point it stops and waits until all threads have reached that point, and only then do they continue. With CountDownLatch, a thread that reaches the point simply decrements the count by 1 and keeps running.
(2) CyclicBarrier can trigger only one follow-up task (its barrier action), while CountDownLatch can wake up multiple waiting tasks.
(3) CyclicBarrier is reusable, but CountDownLatch is not: once the count reaches 0, the CountDownLatch can no longer be used.
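A minimal sketch contrasting the two classes (the thread counts and messages are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

public class LatchBarrierDemo {
    public static void main(String[] args) throws InterruptedException {
        // CountDownLatch: the main thread waits until 3 workers have counted down
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();        // decrement the count and keep running
            }).start();
        }
        latch.await();                    // blocks until the count reaches 0
        System.out.println("all workers finished");

        // CyclicBarrier: each worker blocks at await() until all 3 have arrived
        CyclicBarrier barrier = new CyclicBarrier(3, () -> System.out.println("barrier tripped"));
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    barrier.await();      // stop here until every thread reaches this point
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```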
6. What is the function of volatile keyword?
A very important question that every Java programmer who learns and applies multithreading must master. The prerequisite for understanding what volatile does is understanding the Java memory model, which is not covered here; see point 31. The volatile keyword has two main functions:
(1) Multithreading revolves mainly around visibility and atomicity. A variable modified with the volatile keyword is guaranteed to be visible across threads, that is, every read of a volatile variable sees the latest data.
(2) What actually executes at the bottom is not as simple as the high-level Java code we see. Its execution path is roughly: Java code -> bytecode -> the corresponding C/C++ code executed by the JVM according to the bytecode -> that code compiled into assembly -> interaction with the hardware. In reality, the JVM may reorder instructions for better performance, which can cause unexpected problems under multithreading. volatile carries the semantics of forbidding instruction reordering, which also reduces the efficiency of code execution to a certain extent.
From a practical point of view, an important role of volatile is to combine with CAS to guarantee atomicity; see the classes under the java.util.concurrent.atomic package, such as AtomicInteger, for details.
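A minimal sketch of the visibility guarantee, using a volatile stop flag (the class and field names are illustrative):

```java
public class VolatileFlagDemo {
    // without volatile, the worker might never see the update made by the main thread
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("worker saw running = false and stopped");
        });
        worker.start();

        Thread.sleep(1000);
        running = false;      // the write is visible to the worker because the field is volatile
        worker.join();
    }
}
```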
7. What is thread safety?
Another theoretical question with many different answers. I will give the explanation I think is best: if your code always produces the same result whether it runs in a multithreaded or a single-threaded environment, then your code is thread safe.
It is worth mentioning that there are several levels of thread safety:
(1) immutable
Classes such as String, Integer and Long are final; no thread can change their value, only create a new one, so these immutable objects can be used directly in a multithreaded environment without any synchronization.
(2) absolute thread safety
Absolute thread safety means the caller never needs any additional synchronization, regardless of the runtime environment. Achieving this usually comes at a considerable extra cost. Most classes in Java that label themselves as thread-safe are in fact not absolutely thread safe, but absolutely thread-safe classes do exist, such as CopyOnWriteArrayList and CopyOnWriteArraySet.
(3) relative thread safety
Relative thread safety is what we usually call thread safety. Take Vector: its add and remove methods are atomic operations and will not be interrupted, but that is all. If one thread is traversing a Vector while another thread is adding to it, 99% of the time a ConcurrentModificationException will be thrown; this is the fail-fast mechanism.
(4) Not thread safe
There is nothing to say about this. ArrayList, LinkedList, HashMap and so on are all thread-unsafe classes.
8. How to get thread dump files in Java?
A thread dump is the best way to diagnose problems such as infinite loops, deadlocks, blocking and slow page loads. A thread dump is simply the thread stacks. Getting them takes two steps:
(1) Obtain the pid of the Java process: use the jps command, or ps -ef | grep java on Linux.
(2) Print the thread stacks: use the jstack pid command, or kill -3 pid on Linux.
In addition, the Thread class provides a getStackTrace() method that can also be used to obtain a thread stack. It is an instance method, so it is bound to a specific thread instance, and each call returns the stack that that particular thread is currently running.
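A minimal sketch of reading thread stacks from Java code (purely illustrative; in practice jstack is usually more convenient):

```java
import java.util.Map;

public class StackTraceDemo {
    public static void main(String[] args) {
        // stack of the current thread, via the instance method mentioned above
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            System.out.println(frame);
        }

        // stacks of all live threads, similar in spirit to what jstack prints
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            System.out.println("--- " + entry.getKey().getName());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    " + frame);
            }
        }
    }
}
```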
9. What happens if a thread has a run-time exception?
If the exception is not caught, the thread stops executing. Another important point: if the thread held an object's monitor, that monitor is released immediately.
10. How do I share data between two threads?
You can share an object between the threads and coordinate them with wait/notify/notifyAll or await/signal/signalAll for waiting and waking up. The blocking queue BlockingQueue, for example, is designed for sharing data between threads.
11. What's the difference between the sleep method and the wait method?
A frequently asked question. Both the sleep method and the wait method can give up the CPU for a period of time. The difference: if the thread holds an object's monitor, sleep does not release that monitor, while wait does.
12. What is the function of producer-consumer model?
It's a theoretical question, but it's important:
(1) It improves the operating efficiency of the whole system by balancing the production capacity of producers against the consumption capacity of consumers; this is its most important role.
(2) Decoupling, a secondary benefit of the producer-consumer pattern. Decoupling means producers and consumers have fewer connections between them, and the fewer the connections, the more independently each side can evolve without constraining the other.
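A minimal producer-consumer sketch based on BlockingQueue, as mentioned in question 10 (the queue size and counts are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);                 // blocks when the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println("consumed " + queue.take());  // blocks when the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```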
13. What is the use of ThreadLocal?
Simply put, ThreadLocal trades space for time. Each Thread maintains a ThreadLocal.ThreadLocalMap implemented with open addressing, which isolates data per thread; since the data is isolated rather than shared, there are no thread safety problems.
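A minimal sketch of a typical ThreadLocal use, giving each thread its own non-thread-safe SimpleDateFormat (the formatter choice is illustrative):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadLocalDemo {
    // each thread gets its own SimpleDateFormat, so the non-thread-safe formatter
    // can be used without any synchronization
    private static final ThreadLocal<SimpleDateFormat> FORMATTER =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            new Thread(() -> System.out.println(
                    Thread.currentThread().getName() + " -> " + FORMATTER.get().format(new Date()))
            ).start();
        }
    }
}
```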
14. Why must the wait() method and notify()/notifyAll() methods be called in a synchronized block?
This is enforced by the JDK: both the wait() method and the notify()/notifyAll() methods must acquire the object's lock before being called.
15. What's the difference between the wait () method and the notify () / notifyAll () method when abandoning the object monitor?
The difference is that the wait() method releases the object monitor immediately, whereas the notify()/notifyAll() methods release it only after the remaining code in the synchronized block has finished executing.
16. Why use thread pools?
To avoid frequently creating and destroying threads and to reuse thread objects. In addition, a thread pool lets you flexibly control the amount of concurrency according to the project's needs.
17. How do I detect whether a thread holds an object monitor?
I only learned of this when I saw a multithreading interview question online: the Thread class provides a holdsLock(Object obj) method that returns true if and only if the monitor of obj is held by a thread. Note that this is a static method, so "a thread" here means the current thread.
18. The difference between synchronized and ReentrantLock?
synchronized is a keyword, just like if, else, for and while, whereas ReentrantLock is a class; that is the essential difference between the two. Since ReentrantLock is a class, it offers more numerous and more flexible features than synchronized: it can be extended, it can have methods, and it can have all kinds of class variables. The extensibility of ReentrantLock over synchronized shows in several points:
(1) ReentrantLock can set the waiting time for acquiring locks to avoid deadlocks.
(2) ReentrantLock can obtain the information of various locks.
(3) ReentrantLock can flexibly implement multi-way notification.
In addition, the two differ in their locking mechanism: ReentrantLock ultimately calls Unsafe's park method to block, while synchronized, as far as I know, operates on the mark word in the object header, though I am not certain of that.
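A minimal sketch of two things synchronized cannot offer, a timed tryLock and lock information (the timeout value is illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // wait at most 1 second for the lock instead of blocking forever, which helps avoid deadlock
        if (LOCK.tryLock(1, TimeUnit.SECONDS)) {
            try {
                System.out.println("lock acquired, threads waiting: " + LOCK.getQueueLength());
            } finally {
                LOCK.unlock();    // always release in finally
            }
        } else {
            System.out.println("could not acquire the lock within 1 second");
        }
    }
}
```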
19. What is the concurrency of ConcurrentHashMap?
The concurrency level of ConcurrentHashMap is the number of segments, 16 by default, which means that up to 16 threads can operate on a ConcurrentHashMap at the same time. This is also ConcurrentHashMap's biggest advantage over Hashtable: under no circumstances can Hashtable ever let two threads fetch its data at the same time.
20. What is ReadWriteLock?
First of all, to be clear, it is not that ReentrantLock is bad; it just has limitations in some situations. ReentrantLock may be used to prevent data inconsistency caused by thread A writing data while thread B reads it. But if thread C and thread D are both just reading, reading does not change the data and there is no need for locking, yet the lock is still taken, which reduces the program's performance.
That is why the read-write lock ReadWriteLock was born. ReadWriteLock is a read-write lock interface, and ReentrantReadWriteLock is a concrete implementation of it that separates reads from writes: the read lock is shared and the write lock is exclusive; read-read is not mutually exclusive, while read-write, write-read and write-write are, which improves read and write performance.
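A minimal sketch of a read-write-locked cache (the class and field names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public String get(String key) {
        lock.readLock().lock();          // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        lock.writeLock().lock();         // exclusive: blocks both readers and writers
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```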
21. What is FutureTask?
As mentioned earlier, FutureTask represents a task for asynchronous computation. A FutureTask can be constructed with a concrete Callable implementation, and it lets you wait for the result of the asynchronous computation, check whether it has completed, cancel the task, and so on. Of course, since FutureTask also implements the Runnable interface, it can be submitted to a thread pool as well.
22. How to find which thread uses the CPU the longest in a Linux environment?
This is a more practical question, and I think quite a meaningful one. You can do the following:
(1) Get the pid of the project: jps, or ps -ef | grep java, as mentioned earlier.
(2) top -H -p pid; the order of the options must not be changed.
This prints the threads of the current project and the percentage of CPU each thread consumes. Note that what is shown here is the LWP, that is, the thread id of the operating system's native thread. I do not have a Java project deployed on Linux on my laptop, so I cannot include a screenshot; readers whose companies deploy projects on Linux can give it a try.
Using "top-H-p pid" + "jps pid", you can easily find a thread stack that takes up a thread with a high CPU, thus locating the reason for the high CPU, usually due to improper code operation that leads to a dead loop.
One last note: the LWP printed by "top -H -p pid" is decimal, while the native thread id printed by "jstack pid" is hexadecimal; after converting between them, you can locate the stack of the thread that is currently eating the CPU.
23. Write a Java program that leads to deadlock
The first time I saw this question I thought it was an excellent one. Many people know what a deadlock is: thread A and thread B wait for each other's lock, causing the program to hang indefinitely. But the knowledge often stops there; ask them to write a deadlocking program and they cannot. Put bluntly, that means they do not really understand what a deadlock is, because someone who only knows the theory will not spot deadlock problems in practice.
Truly understanding what a deadlock is does not take much; a few steps will do:
(1) The two threads hold two Object objects, lock1 and lock2, which serve as the locks for the synchronized blocks.
(2) In the run() method of thread 1, the synchronized block first acquires the object lock of lock1, then calls Thread.sleep(xxx); it does not need to be long, around 50 milliseconds, and then it acquires the object lock of lock2. The main purpose of the sleep is to prevent thread 1 from acquiring the locks of both lock1 and lock2 in one go.
(3) In the run() method of thread 2, the synchronized block first acquires the object lock of lock2 and then the object lock of lock1. By then, of course, the lock of lock1 is already held by thread 1, so thread 2 must wait for thread 1 to release it.
In this way, while thread 1 is "sleeping", thread 2 acquires the object lock of lock2; thread 1 then blocks when it tries to acquire the lock of lock2, and a deadlock is formed. The full code is not written out in the original article to save space; the article "Java multithreading 7: deadlock" contains an implementation of exactly these steps, and a sketch follows below.
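A minimal sketch of such a deadlock, following the three steps above (the class and lock names are illustrative):

```java
public class DeadlockDemo {
    private static final Object lock1 = new Object();
    private static final Object lock2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock1) {
                try { Thread.sleep(50); } catch (InterruptedException ignored) { }
                synchronized (lock2) {       // blocks forever: thread 2 already holds lock2
                    System.out.println("thread 1 acquired both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lock2) {
                try { Thread.sleep(50); } catch (InterruptedException ignored) { }
                synchronized (lock1) {       // blocks forever: thread 1 already holds lock1
                    System.out.println("thread 2 acquired both locks");
                }
            }
        }).start();
    }
}
```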
24. How to wake up a blocked thread?
If the thread is blocked because it called wait(), sleep() or join(), you can interrupt it, and it will be woken up by the thrown InterruptedException. If the thread is blocked on IO, there is essentially nothing you can do, because IO is implemented by the operating system and Java code has no way to touch the operating system directly.
25. How do immutable objects help multithreading?
As mentioned earlier, immutable objects ensure the memory visibility of objects, and the reading of immutable objects does not require additional synchronization means, which improves the efficiency of code execution.
26. What is multithreaded context switching?
Multi-threaded context switching refers to the process of switching CPU control from one running thread to another thread that is ready and waiting for CPU execution rights.
27. What happens if the thread pool queue is full when you submit the task?
It depends on the queue:
If you use an unbounded queue such as LinkedBlockingQueue, it does not matter much: tasks simply keep being added to the blocking queue to await execution, because LinkedBlockingQueue can be thought of as a nearly infinite queue that can hold tasks without limit.
If you use a bounded queue such as ArrayBlockingQueue, tasks are first added to the queue. Once the queue is full, the pool increases the number of threads up to the value of maximumPoolSize. If adding threads still cannot keep up and the ArrayBlockingQueue remains full, further tasks are handled by the rejection policy RejectedExecutionHandler; the default is AbortPolicy.
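A minimal sketch of a pool with a bounded queue and the default AbortPolicy (the pool sizes and task count are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60, TimeUnit.SECONDS,                 // keep-alive time for non-core threads
                new ArrayBlockingQueue<>(10),         // bounded queue
                new ThreadPoolExecutor.AbortPolicy()  // default reject policy: throw RejectedExecutionException
        );

        for (int i = 0; i < 20; i++) {
            final int id = i;
            try {
                pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " rejected");  // queue full and all threads busy
            }
        }
        pool.shutdown();
    }
}
```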
28. What is the thread scheduling algorithm used in Java?
Preemptive. After a thread uses up its CPU time, the operating system calculates an overall priority from data such as thread priority and thread starvation, and allocates the next time slice to some thread for execution.
29. What is the function of Thread.sleep (0)?
This question relates to the one above, so I put them together. Because Java uses preemptive thread scheduling, it may happen that one thread frequently obtains control of the CPU. To let lower-priority threads get CPU time too, you can call Thread.sleep(0) to manually trigger the operating system's time-slice allocation, which is a way of balancing control of the CPU.
30. What is spin?
Much of the code inside synchronized blocks is very simple and executes very quickly. Blocking the waiting threads may not be worthwhile in that case, because blocking a thread involves switching between user mode and kernel mode. Since the code inside synchronized runs so fast, it may be better to let the thread waiting for the lock not block but instead do a busy loop at the boundary of the synchronized block; this is spinning. If after many busy loops the lock has still not been acquired, blocking then may be the better overall strategy.
31. What is the Java memory model?
The Java memory model defines a specification for multithreaded access to Java memory. The complete explanation of the Java memory model is not clear in a few sentences here. Let me briefly summarize several parts of the Java memory model:
(1) The Java memory model divides memory into main memory and working memory. Shared state, that is, variables shared between threads, is stored in main memory. Whenever a Java thread uses such a variable, it reads it from main memory and keeps a copy in its own working memory; while running its code, the thread operates on the copy in its working memory. After the thread's code has executed, the latest value is written back to main memory.
(2) several atomic operations are defined to manipulate variables in main memory and working memory.
(3) the usage rules of volatile variable are defined.
(4) It defines happens-before, the "happens-before" principle: a set of rules under which operation A must happen before operation B. For example, within the same thread, code earlier in the control flow happens before code later in the control flow, and an unlock action on a lock happens before a subsequent lock action on the same lock, and so on. As long as one of these rules is satisfied, no additional synchronization is needed; if a piece of code satisfies none of the happens-before rules, it must be thread-unsafe.
32. What is CAS?
CAS, in full Compare And Swap, that is, compare-and-swap. Suppose three operands: the memory value V, the old expected value A and the new value B. If and only if the expected value A equals the memory value V is the memory value set to B and true returned; otherwise nothing is done and false is returned. Of course, CAS must work together with volatile variables, so that the value fetched each time really is the latest value in main memory; otherwise the old expected value A would, for a given thread, stay stuck at some stale value, and once a CAS operation failed it would never succeed.
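A minimal sketch of CAS through AtomicInteger.compareAndSet (the values are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // succeeds: the current value equals the expected value 5, so it becomes 10
        boolean swapped = value.compareAndSet(5, 10);
        System.out.println(swapped + ", value = " + value.get());   // true, value = 10

        // fails: the current value is 10, not the expected 5, so nothing changes
        swapped = value.compareAndSet(5, 20);
        System.out.println(swapped + ", value = " + value.get());   // false, value = 10
    }
}
```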
33. What are optimistic locks and pessimistic locks?
(1) Optimistic lock: as the name suggests, it is optimistic about the thread-safety problems that concurrent operations can cause. It assumes contention does not always occur, so it does not hold a lock; it treats compare-and-swap as a single atomic operation to try to modify the variable in memory, and if that fails it indicates a conflict, for which there should be corresponding retry logic.
(2) Pessimistic lock: again as the name suggests, it is pessimistic about the thread-safety problems that concurrent operations can cause. It assumes contention will always occur, so every time it operates on a resource it holds an exclusive lock, just like synchronized: no questions asked, lock the resource directly.
34. What is AQS?
Simply put, AQS stands for AbstractQueuedSynchronizer, which translates to abstract queued synchronizer.
If CAS is the foundation of java.util.concurrent, then AQS is the core of the whole Java concurrency package; ReentrantLock, CountDownLatch, Semaphore and others all use it. AQS links all the entries together in the form of a doubly linked queue. In ReentrantLock, for example, every waiting thread is placed in an entry and the entries are linked into a doubly linked queue; only after the thread ahead has finished with the ReentrantLock does the first entry in the queue actually start running.
AQS defines all the operations on this doubly linked queue and opens only the tryAcquire and tryRelease methods to developers, who can override them to implement their own concurrency utilities.
35. Thread safety of singleton mode?
First, to be clear, thread safety of the singleton pattern means that an instance of the class is created only once even in a multithreaded environment. There are many ways to write a singleton; let me summarize:
(1) The eager (hungry-style) singleton: thread safe
(2) The lazy singleton: not thread safe
(3) The double-checked locking singleton: thread safe
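A minimal sketch of the double-checked locking variant, which also relies on volatile as discussed in question 6:

```java
public class Singleton {
    // volatile forbids the instruction reordering inside "instance = new Singleton()"
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                    // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {            // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```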
36. What is the function of Semaphore?
Semaphore is a semaphore whose role is to limit the concurrency on a block of code. Semaphore has a constructor that takes an int n, meaning that at most n threads may access a given piece of code at the same time; beyond n, a thread must wait until some thread finishes executing that block before the next one may enter. From this you can see that if the int passed to the Semaphore constructor is 1, it effectively becomes a synchronized.
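A minimal sketch of Semaphore limiting concurrency (the permit count and sleep are illustrative):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // at most 3 threads may be inside the guarded section at the same time
    private static final Semaphore SEMAPHORE = new Semaphore(3);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    SEMAPHORE.acquire();
                    System.out.println(Thread.currentThread().getName() + " entered");
                    Thread.sleep(500);            // simulate work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    SEMAPHORE.release();
                }
            }).start();
        }
    }
}
```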
37. Why do you synchronize when there is only one statement "return count" in the size () method of Hashtable?
This used to puzzle me; I wonder whether you have thought about it too. If a method contains multiple statements all operating on the same class variable, then not locking it in a multithreaded environment is bound to cause thread safety problems; that is easy to understand. But why lock the size() method when it contains only one statement?
Over time, while working and studying, I came to see two main reasons:
(1) Only one thread at a time can execute the synchronized methods of a given class instance, but its non-synchronized methods can be accessed by multiple threads simultaneously. So a problem can arise: thread A may be executing Hashtable's put method to add data while thread B calls the size() method to read the current number of elements, and the value read may not be up to date; thread A may have finished adding, but without synchronization thread B would read a stale size. With synchronization added to size(), thread B can call size() only after thread A has finished its put call, which guarantees thread safety.
(2) The CPU executes machine code, not Java code; this is crucial and must be remembered. Java code is eventually translated into machine code, which is what really interacts with the hardware. Even if you see only one line of Java code, and even if the bytecode compiled from it is only one instruction, it does not mean that at the bottom there is only one operation for that statement. Suppose "return count" is translated into three assembly instructions, each with its corresponding machine code; it is entirely possible for the thread to be switched out after the first instruction has executed.
38. Which thread calls the constructor and static block of a thread class?
A very subtle and tricky question. Remember: the constructor and static block of a thread class are called by the thread in which that thread class is new-ed, while the code in the run() method is called by the thread itself.
If that sounds confusing, here is an example. Suppose Thread1 is new-ed inside Thread2, and Thread2 is new-ed in the main function. Then:
(1) The constructor and static block of Thread2 are called by the main thread, and Thread2's run() method is called by Thread2 itself.
(2) The constructor and static block of Thread1 are called by Thread2, and Thread1's run() method is called by Thread1 itself.
39. Which is the better choice, a synchronized method or a synchronized block?
A synchronized block, because the code outside the block executes without holding the lock, which is more efficient than synchronizing the entire method. Remember one principle: the smaller the scope of synchronization, the better.
With that said, although a smaller synchronization scope is generally better, the Java virtual machine also has an optimization called lock coarsening, which enlarges the scope of synchronization. It is useful in cases like StringBuffer, a thread-safe class whose most commonly used append() method is synchronized. When we write code that appends to the same StringBuffer repeatedly, it means repeated locking and unlocking, which is bad for performance because the thread keeps switching between kernel mode and user mode. Therefore the Java virtual machine coarsens the lock over the repeated append calls, extending the lock to cover the whole sequence of append operations and turning it into one large synchronized block, which reduces the number of lock/unlock operations and effectively improves execution efficiency.
40. How do businesses with high concurrency and short task execution time use thread pools? How do businesses with low concurrency and long task execution time use thread pools? How to use thread pool for businesses with high concurrency and long execution time?
This is a question I saw on a concurrent-programming website, and I put it last in the hope that everyone reads and thinks about it, because it is very good, very practical and very professional. My personal views on it are:
(1) For businesses with high concurrency and short task execution time, set the number of threads in the pool to the number of CPU cores + 1 to reduce thread context switching.
(2) Businesses with low concurrency and long task execution time need to be distinguished:
a) If the long-running part of the business is concentrated in IO operations, that is, IO-intensive tasks, then since IO does not occupy the CPU, do not let the CPU sit idle: increase the number of threads in the pool so the CPU can handle more business.
b) If the long-running part is concentrated in computation, that is, compute-intensive tasks, there is not much to be done: as in (1), keep the number of threads in the pool small to reduce thread context switching.
(3) For high concurrency with long execution time, the key is not the thread pool but the overall architecture design. The first step is to see whether some of the data used by the business can be cached, and the second is to add servers; for the thread pool settings, refer to other articles about thread pools. Finally, the long execution time itself may need analysis, to see whether middleware can be used to split and decouple the tasks.