What are the common concurrency and multithreading interview questions in Java fundamentals?

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Not sure how to answer the common concurrency and multithreading questions that come up in Java interviews? Many inexperienced candidates aren't. This article collects the questions together with their answers; hopefully it helps you work through them.

01 what is a thread?

A thread is the smallest unit that the operating system can schedule. It is contained in a process and is the actual unit of execution within the process; multithreading lets a process make progress on several tasks at once.

02 what is thread safety and thread unsafety?

Thread safety:

With multithreaded access, a locking mechanism is used: when one thread accesses some data of the class, that data is protected, and other threads cannot access it until the first thread is done with it. No dirty or corrupted data results. Vector uses synchronized methods to achieve thread safety, while the otherwise similar ArrayList is not thread-safe.

Thread unsafe:

Without protection on data access, multiple threads may modify the data one after another, producing dirty data. Thread-safety problems arise around global and static variables. If every thread only reads a global or static variable, it is generally thread-safe; if multiple threads write it concurrently, thread synchronization must be considered, otherwise thread safety may be compromised.
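A minimal sketch of the difference (the class name, counter names, and iteration counts are illustrative choices, not from the article): two counters incremented by four threads, one protected by synchronized, one not.

```java
public class CounterDemo {
    static int unsafeCount = 0;        // plain read-modify-write: not atomic
    static int safeCount = 0;
    static final Object lock = new Object();

    static void increment() {
        unsafeCount++;                 // may lose updates under contention
        synchronized (lock) {          // only one thread at a time in here
            safeCount++;
        }
    }

    // Runs `threads` threads, each incrementing `perThread` times;
    // returns the final value of the synchronized counter.
    public static int run(int threads, int perThread) {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return safeCount;
    }

    public static void main(String[] args) {
        System.out.println("safe = " + run(4, 10_000));  // always 40000
        System.out.println("unsafe = " + unsafeCount);   // often less than 40000
    }
}
```

The synchronized counter always reaches the expected total; the unprotected one usually falls short because concurrent `++` operations overwrite each other.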

03 what is a spin lock?

A spin lock is a low-level synchronization mechanism in SMP (symmetric multiprocessing) architectures.

When thread A wants to acquire a spin lock that is held by another thread, thread A spins in a loop, repeatedly checking whether the lock has become available.

Points to note about spin locks:


Since the CPU is not yielded while spinning, a thread holding a spin lock should release it as soon as possible; otherwise the threads waiting for the lock will just spin there, wasting CPU time.

A thread holding a spin lock should release it before sleeping, so that other threads can acquire the lock.

Spinning in the JVM also consumes CPU: if the notifying side does not call doNotify for a long time, the thread in doWait keeps spinning, and CPU consumption becomes excessive.

Spin locks are better suited to cases where the lock is held only briefly; there the spin lock is more efficient.

Spin locking is quite effective on multiprocessors, and is essentially pointless on single-processor preemptive systems.

04 what is CAS?


CAS is short for compare and swap.

CAS calls the CPU's cmpxchg assembly instruction directly, bypassing JVM-level locking, by means of JNI (Java Native Interface, Java's mechanism for native calls).

Using the CPU's CAS instruction, together with JNI, Java's non-blocking algorithms are implemented and atomic operations are realized; other atomic operations are built with similar techniques.

The whole of java.util.concurrent is built on top of CAS, so J.U.C delivers a significant performance improvement over synchronized blocking algorithms.

CAS is an optimistic-locking technique. When multiple threads try to CAS-update the same variable at the same time, only one of them succeeds in updating the value; all the others fail, but a failed thread is not suspended: it is told that it lost the race and may try again.

Using CAS degrades program performance badly when thread contention is heavy; CAS only suits situations with little contention.
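The classic CAS usage pattern is a retry loop on an atomic variable. A minimal sketch with `AtomicInteger.compareAndSet` (the class name and the increment of 10 are illustrative choices):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private final AtomicInteger value = new AtomicInteger(0);

    // CAS retry loop: read, compute, attempt to swap; retry on conflict.
    public int addTen() {
        while (true) {
            int current = value.get();             // expected original value (A)
            int next = current + 10;               // new value (B)
            if (value.compareAndSet(current, next)) {
                return next;                       // swap succeeded
            }
            // another thread won the race; loop and try again
        }
    }

    public static void main(String[] args) {
        CasDemo d = new CasDemo();
        System.out.println(d.addTen()); // 10
        System.out.println(d.addTen()); // 20
    }
}
```

Note that the losing thread is never blocked; it simply re-reads the current value and retries, which is exactly why CAS suffers under heavy contention.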

synchronized has been optimized since JDK 1.6. Its underlying implementation relies on a lock-free queue; the basic idea is to spin first and then block, and to keep competing for the lock after a context switch. This slightly sacrifices fairness but gains throughput. With little thread contention, its performance is similar to CAS; under heavy contention, it performs far better than CAS.

05 what are optimistic locks and pessimistic locks?

1. Pessimistic lock

Before JDK 1.5, Java relied on the synchronized keyword for synchronization, coordinating access to shared state through an exclusive lock protocol: whichever thread holds the lock on a shared variable accesses it exclusively. An exclusive lock is in fact a pessimistic lock, so synchronized can be called a pessimistic lock.

2. Optimistic lock

Optimistic locking (Optimistic Locking) is really an idea. In contrast to a pessimistic lock, an optimistic lock assumes that in the common case there will be no conflict, so conflicts are only actually detected when the data is updated. If a conflict is found, error information is returned to the user, and the user decides what to do. Memcached, for example, uses its cas command as an optimistic-locking technique to ensure data consistency.

06 what is AQS?

1. AbstractQueuedSynchronizer, AQS for short, is a framework for building locks and synchronizers. Many classes in the concurrent package are built on AQS, such as ReentrantLock, Semaphore, CountDownLatch, ReentrantReadWriteLock, FutureTask, and so on. AQS takes care of a large amount of design detail when implementing a synchronizer.

2. AQS uses a FIFO queue to represent the threads queuing for the lock. The head node of the queue is called the "sentinel node" or "dummy node"; it is not associated with any thread. The other nodes are each associated with a waiting thread, and every node maintains a wait status, waitStatus.

07 what is an atomic operation? What are the atomic classes in the Java Concurrency API?


An atomic operation is a unit of work that cannot be interfered with by other operations. Atomic operations are a way to avoid data inconsistency in a multithreaded environment.

int++ is not an atomic operation, so after one thread reads the value and adds 1, another thread may still read the old value, which leads to errors.

To solve this problem, we must ensure that the increment is atomic; before JDK 1.5 we had to use synchronization techniques to do this.

Since JDK 1.5, the java.util.concurrent.atomic package provides wrapper classes for the int and long types (AtomicInteger, AtomicLong) that automatically guarantee their operations are atomic, without requiring synchronization.

08 what is the Executors framework?

Java provides four thread pools through Executors, which are:

newCachedThreadPool creates a cacheable thread pool: if the pool grows beyond what processing requires, idle threads are flexibly reclaimed; if no thread can be reused, a new one is created.

newFixedThreadPool creates a fixed-size thread pool, which caps the number of concurrent threads; excess tasks wait in a queue.

newScheduledThreadPool creates a thread pool that supports scheduled and periodic task execution.

newSingleThreadExecutor creates a single-threaded pool, which uses one worker thread only, guaranteeing all tasks execute in the specified order (FIFO, LIFO, priority).
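A quick sketch of using the four factories (the class name, pool sizes, task bodies, and delay are illustrative choices, not from the article):

```java
import java.util.concurrent.*;

public class PoolsDemo {
    public static int demo() {
        try {
            ExecutorService cached = Executors.newCachedThreadPool();
            ExecutorService fixed  = Executors.newFixedThreadPool(4);
            ScheduledExecutorService sched = Executors.newScheduledThreadPool(2);
            ExecutorService single = Executors.newSingleThreadExecutor();

            Future<Integer> f = fixed.submit(() -> 21 + 21);  // Callable task
            ScheduledFuture<String> s =
                sched.schedule(() -> "later", 50, TimeUnit.MILLISECONDS);

            int answer = f.get();      // blocks until the task completes
            String word = s.get();     // blocks until the scheduled task runs

            cached.shutdown(); fixed.shutdown(); sched.shutdown(); single.shutdown();
            return answer;             // 42
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 42
    }
}
```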

09 what is a blocking queue? How to make the blocking queue implement the producer-consumer model?

1. JDK 7 provides seven blocking queues (they also count as concurrent containers):

ArrayBlockingQueue: a bounded blocking queue backed by an array.

LinkedBlockingQueue: a bounded blocking queue backed by a linked list.

PriorityBlockingQueue: an unbounded blocking queue that supports priority ordering.

DelayQueue: an unbounded blocking queue, implemented with a priority queue, that supports delayed elements.

SynchronousQueue: a blocking queue that stores no elements.

LinkedTransferQueue: an unbounded blocking queue backed by a linked list.

LinkedBlockingDeque: a double-ended blocking queue backed by a linked list.

2. Concept: a blocking queue is a queue that supports two additional operations on top of an ordinary queue.

3. The two additional operations:

3.1 A blocking insert: when the queue is full, the queue blocks the inserting thread until the queue is no longer full.

3.2 A blocking removal: when the queue is empty, the thread trying to take an element waits until the queue becomes non-empty.
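The two blocking operations make the producer-consumer pattern almost trivial. A minimal sketch with `ArrayBlockingQueue` (class name, capacity of 2, and the summing task are illustrative choices):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    // put() blocks when the queue is full; take() blocks when it is empty.
    public static int sumOf(int n) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // capacity 2
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) queue.put(i);   // blocks if full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        final int[] sum = {0};
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) sum[0] += queue.take(); // blocks if empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start(); consumer.start();
        try { producer.join(); consumer.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum[0];
    }

    public static void main(String[] args) {
        System.out.println(sumOf(100)); // 5050
    }
}
```

Neither side needs any explicit locking or signaling; the queue handles all coordination.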

10 what are Callable and Future?

1. Callable and Future are an interesting pair; we need them when we want to get the result of a thread's execution. Callable produces the result, and Future retrieves it.

2. Callable uses a generic type parameter to define its return type. The Executors class provides useful methods to execute Callable tasks in a thread pool. Because Callable tasks run in parallel, we must wait for the results they return. The java.util.concurrent.Future object solves this problem.

3. After a Callable task is submitted, the thread pool returns a Future object; through it we can learn the state of the Callable task and obtain the execution result the Callable returns. Future provides the get() method to wait for the Callable to finish and fetch its result.
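A minimal sketch of the submit/get flow (the class name and the factorial task are illustrative choices, not from the article):

```java
import java.util.concurrent.*;

public class CallableDemo {
    public static long factorial(int n) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // The Callable produces the result...
        Callable<Long> task = () -> {
            long f = 1;
            for (int i = 2; i <= n; i++) f *= i;
            return f;
        };
        // ...and the Future retrieves it.
        Future<Long> future = pool.submit(task);
        try {
            return future.get();          // blocks until the Callable finishes
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```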

11 what is FutureTask?

1. FutureTask lets you obtain an execution result asynchronously, or cancel the task. Wrap a Runnable or Callable in a FutureTask, call its run method directly or submit it to a thread pool for execution, and afterwards obtain the result asynchronously through FutureTask's get method. FutureTask is therefore well suited to time-consuming computations: the main thread can fetch the result after finishing its own work. In addition, FutureTask guarantees that even if run is called multiple times, the underlying Runnable or Callable task executes only once; execution can also be cancelled through cancel, and so on.

2. FutureTask also works when multiple threads might execute the same task concurrently: it ensures the task is executed only once, avoiding duplicated work and repeated lock acquisition under concurrency.

12 what are synchronized containers and concurrent containers?

Synchronization container:

1. The main representatives are Vector and Hashtable, as well as Collections.synchronizedXxx.

2. The lock granularity is the entire current object.

3. Their iterators are fail-fast: if modification is detected during iteration, a ConcurrentModificationException is thrown.

Concurrency container:

1. The main representatives are ConcurrentHashMap, CopyOnWriteArrayList, ConcurrentSkipListMap and ConcurrentSkipListSet.

2. The lock granularity is dispersed and fine-grained; for example, reads and writes use different locks.

3. Their iterators are weakly consistent: they tolerate concurrent modification and do not throw ConcurrentModificationException.

ConcurrentHashMap uses segmented-locking (lock-striping) technology. In a synchronized container there is one container and one lock; in ConcurrentHashMap, the array underlying the hash table is split into several segments, each maintaining its own lock, to achieve efficient concurrent access.

13 what is multithreaded context switching?

1. Multithreading: concurrency technology, in software or hardware, for running multiple threads.

2. Benefits of multithreading:

Tasks that would otherwise tie up the program can be handled in the background, for example downloading files while the interface stays responsive; multithreading takes advantage of multi-core processors, and concurrent execution makes the system run faster and more smoothly, giving a better user experience.

3. Disadvantages of multithreading:

A large number of threads reduces code readability; more threads need more memory space; and thread safety must be considered when multiple threads compete for the same resource.

4. Multithreaded context switching:

The CPU cycles through tasks using a time-slice allocation algorithm: after the current task has run for one time slice, the CPU switches to the next task. The state of the outgoing task is saved before the switch, so that its state can be reloaded the next time the CPU switches back to that task.

14 the design concept and implementation of ThreadLocal?

The ThreadLocal class in Java lets us create variables that can only be read and written by the owning thread. So even if two threads execute the same code, and that code holds a reference to a ThreadLocal variable, each thread sees only its own copy of the variable.

Concept: thread-local variables. In concurrent programming, a plain member variable is not thread-safe: every thread operates on the same variable, which is clearly unsafe, and as we know, the volatile keyword cannot guarantee thread safety either. In some scenarios we need a variable that is logically "the same variable", yet where each thread gets its own independently initialized copy. This is exactly where ThreadLocal fits, for example the database connection in a DAO: the DAO is a singleton, so its Connection property is not a thread-safe variable, and every thread needs its own Connection rather than sharing one. In such cases, ThreadLocal solves the problem nicely.

Principle: essentially, each thread maintains a map whose key is the ThreadLocal instance and whose value is the value we set. Whenever a thread calls get, it reads the value from its own map; since it reads only its own copy, there can be no thread-safety problem. Broadly speaking, the ThreadLocal instance itself never changes state: it merely acts as a key, and additionally supplies an initial value for each thread.

Implementation: each Thread object maintains a ThreadLocalMap, a map keyed by ThreadLocal instances, in which the per-thread values are stored.
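A minimal sketch of per-thread copies with a shared initial value (the class name, counter, and bump counts are illustrative choices):

```java
public class ThreadLocalDemo {
    // Each thread gets its own copy, starting from the same initial value (0).
    private static final ThreadLocal<Integer> counter =
        ThreadLocal.withInitial(() -> 0);

    public static int bump(int times) {
        for (int i = 0; i < times; i++) counter.set(counter.get() + 1);
        return counter.get();
    }

    public static int demo() {
        int[] other = new int[1];
        Thread t = new Thread(() -> other[0] = bump(5));  // its own copy: 5
        t.start();
        int mine = bump(3);                               // this thread's copy: 3
        try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return mine * 100 + other[0];                     // 305
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 305
    }
}
```

The two threads increment "the same" variable, yet neither sees the other's updates, because each operates on its own entry in its own ThreadLocalMap.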

15 how is a thread pool (ThreadPool) used, and what are its advantages?

Advantages of ThreadPool:

It reduces the number of threads created and destroyed: each worker thread can be reused to execute multiple tasks, and the pool size can be tuned to the system's capacity, preventing the server from going down through excessive memory consumption (each thread needs roughly 1MB of memory; the more threads opened, the more memory consumed, until the process finally crashes).

-reduces the time spent creating and destroying threads, and the overhead on system resources

-without a thread pool, the system may create a huge number of threads and run out of system memory

The top-level interface of the Java thread pool is Executor, but strictly speaking Executor is not a thread pool: it is just a tool for executing tasks. The real thread-pool interface is ExecutorService.

When the number of threads is less than corePoolSize, a new thread is created to execute the task.

When the number of threads equals corePoolSize and the workQueue is not full, the task is put into the workQueue.

When the number of threads equals corePoolSize and the workQueue is full, new tasks cause new threads to be created, up to a total of maximumPoolSize threads.

When the total number of threads equals maximumPoolSize and the workQueue is full, the handler's rejectedExecution method is invoked: that is, the rejection policy.
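The four rules above can be exercised deterministically with a tiny `ThreadPoolExecutor`. In this sketch (class name, sizes core=1/max=2/queue=1, and the latch trick are illustrative choices), the first task occupies the core thread, the second fills the queue, the third spawns the non-core thread, and the fourth is rejected:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolRulesDemo {
    // corePoolSize=1, maximumPoolSize=2, queue capacity=1:
    // task 1 -> core thread, task 2 -> queue, task 3 -> extra thread,
    // task 4 -> rejected (threads == max and queue full).
    public static int rejectedCount() {
        AtomicInteger rejected = new AtomicInteger();
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 2, 0L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1),
            (r, e) -> rejected.incrementAndGet());   // custom rejection policy

        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        pool.execute(blocker);   // runs on the core thread
        pool.execute(blocker);   // waits in the queue
        pool.execute(blocker);   // spawns a second (non-core) thread
        pool.execute(blocker);   // rejected

        release.countDown();     // let the blocked workers finish
        pool.shutdown();
        try { pool.awaitTermination(5, TimeUnit.SECONDS); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return rejected.get();   // 1
    }

    public static void main(String[] args) {
        System.out.println(rejectedCount()); // 1
    }
}
```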

16 other classes in the concurrent package: ArrayBlockingQueue, CountDownLatch, and so on.

1. ArrayBlockingQueue: a bounded blocking queue backed by an array.

2. CountDownLatch allows one or more threads to wait for other threads to complete their work. join, by contrast, makes the current thread wait for the joined thread's execution to finish; its implementation keeps checking whether the joined thread is alive and, if so, keeps the current thread waiting.
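A minimal CountDownLatch sketch (class name, worker count, and the summing work are illustrative choices): the main thread blocks in await() until every worker has called countDown().

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    // The calling thread waits until all workers have counted down.
    public static int total(int workers) {
        CountDownLatch done = new CountDownLatch(workers);
        AtomicInteger sum = new AtomicInteger();
        for (int i = 1; i <= workers; i++) {
            final int part = i;
            new Thread(() -> {
                sum.addAndGet(part);   // simulate a unit of work
                done.countDown();      // signal completion
            }).start();
        }
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum.get();              // all workers are guaranteed finished here
    }

    public static void main(String[] args) {
        System.out.println(total(5)); // 15
    }
}
```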

17 what's the difference between synchronized and ReentrantLock?

Basics:

1. Reentrant locks. A reentrant lock is one that the same thread can acquire multiple times. Both ReentrantLock and synchronized are reentrant locks.

2. Interruptible locks. An interruptible lock is one where a thread can respond to interruption while trying to acquire the lock. synchronized is not interruptible, whereas ReentrantLock provides interruptible acquisition. Fair and unfair locks. With a fair lock, when multiple threads try to acquire the same lock, the lock is granted in the order in which the threads arrived; an unfair lock allows threads to "jump the queue". synchronized is an unfair lock; ReentrantLock's default implementation is also unfair, but it can be configured as a fair lock.

3. CAS operations (CompareAndSwap). A CAS operation is simply compare-and-exchange. It takes three operands: a memory location (V), the expected original value (A), and the new value (B). If the value at the memory location matches the expected value, the processor atomically updates the location to the new value; otherwise it does nothing. In either case, it returns the value that was at the location before the CAS instruction. CAS in effect says: "I think location V should contain the value A; if it does, put B there; otherwise, don't change the location, just tell me its current value."

4. synchronized: synchronized is a built-in Java keyword that provides an exclusive style of locking. The acquisition and release of synchronized locks are handled by the JVM; users do not need to release the lock explicitly. However, synchronized also has its limitations:

When a thread tries to acquire the lock and fails, it blocks.

If the thread holding the lock sleeps or blocks, other threads can do nothing but wait for it.

5.ReentrantLock:

ReentrantLock is the API-level mutual-exclusion lock provided since JDK 1.5; it requires explicit lock() and unlock() calls, paired with a try/finally block.

Waiting can be interrupted or bounded, which helps avoid deadlock: tryLock with a timeout waits for the given time if another thread holds the lock, returning true if the lock is acquired during the wait and false if the wait times out.

Fair and unfair locks: with a fair lock, when multiple threads wait for the same lock, the lock must be granted in the order in which it was requested. synchronized is an unfair lock. ReentrantLock's default constructor creates an unfair lock; passing true to the constructor creates a fair lock, but fair locks do not perform very well.
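A minimal sketch of the lock()/unlock()-in-finally idiom plus the timed tryLock (class name, counter, and the 100ms timeout are illustrative choices):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock(); // unfair by default
    private int count = 0;

    public void increment() {
        lock.lock();           // acquire
        try {
            count++;           // critical section
        } finally {
            lock.unlock();     // always release in finally
        }
    }

    // tryLock with a timeout: returns false instead of blocking forever.
    public boolean tryIncrement() {
        try {
            if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                try { count++; return true; } finally { lock.unlock(); }
            }
            return false;      // timed out waiting for the lock
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;      // interrupted while waiting: interruptible acquisition
        }
    }

    public int get() { return count; }

    public static void main(String[] args) {
        LockDemo d = new LockDemo();
        d.increment();
        d.tryIncrement();
        System.out.println(d.get()); // 2
    }
}
```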

18 what is Semaphore used for?

Semaphore is a counting semaphore; its role is to limit the number of threads that can concurrently execute a block of code.
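A minimal sketch (class name, permit count, thread count, and the peak-tracking trick are illustrative choices): with 3 permits, no more than 3 threads are ever inside the guarded block at once.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class SemaphoreDemo {
    // Allows at most `permits` threads inside the block at once;
    // returns the maximum concurrency actually observed.
    public static int maxConcurrency(int permits, int threads) {
        Semaphore sem = new Semaphore(permits);
        AtomicInteger inside = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                try {
                    sem.acquire();                        // take a permit (blocks if none left)
                    try {
                        int now = inside.incrementAndGet();
                        peak.accumulateAndGet(now, Math::max);
                        Thread.sleep(10);                 // simulated work
                        inside.decrementAndGet();
                    } finally {
                        sem.release();                    // return the permit
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println(maxConcurrency(3, 10) <= 3); // true
    }
}
```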

19 what is the Lock interface in the Java Concurrency API? What advantages does it have over synchronized?

1. Compared with synchronized methods and blocks, Lock provides more extensible locking operations. It allows more flexible structures, can have quite different properties, and can support multiple associated Condition objects.

2. Its advantages are:

It can make locking fairer; it lets a thread respond to interruption while waiting for a lock; it lets a thread merely try to acquire a lock, returning immediately or waiting a bounded time if the lock cannot be acquired; and it allows locks to be acquired and released in different scopes and in different orders.

20 why is Hashtable's size() method synchronized when it contains only the single statement "return count"?

1. At any moment, only one thread can execute the synchronized methods of a given instance, but non-synchronized methods can be accessed by multiple threads at once. So a problem can arise: thread A may be in the middle of Hashtable's put method adding data, while thread B can still call an unsynchronized size() method and read the current element count. The value B reads may then not be the latest: thread A may have added data that B's size() does not yet reflect, so the size thread B reads may be inaccurate.

2. With synchronization added to size(), thread B can call size() only after thread A has finished its put call, which guarantees thread safety.

21 what is the concurrency level of ConcurrentHashMap?

1. Mechanism (partitioning idea): it introduces "segment locking", which you can picture as splitting one big Map into N segments, using key.hashCode() to decide which segment a key is put into. It provides the same thread safety, but efficiency is up to N times higher, 16 times by default.

2. Applicable scenario: reads far outnumber writes. It is well suited to caches that are initialized when the program starts and then accessed by many threads.

3. Hash conflicts:

Background: HashMap calls hashCode() to compute a hash code. Because two different objects in Java may have the same hashCode, different keys can map to the same bucket, which produces hash conflicts. Resolution: replace the linked list with a balanced tree; when the number of elements hashing to the same bucket exceeds a threshold, the bucket's linked list is converted into a balanced tree.

4. Lock-free reads: one reason ConcurrentHashMap has good concurrency is that reads are (mostly) unlocked while writes are locked, and the write locking is segmented (not over all entries, only over some of them).

In JDK 1.6, a read first checks the count field, which is declared volatile (when a variable is volatile, every modification is written back to main memory; thanks to the cache-coherence mechanism of multiprocessors, the other processors discover that their cached copy of that memory address is stale, invalidate the cache line, and are forced to fetch the latest data from main memory). This is what makes lock-free reads possible.

5. The concurrency level of ConcurrentHashMap is the number of segments operating in parallel, 16 by default, meaning that up to 16 threads can operate on a ConcurrentHashMap at the same time. This is ConcurrentHashMap's biggest advantage over Hashtable.

22 ReentrantReadWriteLock read-write lock?

1. A read-write lock is split into a read lock and a write lock. Multiple read locks are not mutually exclusive, but a read lock and a write lock are mutually exclusive, as are two write locks. The lock implementation in the JVM controls all of this itself; you only need to apply the appropriate lock.

2. If your code only reads data, many readers can read it at the same time, but no one may write concurrently: take the read lock.

3. If your code modifies data, only one writer may write at a time and no one may read concurrently: take the write lock. In short, lock for reading with the read lock, and lock for writing with the write lock!
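A minimal read-write-locked cache sketch (the class name and the HashMap-backed cache are illustrative choices, not from the article):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    public String get(String key) {
        rw.readLock().lock();          // many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rw.writeLock().lock();         // exclusive: no readers, no other writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RwCache c = new RwCache();
        c.put("k", "v");
        System.out.println(c.get("k")); // v
    }
}
```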

23 what is the difference between CyclicBarrier and CountDownLatch?

Both CyclicBarrier and CountDownLatch live under the java.util.concurrent package. The key difference: a CountDownLatch's counter can be used only once, and it lets one or more threads wait for other threads to finish counting down; a CyclicBarrier's counter can be reset and reused, and it makes a group of threads wait for one another at a barrier point.

24 what is LockSupport?

LockSupport is a low-level JDK class used to create basic thread-blocking primitives for locks and other synchronization utility classes. AQS (AbstractQueuedSynchronizer), the core of Java's lock-and-synchronizer framework, blocks and wakes threads by calling LockSupport.park() and LockSupport.unpark().

25 the Condition interface and its implementation principle?

In the java.util.concurrent package there are two very useful utility classes, Condition and ReentrantLock; readers will know that ReentrantLock (a reentrant lock) is the exclusive-lock implementation provided by the JDK's concurrent package.

We know that while holding a lock, a thread may need to block and wait for a signal, giving up the lock in the meantime so that other threads can compete for it.

With synchronized, we use Object's wait() and notify() methods to achieve this waiting and waking.

But how do you implement such wait and notify with Lock? The answer is Condition. Studying Condition mainly prepares us to read the source code of BlockingQueue and ConcurrentHashMap later, and to understand ReentrantLock more deeply.
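A classic Condition sketch is a bounded buffer with two conditions on one lock, where await()/signal() play the roles of wait()/notify() (the class name, ArrayDeque backing store, and capacity are illustrative choices):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition(); // producers wait here
    private final Condition notEmpty = lock.newCondition(); // consumers wait here

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(int x) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) notFull.await(); // like wait()
            items.addLast(x);
            notEmpty.signal();                                // like notify()
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) notEmpty.await();
            int x = items.removeFirst();
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }

    public static int demo() {
        try {
            BoundedBuffer b = new BoundedBuffer(2);
            b.put(1);
            b.put(2);
            return b.take(); // 1
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1
    }
}
```

Having two separate conditions on one lock is exactly what wait/notify on a single monitor cannot express.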

26 understanding of the Fork/Join framework?

1. Fork splits a big task into several subtasks.

2. Join merges the execution results of those subtasks, finally producing the result of the original task.
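A minimal Fork/Join sketch with RecursiveTask (the class name, threshold, and the summing job are illustrative choices): compute splits the range in half, forks one half, computes the other, and joins.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final long THRESHOLD = 1_000;
    private final long from, to;

    public SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {            // small enough: compute directly
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left  = new SumTask(from, mid);  // fork: split into subtasks
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                             // run the left half asynchronously
        return right.compute() + left.join();    // join: merge the results
    }

    public static long sum(long n) {
        return new ForkJoinPool().invoke(new SumTask(1, n));
    }

    public static void main(String[] args) {
        System.out.println(sum(100_000)); // 5000050000
    }
}
```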

27 what is the difference between wait () and sleep ()?

1. sleep() is a static method of the thread class (Thread). It puts the thread to sleep and yields the execution opportunity to other threads; when the sleep time is over, the thread becomes ready and competes with other threads for CPU time again.

Because sleep() is a static method, it does not operate on any object's monitor lock. When sleep() is called inside a synchronized block, the thread sleeps but the object's monitor lock is not released, so other threads still cannot access the object.

2. wait() is a method of the Object class. When a thread executes the wait method, it enters the wait set associated with that object and releases the object's monitor lock, so that other threads can access the object. A waiting thread can be woken up via the notify or notifyAll methods.
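A minimal wait/notify sketch (the class name, the ready flag, and the value 42 are illustrative choices): the waiter releases the monitor inside wait(), which is exactly what lets the signaling thread enter the synchronized block.

```java
public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;
    private int result = 0;

    // Blocks (releasing the monitor) until another thread calls signal().
    public int awaitResult() {
        synchronized (lock) {
            while (!ready) {                 // guard against spurious wakeups
                try { lock.wait(); }         // releases the lock while waiting
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
            return result;
        }
    }

    public void signal(int value) {
        synchronized (lock) {                // must hold the monitor to notify
            result = value;
            ready = true;
            lock.notifyAll();                // wake up the waiting threads
        }
    }

    public static int demo() {
        WaitNotifyDemo d = new WaitNotifyDemo();
        new Thread(() -> d.signal(42)).start();
        return d.awaitResult();              // 42
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 42
    }
}
```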

28 the five states of a thread (new, ready, running, blocked, dead)?

A thread normally passes through five states: new, ready, running, blocked, and dead.

First, the new state: a thread object has been created, but its start method has not yet been called, so the thread is in the new state.

Second, the ready state: once the thread object's start method is called, the thread is ready to run, but the thread scheduler has not yet made it the current thread. A thread also returns to the ready state after running, when it comes back from waiting or sleeping. Third, the running state: the thread scheduler selects a thread in the ready state as the current thread; the thread then enters the running state and starts executing the code in its run method.

Fourth, the blocked state: a running thread is paused, usually to wait for some moment or event (for example, a resource becoming ready), after which it can continue running. Methods such as sleep, suspend, and wait can cause a thread to block.

Fifth, the dead state: when a thread's run method finishes executing, or its stop method is called, the thread dies. A dead thread cannot be made ready again by calling start.

29 what is the difference between start () method and run () method?

1. The start() method starts a new thread; this is what truly achieves multithreaded execution.

2. Calling run() directly is tantamount to calling an ordinary method: the caller must wait for the run() method body to finish before it can execute any subsequent code, so there is still only a single execution path. That has none of the characteristics of multithreading, so to execute code on multiple threads you must call the start() method, not the run() method.
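The difference is easy to observe by recording which thread actually executed the body (class name and the worker names are illustrative choices):

```java
public class StartVsRun {
    // Returns the names of the threads that actually executed each body.
    public static String[] demo() {
        String[] names = new String[2];
        Thread t1 = new Thread(() -> names[0] = Thread.currentThread().getName(), "worker-1");
        t1.run();    // ordinary method call: executes in the CURRENT thread
        Thread t2 = new Thread(() -> names[1] = Thread.currentThread().getName(), "worker-2");
        t2.start();  // starts a NEW thread that executes the body
        try { t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return names;
    }

    public static void main(String[] args) {
        String[] n = demo();
        System.out.println(n[0].equals("worker-1")); // false: ran on the caller's thread
        System.out.println(n[1]);                    // worker-2
    }
}
```

After `t1.run()`, the recorded name is the caller's thread, not "worker-1"; only `start()` actually moves execution onto the new thread.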

30 what is the difference between the Runnable interface and the Callable interface?

The run() method of the Runnable interface returns void; all it does is execute the code in run().

The call() method of the Callable interface has a return value, whose type is a generic parameter. Combined with Future or FutureTask, it can be used to obtain the result of asynchronous execution.

31 how does the volatile keyword work?

Multithreading revolves around two properties, visibility and atomicity. A variable modified with the volatile keyword is guaranteed to be visible across threads: every read of a volatile variable sees the most recently written value.

The execution of code is not as simple as the high-level Java program we see. It goes Java code -> bytecode -> the JVM executing the bytecode -> machine/assembly instructions -> interaction with the hardware. In reality, to obtain better performance the JVM may reorder instructions, and under multithreading this can cause unexpected problems. volatile forbids such reordering (via memory barriers), which of course sacrifices some execution efficiency.

32 how do you get a thread dump in Java?

Looking at a thread dump is the best way to diagnose problems such as infinite loops, deadlocks, blocking, and slow page loads. A thread dump is simply the set of thread stacks; obtaining it takes two steps:

First obtain the Java process's pid with the jps command; in a Linux environment you can also use ps -ef | grep java.

Then print the thread stacks with the jstack pid command; in a Linux environment you can also use kill -3 pid.

In addition, the Thread class provides a getStackTrace() method for obtaining a thread's stack. It is an instance method, so it is bound to a specific thread instance, and each call returns the stack of that thread as it currently runs.

33 what's the difference between a thread and a process?

A process is the basic unit of resource allocation in the system, and it has an independent memory address space.

A thread is the basic unit of CPU scheduling and execution; it has no separate address space, but has its own stack, local variables, registers, program counter, and so on.

Creating a process is expensive: it includes creating a virtual address space and requires a large amount of system resources.

Creating a thread is cheap: basically just a kernel object and a stack.

A process cannot directly access the resources of another process; multiple threads within the same process share the process's resources.

Process switching is expensive, thread switching is cheap; likewise, inter-process communication is expensive and inter-thread communication is cheap.

A thread belongs to a process and cannot exist on its own. Every process has at least one thread, which becomes the main thread.

34 what are the four ways to create threads?

Inherit the Thread class and override the run method

Implement the Runnable interface, override its run method, and pass an instance of the implementing class as the target argument of the Thread constructor

Implement the Callable interface and create the Thread through a FutureTask wrapper

Create a thread through a thread pool
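All four ways in one minimal sketch (class name, the letters appended, and the sequential joins are illustrative choices made to keep the output deterministic):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class FourWays {
    public static String demo() {
        try {
            StringBuilder out = new StringBuilder();

            // 1. Extend Thread and override run
            Thread t1 = new Thread() {
                @Override public void run() { out.append("A"); }
            };
            t1.start(); t1.join();

            // 2. Implement Runnable and pass it as the Thread's target
            Thread t2 = new Thread(() -> out.append("B"));
            t2.start(); t2.join();

            // 3. Implement Callable, wrap it in a FutureTask, run it on a Thread
            FutureTask<String> task = new FutureTask<>(() -> "C");
            new Thread(task).start();
            out.append(task.get());

            // 4. Use a thread pool
            ExecutorService pool = Executors.newFixedThreadPool(1);
            out.append(pool.submit(() -> "D").get());
            pool.shutdown();

            return out.toString(); // "ABCD"
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // ABCD
    }
}
```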

35 how should a business with high concurrency and short task-execution times configure its thread pool? What about a business with low concurrency and long task-execution times? And one with both high concurrency and long execution times?

For businesses with high concurrency and short task-execution times: set the number of threads in the pool to the number of CPU cores + 1, to reduce context switching between threads.

For businesses with low concurrency and long task-execution times, two cases should be distinguished:


If the business time is concentrated on IO operations, i.e. IO-intensive tasks: IO operations do not occupy the CPU, so do not let the CPU sit idle; increase the number of threads in the pool so the CPU handles more work.

If the business time is concentrated on computation, i.e. CPU-intensive tasks: handle it as in (1), keeping the number of threads in the pool small to reduce context switching.

For high concurrency and long execution time, the key lies not in the thread pool but in the overall architecture design. First see whether some of the business data can be cached and whether servers can be added; for the thread pool itself, refer to the settings in (2). Finally, the long execution time may also need to be analyzed, to see whether middleware can be used to split and decouple the tasks.

36 What happens if the thread pool queue is full when you submit a task?

1. If you use an unbounded queue such as LinkedBlockingQueue, nothing special happens: tasks keep being added to the blocking queue to wait for execution, because LinkedBlockingQueue can be regarded as practically unbounded and can store tasks without limit.

2. If you use a bounded queue such as ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; when it is full, the pool grows the number of threads up to maximumPoolSize, and if the tasks still cannot be handled, the rejection policy RejectedExecutionHandler deals with the overflow. The default policy is AbortPolicy.
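A small sketch of the bounded-queue case (the pool sizes, sleep duration, and class name are made up for illustration): with 1 core thread, a maximum of 2 threads, and a queue of capacity 1, the fourth task overflows and the default AbortPolicy throws RejectedExecutionException:

```java
import java.util.concurrent.*;

public class RejectionDemo {
    static volatile boolean rejected = false;

    public static void main(String[] args) throws InterruptedException {
        // 1 core thread, max 2 threads, bounded queue of size 1
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy()); // the default policy

        Runnable slow = () -> {
            try { Thread.sleep(500); } catch (InterruptedException e) { }
        };
        pool.execute(slow); // runs on the core thread
        pool.execute(slow); // queued
        pool.execute(slow); // queue full -> second (non-core) thread created
        try {
            pool.execute(slow); // queue full, max threads reached -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
            System.out.println("task rejected");
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Swapping AbortPolicy for CallerRunsPolicy, DiscardPolicy, or DiscardOldestPolicy changes what happens to the fourth task.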

37 Lock levels: method lock, object lock, class lock

Method lock (when synchronized modifies the method):

Declare the synchronized method by adding the synchronized keyword to the declaration.

A synchronized member method controls access to the class's member variables.

Each class instance corresponds to one lock, and each synchronized method must acquire the lock of the instance on which it is called, otherwise the calling thread blocks. The lock is released only when the method returns, after which a blocked thread can acquire it and re-enter the runnable state. This mechanism ensures that, for a given instance at any moment, at most one of its member functions declared synchronized is executing, thus effectively avoiding access conflicts on class member variables.

Object locks (synchronized decorations or code blocks):

When an object has synchronized methods or synchronized blocks, calling such a method or entering such a block requires first obtaining the object lock. If the object's lock is already held by another caller, you must wait for it to be released. (A method lock is also an object lock.)

All objects in Java contain a mutex, which is acquired and released automatically by the JVM. A thread acquires the object's lock when it enters a synchronized method; if another thread already holds that lock, the current thread waits. When the synchronized method returns normally or throws an exception, the JVM automatically releases the object's lock. This also reflects one benefit of synchronized: when an exception is thrown, the lock is still released automatically by the JVM.

Class locks (synchronized modifies static methods or blocks of code):

No matter how many times a class is instantiated, its static methods and static variables have only one copy in memory. Therefore, once a static method is declared synchronized, all instances of the class share the same lock when calling this method; we call this a class lock.

Object locks control synchronization among instance methods, and class locks control synchronization among static methods (or mutual exclusion on static variables).
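The three lock forms described above can be written as follows (the class name and counters are purely illustrative):

```java
public class LockLevels {
    private int count = 0;
    private static int staticCount = 0;

    // Method lock: a synchronized instance method locks `this`
    public synchronized void increment() { count++; }

    // Object lock: a synchronized block locks the object in parentheses
    private final Object mutex = new Object();
    public void incrementWithBlock() {
        synchronized (mutex) { count++; }
    }

    // Class lock: a static synchronized method locks LockLevels.class
    public static synchronized void incrementStatic() { staticCount++; }

    // Equivalent class lock written as a block
    public static void incrementStaticWithBlock() {
        synchronized (LockLevels.class) { staticCount++; }
    }

    public int getCount() { return count; }
    public static int getStaticCount() { return staticCount; }
}
```

Because the object lock and the class lock are different monitors, increment() and incrementStatic() can run at the same time on two threads without blocking each other.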

38 What happens if a thread inside a synchronized block throws an exception?

If the synchronized method returns normally or throws an exception, JVM will automatically release the object lock.

39 What's the difference between concurrent programming (concurrency) and parallel programming (parallelism)?

Explanation 1: parallelism means two or more events happen at the same instant; concurrency means two or more events happen within the same time interval.

Explanation 2: parallelism is multiple events on different entities; concurrency is multiple events on the same entity.

Explanation 3: concurrency is handling multiple tasks "simultaneously" on a single processor (by interleaving), while parallelism is handling multiple tasks on multiple processors at the same time, as in a Hadoop distributed cluster. The goal of concurrent programming is to make full use of every core of the processor to achieve the highest processing performance.

40 How do you ensure the result of i++ is correct under multithreading?

volatile only guarantees the visibility of data: you read the latest value, but atomicity is not guaranteed.

Atomicity is guaranteed by AtomicInteger.

synchronized guarantees both the visibility of shared variables and the atomicity of the operations inside the lock.
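A small sketch contrasting a plain i++ with AtomicInteger (the iteration count and class name are chosen for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    static int plain = 0;                             // not thread-safe
    static final AtomicInteger atomic = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                              // read-modify-write: lost updates possible
                atomic.incrementAndGet();             // atomic CAS-based increment
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("plain  = " + plain);      // often less than 200000
        System.out.println("atomic = " + atomic.get()); // always 200000
    }
}
```

plain++ compiles to a read, an add, and a write, and two threads can interleave between those steps; incrementAndGet() performs the whole update as one atomic operation.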

41 What happens if an exception occurs in a thread?

If the exception is not caught, the thread stops executing.

Another important point: if the thread holds an object's monitor, the monitor is released immediately.

42 how do I share data between two threads?

Share objects between threads, then wait and wake up with wait/notify/notifyAll or await/signal/signalAll; blocking queues such as BlockingQueue are designed precisely for sharing data between threads.

43 What is the role of the producer-consumer pattern?

Balancing the production capacity of producers against the consumption capacity of consumers to improve the overall throughput of the system is the most important role of the producer-consumer pattern.

Decoupling is an additional benefit of the producer-consumer pattern: there are fewer connections between producers and consumers, and the fewer the connections, the more independently each side can evolve without constraining the other.
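The pattern can be sketched with a BlockingQueue as the shared buffer (the queue capacity, item count, and names are illustrative):

```java
import java.util.concurrent.*;

public class ProducerConsumer {
    static volatile int consumedSum;

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(5); // bounded buffer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 10; i++) queue.put(i);  // blocks when full
            } catch (InterruptedException ignored) { }
        });
        Thread consumer = new Thread(() -> {
            try {
                int sum = 0;
                for (int i = 0; i < 10; i++) sum += queue.take(); // blocks when empty
                consumedSum = sum;
                System.out.println("consumed sum = " + sum);      // 55
            } catch (InterruptedException ignored) { }
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```

The bounded queue is what balances the two sides: a fast producer blocks on put() when the buffer is full, and a fast consumer blocks on take() when it is empty.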

44 how do I wake up a blocked thread?

If the thread is blocked by calling the wait(), sleep(), or join() method, you can interrupt it; the InterruptedException that is thrown wakes it up.

suspend and resume: Java has deprecated suspend(), because suspend() pauses the thread without releasing any lock resources, so other threads cannot access the locks it holds. The suspended thread can only continue after the corresponding resume() is executed, and only then can the threads blocked on its locks proceed. However, if resume() happens to run before suspend(), the thread stays suspended while still holding its locks, causing a deadlock. Moreover, the state of a suspended thread is still Runnable.

wait and notify: wait and notify must be used together with synchronized, because the lock must be held before calling them. wait releases the lock immediately, whereas notify releases the lock only after the synchronized block finishes executing.

await and signal: provided by the Condition class; a Condition object is obtained via new ReentrantLock().newCondition(). Usage is the same as with wait and notify: the lock must be held before the await and signal methods are called.

park and unpark: LockSupport is a flexible thread-blocking utility that lets a thread block anywhere. Compared with Thread.suspend(), it fixes the case where a thread can never resume because resume() ran first. Compared with Object.wait(), it neither requires acquiring an object's lock first nor throws InterruptedException, and it can wake a specified thread. Note: if a thread is blocked on IO, it cannot be awakened this way, because IO is implemented by the operating system and Java code has no way to reach the operating system directly.
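A minimal park/unpark sketch (the class name is illustrative); note that unpark may even be called before park, because it grants a permit that the next park consumes:

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    static volatile boolean resumed = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            System.out.println("worker parking");
            LockSupport.park();              // blocks without needing any lock
            resumed = true;
            System.out.println("worker resumed");
        });
        worker.start();
        Thread.sleep(100);                   // give the worker time to park first
        LockSupport.unpark(worker);          // wake that specific thread
        worker.join();
    }
}
```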

45 What thread scheduling algorithm is used in Java?

Preemptive. After a thread uses up its CPU time slice, the operating system computes a total priority based on thread priority, thread starvation, and other data, and allocates the next time slice to some thread.

46 Is the singleton pattern thread-safe?

First of all, thread safety of the singleton pattern means that only one instance of the class is ever created in a multithreaded environment. There are many ways to write a singleton; to summarize:

(1) The eager (hungry-style) singleton: thread-safe

(2) The lazy singleton: not thread-safe unless synchronization is added

(3) The double-checked-locking singleton: thread-safe
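A double-checked-locking sketch; the volatile modifier is what makes the second check safe (it prevents another thread from seeing a half-constructed instance due to instruction reordering):

```java
public class Singleton {
    // volatile forbids the reordering that could publish a half-built object
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                    // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {            // second check, under the class lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

After the first initialization, callers take only the lock-free fast path, which is why this form is preferred over synchronizing the whole getInstance() method.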

47 By which thread are the constructor and static block of a thread class called?

The constructor and static block of a thread class are executed by the thread that news the thread class; the code in the run method is executed by the new thread itself.

48 Which is the better choice, a synchronized method or a synchronized block?

A synchronized block is usually the better choice because it does not have to lock the entire object (although you can make it do so). A synchronized method locks the entire object, so if the class has multiple unrelated synchronized blocks, threads will often stop and wait to acquire the object's lock even when they do not actually conflict.

synchronized(this) and non-static synchronized methods can only prevent multiple threads from simultaneously executing the synchronized code of the same object instance.

If you want one lock to cover multiple objects, you can lock a fixed shared object, or lock the class's Class object.

synchronized locks the object in the parentheses, not the code. For a non-static synchronized method, the lock is the object itself, i.e. this; for a static synchronized method, the lock is the class's Class object.

49 how to detect deadlocks? How to prevent deadlock?

Concept: deadlock is the phenomenon in which two or more processes, while executing, wait for each other because they are competing for resources; without external intervention, none of them can make progress. The system is then said to be in a deadlock state.

Four necessary conditions for deadlocks:

Mutual exclusion condition: a process does not allow other processes to access a resource allocated to it; other processes wanting the resource can only wait until the owning process finishes and releases it.

Hold-and-wait condition: after a process has obtained some resources, it requests other resources that may be held by other processes; the request blocks, but the process keeps the resources it already holds.

No-preemption condition: resources a process has obtained cannot be taken away before it finishes using them; they can only be released by the process itself when it is done.

Circular-wait condition: after deadlock occurs, the processes form a circular chain in which each waits for a resource held by the next.

Causes of deadlock:

Competition for resources: when the number of shared resources in the system is insufficient to satisfy the needs of all processes, competition for the resources leads to deadlock.

Processes advancing in an improper order can also produce deadlock.

Check for deadlocks:

Use two containers, one holding the locks the thread is currently requesting and the other holding the locks the thread already holds, and run the following checks before each lock is acquired:

1. Detect whether the lock currently being requested is already held by other threads; if so, find those threads.

2. Traverse the threads returned in step 1 and check whether any lock held by the current thread is being requested by one of them. If so, a deadlock has occurred.

Deadlock release and prevention:

Prevent deadlock by making sure the four necessary conditions cannot all hold at once.

50 What does HashMap need to pay attention to in a multithreaded environment?

Pay attention to the infinite-loop problem: a put that triggers HashMap resizing can cause threads to spin in an infinite loop under multithreaded concurrency.

1. HashMap is not thread-safe. Hashtable is thread-safe but inefficient, because Hashtable uses synchronized everywhere and all threads compete for a single lock. ConcurrentHashMap is both thread-safe and efficient, because it contains a segment array that stores data in segments and assigns a lock to each segment of data, the so-called lock-segmentation technique.

2. Why HashMap is not thread safe:

A put with the same key from two threads can cause one thread's value to be overwritten.

Multiple threads expand at the same time, resulting in data loss

Resizing by multiple threads can form a circular structure in the Node linked list, causing an infinite loop in .next() and driving CPU usage close to 100%.

3. Therefore, ConcurrentHashMap is the best choice in multithreaded environments.

51 What is a daemon thread? What daemon threads are there?

A daemon thread is a service thread; precisely speaking, it serves other threads, and that is all it does. The threads it serves are called user threads. So there are exactly two kinds of threads in Java:

1. Daemon threads, for example the garbage-collection thread, the most typical daemon thread.

2. User threads, i.e. the application's own custom threads.

52 How do you make threads execute serially?

a. To control the order in which threads execute, for example a ThreadA -> ThreadB -> ThreadC -> ThreadA loop, we need to determine the order of waking and waiting. We can implement this with Obj.wait() and Obj.notify() together with synchronized(Obj).

Because of this dependency, each thread must wait for the previous one to release the object lock, which guarantees the execution order of the threads.

b. Normally, wait means that after acquiring the object lock, the thread voluntarily releases it and sleeps until another thread calls notify() on the object, after which it can reacquire the object lock and continue executing. notify() is a wake-up operation on a thread waiting for the object lock. Note, however, that after notify() is called, the object lock is not released immediately; it is released when the corresponding synchronized(){} block ends. Once the lock is released, the JVM randomly selects one of the threads waiting (in wait()) for that object lock, grants it the lock, wakes it, and lets it continue.
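The wait/notify token approach described above can be sketched like this (the names and the fixed A→B→C order are illustrative):

```java
public class SerialPrinter {
    private static final Object lock = new Object();
    private static int turn = 0; // 0 -> A, 1 -> B, 2 -> C
    static final StringBuilder order = new StringBuilder();

    static Thread worker(String name, int myTurn) {
        return new Thread(() -> {
            synchronized (lock) {
                try {
                    while (turn != myTurn) lock.wait(); // wait (releases the lock) until it is our turn
                    order.append(name);
                    System.out.println(name);
                    turn++;
                    lock.notifyAll();                   // wake the waiting threads; the next turn proceeds
                } catch (InterruptedException ignored) { }
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = worker("A", 0), b = worker("B", 1), c = worker("C", 2);
        c.start(); b.start(); a.start(); // start order does not matter
        a.join(); b.join(); c.join();    // always prints A, B, C
    }
}
```

Each thread checks the shared turn token inside the synchronized block; wait() releases the lock so the thread whose turn it is can enter.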

53 Can you kill a thread while it is running?

a. No. A thread has five states: new, runnable, running, blocked, and dead.

b. A thread ends its life cycle only when its run method (or the main method, for the main thread) finishes, or when an uncaught exception is thrown.

54 About synchronized

Among all the synchronized methods of one object, only one thread can be executing any of these methods at a given moment.

If the method is synchronized, the synchronized keyword means locking the current object (i.e. this), equivalent to synchronized(this){}.

If the synchronized method is static, synchronized locks the Class object corresponding to the class (no matter how many instances a class has, there is only one corresponding Class object).

55 Distributed locks, the deadlock mechanism in databases, and solutions

Basic principle: a state value represents the lock; acquiring and releasing the lock is expressed through that state value.

Three kinds of distributed locks:

The first kind: Zookeeper

The main logic of a distributed lock based on Zookeeper ephemeral sequential nodes: whenever a client locks a function, it creates a unique ephemeral sequential node under the directory of the node designated for that function on Zookeeper. Determining whether the lock has been acquired is easy: just check whether your node has the smallest sequence number among the ordered nodes. To release the lock, simply delete the ephemeral node. This also avoids the deadlock problem of a lock never being released because the service went down.

[Advantages] The lock is safe: zk is persistent, and the status of the client holding the lock can be monitored in real time. If the client goes down, the ephemeral node disappears and zk releases the lock. This also removes the timeout-checking logic needed when implementing locks with a distributed cache.

[Disadvantages] The performance overhead is relatively high, because the lock requires dynamically creating and destroying ephemeral nodes. It is therefore not suitable for direct use in high-concurrency scenarios.

[Implementation] Distributed locks can be implemented easily with curator, a third-party Zookeeper library.

[Suitable scenarios] Scenarios that demand high reliability but not high concurrency, such as the scheduled full/incremental synchronization of core data.

The second kind: memcached

memcached has an add command, and the semantics of add can be exploited to implement a distributed lock. The difference between add and set: if multiple threads set concurrently, every set succeeds and the final stored value is whichever thread set last; with add, by contrast, only the first add succeeds and returns true, while subsequent adds return false. With this, a distributed lock is easy to implement.

[Advantages] Performs well under high concurrency.

[Disadvantages] memcached uses an LRU eviction policy, so if memory runs low, the lock information in the cache may be lost. memcached also cannot be persisted, so the information is lost after a restart.

[Suitable scenarios] High-concurrency scenarios. You need to 1) add a timeout to avoid deadlock; 2) provide enough memory for the lock service; 3) have stable cluster management.

The third kind: Redis

A Redis distributed lock combines the safety of the zk distributed lock with memcached's efficiency in high-concurrency scenarios. Its implementation is similar to memcached's and can use setnx. Note that here, too, a timeout must be set to avoid deadlock. It can be implemented with the jedis client.

Database deadlock mechanism and resolution case:

Deadlock: deadlock is the phenomenon in which two or more transactions, during execution, wait for each other because they are competing for lock resources.

Handling mechanism: the simplest way to resolve deadlock is not to wait at all: turn any wait into a rollback and restart the transaction. However, this may hurt concurrency performance.

- Timeout rollback: innodb_lock_wait_timeout sets the timeout.

- wait-for graph: compared with timeout rollback, this is a more proactive deadlock-detection approach, and it is the one the InnoDB engine adopts.

56 Why do Spring singletons have no thread-safety problems? (ThreadLocal)

1. ThreadLocal: Spring uses ThreadLocal to solve thread-safety problems. ThreadLocal provides each thread with its own copy of a variable, isolating the data-access conflicts of multiple threads. Because each thread has its own copy, there is no need to synchronize the variable between threads. ThreadLocal provides thread-safe shared objects: when writing multithreaded code, unsafe variables can be encapsulated in a ThreadLocal. In summary, for the problem of multithreaded resource sharing, the synchronization mechanism trades time for space, while ThreadLocal trades space for time. The former provides one variable and lets different threads queue up to access it; the latter provides each thread with its own variable, so all threads can access their copies at the same time without affecting one another. In many cases, ThreadLocal is simpler and more convenient than the synchronized mechanism for solving thread-safety problems, and the resulting program has higher concurrency.

2. Singleton: stateless beans (stateless means performing operations without holding data; a stateless bean is an object with no instance variables that cannot hold state, is effectively immutable, and is thread-safe). They suit the immutable pattern; the technique used is the singleton pattern, which lets the instance be shared and improves performance.

57 thread pool principle

Usage scenario: suppose the time a server needs to complete a task is T1 (thread-creation time) + T2 (time to execute the task in the thread) + T3 (thread-destruction time). If T1 + T3 is much larger than T2, a thread pool can be used to improve server performance.

Composition:

Thread-pool manager (ThreadPool): responsible for creating and managing the thread pool, including creating the pool, destroying it, and adding new tasks.

Worker thread (PoolWorker): a thread in the pool; it waits when there is no task and can execute tasks in a loop.

Task interface (Task): the interface every task must implement so that worker threads can schedule it. It mainly defines the task's entry point, the cleanup work after the task finishes, the task's execution status, and so on.

Task queue (taskQueue): stores tasks that have not yet been processed and provides a buffering mechanism.

Principle: thread-pool technology focuses on shortening or moving the T1 and T3 times to improve server performance. It schedules T1 and T3 at server start-up and shutdown or during idle periods, so that there is no T1 or T3 overhead while the server is handling client requests.

Execution process:

1. When a thread pool is first created, it has no threads (you can also set the parameter prestartAllCoreThreads to pre-start the core threads). The task queue is passed in as a parameter. However, even if the queue contains tasks, the thread pool does not start executing them immediately.

2. When a task is added by calling the execute() method, the thread pool makes the following judgments:

If the number of running threads is less than corePoolSize, create a thread to run the task immediately.

If the number of running threads is greater than or equal to corePoolSize, put the task in the queue.

If the queue is full at this point and the number of running threads is less than maximumPoolSize, create a non-core thread to run the task immediately.

If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool throws a RejectedExecutionException.

3. When a thread finishes a task, it takes the next task from the queue to execute.

4. When a thread has had nothing to do for longer than a set time (keepAliveTime), the thread pool checks: if the number of currently running threads is greater than corePoolSize, the thread is stopped. So after all the tasks in the pool have completed, it eventually shrinks back to corePoolSize threads.

58 Locking multiple objects in Java

For example, when transferring money in a bank account system, two accounts need to be locked. If the two synchronized locks are acquired in inconsistent orders, a deadlock is possible.
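One standard fix is to always acquire the two locks in a globally consistent order, for example by account id (the class and field names here are illustrative):

```java
public class Transfer {
    static class Account {
        final long id;          // used to define a global lock order
        int balance;
        Account(long id, int balance) { this.id = id; this.balance = balance; }
    }

    // Always lock the account with the smaller id first, so two opposite
    // transfers can never each hold the lock the other needs.
    static void transfer(Account from, Account to, int amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 100), b = new Account(2, 100);
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(a.balance + " " + b.balance); // 100 100
    }
}
```

Without the ordering, transfer(a, b, …) and transfer(b, a, …) could each take their first lock and then wait forever for the other's.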

59 How do you start a Java thread?

1. Inherit the Thread class

2. Implement the Runnable interface

3. Use an anonymous inner class directly in the method body

Comparison:

1. Advantages of implementing the Runnable interface:

1) Suitable for multiple threads running the same code to process the same resource.

2) Avoids the limitation of single inheritance in Java.

3) Increases the program's robustness: code can be shared by multiple threads, and code and data are kept separate.

2. Advantages of inheriting the Thread class:

1) The Thread class can serve as an abstract base class when a template-style design is needed.

2) Multithreaded synchronization.

3. Advantage of the anonymous inner class in the method body:

1) There is no need to define a separate class that inherits Thread or implements Runnable, which keeps the scope small.

60 What forms of locking are there in Java, and how are they written?

1. There are two kinds of locks in Java: ordinary locks, i.e. object locks (locking non-static methods or code blocks), and class locks (locking static methods or the class itself).

2. Note: other threads can still access unlocked methods and code. If synchronized modifies a static method and an instance method at the same time, the two run interleaved, which demonstrates that class locks and object locks are two different locks controlling different areas and do not interfere with each other.

61 How do you ensure data is not lost?

1. Use message queues and make the messages persistent.

2. Add a flag bit: 0 = unprocessed, 1 = being processed, 2 = processed; re-handle stale entries periodically.

62. Why does ThreadLocal cause memory leaks?

1. Cause:

ThreadLocal is implemented as follows: each Thread maintains a ThreadLocalMap, whose key is the ThreadLocal instance itself and whose value is the Object actually being stored. That is, a ThreadLocal does not store the value itself; it merely acts as a key for the thread to look the value up in its ThreadLocalMap. Note that the ThreadLocalMap uses a weak reference to the ThreadLocal as the key, and weakly referenced objects are reclaimed during GC.

2. Because the ThreadLocalMap uses weak references to ThreadLocals as keys, if a ThreadLocal has no external strong reference, it will necessarily be reclaimed at the next GC. Entries with a null key then appear in the ThreadLocalMap, and there is no way to reach the values of those entries. If the current thread does not terminate for a long time, those values are kept alive by the strong reference chain Thread Ref -> Thread -> ThreadLocalMap -> Entry -> value and can never be reclaimed, causing a memory leak.

3. Precautions: ThreadLocal's get(), set(), and remove() methods clear all values whose key is null in the thread's ThreadLocalMap. But these passive precautions do not guarantee there is no memory leak:

(1) Declaring a ThreadLocal static prolongs its lifetime, which may lead to a memory leak.

(2) Even a non-static ThreadLocal leaks if, after it has been used, the get(), set(), and remove() methods are never called again, because that piece of memory remains referenced the whole time.
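The practical defense is to call remove() explicitly when the thread is done with the value, which matters most on pooled threads that never terminate (the names below are illustrative):

```java
public class ThreadLocalCleanup {
    private static final ThreadLocal<StringBuilder> BUF =
            ThreadLocal.withInitial(StringBuilder::new);

    static String render(String msg) {
        StringBuilder sb = BUF.get();
        try {
            sb.append("[").append(msg).append("]");
            return sb.toString();
        } finally {
            BUF.remove(); // always clean up, so pooled threads keep no stale entry
        }
    }

    public static void main(String[] args) {
        System.out.println(render("a")); // [a]
        System.out.println(render("b")); // [b] -- no leftover state from "a"
    }
}
```

The try/finally pattern guarantees the entry is removed even if the work in between throws.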

Improvement of ConcurrentHashmap in 63 jdk8

To enable parallel access, Java 7 introduced the Segment structure and implemented segmented locking; the theoretical maximum concurrency equals the number of Segments.

To further improve concurrency, Java 8 abandoned the segmented-lock scheme and works directly on the Node array. Meanwhile, to improve addressing performance under hash collisions, Java 8 converts a bucket's linked list (addressing complexity O(N)) into a red-black tree (O(log N)) when the list length exceeds a threshold (8).

What are the classes under the 64 concurrent package?

ConcurrentHashMap, Future, FutureTask, AtomicInteger...

65 How do you make one thread wait for other threads to finish?

1. CountDownLatch class

A synchronization helper class: a thread often has to wait for some condition before it can carry out its subsequent work. A CountDownLatch is initialized with a given count; calling the countDown() method decrements the count, and the await method blocks until the count reaches zero.

The important methods are countDown() and await().

2. The join method

Calling B.join() in thread A makes A wait until thread B has finished executing.

3. notify and wait: notify wakes up and wait waits. The key point is that they must appear inside synchronized code blocks, and the threads must use the same lock object, often a shared Object.
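A CountDownLatch sketch of "wait for N workers" (the worker count and names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class WaitForWorkers {
    static final AtomicInteger finished = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " done");
                finished.incrementAndGet();
                latch.countDown();           // decrement the count
            }).start();
        }
        latch.await();                       // blocks until the count reaches zero
        System.out.println("all workers finished");
    }
}
```

Unlike join, the main thread does not need references to the worker threads; it only shares the latch with them.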

66 How do you optimize the performance of a high-concurrency system? How do you prevent inventory from being oversold?

1. High-concurrency system performance optimization: optimize the program, the service configuration, and the system configuration.

Use caching as much as possible, including user caches, information caches, and so on. Spending more memory on caching reduces the amount of interaction with the database and improves performance.

Use tools such as JProfiler to identify performance bottlenecks and reduce extra overhead.

Optimize database query statements and reduce reliance on statements generated directly by tools such as Hibernate (optimize only the time-consuming queries).

Optimize the database structure and add indexes to improve query efficiency.

Cache the results of statistics functionality as far as possible, or compute the related reports daily or on a schedule, to avoid running the statistics on demand.

Use static HTML as much as possible to reduce parsing work in the container (render dynamic content statically wherever possible).

After solving the above problems, use a server cluster to resolve the bottleneck of a single server.

2. Preventing oversold inventory:

Pessimistic locks: locks are added during inventory updates, and other threads are not allowed to modify

Database lock: select xxx for update

Distributed lock

Optimistic lock: update with a version number. Threads may attempt modifications concurrently, but under contention only one thread's modification succeeds; the others get a failure result.

Redis watch: monitor key-value pairs. If a transaction commits an exec and discovers a change in the monitored monitoring pair, the transaction will be cancelled.

Message queuing: serializes the operation of modifying inventory through FIFO queues.

Summary:

In general, you should avoid putting pressure on the database, so the select xxx for update approach is best avoided in high-concurrency scenarios. A FIFO synchronization queue can be combined with the inventory limit, but it is not suitable when the inventory is large. So, relatively speaking, I tend to choose optimistic locks / cache locks / distributed locks.

After reading the above, have you mastered these concurrent multithreading interview questions in the Java fundamentals? If you want to learn more skills or know more about related content, you are welcome to follow the industry information channel. Thank you for reading!
