2025-04-04 Update From: SLTechnology News&Howtos
This article explains how the locks commonly used in iOS development work. Many people run into confusion here in practice, so let's walk through these mechanisms one by one. I hope you read it carefully and get something out of it!
OSSpinLock
As described in the previous article, OSSpinLock is no longer safe. The main reason is priority inversion: when a low-priority thread holds the lock, a high-priority thread that wants it enters a busy-wait state and consumes a great deal of CPU time, so the low-priority thread cannot get CPU time, cannot finish its work, and therefore never releases the lock.
Why does busy-waiting prevent the low-priority thread from getting a time slice? To answer that, we have to start with how the operating system schedules threads.
Modern operating systems usually schedule ordinary threads with the round-robin (RR) algorithm. Each thread is assigned a time slice (quantum), usually on the order of 10-100 ms. When a thread uses up its time slice, the operating system suspends it and places it in a waiting queue until it is allocated another slice.
How spin locks are implemented
The purpose of a spin lock is to ensure that only one thread at a time can enter the critical section. Its use can be described with the following pseudocode:
```c
do {
    Acquire lock
    Critical section     // code protected by the lock
    Release lock
    Remainder section    // code that does not need lock protection
} while (true);
```
In the "acquire lock" step, we request the lock in order to prevent the code in the critical section from being executed by multiple threads at once.
The idea behind a spin lock is very simple. In theory, all you need is a global variable representing whether the lock is available. The pseudocode is as follows:
```c
bool lock = false;   // initially unlocked; any thread may acquire it
do {
    while (lock);    // if lock is true, spin here — equivalent to waiting for the lock
    lock = true;     // take the lock so other threads cannot enter
    Critical section
    lock = false;    // release the lock so other threads can enter the critical section
    Remainder section   // code that does not need lock protection
} while (true);
```
With the comments in place there is no need to analyze this line by line. Unfortunately, the code has a problem: if several threads execute the while loop at the same time, they can all fall through it and continue, so the lock provides no real guarantee. The fix is simple: make sure that acquiring the lock is an atomic operation.
Atomic operation
Strictly speaking, an atomic operation is an uninterruptible operation: the thread cannot be suspended by the operating system in the middle of it; once started, it runs to completion. On a uniprocessor, a single assembly instruction is obviously atomic, because interrupts can only be delivered between instructions.
On a multiprocessor, however, an instruction that several processors can execute simultaneously is still not atomic. True atomic operations therefore need hardware support. On x86, for example, prefixing an instruction with LOCK makes the processor lock the bus while that instruction executes, so no other CPU can perform the same operation at the same time; atomicity is guaranteed at the hardware level.
You don't need to master these low-level details; it is enough to know that the lock-acquisition step above can be done with a single atomic operation, test_and_set, which can be expressed in pseudocode as follows:
```c
bool test_and_set(bool *target) {
    bool rv = *target;
    *target = true;
    return rv;
}
```
This code sets *target to true and returns its original value. In a real implementation, of course, this is done with a single atomic instruction.
Spin lock summary
At this point, the implementation principle of a spin lock is clear:
```c
bool lock = false;
do {
    while (test_and_set(&lock));   // test_and_set is atomic: spin until we acquire the lock
    Critical section
    lock = false;                  // release the lock so other threads can enter
    Remainder section              // code that does not need lock protection
} while (true);
```
If the critical section takes a long time to execute, a spin lock is a poor choice. Recall the round-robin algorithm: a thread gives up its time slice in several situations. One is that the slice runs out and the operating system preempts the thread; a thread also yields its slice voluntarily when it performs I/O or goes to sleep. Inside the while loop, however, the spinning thread is busy-waiting, burning CPU time for nothing until the scheduler finally preempts it. If the critical section takes long to execute, reading or writing a file for instance, all that busy-waiting is wasted.
Semaphore
In my earlier article on GCD's underlying implementation, I briefly described how the semaphore dispatch_semaphore_t works: it eventually calls sem_wait, which glibc implements like this:
```c
int sem_wait(sem_t *sem) {
    int *futex = (int *)sem;
    if (atomic_decrement_if_positive(futex) > 0)
        return 0;
    int err = lll_futex_wait(futex, 0);
    return -1;
}
```
First, the semaphore's value is decremented and checked. If it was greater than zero, there is no need to wait, so the call returns immediately. The actual waiting happens in lll_futex_wait (lll stands for low level lock). That function is written in assembly and issues the SYS_futex system call, putting the thread to sleep and voluntarily giving up its time slice. The same function may also be used in the implementation of mutexes.
Voluntarily giving up the time slice is not always efficient, though. Yielding causes the operating system to switch to another thread; a context switch typically takes about 10 microseconds, and at least two switches are required (one to sleep, one to wake up). If the wait is short, say a few microseconds, busy-waiting is more efficient than putting the thread to sleep.
As you can see, spin locks and semaphores are both very simple to implement, which is why their lock and unlock operations rank first and second in speed. To repeat, though: raw lock/unlock timings cannot accurately reflect a lock's real-world efficiency (for example, they do not exercise time-slice switching); they only measure, to some degree, the complexity of the lock's implementation.
pthread_mutex
pthread stands for POSIX thread and defines a set of cross-platform thread-related APIs; pthread_mutex is its mutex. The implementation of a mutex is very similar to that of a semaphore: instead of busy-waiting, it blocks the thread and puts it to sleep, which requires a context switch.
A common way to use a mutex:
```c
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);  // define the lock's attributes
pthread_mutex_t mutex;
pthread_mutex_init(&mutex, &attr);                       // create the lock
pthread_mutex_lock(&mutex);                              // acquire the lock
// critical section
pthread_mutex_unlock(&mutex);                            // release the lock
```
Usage aside, the important thing about pthread_mutex is the lock type: PTHREAD_MUTEX_NORMAL, PTHREAD_MUTEX_ERRORCHECK, PTHREAD_MUTEX_RECURSIVE, and so on. Their specific behaviors are well documented elsewhere, so I won't go through them here.
In general, a thread should acquire a given lock only once and release it only after acquiring it; locking the same mutex twice, or unlocking a mutex it never acquired, can lead to a crash. Suppose a thread acquires the lock again while already holding it: it goes to sleep waiting for the lock to be released, but it is itself the holder, so the lock can never be released. That is a deadlock.
Yet this situation arises easily, for example when a function acquires a lock and then calls itself recursively inside the critical section. Fortunately, pthread_mutex supports recursive locks, which allow a thread to acquire the same lock repeatedly; just set the type in attr to PTHREAD_MUTEX_RECURSIVE.
How mutexes are implemented
To acquire a lock, the mutex calls pthread_mutex_lock. This function is implemented differently on different systems; sometimes it is built on semaphores, and even when it is not, it still calls lll_futex_wait, putting the thread to sleep.
As mentioned above, busy-waiting can be more efficient when the critical section is short, so some implementations first spin on a test_and_set-style check a bounded number of times (for example, 1000) before sleeping. This improves performance in cases where a mutex was chosen for a wait so short that a spin lock would have been the better fit.
In addition, because pthread_mutex comes in several types and supports recursive locking, it has to check the lock's type on every acquisition. That is why it is similar to a semaphore in implementation but slightly less efficient.
NSLock
NSLock is the lock Objective-C exposes to developers as an object. Its implementation is very simple: the lock method is defined via a macro:
```objectivec
#define MLOCK \
- (void)lock \
{ \
    int err = pthread_mutex_lock(&_mutex); \
    /* error handling ... */ \
}
```
Internally, NSLock simply wraps a pthread_mutex whose type is PTHREAD_MUTEX_ERRORCHECK, trading a little performance for error reporting.
The macro is used because OC has several other locks whose lock methods are identical except for the type of the underlying pthread_mutex; defining them through a macro avoids repeating the method bodies.
NSLock is slightly slower than pthread_mutex because it goes through Objective-C method calls, though thanks to method caching, repeated calls do not cost much extra.
NSCondition
NSCondition is implemented on top of the condition variable pthread_cond_t. A condition variable is a bit like a semaphore: it provides a mechanism for blocking a thread and signaling it. It is typically used to block a thread until some data is ready and then wake it, as in the classic producer-consumer pattern.
How to use condition variables
As many articles about pthread_cond_t point out, it must be used together with a mutex:
```c
void consumer() {                    // consumer
    pthread_mutex_lock(&mutex);
    while (data == NULL) {
        pthread_cond_wait(&condition_variable_signal, &mutex);  // wait for data
    }
    // --- new data is available; the code below handles it ↓
    // temp = data;
    // --- end of data handling ↑
    pthread_mutex_unlock(&mutex);
}

void producer() {
    pthread_mutex_lock(&mutex);
    // produce data
    pthread_cond_signal(&condition_variable_signal);  // tell consumers there is new data
    pthread_mutex_unlock(&mutex);
}
```
A natural question is: "What goes wrong if you drop the mutex and use only the condition variable?" The problem is that temp = data; is not thread-safe on its own — another thread might modify data before you read it. The mutex is what guarantees that the data the consumer reads is consistent.
Besides being woken by signal, the wait call can also return due to a spurious wakeup, which is why the while loop is needed: it re-checks the condition after every wakeup.
Why use condition variables
Many articles cover condition variables, but most skip a basic question: "Why use a condition variable at all? It only controls the order in which threads run — can't a semaphore or a mutex simulate the same effect?"
There is little material on this, so here is my personal take. A semaphore can replace a condition variable to some extent; a mutex cannot. In the producer-consumer code above, the essence of pthread_cond_wait is a transfer of the lock: the consumer gives up the lock and the producer acquires it. Likewise, pthread_cond_signal transfers the lock from the producer back to the consumer.
If we tried to do this with mutexes alone, the code would have to change like this:
```c
void consumer() {                           // consumer
    pthread_mutex_lock(&mutex);
    while (data == NULL) {
        pthread_mutex_unlock(&mutex);
        pthread_mutex_lock(&another_lock);  // stands in for the wait
        pthread_mutex_lock(&mutex);
    }
    pthread_mutex_unlock(&mutex);
}
```
The problem is that the producer may run before the consumer has started waiting on another_lock, and release another_lock too early. In other words, we cannot make "release the mutex" and "wait on another_lock" a single atomic step, nor can we guarantee the order "wait first, then release another_lock".
A semaphore does not have this problem, because semaphore waits and wakeups do not need to happen in a particular order: the semaphore simply records how many resources are available. Still, compared with the atomic lock transfer that pthread_cond_wait guarantees, the semaphore approach feels slightly riskier, even if nothing is strictly wrong with the non-atomic version.
Condition variables also have an advantage semaphores lack: pthread_cond_broadcast wakes all waiting consumers at once.
NSCondition's approach
NSCondition in fact wraps a mutex and a condition variable, unifying the mutex's lock method and the condition variable's wait/signal in a single NSCondition object exposed to the user:
```objectivec
- (void)signal {
    pthread_cond_signal(&_condition);
}

// this method is actually defined via a macro; expanded, it looks like this:
- (void)lock {
    int err = pthread_mutex_lock(&_mutex);
}
```
Its lock and unlock path is almost identical to NSLock's, so in theory the two should cost about the same (and hands-on testing agrees). The chart shows NSCondition taking slightly longer; my guess is that the benchmark included initializing and destroying the condition variable around each lock/unlock pair.
NSRecursiveLock
As mentioned above, recursive locks are also implemented through pthread_mutex_lock: the function checks the lock's type internally, and if it is a recursive lock it allows the repeated acquisition, simply incrementing a counter. Releasing the lock mirrors this, decrementing the counter until it reaches zero.
NSRecursiveLock differs from NSLock only in that the internally wrapped pthread_mutex_t has type PTHREAD_MUTEX_RECURSIVE.
NSConditionLock
NSConditionLock is built on NSCondition; in essence it is a producer-consumer model, where "the condition is satisfied" plays the role of the producer supplying new content. An NSConditionLock holds an NSCondition object plus a _condition_value property, which is set at initialization:
```objectivec
// simplified code
- (id)initWithCondition:(NSInteger)value {
    if (nil != (self = [super init])) {
        _condition = [NSCondition new];
        _condition_value = value;
    }
    return self;
}
```
Its lockWhenCondition: method is effectively the consumer:
```objectivec
- (void)lockWhenCondition:(NSInteger)value {
    [_condition lock];
    while (value != _condition_value) {
        [_condition wait];
    }
}
```
The corresponding unlockWithCondition: method is the producer; it uses broadcast to notify all consumers:
```objectivec
- (void)unlockWithCondition:(NSInteger)value {
    _condition_value = value;
    [_condition broadcast];
    [_condition unlock];
}
```

@synchronized
This is a lock at the Objective-C language level, one that mainly trades performance for syntactic simplicity and readability.
As we know, @synchronized must be followed by an OC object, which effectively acts as the lock. This is implemented with a hash table: at the bottom, the runtime keeps an array of mutexes (think of it as a lock pool), hashes the object, and uses the result to pick the corresponding mutex.
That's all for "how to understand locks in iOS development". Thank you for reading!