Example Analysis of Concurrency and Race Conditions in Linux Drivers

I would like to share with you an example analysis of concurrency and race conditions in Linux drivers. Most people don't know much about this topic, so this article is offered for your reference. I hope you will gain a lot from reading it. Let's learn about it together.

First of all, what are concurrency and race conditions? Concurrency means that multiple execution units run simultaneously, in parallel. A race condition arises when concurrent execution units access shared resources (hardware resources, and software resources such as global and static variables). The situations that lead to concurrency and race conditions are:

SMP (Symmetric Multi-Processing), the symmetric multiprocessing architecture. SMP is a tightly coupled, shared-memory system model in which multiple CPUs share a common system bus and can therefore access common peripherals and memory.

Interrupts. An interrupt can interrupt an executing process, and a race can occur if the interrupt handler accesses a resource that the process is accessing. An interrupt can also be interrupted by a new, higher-priority interrupt, so concurrency among multiple interrupts can likewise lead to races.

Preemption of kernel processes. Linux is preemptive, so one kernel process may be preempted by another, higher-priority kernel process. If the two processes access a shared resource, a race condition results.

Of the three cases above, only SMP is parallel in the true sense; the others are concurrent at the macro level but serial at the micro level. All of them, however, lead to races over critical sections that touch shared resources. The way to solve the race problem is to guarantee mutually exclusive access to the shared resources: while one execution unit is accessing a shared resource, the other execution units are prohibited from doing so. So how is mutually exclusive access to shared resources achieved in the Linux kernel? In Linux driver programming, the common ways to handle concurrency and race conditions are semaphores and mutexes, the completion mechanism, spin locks, and a few lock-free techniques. Let's introduce them one by one.

Semaphores and mutexes

The semaphore is essentially an integer value. A process that wants to enter the critical section calls P on the relevant semaphore; if the value of the semaphore is greater than zero, the value is decremented by 1 and the process continues. Conversely, if the value of the semaphore is 0 (or less), the process must wait until someone else releases the semaphore. Unlocking a semaphore is done by calling V, which increments the value of the semaphore and, if necessary, wakes up a waiting process. When the initial value of a semaphore is 1, it becomes a mutex.

Typical usage of semaphores:

// declare a semaphore
struct semaphore sem;

// initialize a semaphore
void sema_init(struct semaphore *sem, int val);

// the following two forms are commonly used:
#define init_MUTEX(sem)        sema_init(sem, 1)
#define init_MUTEX_LOCKED(sem) sema_init(sem, 0)

// shortcuts for defining and initializing a semaphore, the most commonly used:
DECLARE_MUTEX(name);          // initialize the semaphore "name" to 1
DECLARE_MUTEX_LOCKED(name);   // initialize the semaphore "name" to 0

// common usage pattern:
DECLARE_MUTEX(mount_sem);
down(&mount_sem);             // acquire the semaphore
...                           // critical section
up(&mount_sem);               // release the semaphore

Other common down operations are:

// Similar to down(), except that a process sleeping in down() cannot be interrupted by a
// signal, while a process sleeping in down_interruptible() can be. A signal also causes
// the function to return, and the return value is then non-zero.
int down_interruptible(struct semaphore *sem);

// Try to acquire the semaphore sem: if it can be acquired immediately, acquire it and
// return 0; otherwise return non-zero. It never causes the caller to sleep, so it can
// be used in interrupt context.
int down_trylock(struct semaphore *sem);
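
To make the return-value handling concrete, here is a minimal sketch (not from the original article) of a read method that protects a shared buffer with a semaphore; the device structure my_dev, its field names, and my_read are assumptions for illustration, and filp->private_data is assumed to have been set to the device in open():

#include <linux/semaphore.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

/* Hedged sketch: a hypothetical device protected by a semaphore (names are illustrative). */
struct my_dev {
    struct semaphore sem;   /* protects buf and len */
    char buf[128];
    size_t len;
};

static ssize_t my_read(struct file *filp, char __user *ubuf,
                       size_t count, loff_t *ppos)
{
    struct my_dev *dev = filp->private_data;   /* assumed to be set in open() */
    ssize_t ret;

    if (down_interruptible(&dev->sem))   /* sleep was broken by a signal */
        return -ERESTARTSYS;             /* non-zero return: back out and let the kernel retry */

    if (count > dev->len)
        count = dev->len;
    if (copy_to_user(ubuf, dev->buf, count))
        ret = -EFAULT;
    else
        ret = count;

    up(&dev->sem);                       /* always release the semaphore */
    return ret;
}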

Completions mechanism

Completion provides a better synchronization mechanism than semaphores for this purpose: one execution unit uses it to wait for another execution unit to finish something.

// define a completion
struct completion my_completion;

// initialize the completion
init_completion(&my_completion);

// shortcut for defining and initializing a completion:
DECLARE_COMPLETION(my_completion);

// wait for a completion to be woken up
void wait_for_completion(struct completion *c);

// wake up a completion
void complete(struct completion *c);
void complete_all(struct completion *c);
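
As a small hedged sketch of the typical pairing (the names data_ready, consumer_wait and producer_done are assumptions), one execution unit blocks in wait_for_completion() until another calls complete():

#include <linux/completion.h>

/* Hedged sketch: one execution unit waits for another to finish something. */
static DECLARE_COMPLETION(data_ready);   /* statically defined and initialized */

static void consumer_wait(void)          /* waiting side: blocks until complete() is called */
{
    wait_for_completion(&data_ready);
    /* ... the awaited work is now finished ... */
}

static void producer_done(void)          /* completing side: wakes up a waiter */
{
    complete(&data_ready);
}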

Spin lock

If a process wants to enter a critical section, it first tests the lock. If the lock is free, the process acquires it and continues to execute; if the test shows that the lock is held, the process repeats the "test and set" operation in a tight loop, the so-called "spin", until the lock holder releases it. Spin locks are similar to mutexes, but unlike mutexes they can be used in code that is not allowed to sleep; a typical application is in interrupt handlers. Related spin lock operations:

// define a spin lock
spinlock_t spin;

// initialize the spin lock
spin_lock_init(lock);

// acquire the spin lock: if the lock can be acquired immediately, acquire it and return;
// otherwise spin until the lock holder releases it
spin_lock(lock);

// try to acquire the spin lock: if the lock can be acquired immediately, acquire it and
// return true; otherwise return false immediately without spinning
spin_trylock(lock);

// release the spin lock: paired with spin_lock(lock) and spin_trylock(lock)
spin_unlock(lock);

// typical use of a spin lock:
spinlock_t lock;              // define a spin lock
spin_lock_init(&lock);
spin_lock(&lock);             // acquire the spin lock to protect the critical section
...                           // critical section
spin_unlock(&lock);           // release the lock
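
As an illustrative sketch (not from the article), spin_trylock() lets a caller avoid spinning entirely: if the lock is already held, it simply reports the resource as busy. DEFINE_SPINLOCK is the kernel's shortcut for defining and initializing a spin lock; my_hw_lock and my_kick_hardware are assumed names.

#include <linux/spinlock.h>
#include <linux/errno.h>

/* Hedged sketch: use spin_trylock() instead of spinning on a contended lock. */
static DEFINE_SPINLOCK(my_hw_lock);      /* define and initialize a spin lock */

static int my_kick_hardware(void)
{
    if (!spin_trylock(&my_hw_lock))      /* returns false immediately if already held */
        return -EBUSY;                   /* caller may retry later */

    /* ... critical section: touch the shared hardware state ... */

    spin_unlock(&my_hw_lock);
    return 0;
}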

Kernel preemption is disabled while a spin lock is held, so a spin lock guarantees that the critical section is not disturbed by preempting processes on this CPU or on other CPUs. However, the code path that holds the lock can still be disturbed by interrupts and bottom halves (BH) while it executes the critical section. To prevent this, the following spin lock derivatives are used:

spin_lock_irq() = spin_lock() + local_irq_disable()
spin_unlock_irq() = spin_unlock() + local_irq_enable()
spin_lock_irqsave() = spin_lock() + local_irq_save()
spin_unlock_irqrestore() = spin_unlock() + local_irq_restore()
spin_lock_bh() = spin_lock() + local_bh_disable()
spin_unlock_bh() = spin_unlock() + local_bh_enable()
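
To make the motivation concrete, here is a hedged sketch of data shared between process context and an interrupt handler (my_lock, my_count and my_irq_handler are assumed names): the process-context side uses spin_lock_irqsave() so that the interrupt handler cannot run on this CPU while the lock is held.

#include <linux/spinlock.h>
#include <linux/interrupt.h>

/* Hedged sketch: a counter shared between process context and an interrupt handler. */
static DEFINE_SPINLOCK(my_lock);
static unsigned long my_count;

static void my_count_add(unsigned long n)          /* process context */
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);            /* spin_lock() + local_irq_save() */
    my_count += n;
    spin_unlock_irqrestore(&my_lock, flags);
}

static irqreturn_t my_irq_handler(int irq, void *dev_id)   /* interrupt context */
{
    spin_lock(&my_lock);      /* interrupts are already disabled on this CPU */
    my_count++;
    spin_unlock(&my_lock);
    return IRQ_HANDLED;
}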

Some other options

These are the locking mechanisms most often used in Linux driver programming. Below are some other mechanisms provided by the kernel.

Lock-free algorithms

Sometimes you can recast your algorithm to avoid the need for locking altogether. Many reader/writer situations, if there is only one writer, can work in this way. If the writer is careful to keep the data structure consistent as seen by the reader, it is possible to create a lock-free data structure. The Linux kernel contains a generic lock-free ring buffer implementation (kfifo, in <linux/kfifo.h>).
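
As a hedged illustration of that facility, the sketch below uses the kfifo API from <linux/kfifo.h>; with a single producer and a single consumer, kfifo_put() and kfifo_get() need no extra locking. The FIFO name my_fifo and the two helper functions are assumptions.

#include <linux/kfifo.h>
#include <linux/printk.h>

/* Hedged sketch: single-producer / single-consumer use of the kernel's ring buffer. */
static DEFINE_KFIFO(my_fifo, int, 64);   /* a FIFO of 64 ints (size must be a power of two) */

static void push_sample(int sample)      /* producer side, e.g. an interrupt handler */
{
    if (!kfifo_put(&my_fifo, sample))    /* returns 0 if the FIFO is full */
        pr_debug("my_fifo full, sample dropped\n");
}

static void drain_samples(void)          /* consumer side, e.g. a work function */
{
    int sample;

    while (kfifo_get(&my_fifo, &sample)) /* returns 0 once the FIFO is empty */
        pr_info("got sample %d\n", sample);
}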

Atomic variables and bit operations

Atomic operations are operations that cannot be interrupted by other code paths during execution. Both atomic variables and bit operations are atomic operations. The following is an introduction to their related operations.

// set the value of an atomic variable
void atomic_set(atomic_t *v, int i);       // set the atomic variable to i
atomic_t v = ATOMIC_INIT(0);               // define the atomic variable v and initialize it to 0

// get the value of an atomic variable
atomic_read(atomic_t *v);                  // return the value of the atomic variable

// atomic add / subtract
void atomic_add(int i, atomic_t *v);       // add i to the atomic variable
void atomic_sub(int i, atomic_t *v);       // subtract i from the atomic variable

// atomic increment / decrement
void atomic_inc(atomic_t *v);              // increment the atomic variable by 1
void atomic_dec(atomic_t *v);              // decrement the atomic variable by 1

// operate and test: after incrementing, decrementing or subtracting (there is no add-and-test
// variant), test whether the atomic variable is 0; return true if it is 0, false otherwise
int atomic_inc_and_test(atomic_t *v);
int atomic_dec_and_test(atomic_t *v);
int atomic_sub_and_test(int i, atomic_t *v);

// operate and return: add/subtract or increment/decrement the atomic variable and return the new value
int atomic_add_return(int i, atomic_t *v);
int atomic_sub_return(int i, atomic_t *v);
int atomic_inc_return(atomic_t *v);
int atomic_dec_return(atomic_t *v);

Atomic bit operations:

// set a bit
void set_bit(nr, void *addr);              // set bit nr of address addr, i.e. write 1
// clear a bit
void clear_bit(nr, void *addr);            // clear bit nr of address addr, i.e. write 0
// change a bit
void change_bit(nr, void *addr);           // invert bit nr of address addr
// test a bit
test_bit(nr, void *addr);                  // return bit nr of address addr
// test and operate: equivalent to test_bit(nr, addr) followed by xxx_bit(nr, addr)
int test_and_set_bit(nr, void *addr);
int test_and_clear_bit(nr, void *addr);
int test_and_change_bit(nr, void *addr);
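
As a concrete, hedged example of atomic variables in a driver (the classic "only one opener at a time" idiom; my_available, my_open and my_release are assumed names), no lock is needed at all:

#include <linux/atomic.h>
#include <linux/fs.h>

/* Hedged sketch: allow only one process to open the device at a time. */
static atomic_t my_available = ATOMIC_INIT(1);   /* 1 means the device is free */

static int my_open(struct inode *inode, struct file *filp)
{
    if (!atomic_dec_and_test(&my_available)) {   /* true only for the 1 -> 0 transition */
        atomic_inc(&my_available);               /* undo our decrement */
        return -EBUSY;                           /* somebody else already has the device */
    }
    return 0;
}

static int my_release(struct inode *inode, struct file *filp)
{
    atomic_inc(&my_available);                   /* mark the device free again */
    return 0;
}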

Seqlock (sequence lock)

With a seqlock, read execution units are never blocked by write execution units: a reader can keep reading while a writer is writing to the shared resource protected by the seqlock, without waiting for the write operation to finish, and a writer does not need to wait for all readers to finish reading. Write execution units are still mutually exclusive with each other. If a write occurs during a read operation, the data must be re-read. A seqlock requires that the protected shared resource contain no pointers.

// acquire the sequence lock (write side)
void write_seqlock(seqlock_t *sl);
int write_tryseqlock(seqlock_t *sl);
write_seqlock_irqsave(lock, flags)
write_seqlock_irq(lock)
write_seqlock_bh(lock)

// release the sequence lock (write side)
void write_sequnlock(seqlock_t *sl);
write_sequnlock_irqrestore(lock, flags)
write_sequnlock_irq(lock)
write_sequnlock_bh(lock)

// a write execution unit uses the sequence lock as follows:
write_seqlock(&seqlock_a);
...                           // write operation code block
write_sequnlock(&seqlock_a);

Read execution unit operations:

// start reading: return the current sequence number of sequence lock sl
unsigned read_seqbegin(const seqlock_t *sl);
read_seqbegin_irqsave(lock, flags)

// re-read check: after accessing the shared resource protected by sequence lock sl, the read
// execution unit calls this function to check whether a write took place during the read access;
// if so, the reader must read again
int read_seqretry(const seqlock_t *sl, unsigned iv);
read_seqretry_irqrestore(lock, iv, flags)

// a read execution unit uses the sequence lock as follows:
do {
    seqnum = read_seqbegin(&seqlock_a);
    ...                       // read operation block
} while (read_seqretry(&seqlock_a, seqnum));
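
For a concrete picture, here is a minimal hedged sketch (my_seqlock, my_counter and the two helpers are assumed names) of a 64-bit counter that a writer updates and readers sample consistently, following the reader loop shown above:

#include <linux/seqlock.h>
#include <linux/types.h>

/* Hedged sketch: a seqlock-protected 64-bit counter (the protected data holds no pointers). */
static DEFINE_SEQLOCK(my_seqlock);
static u64 my_counter;

static void my_counter_add(u64 n)            /* write side: writers exclude each other */
{
    write_seqlock(&my_seqlock);
    my_counter += n;
    write_sequnlock(&my_seqlock);
}

static u64 my_counter_read(void)             /* read side: never blocks the writer */
{
    unsigned int seq;
    u64 val;

    do {
        seq = read_seqbegin(&my_seqlock);
        val = my_counter;
    } while (read_seqretry(&my_seqlock, seq));   /* re-read if a write happened meanwhile */

    return val;
}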

Read-copy-update (RCU)

Read-copy-update (RCU) is an advanced mutual exclusion method that can achieve very high efficiency when used appropriately. RCU can be regarded as a high-performance version of the read-write lock: compared with a read-write lock, RCU not only allows multiple read execution units to access the protected data at the same time, but also allows multiple read execution units and multiple write execution units to access it at the same time. However, RCU cannot replace the read-write lock, because if there are too many writes, the gain on the read side cannot make up for the cost on the write side. Since it is rarely needed in everyday driver code, it is not covered in detail here.
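
Even so, a minimal hedged sketch may help picture the basic pattern: readers dereference a shared pointer inside rcu_read_lock()/rcu_read_unlock(), and the writer publishes a new copy with rcu_assign_pointer() and frees the old one only after synchronize_rcu(). The structure my_data, the global pointer and the two functions are assumptions, and the updater is assumed to be serialized by other means.

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hedged sketch of the basic RCU read/update pattern. */
struct my_data {
    int a, b;
};

static struct my_data __rcu *global_data;

static int read_a(void)                      /* read side: cheap and never blocks the writer */
{
    struct my_data *p;
    int a = 0;

    rcu_read_lock();
    p = rcu_dereference(global_data);
    if (p)
        a = p->a;
    rcu_read_unlock();
    return a;
}

static void update_data(int a, int b)        /* write side: copy, publish, then reclaim */
{
    struct my_data *new, *old;

    new = kmalloc(sizeof(*new), GFP_KERNEL);
    if (!new)
        return;
    new->a = a;
    new->b = b;

    old = rcu_dereference_protected(global_data, 1);   /* assumes a single, serialized updater */
    rcu_assign_pointer(global_data, new);
    synchronize_rcu();                                 /* wait for pre-existing readers to finish */
    kfree(old);
}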

That is all of "Example Analysis of Concurrency and Race Conditions in Linux Drivers". Thank you for reading! Hopefully the content shared here has given you a solid overview and is of some help; if you want to learn more, you are welcome to follow the industry information channel.
