This article introduces lock-based concurrency, lock-free concurrency, and the CAS primitive, using a concurrent stack written in Rust as the running example.
Mutex concurrency

For most programmers (and I am basically one of them), concurrent programming is almost synonymous with putting a lock around the relevant data structure. For example, if we need a stack that supports concurrency, the easiest approach is to take a single-threaded stack and wrap it in std::sync::Mutex (plus an Arc so that multiple threads can own the stack):

use std::sync::{Mutex, Arc};

#[derive(Clone)]
struct ConcurrentStack<T> {
    inner: Arc<Mutex<Vec<T>>>,
}

impl<T> ConcurrentStack<T> {
    pub fn new() -> Self {
        ConcurrentStack {
            inner: Arc::new(Mutex::new(Vec::new())),
        }
    }

    pub fn push(&self, data: T) {
        let mut inner = self.inner.lock().unwrap();
        (*inner).push(data);
    }

    pub fn pop(&self) -> Option<T> {
        let mut inner = self.inner.lock().unwrap();
        (*inner).pop()
    }
}
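As a quick illustration of how this is used (this snippet is not part of the original article; the thread count and values are arbitrary), several threads can clone the handle and push concurrently, since Arc shares ownership and Mutex serializes access:

use std::thread;

fn main() {
    let stack = ConcurrentStack::new();

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let stack = stack.clone();
            thread::spawn(move || stack.push(i))
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // Drain the stack; the order depends on how the threads were scheduled.
    while let Some(v) = stack.pop() {
        println!("popped {}", v);
    }
}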
The benefits are obvious: the code is easy to write, since it is almost identical to the single-threaded version. You write the single-threaded code, put the data structure behind a lock, and acquire and release the lock where necessary (in Rust the release is essentially automatic). So what is the problem?

First, setting aside the risk of forgetting to acquire or release the lock (thanks to Rust, that is almost impossible here), you can still run into deadlocks (the dining philosophers problem). Second, setting aside priority inversion, where a low-priority task holding the lock can starve a high-priority task for a long time (locks are granted first come, first served, not by priority), when the number of threads is large, most of the time is spent on synchronization, i.e. waiting to acquire the lock, and performance becomes very poor. Consider a concurrent database with a large number of reads and only occasional writes: if it is protected by a lock, every two reads must synchronize with each other even when the database has not been updated at all, which is far too expensive.

Lock-free concurrency

For these reasons, a large number of computer scientists and programmers have turned their attention to lock-free concurrency.

A lock-free object is a shared object that guarantees that, no matter what the other threads do, some thread will complete an operation on it within a finite number of system steps [Her91]. In other words, at least one thread always makes progress on its operation. Lock-based concurrency clearly does not fall into this category: if the thread holding the lock is delayed, no other thread can make progress in the meantime; in the extreme case of a deadlock, no thread can complete any operation at all.

CAS (compare and swap) primitive

So how is lock-free concurrency achieved, and are there concrete examples? Before getting to that, let us look at an atomic primitive that is central to lock-free algorithms: CAS. A CAS compares a stored value with an expected value and, only if the two are equal, replaces the stored value with a new one. CAS is an atomic operation (supported directly by processors, for example x86's compare-and-exchange instruction CMPXCHG), which guarantees that the write fails if another thread has changed the stored value in the meantime. The types in std::sync::atomic in the Rust standard library provide CAS operations; for example, the atomic pointer std::sync::atomic::AtomicPtr has

pub fn compare_and_swap(
    &self,
    current: *mut T,
    new: *mut T,
    order: Ordering
) -> *mut T

(For now, do not worry about what Ordering means; that is, ignore Acquire, Release and Relaxed.)
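To make the CAS semantics concrete, here is a minimal sketch (not from the original article), using AtomicUsize instead of AtomicPtr to keep it short: a retry loop that increments a counter and starts over whenever another thread changed the value between the load and the CAS. (In current Rust, compare_and_swap is deprecated in favour of compare_exchange, but it still compiles and matches the code in this article.)

use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};

// A CAS loop: read the current value, compute the desired new value,
// and try to install it. If some other thread changed the counter in
// between, compare_and_swap returns a value different from `current`
// and we retry with a fresh snapshot.
fn increment(counter: &AtomicUsize) {
    loop {
        let current = counter.load(SeqCst);
        if counter.compare_and_swap(current, current + 1, SeqCst) == current {
            return; // nobody interfered: our write took effect
        }
    }
}

fn main() {
    let counter = AtomicUsize::new(0);
    increment(&counter);
    increment(&counter);
    assert_eq!(counter.load(SeqCst), 2);
}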
Lock-free stack (naive version)

use std::ptr::{self, null_mut};
use std::sync::atomic::AtomicPtr;
use std::sync::atomic::Ordering::{Relaxed, Release, Acquire};

pub struct Stack<T> {
    head: AtomicPtr<Node<T>>,
}

struct Node<T> {
    data: T,
    next: *mut Node<T>,
}

impl<T> Stack<T> {
    pub fn new() -> Stack<T> {
        Stack {
            head: AtomicPtr::new(null_mut()),
        }
    }

    pub fn pop(&self) -> Option<T> {
        loop {
            // snapshot
            let head = self.head.load(Acquire);
            // if the stack is empty
            if head == null_mut() {
                return None;
            } else {
                let next = unsafe { (*head).next };
                // if the snapshot still matches the current state
                if self.head.compare_and_swap(head, next, Release) == head {
                    // read out the data and return it
                    return Some(unsafe { ptr::read(&(*head).data) });
                }
            }
        }
    }

    pub fn push(&self, t: T) {
        // create a node and turn it into a *mut pointer
        let n = Box::into_raw(Box::new(Node {
            data: t,
            next: null_mut(),
        }));
        loop {
            // snapshot
            let head = self.head.load(Relaxed);
            // link the new node on top of the snapshot
            unsafe { (*n).next = head; }
            // if the snapshot has not gone stale in the meantime
            if self.head.compare_and_swap(head, n, Release) == head {
                break;
            }
        }
    }
}
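A minimal usage sketch (not part of the original article; the thread count and values are arbitrary). Because AtomicPtr is Sync, the stack can be shared across threads behind an Arc, and the CAS retry loops resolve concurrent pushes:

use std::sync::Arc;
use std::thread;

fn main() {
    let stack = Arc::new(Stack::new());

    // A few threads push concurrently; conflicting CAS attempts simply retry.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let stack = Arc::clone(&stack);
            thread::spawn(move || stack.push(i))
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // Drain the stack; every pushed value comes back exactly once.
    let mut popped = Vec::new();
    while let Some(v) = stack.pop() {
        popped.push(v);
    }
    popped.sort();
    assert_eq!(popped, vec![0, 1, 2, 3]);
}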
As we can see, pop and push follow the same idea: take a snapshot of head, perform the pop or push against that snapshot, and then try to swap the result in with CAS. If the snapshot still matches the current data, nothing was written in the meantime and the update succeeds; if it does not match, some other thread modified the data during that window, and we start over from the top. This is a lock-free stack.

It seems we are done, and we really would be, if it were not for memory reclamation, or if we were writing Java or any other language with a GC. The problem is that Rust has no GC, and in pop the expression return Some(unsafe { ptr::read(&(*head).data) }) copies the data out while nobody ever frees the node behind head. That is a memory leak (the sketch below makes it concrete). So lock-free concurrency, it turns out, is not so easy after all.
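To see the leak, here is a hypothetical sketch (not from the original article) using a type that announces its own destruction; the popped value is dropped exactly once, but the Node allocation created by push is never reclaimed:

// A value that prints a message when it is dropped.
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let stack = Stack::new();
    stack.push(Noisy("a")); // push allocates a Node<Noisy> via Box::into_raw

    // pop() bitwise-copies the value out with ptr::read; that copy is the
    // one dropped below, so "dropping a" is printed exactly once.
    let value = stack.pop();
    drop(value);

    // The Node allocation itself is now unreachable, but it is never handed
    // back to a Box (no Box::from_raw), so its memory is simply leaked.
    // Freeing it inside pop() naively would be unsound, because another
    // thread could still be reading through its own snapshot of head.
}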