This article explains how to write your own concurrent queue class on Linux. Many people run into this problem in real projects, so let's walk through how to handle these situations. Read on carefully, and hopefully you will take something away from it!
Designing a concurrent queue
The code is as follows:
#include <pthread.h>
#include <list>

using namespace std;

template <typename T>
class Queue
{
public:
    Queue()
    {
        pthread_mutex_init(&_lock, NULL);
    }
    ~Queue()
    {
        pthread_mutex_destroy(&_lock);
    }
    void push(const T& data);
    T pop();
private:
    list<T> _list;
    pthread_mutex_t _lock;
};

template <typename T>
void Queue<T>::push(const T& value)
{
    pthread_mutex_lock(&_lock);
    _list.push_back(value);
    pthread_mutex_unlock(&_lock);
}

template <typename T>
T Queue<T>::pop()
{
    pthread_mutex_lock(&_lock);
    // check for data only while holding the lock
    if (_list.empty())
    {
        pthread_mutex_unlock(&_lock);
        throw "element not found";
    }
    T _temp = _list.front();
    _list.pop_front();
    pthread_mutex_unlock(&_lock);
    return _temp;
}
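Before moving on, here is a quick usage sketch (not part of the original article; the thread functions, element type, and counts are made up for illustration) showing a producer thread and a consumer thread sharing this Queue. Because pop() throws when the queue is empty, the consumer has to catch the exception and retry:

#include <pthread.h>
#include <cstdio>

// Illustrative demo: one thread pushes integers, another pops them.
static Queue<int> g_queue;

void* producer(void*)
{
    for (int i = 0; i < 100; ++i)
        g_queue.push(i);
    return NULL;
}

void* consumer(void*)
{
    int received = 0;
    while (received < 100)
    {
        try
        {
            printf("%d\n", g_queue.pop());
            ++received;
        }
        catch (const char*)
        {
            // the queue was empty; retry (busy-waiting, which the
            // blocking queue described later avoids)
        }
    }
    return NULL;
}

int main()
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Compile with g++ and link against pthread, for example: g++ queue_demo.cpp -lpthread.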
This Queue class works. However, consider a situation where the queue is long (perhaps more than 100,000 elements) and, at some point during execution, far more threads are reading from the queue than writing to it. Because reads and writes share the same mutex, the pace of the readers slows down the threads that write data. So, how about using two locks: one for read operations and one for write operations? The modified Queue class is shown below.
The code is as follows:
template <typename T>
class Queue
{
public:
    Queue()
    {
        pthread_mutex_init(&_rlock, NULL);
        pthread_mutex_init(&_wlock, NULL);
    }
    ~Queue()
    {
        pthread_mutex_destroy(&_rlock);
        pthread_mutex_destroy(&_wlock);
    }
    void push(const T& data);
    T pop();
private:
    list<T> _list;
    pthread_mutex_t _rlock, _wlock;
};

template <typename T>
void Queue<T>::push(const T& value)
{
    pthread_mutex_lock(&_wlock);
    _list.push_back(value);
    pthread_mutex_unlock(&_wlock);
}

template <typename T>
T Queue<T>::pop()
{
    pthread_mutex_lock(&_rlock);
    if (_list.empty())
    {
        pthread_mutex_unlock(&_rlock);
        throw "element not found";
    }
    T _temp = _list.front();
    _list.pop_front();
    pthread_mutex_unlock(&_rlock);
    return _temp;
}
Designing a concurrent blocking queue
So far, if a reader thread tries to read from a queue that holds no data, it simply throws an exception and carries on. That is not always what we want; often the reader thread should wait (that is, block itself) until data becomes available. Such a queue is called a blocking queue. How do we make the reader thread wait after it finds the queue empty? One approach is to poll the queue periodically, but since polling gives no guarantee that data will be there, it can waste a lot of CPU cycles. The recommended approach is to use a condition variable, i.e. a variable of type pthread_cond_t.
The code is as follows:
template <typename T>
class BlockingQueue
{
public:
    BlockingQueue()
    {
        pthread_mutexattr_init(&_attr);
        // make the lock recursive
        pthread_mutexattr_settype(&_attr, PTHREAD_MUTEX_RECURSIVE_NP);
        pthread_mutex_init(&_lock, &_attr);
        pthread_cond_init(&_cond, NULL);
    }
    ~BlockingQueue()
    {
        pthread_mutex_destroy(&_lock);
        pthread_cond_destroy(&_cond);
    }
    void push(const T& data);
    bool push(const T& data, const int seconds);  // push with a time-out
    T pop();
    T pop(const int seconds);                     // pop with a time-out
private:
    list<T> _list;
    pthread_mutex_t _lock;
    pthread_mutexattr_t _attr;
    pthread_cond_t _cond;
};
template <typename T>
T BlockingQueue<T>::pop()
{
    pthread_mutex_lock(&_lock);
    while (_list.empty())
    {
        pthread_cond_wait(&_cond, &_lock);
    }
    T _temp = _list.front();
    _list.pop_front();
    pthread_mutex_unlock(&_lock);
    return _temp;
}

template <typename T>
void BlockingQueue<T>::push(const T& value)
{
    pthread_mutex_lock(&_lock);
    const bool was_empty = _list.empty();
    _list.push_back(value);
    pthread_mutex_unlock(&_lock);
    // wake waiting readers only when the queue goes from empty to non-empty
    if (was_empty)
        pthread_cond_broadcast(&_cond);
}
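As with the plain queue, a short usage sketch may help (again illustrative, not from the original text; the function names and delays are invented). The consumer simply calls pop() and sleeps inside pthread_cond_wait() until the producer pushes something, so there is no polling and no exception to catch:

#include <pthread.h>
#include <unistd.h>
#include <cstdio>

// Illustrative demo of the blocking behaviour.
static BlockingQueue<int> g_bqueue;

void* blocking_producer(void*)
{
    for (int i = 0; i < 10; ++i)
    {
        g_bqueue.push(i);
        sleep(1);   // stagger the pushes so the consumer visibly blocks
    }
    return NULL;
}

void* blocking_consumer(void*)
{
    for (int i = 0; i < 10; ++i)
        printf("got %d\n", g_bqueue.pop());   // blocks while the queue is empty
    return NULL;
}

It can be driven with the same pthread_create/pthread_join scaffolding as the earlier Queue sketch.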
There are two aspects of concurrent blocking queue design that need to be noted:
1. Instead of pthread_cond_broadcast you can use pthread_cond_signal. However, pthread_cond_signal releases at least one thread waiting on the condition variable, and that thread is not necessarily the reader that has been waiting the longest. Using pthread_cond_signal does not break the blocking queue, but it may leave some reader threads waiting longer than necessary (a sketch of this variant follows after these notes).
2. Spurious wakeups may occur. After a reader thread is woken up, it must therefore check that the list is really non-empty before proceeding, which is why the while-loop-based pop() above is strongly recommended.
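For comparison with point 1, this is roughly how push() would look with pthread_cond_signal in place of pthread_cond_broadcast. It is only a sketch of the alternative mentioned above, not the article's version; the sole change is the wake-up call on the last line:

// push() variant that wakes a single waiting reader.
template <typename T>
void BlockingQueue<T>::push(const T& value)
{
    pthread_mutex_lock(&_lock);
    const bool was_empty = _list.empty();
    _list.push_back(value);
    pthread_mutex_unlock(&_lock);
    if (was_empty)
        pthread_cond_signal(&_cond);   // wakes one waiter instead of all of them
}

Which waiter gets woken is up to the implementation and the scheduling policy, which is exactly the fairness concern raised in point 1.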
Designing a concurrent blocking queue with a timeout
In many systems, data that cannot be processed within a certain period of time is not processed at all. For example, a news channel's ticker shows real-time stock quotes from financial exchanges and receives new data every n seconds; if some of the earlier data cannot be processed within n seconds, it should be discarded so that the latest information can be displayed. Based on this idea, let's look at how to add a timeout to the queue's push and pop operations: if an operation cannot be completed within the specified time limit, it should not be performed at all.
The code is as follows:
template <typename T>
bool BlockingQueue<T>::push(const T& data, const int seconds)
{
    struct timespec ts1, ts2;
    const bool was_empty = _list.empty();
    clock_gettime(CLOCK_REALTIME, &ts1);
    pthread_mutex_lock(&_lock);
    clock_gettime(CLOCK_REALTIME, &ts2);
    // proceed only if acquiring the lock stayed within the time limit
    if ((ts2.tv_sec - ts1.tv_sec) < seconds)
    // ... (the listing breaks off at this point in the source)
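The listing above stops partway through the timed push. Purely as a sketch of how the two timed operations declared earlier could be completed, and not the article's exact code, the following assumes CLOCK_REALTIME timestamps and pthread_cond_timedwait (the standard way to bound a wait on a condition variable):

#include <cerrno>   // ETIMEDOUT
#include <ctime>    // clock_gettime, struct timespec

// Timed push (sketch): give up if merely acquiring the lock already
// took longer than the allowed number of seconds.
template <typename T>
bool BlockingQueue<T>::push(const T& data, const int seconds)
{
    struct timespec ts1, ts2;
    clock_gettime(CLOCK_REALTIME, &ts1);
    pthread_mutex_lock(&_lock);
    clock_gettime(CLOCK_REALTIME, &ts2);

    bool pushed = false;
    bool was_empty = false;
    if ((ts2.tv_sec - ts1.tv_sec) < seconds)
    {
        was_empty = _list.empty();
        _list.push_back(data);
        pushed = true;
    }
    pthread_mutex_unlock(&_lock);

    if (pushed && was_empty)
        pthread_cond_broadcast(&_cond);
    return pushed;
}

// Timed pop (sketch): wait on the condition variable, but no longer
// than `seconds`; give up if no data arrived in time.
template <typename T>
T BlockingQueue<T>::pop(const int seconds)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += seconds;

    pthread_mutex_lock(&_lock);
    while (_list.empty())
    {
        if (pthread_cond_timedwait(&_cond, &_lock, &deadline) == ETIMEDOUT)
        {
            pthread_mutex_unlock(&_lock);
            throw "timed out waiting for an element";
        }
    }
    T temp = _list.front();
    _list.pop_front();
    pthread_mutex_unlock(&_lock);
    return temp;
}

With these two overloads in place, callers can decide per call whether to wait indefinitely or give up after a bounded amount of time.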