
A Standard Multi-threaded Dynamic Queue

28 Oct 2010 - GPL3 - 6 min read
This is a standard Windows / C++ implementation of a multi-threaded queue.

Introduction

This is a standard Windows / C++ implementation of a multi-threaded queue after:

M. Michael and M. Scott, "Nonblocking algorithms and preemption-safe locking on multiprogrammed shared-memory multiprocessors," Journal of Parallel and Distributed Computing, 51(1):1-26, 1998.

The queue implemented in this article is a de facto standard for a dynamic queue in a multithreaded environment; the paper cited above is one of a long list of references. The queue is also explained in the textbook "C++ Concurrency in Action: Practical Multithreading" by Anthony Williams.

The explicit goal of a multithreaded queue is to let different threads in your application pass messages in the form of specified objects. For instance, you may have producer threads that create messages for consumer threads, which respond to those messages and perform operations on them.

The queue presented in this article is the most basic queue possible, as dictated by the requirement of the fastest possible execution times in a real-time programming environment.

The algorithm - Brief explanation

A multithreaded queue basically has an initialisation section and two operations: push and pop or, alternatively, enqueue and dequeue. In a multithreaded environment, the emphasis is on locking and unlocking the resource while concurrently enqueueing or dequeueing items.

The two operations must allow concurrent access to the shared resource. Hence, they must acquire locks (on mutexes) to regulate access to it.

This standard implementation of the dynamic queue solves the problem by representing the queue as a simple singly linked list that is never empty: it always contains at least one dummy node. This permits concurrent access to the head and the tail of the queue, as at least one element is always present to prevent concurrent access to the same node. Hence, the push and pop operations each need only one lock, regulating concurrent access to the tail and to the head of the queue, respectively.

The two locks never need to be held simultaneously: each operation accesses the head or the tail of the queue in complete independence from the other. Each lock is held only to serialize multithreaded access to the head or to the tail; it is never needed to serialize simultaneous access to both ends of the queue.

Hence push and pop run in complete independence and may execute in parallel without disrupting the state of the queue. The locks are needed only so that each of the two operations can be safely invoked from multiple threads; i.e., we serialize access to the head or to the tail of the queue. This is the fastest lock-based implementation possible for a multithreaded queue. Non-blocking algorithms are possible, but they require considerably more complex code.

The algorithm - Pseudo-code

This is a brief description, in pseudo-code, of how the queue works in a procedural language.

In the code associated with the article, the pseudo-code is translated into C++; hence, the initialisation is performed in the constructor of the queue.

Initialisation

C++
structure node_t  {value: data type, next: pointer to node_t}
structure queue_t {Head: pointer to node_t, Tail: 
          pointer to node_t, H_lock: lock type, T_lock: lock type}

INITIALIZE(Q: pointer to queue_t)
node = new node()              # Allocate a free node
node->next = NULL              # Make it the only node in the linked list
Q->Head = Q->Tail = node       # Both Head and Tail point to it
Q->H_lock = Q->T_lock = FREE   # Locks are initially free

Enqueueing and dequeueing items (push and pop, respectively)

C++
ENQUEUE or PUSH(Q: pointer to queue_t, value: data type)
node = new node()        # Allocate a new node from the free list
node->value = value      # Copy enqueued value into node
node->next = NULL        # Set next pointer of node to NULL
lock(&Q->T_lock)         # Acquire Tail lock in order to access Tail
   Q->Tail->next = node  # Link node at the end of the linked list
   Q->Tail = node        # Swing Tail to node
unlock(&Q->T_lock)       # Release Tail lock

DEQUEUE or POP(Q: pointer to queue_t, pvalue: pointer to data type): boolean
lock(&Q->H_lock)              # Acquire Head lock in order to access Head
   node = Q->Head             # Read Head
   new_head = node->next      # Read next pointer
   if (new_head == NULL)      # Is queue empty?
   unlock(&Q->H_lock)      # Release Head lock before return
      return FALSE            # Queue was empty
   endif
   *pvalue = new_head->value  # Queue not empty. Read value before release
   Q->Head = new_head         # Swing Head to next node
unlock(&Q->H_lock)            # Release Head lock
free(node)                    # Free node
return TRUE                   # Queue was not empty, dequeue succeeded
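The pseudo-code above can be sketched in portable C++ as follows. This is only an illustrative translation: the article's actual class, CMultiThreadSingleQueue, uses Win32 synchronisation primitives, while this sketch uses std::mutex, and all names here are assumptions, not the article's source.

```cpp
#include <cstddef>
#include <mutex>

// Minimal two-lock queue sketch after the pseudo-code above.
// Items are stored by pointer (void*), as in the article.
class TwoLockQueue
{
    struct Node
    {
        void* pValue;
        Node* pNext;
    };

public:
    TwoLockQueue()                       // INITIALIZE
    {
        Node* pDummy = new Node;         // allocate the dummy node
        pDummy->pNext = NULL;
        m_pHead = m_pTail = pDummy;      // Head and Tail both point to it
    }

    ~TwoLockQueue()
    {
        while (m_pHead != NULL)          // free all remaining nodes
        {
            Node* p = m_pHead;
            m_pHead = m_pHead->pNext;
            delete p;
        }
    }

    void Push(void* pValue)              // ENQUEUE
    {
        Node* pNode   = new Node;        // allocate outside the lock
        pNode->pValue = pValue;
        pNode->pNext  = NULL;
        std::lock_guard<std::mutex> lock(m_TailLock); // T_lock
        m_pTail->pNext = pNode;          // link node at the end of the list
        m_pTail = pNode;                 // swing Tail to the new node
    }

    bool Pop(void*& pValue)              // DEQUEUE
    {
        Node* pNode;
        {
            std::lock_guard<std::mutex> lock(m_HeadLock); // H_lock
            pNode = m_pHead;
            Node* pNewHead = pNode->pNext;
            if (pNewHead == NULL)        // only the dummy node left: empty
                return false;
            pValue  = pNewHead->pValue;  // read value before moving Head
            m_pHead = pNewHead;          // swing Head; pNewHead becomes dummy
        }
        delete pNode;                    // free the old dummy outside the lock
        return true;
    }

private:
    Node*      m_pHead;
    Node*      m_pTail;
    std::mutex m_HeadLock;
    std::mutex m_TailLock;
};
```

Note how Pop deletes the old dummy node, not the node holding the value: the node whose value was just read becomes the new dummy, which is what keeps the list permanently non-empty.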

The C++ solution associated with the article keeps the "move semantics" of the pseudo-code above: pointers are used to pass and store by reference the objects (items) composing the queue. This contrasts with a template-based approach in which the elements of the queue are stored by value, i.e., with "copy semantics".

In real-time programming environments, where the fastest possible execution times are of primary concern, the "move semantics" is the standard solution to this kind of problem, as it is inherently faster than the "copy semantics" approach. Hence, the queue presented in this article is implemented with pointers, respecting the philosophy of the original paper from which this work stems. A clear exception to this rule is, of course, when the template parameter is itself a pointer; then the "move semantics" is regained. A template version of the queue is provided for such purposes.
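The "copy semantics" template variant discussed above can be sketched like this (an illustration under assumed names, not the article's actual template class; std::mutex stands in for the Win32 primitives the article uses):

```cpp
#include <cstddef>
#include <mutex>

// Template two-lock queue storing items by value ("copy semantics"):
// Push copies the item into the node, Pop copies it out. When T is a
// pointer type, the copy is just a pointer copy, so the "move
// semantics" of the pointer-based queue is regained.
template <typename T>
class TwoLockQueueT
{
    struct Node
    {
        T     value;
        Node* pNext;
        Node() : value(), pNext(NULL) {}
    };

public:
    TwoLockQueueT()
    {
        m_pHead = m_pTail = new Node;      // dummy node: list never empty
    }

    ~TwoLockQueueT()
    {
        while (m_pHead != NULL)
        {
            Node* p = m_pHead;
            m_pHead = m_pHead->pNext;
            delete p;
        }
    }

    void Push(const T& value)              // copies the item into the node
    {
        Node* pNode  = new Node;
        pNode->value = value;
        std::lock_guard<std::mutex> lock(m_TailLock);
        m_pTail->pNext = pNode;
        m_pTail = pNode;
    }

    bool Pop(T& value)                     // copies the item out of the node
    {
        Node* pNode;
        {
            std::lock_guard<std::mutex> lock(m_HeadLock);
            pNode = m_pHead;
            Node* pNewHead = pNode->pNext;
            if (pNewHead == NULL)
                return false;              // only the dummy node: empty
            value   = pNewHead->value;
            m_pHead = pNewHead;
        }
        delete pNode;
        return true;
    }

private:
    Node*      m_pHead;
    Node*      m_pTail;
    std::mutex m_HeadLock;
    std::mutex m_TailLock;
};
```

For large or expensively copyable T, each Push and Pop pays the cost of a full copy, which is exactly the overhead the pointer-based queue avoids.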

Using the code

Multiple threads call the push method to queue new items, and multiple threads call the pop method to retrieve them. Each object is passed by reference via its pointer.

Let's declare our queue:

C++
CMultiThreadSingleQueue Queue;

Using the push method is as simple as:

C++
CObject* ptrToObj = &SomeObject;
Queue.Push( (void*) ptrToObj );

Using the pop method is as simple as:

C++
void* ptrToObj = NULL;
while ( Queue.Pop( ptrToObj ) )
{
  // Do something with the object, for instance
  if ( ptrToObj )
     ((CObject*) ptrToObj)->DoWork();
}

It is important to note that, for continuous operation, the pop method should be used in a polling loop or synchronised with external events. For instance, in a polling situation, one would use the following approach:

C++
while (true) 
{
   void* ptrToObj = NULL;
   while ( Queue.Pop( ptrToObj ) )
   {
      // Do something with the object, for instance
      if ( ptrToObj )
         ((CObject*) ptrToObj)->DoWork();
   }
   Sleep(nTimeInMilliseconds); // Polling wait in the loop
   if ( m_bDoStop ) // boolean to exit thread
       break;
}

while for an event-based, synchronised operation mode, one would use the following approach:

C++
while (true) 
{
   // Wait for event signalled
   WaitForSingleObject(hHandleToEvent,nTimeInMilliseconds);
   // Do work
   void* ptrToObj = NULL;
   while ( Queue.Pop( ptrToObj ) )
   {
      // Do something with the object, for instance
      if ( ptrToObj )
         ((CObject*) ptrToObj)->DoWork();
   }
   if ( m_bDoStop ) // boolean to exit thread
       break;
}

About the demo application

This application demonstrates the use of some standard "queues" used in real-time programming techniques for inter-thread communication.

The sample offers examples of the use of:

  1. Class CMultiThreadSingleQueue: Concurrent unbounded queue with First In First Out (FIFO) semantics. This queue's intended use is as a buffer between multiple producers and multiple consumers. The sample shows its use with 3 producers and 2 consumers.
  2. Class CCircBuffer: Concurrent bounded queue with First In First Out (FIFO) semantics. This queue's intended use is as a buffer between a single producer and a single consumer. Multiple consumers and producers are not allowed. Messages are passed between the two threads in FIFO order and may also be stored in a shared memory segment for inter-process communication.
  3. Class CDoubleBuffer: Flip-flop double buffer to be used between two threads only: one producer and one consumer. The double buffer is a structure with two identical buffers used concurrently by the two threads: while one thread writes to one buffer, the other thread may read from the second; when both operations end, the buffers are swapped and the process continues. Messages are passed between the two threads in FIFO order and may also be stored in a shared memory segment for inter-process communication.
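The flip-flop mechanism of item 3 can be sketched as follows. This is an illustrative skeleton, not the article's CDoubleBuffer: the synchronisation that guards Swap() between the two threads, and the shared-memory variant, are omitted, and all names are assumptions.

```cpp
#include <cstring>

// Flip-flop double buffer sketch: the producer writes into one buffer
// while the consumer reads the other; Swap() exchanges the roles once
// both sides have finished their current pass.
class DoubleBuffer
{
public:
    DoubleBuffer() : m_pWrite(m_Buf0), m_pRead(m_Buf1) {}

    char*       WriteBuffer()      { return m_pWrite; } // producer side
    const char* ReadBuffer() const { return m_pRead;  } // consumer side

    void Swap()                    // call when both sides are done
    {
        char* p  = m_pWrite;
        m_pWrite = m_pRead;
        m_pRead  = p;
    }

private:
    char  m_Buf0[256];
    char  m_Buf1[256];
    char* m_pWrite;
    char* m_pRead;
};
```

Because only the two role pointers are exchanged, Swap() is cheap regardless of the buffer size, which is the attraction of the flip-flop scheme.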

The sample application also demonstrates the techniques used to manage producer and consumer (worker) threads and to write a simple multi-tabbed dialog application (class CTabCtrlEx). The GridCtrl library must be compiled before the demo app. The working solutions are for Visual Studio 2008 Service Pack 1.

Points of interest

This is the de facto standard multithreaded queue with locks as used in research: it serves as the test bench against which other implementations are benchmarked, and it is the founding algorithm of many later works, such as the concurrent queue of the Intel Threading Building Blocks.

History

  • 6 October 2010 - First draft.
  • 7 October 2010 - Added more information about specific points.
  • 21 October 2010 - Added a demo Win32/MFC application.

License

This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3)


Written By
Software Developer (Senior)
Italy
Senior Software Developer in C/C++ and Oracle.
Ex-physicist holding a Ph.D. on x-ray lasers.
