A ThreadPool implementation

Posted 27 Apr 2005, last updated 22 May 2005

This article describes a ThreadPool implementation.


This article is about a thread pool. The pool manages requests from clients. A request consists of a pointer to a function or method, a parameter for it, an identity number, and a priority number. Management means storing the requests from clients and executing them in parallel on different threads. The order of execution is by priority; the scheduler is non-preemptive: once a thread begins executing a request, nothing can stop it.

This code touches several interesting subjects: OOD and polymorphism, multi-threading, synchronization, generic programming (templates), and STL containers.

I attached the source file of the pool class and a simple main which shows how to use it.


The interface of the pool class is very simple. It contains only a few functions.

  • The ctor receives a number that limits how many threads may run in parallel. For example, if this number is 1 we get sequential execution.
  • The Run function creates the main thread, which does all the management. This function should not be called more than once; there is only one main thread per pool object. If you want several pools, create several objects (Pool p1 (2), p2 (2);); the pool class is not a singleton.
  • The Stop function kills the main thread. Requests that have already been posted are preserved.
  • The Enqueue function adds a new request to the pool. The name comes from the fact that requests are stored in a priority queue. This function is thread-safe.
  • The Wait function waits until all requests are finished.
  • The wait-for-request function waits until a specific request is finished.
  • The dtor stops the management; there is no Pause function or anything like that. If you want to stop a pool, destroy it.

The order of calling these functions is not important; you can Run first and then Enqueue a new request, or the other way around.




A request is a function or method to be executed. Function means a C-style function, with the calling convention "__cdecl". Method means a member function of some class, i.e. a non-static function inside a class, with the calling convention "__thiscall". Functions are the simple case: you pass a pointer to the function and it works. Methods do not work so easily: to execute a method you need an object of its class. The following shows a way to execute a method; see the ThreadRequestMethod.h file.

template <typename ClassT, typename ParamT = LPVOID>
class ThreadRequestMethod : public ThreadRequestBase
{
public:
    virtual void Execute() { (m_Client->*m_ClientMethod)(m_Param); }

private:
    ClassT *m_Client;
    void (ClassT::*m_ClientMethod)(ParamT *param);
    ParamT *m_Param;
};
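As a standalone illustration of the pointer-to-member call above (the Printer class and run_method function are made up for this example), this is all the "__thiscall" dispatch amounts to:

```cpp
// Hypothetical client class; any class with a matching method works.
struct Printer {
    int last = 0;
    void Show(int *p) { last = *p; }   // the "client method"
};

// Mimics ThreadRequestMethod<Printer, int>::Execute(): a method can
// only be called through an object, unlike a plain C-style function.
int run_method(Printer *client, void (Printer::*method)(int *), int *param) {
    (client->*method)(param);   // dispatch through the pointer-to-member
    return client->last;
}
```

The pool stores exactly these three pieces: the object pointer, the member-function pointer, and the parameter.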

Priority Queue

To implement the priority queue I used STL, which provides a ready-made priority queue. There is a way to "teach" the STL queue to order its members by our priorities.

The way is to define a functor: a structure derived from the "binary_function" object that overrides operator(). This functor is defined inside a private struct of the pool.

// functor - used in STL container - priority queue.
template <class ClassT>
struct less_ptr : public binary_function<ClassT, ClassT, bool>
{
    bool operator()(ClassT x, ClassT y) const
    {
        return x->GetPriority() < y->GetPriority();
    }
};
This functor works for every class that has a function "int GetPriority()". Now the definition of the priority queue is as follows:

priority_queue <
    ThreadRequestBase*,
    vector<ThreadRequestBase*>,
    less_ptr<ThreadRequestBase*>
    > RequestQueue;

The third template argument defines how priorities are managed.
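As a minimal standalone sketch of the same technique (with a hypothetical Req type standing in for the request wrapper, and a non-template comparator, since binary_function was removed in C++17):

```cpp
#include <queue>
#include <vector>

// Hypothetical request type with the GetPriority() the functor expects.
struct Req {
    int prio;
    int GetPriority() const { return prio; }
};

// Comparator over pointers: without it, the queue would order by
// pointer value, not by the pointed-to request's priority.
struct less_req_ptr {
    bool operator()(const Req *x, const Req *y) const {
        return x->GetPriority() < y->GetPriority();   // max-heap on priority
    }
};

// Pops the highest-priority request, as the pool's dequeue step would.
int top_priority(std::vector<Req *> reqs) {
    std::priority_queue<Req *, std::vector<Req *>, less_req_ptr> q;
    for (Req *r : reqs) q.push(r);
    return q.top()->GetPriority();
}
```

Whatever order requests arrive in, the highest priority comes out first.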

The queue actually contains request wrappers. A wrapper wraps the client's request and holds a pointer to the pool. This is necessary because a function executed in a separate thread must be a C function or a static member function of some class (I don't know of another option), so the wrapper carries the pool's "this" pointer.

Multi-threading and synchronization

Every request gets a thread to execute its function. Many threads can run in parallel, and there are variables that every thread touches. To make those variables safe I protected them with "Critical Sections" from the Windows API. In the first version of this pool I used mutexes, but that is not efficient, as you can see from the following table from MSDN.

Table 1. Synchronization Objects Summary

                   Relative speed   Cross process   Resource counting       Supported platforms
Critical Section   Fast             No              No (Exclusive Access)   9x / NT / CE
Mutex              Slow             Yes             No (Exclusive Access)   9x / NT / CE
Semaphore          Slow             Yes             Yes                     9x / NT
Event              Slow             Yes             Yes*                    9x / NT / CE
Metered Section    Fast             Yes             Yes                     9x / NT / CE

* Events can be used for resource counting, but they do not keep track of the count for you.

This is a way to ensure that only one thread at a time will execute "// do something ..." (the member name m_CriticalSection is illustrative; see the source for the exact name):

EnterCriticalSection (&m_CriticalSection);
// do something ...
LeaveCriticalSection (&m_CriticalSection);
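The same mutual-exclusion pattern can be sketched portably with std::mutex standing in for the Win32 CRITICAL_SECTION (the names g_m, g_counter, and bump are made up for this example):

```cpp
#include <mutex>
#include <thread>

std::mutex g_m;      // portable stand-in for the CRITICAL_SECTION
int g_counter = 0;   // the shared variable every thread touches

// Only one thread at a time executes the protected region.
void bump(int times) {
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> lock(g_m);  // Enter/LeaveCriticalSection
        ++g_counter;                            // "do something"
    }
}

int bump_from_two_threads(int times) {
    g_counter = 0;
    std::thread a(bump, times), b(bump, times);
    a.join();
    b.join();
    return g_counter;   // without the lock, increments could be lost
}
```
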
Another feature of the Windows API that I used is events; the reason is to prevent wasting CPU. This is a way to create an event:

HANDLE event = CreateEvent (NULL, false, false, "");

This is a way to use an event:

  1. WaitForSingleObject (event, INFINITE);
  2. SetEvent (event);

See function "RunMainThread" to see how it saves CPU time.
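The wait/signal pattern above maps onto std::condition_variable in portable C++; a minimal sketch of an auto-reset event (the Event class and wait_for_worker are made up for this example):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Portable analogue of the auto-reset Win32 event:
// WaitForSingleObject -> Wait(), SetEvent -> Set().
struct Event {
    std::mutex m;
    std::condition_variable cv;
    bool signaled = false;

    void Set() {
        { std::lock_guard<std::mutex> l(m); signaled = true; }
        cv.notify_one();
    }
    void Wait() {   // blocks without burning CPU, like WaitForSingleObject
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [this] { return signaled; });
        signaled = false;   // auto-reset semantics
    }
};

int wait_for_worker() {
    Event done;
    int result = 0;
    std::thread worker([&] { result = 42; done.Set(); });
    done.Wait();            // this thread sleeps until signaled
    worker.join();
    return result;
}
```

The waiting thread consumes no CPU until the signal arrives, which is exactly what the pool's main thread relies on.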

How does it work

This is the life cycle of a request. First, the client side.

Creating a request:

ThreadRequestBase *r = new ThreadRequest<Param>(&func, param, priority);

Submitting it to the pool (the call is Enqueue; see the source for its exact signature):

Pool->Enqueue (r);

Starting the pool:

bool res = Pool->Run();

Now the pool side (the inside of the pool).

The request is added to the priority queue in the Enqueue function. The main thread function contains an infinite loop that checks whether the pool can execute another request. If it can, it dequeues a request from the queue, runs a pool function (not the client's request function directly), and increments the variable that counts the running threads. Inside that pool function the client's function is executed; afterwards it decrements the number of running threads and signals the main thread, via SetEvent, that a request has finished. Using this event prevents the main thread from wasting CPU: when it cannot execute another request, it waits for the signal in a blocking call.
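The dispatch described above can be sketched as follows (run_all, the running counter, and the use of yield are all simplifications made up for this example; the real pool blocks on an event instead of spinning):

```cpp
#include <atomic>
#include <functional>
#include <queue>
#include <thread>
#include <vector>

// Simplified dispatch loop: dequeue while fewer than maxThreads
// requests run; each worker decrements the count when its client
// function returns. Returns how many requests were dispatched.
int run_all(std::queue<std::function<void()>> q, int maxThreads) {
    std::atomic<int> running(0);
    int executed = 0;
    std::vector<std::thread> threads;
    while (!q.empty()) {
        if (running.load() < maxThreads) {
            auto fn = q.front();
            q.pop();
            ++running;
            ++executed;
            // the "pool function": run the client's request, then
            // announce that a slot is free again
            threads.emplace_back([fn, &running] { fn(); --running; });
        } else {
            std::this_thread::yield();  // real pool waits on an event here
        }
    }
    for (auto &t : threads) t.join();
    return executed;
}
```
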

To be improved

The pool creates a new thread for each request, and when the request finishes the thread dies. This is not an efficient way, but it is very simple. The efficient solution for pool management is to create the right number of threads once and reuse them to execute requests.
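A sketch of that suggested improvement, with long-lived workers pulling from one shared queue (SimplePool is a made-up name; this is not the article's implementation):

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// N long-lived workers consume a shared queue; no per-request threads.
class SimplePool {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> q;
    std::vector<std::thread> workers;
    bool stopping = false;

public:
    explicit SimplePool(int n) {
        for (int i = 0; i < n; ++i)
            workers.emplace_back([this] {
                for (;;) {
                    std::function<void()> job;
                    {
                        std::unique_lock<std::mutex> l(m);
                        cv.wait(l, [this] { return stopping || !q.empty(); });
                        if (stopping && q.empty()) return;
                        job = std::move(q.front());
                        q.pop();
                    }
                    job();   // the thread survives to take the next job
                }
            });
    }
    void Enqueue(std::function<void()> job) {
        { std::lock_guard<std::mutex> l(m); q.push(std::move(job)); }
        cv.notify_one();
    }
    ~SimplePool() {   // like the article's dtor: to stop the pool, destroy it
        { std::lock_guard<std::mutex> l(m); stopping = true; }
        cv.notify_all();
        for (auto &t : workers) t.join();
    }
};
```

Workers drain the remaining queue before exiting, matching the rule that posted requests are preserved.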


This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

A list of licenses authors might use can be found here


About the Author

Ratner Yuri
Web Developer
Israel Israel
I am a student finishing an MSc in computer science, and I work at Polycom as a C++ programmer.


Comments and Discussions

My vote of 1
Member 4733739, 3-May-11 4:36

Could you explain?
WREY, 16-May-05 14:51

Re: Could you explain?
Ratner Yuri, 16-May-05 22:03

Re: Could you explain?
WREY, 19-May-05 12:02

Re: Could you explain?
Ratner Yuri, 20-May-05 7:14

Alternative to wrapper class
Joe Pizzi, 11-May-05 18:05

Why a priority queue?
yafan, 28-Apr-05 6:10

Re: Why a priority queue?
Ratner Yuri, 29-Apr-05 7:55

Re: Why a priority queue?
staceyw, 1-May-05 6:45

Re: Why a priority queue?
yafan, 23-May-05 11:01

I see your point.

From my own experience, I have never seen the idea of protecting a common resource with a critical section to be all that good, especially when a substantial number of threads are involved. Firstly, critical sections can cause contention. Secondly, they manifest a phenomenon called a "lock convoy". Lock convoys are bad when you are designing an application to be scalable, since all the threads end up being serialized on a single critical section. It is probably better to use InitializeCriticalSectionAndSpinCount(...) instead; however, even this doesn't eradicate the problem completely.

Some issues
Tim Smith, 28-Apr-05 4:58

Re: Some issues
Ratner Yuri, 29-Apr-05 8:32

Re: Some issues
Paolo Vernazza, 3-May-05 5:50

+ InterlockedIncrement
BorisKoltsov, 3-May-05 22:22

Re: Some issues
Paolo Vernazza, 4-May-05 0:08

nice.
Angus He, 28-Apr-05 0:52

Re: nice.
Ratner Yuri, 29-Apr-05 8:33


Last Updated 23 May 2005
Article Copyright 2005 by Ratner Yuri
Everything else Copyright © CodeProject, 1999-2018