
Windows Thread Pooling and Execution Chaining

An article that describes a template-based thread pool implementation with execution chaining.


Introduction

Thread pooling describes a technique by which threads of execution are managed and to which work is distributed. Additional semantics such as concurrency control may also be defined. Thread pooling is a nice way to:

  • Manage complexity:

    Thread pooling is a natural fit for state-based processing: if you can decompose your system into a set of state machines, thread pooling works nicely in realizing your design. In most cases this has the added benefit of simplifying the debugging of multithreaded applications.

  • Make your applications scale:

    Properly implemented, the thread pool can enforce concurrency limits that will make your application scale.

  • Introduce new code while minimizing risk:

    Thread pooling lets you break execution into work units that are best described as development sandboxes. Sandboxes are fun and safe! How? Thread pooling promotes loose coupling between work units and naturally separates data from processing. Any coupling between units typically happens at a well-defined data point. This can be a lot easier to maintain over time, especially in large multithreaded applications.

Design

The conceptual model of a thread pool is simple: the pool starts threads running; work is queued to the pool; available threads execute the queued work. Using templates, the pool can be defined independently of the thread/work implementation (a technique known as static polymorphism).

Figure 1. Thread pool collaboration diagram

The thread pool is responsible for thread creation; threads commence execution at worker::thread_proc. Work is queued to the thread pool, and the worker prepares each request before it is placed on the queue. When a thread is available to process work, it requests pending work from the thread pool with thread_pool::get_queued_status; if there is no pending work, the thread is suspended until work becomes available.
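
For readers who want to see the moving parts in one place, below is a minimal, self-contained sketch of this collaboration that uses standard C++ threads in place of the article's Win32 implementation. Only the names (work_unit, thread_pool, worker_thread::thread_proc, queue_request, get_queued_status) follow the text; the locking, ownership, and signatures are illustrative assumptions, not the article's actual code.

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct work_unit {
    virtual ~work_unit() {}
    virtual void process() = 0;   // the work itself runs here, on a pool thread
};

// Parameterized singleton: one pool instance per worker type.
template <typename Worker>
class thread_pool {
public:
    static thread_pool& instance() { static thread_pool pool; return pool; }

    // Create the pool threads; each one starts at Worker::thread_proc.
    void initialize(std::size_t count = 4) {
        for (std::size_t i = 0; i != count; ++i)
            m_threads.emplace_back(&Worker::thread_proc, std::ref(*this));
    }

    // Queue work and wake one waiting thread.
    void queue_request(work_unit* p_work) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push(p_work); }
        m_ready.notify_one();
    }

    // Block until work is pending; a null pointer tells the caller to exit.
    work_unit* get_queued_status() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_ready.wait(lock, [this] { return m_stop || !m_queue.empty(); });
        if (m_queue.empty()) return nullptr;
        work_unit* p_work = m_queue.front();
        m_queue.pop();
        return p_work;
    }

    // Let waiting threads drain any remaining work, then join them.
    void shutdown() {
        { std::lock_guard<std::mutex> lock(m_mutex); m_stop = true; }
        m_ready.notify_all();
        for (std::size_t i = 0; i != m_threads.size(); ++i) m_threads[i].join();
        m_threads.clear();
    }

private:
    thread_pool() : m_stop(false) {}
    std::vector<std::thread>  m_threads;
    std::queue<work_unit*>    m_queue;
    std::mutex                m_mutex;
    std::condition_variable   m_ready;
    bool                      m_stop;
};

struct worker_thread {
    // Pool threads start here: pull pending work and process it until shutdown.
    // This sketch gives the worker ownership of finished work; the article's
    // implementation manages lifetime with smart_pointer instead.
    static void thread_proc(thread_pool<worker_thread>& pool) {
        while (work_unit* p_work = pool.get_queued_status()) {
            p_work->process();
            delete p_work;
        }
    }
};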

Chaining

While our worker implementation allows us to queue work, we can go one step further. The thread pool promised to help us break problems into discrete, state-maintaining steps, minimizing complexity and risk while maximizing the raw power we can squeeze out of the box. However, the current implementation only lets us queue one piece of work at a time, which makes it cumbersome to logically group sequential work together. We also need some way of knowing when that work is done so that we can queue more.

Example:

Take the system down; rebuild the system data; bring the system back online.

Three steps, three pieces of work. I would like to be able to write[1]:

thread_pool::instance().queue_request(
    (core::chain(), new system_down, new rebuild_data, new system_up));

What is chain? A work unit! Rather, chain simply acts as a container for the real work unit, which in turn is just a container of work units:

struct chain {
    struct data : work_unit, std::list<smart_pointer<work_unit> > {
        void process();
    };

    chain() : m_work(new data) {}
    chain& operator,(work_unit* p_work);
    operator work_unit*() { return m_work; }

    smart_pointer<data> m_work;
};  // struct chain

chain::operator,() does just as advertised:

m_work->push_back(p_work);
return *this;

and chain::data::process() is just as simple:

front()->process();
pop_front();

// if not empty, requeue
if (true == empty()) return;
thread_pool<worker_thread>::instance().queue_request(this);
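
Putting the two bodies together, and assuming the declarations above, the member definitions look along these lines (a sketch, not the article's exact source):

// Sketch of the full member definitions, assuming the chain declaration above.
chain& chain::operator,(work_unit* p_work)
{
    m_work->push_back(p_work);   // append the next unit to the chained list
    return *this;                // returning *this lets the commas keep chaining
}

void chain::data::process()
{
    // process exactly one unit per call...
    front()->process();
    pop_front();

    // ...then, if anything remains, requeue ourselves so the next unit runs
    // as a fresh piece of work on the pool
    if (empty()) return;
    thread_pool<worker_thread>::instance().queue_request(this);
}

Requeueing rather than looping in place means the remaining units compete for a pool thread like any other queued work, so a long chain cannot monopolize a thread.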

Using the code

Initialize the thread pool you would like to use. As thread pools are parameterized singletons, there will be a thread pool instance for each type of worker used. The class global::thread_pool is a convenient typedef for core::thread_pool<core::worker_thread>.

global::thread_pool::instance().initialize();

If you choose core::worker_thread as your worker implementation, all work must derive from core::work_unit, and the work is performed when process is called.

struct mywork : core::work_unit
{
    void process() throw()
    {
        // work is processed here
    }
};

To queue work, create an instance of your class and initialize it as necessary. Use thread_pool::queue_request to queue the work.

// demonstrate chaining
global::thread_pool::instance().queue_request(
    (core::chain(), new work_1, new work_2, new work_3));

To shut down the thread pool, use thread_pool::shutdown.

global::thread_pool::instance().shutdown();

About the demo program

The demo program does the following (a rough sketch of the equivalent code follows the list):

  • Initializes the global::thread_pool instance.
  • Instantiates three different types of work.
  • Instantiates a chain tying the work together.
  • Queues and processes the work.
  • Shuts down the thread pool.
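
As a rough sketch, those steps correspond to a main() along the following lines. It assumes the classes shipped with the article's download (core::work_unit, core::chain, global::thread_pool); the header name and the three work types here merely stand in for the demo's own, and whatever the demo does to wait for the chained work to finish before shutting down is not shown.

// Sketch of the demo's flow; assumes the classes from the article's download.
#include "thread_pool.h"   // assumed header name; see the download for the real one

struct work_1 : core::work_unit { void process() throw() { /* take the system down */ } };
struct work_2 : core::work_unit { void process() throw() { /* rebuild the system data */ } };
struct work_3 : core::work_unit { void process() throw() { /* bring the system back online */ } };

int main()
{
    // initialize the global::thread_pool instance
    global::thread_pool::instance().initialize();

    // instantiate the work, tie it together with a chain, and queue it;
    // the pool processes the three units in order
    global::thread_pool::instance().queue_request(
        (core::chain(), new work_1, new work_2, new work_3));

    // shut down the thread pool
    global::thread_pool::instance().shutdown();
    return 0;
}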

If you work on a team, you can quickly divide the work into work units and distribute them across the team to implement. Each work unit can be tested independently and integrated into the final product. Each person has a sandbox to play in.

Thread pools are a fantastic tool for writing large, scalable systems quickly and safely without sacrificing performance. Happy Coding!

Points of interest

[1] I chose to overload the comma operator because it makes for nice lists; it is a useful tool for writing self-documenting code.

History

  • 24/04/2004 Article creation




