Once again, this is trying to implement CVs with events, which isn't a trivial task, and Schmidt did a damn good job.
But, the mistake you and others are making is saying that because it is hard (if not impossible) to make CVs using events, then events must be bad.
I have read the Schmidt article multiple times and I can never find where he says that events don't work for MT applications. He has a lot of valid criticisms for events, but he never says they don't work. (If I missed this, then PLEASE quote the exact phrase.)
Your logic is that since events don't work for every application, then they work for none. This is totally flawed logic. All I am saying is that events work just fine for many applications, but not for all. I don't even make the claim that they are easy.
All I want is some proof. All I get is opinions or articles that don't support the claim being made.
In turn, as you requested, I have provided an example where events work just fine. As expected, this isn't good enough even though it satisfies the logical requirements of the argument.
Tim Smith
I know what you're thinking punk, you're thinking did he spell check this document? Well, to tell you the truth I kinda forgot myself in all this excitement. But being this here's CodeProject, the most powerful forums in the world and would blow your head clean off, you've got to ask yourself one question, Do I feel lucky? Well do ya punk?
---
But, the mistake you and others are making is saying that because it is hard (if not impossible) to make CVs using events, then events must be bad.
I don't believe this is a mistake. What makes it difficult to use events to implement a CV is the need to synchronize with external state. So every problem you face when implementing a low level CV synchronization object will be the same problems you face when trying to use events in (most) cases where you have to synchronize with external state. Again, to prove me wrong on this all you have to do is implement the bounded buffer example (or something similar) using events.
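For reference, here is roughly what the bounded buffer looks like with a condition variable, sketched in portable C++11 rather than Win32 (the class and member names are mine, not from any of the articles discussed):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal bounded buffer. The predicate re-check in the while loop is
// exactly the "synchronize with external state" step that raw events
// make difficult: wait() atomically releases the mutex and re-acquires
// it before the predicate is tested again.
class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t size) : m_size(size) {}

    void Send(int value) {
        std::unique_lock<std::mutex> lock(m_mutex);
        while (m_queue.size() == m_size)   // re-checked under the lock after every wakeup
            m_notFull.wait(lock);
        m_queue.push(value);
        m_notEmpty.notify_one();
    }

    int Receive() {
        std::unique_lock<std::mutex> lock(m_mutex);
        while (m_queue.empty())
            m_notEmpty.wait(lock);
        int value = m_queue.front();
        m_queue.pop();
        m_notFull.notify_one();
        return value;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_notFull;
    std::condition_variable m_notEmpty;
    std::queue<int> m_queue;
    std::size_t m_size;
};
```

Because the wait releases the mutex and tests the buffer state as one atomic step, the "post lock, pre wait" and "post wait, pre lock" windows the event versions in this thread have to work around simply don't exist.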
Your logic is that since events don't work for every application, then they work for none.
Again, and please hear me this time because I don't like repeating myself, this has never been my claim. My claim is that they don't work for the most common applications, and it's often difficult to distinguish the cases where they won't work without a lot of experience and thought. If you're going to argue with me, the least you can do is limit your arguments to what I've claimed.
I can turn your own argument against you:
Your logic is that since events can be used safely in a few applications, then they are always, or at least generally, safe.
All I want is some proof. All I get is opinions or articles that don't support the claim being made.
It appears that you think this is true only because you continue to think I've made an argument that I have not. You and I are in complete agreement that there are cases where events can be used effectively and safely. But this does not mean that events are, in _general_, safe. More importantly, it doesn't mean that they should be included in an MT library, or that MS hadn't made a mistake in providing only events for synchronized waiting. The latter doesn't sound like something you can dispute after some admissions you've made here, and the former should not cause the concern you exhibit, since you've already admitted that CVs are no more difficult to use than events and can solve the same problems.
BTW, don't read the above as any sort of attack on MS. I defend MS on a regular basis, and in general like the technologies they produce. But they do make mistakes, like all the rest of us.
William E. Kempf
---
You said that events work out of "pure luck". This implies that there is no skill involved at all.
There isn't anything magical about events. The WIN32 implementation is well defined. As long as you don't make assumptions about what they do, they work just fine.
Tim Smith
---
Ok, neutral corners here for a moment.
If we get to the core of the issue here, it all might be choice of words.
My original objection was that events were characterized as unreliable or dangerous to use even when used properly. I think we can both agree that when used correctly, they work just fine. (We just differ on the amount of work it takes to make sure they are used correctly.)
Now, I also think we can agree that events are harder to use than CVs and can be very tricky. This is something I have agreed with ever since I read that article on the monitor object.
IMHO, it isn't valid to just write off events as some people do. They are a very powerful and primitive tool. Unfortunately that also means you can hurt yourself badly if you don't watch out. I think it is a real crime when I run into people who actually believe some of the junk written about events and really do think they are functionally flawed.
If threads and shared pointers end up in the standard, then I have no problems with the BOOST implementations. The shared pointer really seems to be the right combination of elements to meet a very tall design order. The threads implementation is based on a well known and well understood solution. If you asked me, I probably wouldn't have added events either, but my reasons have more to do with the portability than correctness.
My problem is with the one-solution-for-all-problems approach. We have a standard hash_map that works well for most problems, but in my case, it was slow as a dog. Since I know how hash_maps work, I was able to rewrite my version to meet my specific needs. Then we get into shared_ptr, which has its own problems. We all know that the programmers who don't understand why auto_ptr didn't do what they needed won't understand shared_ptr either. We all know someone is going to improperly create two shared_ptrs that reference the same object, thus creating two different counts. We all know that 30 seconds after shared_ptr is standardized, someone is going to try to use it to ref-count a COM object.
But all those errors are not the fault of shared_ptr or auto_ptr. Sure, it would be nice if shared_ptr and auto_ptr didn't allow us to use them in unsafe situations, such as auto_ptr in collections. But we can never protect programmers from themselves. There are plenty of ways to screw up CVs. Just look at the laundry list of pitfalls listed in the monitor object paper from '79. How many programmers have failed to properly maintain their mutexes? How many programs have been created with a collection of worker threads processing events from a CV-based event queue, where the programmer never released the mutex while processing the event, so all the events convoy and the event producers spend all their time waiting to place things on the queue? How many of them, just to get rid of the problem, unlock the mutex after coming out of the wait, leaving the CV unprotected during access?
So even though I agree that events are harder to use, I don't see them as being as hard to use as you do. PThreads isn't idiot proof. You have to understand what you are doing to make a good pthreads program. However, the same holds true for events. In the case of something like an event queue, the pthreads approach closes up two or three very important windows. But it also opens up one or two different errors that programmers might make.
Tim Smith
---
Ok, neutral corners here for a moment.
Good. This post actually shows some hope that we can resolve our differences here.
IMHO, it isn't valid to just write off events as some people do. They are a very powerful and primitive tool. Unfortunately that also means you can hurt yourself badly if you don't watch out. I think it is a real crime when I run into people who actually believe some of the junk written about events and really do think they are functionally flawed.
That depends on your interpretation of "functionally flawed". But that aside, this is where I still can't understand where you're coming from. We agree that CVs are easier to use correctly. We agree that CVs can be used in every case where you could safely use an event, as well as in the numerous cases where you couldn't. Given this, what reasoning would possibly lead you to be so upset with the exclusion of events in Boost.Threads? The *only* argument that I can think of is that events can give you a slight performance benefit in some cases, but the benefit is so slight that it will apply in only a very few cases, and typically those will be cases where you won't be writing portable code anyway. It hardly seems that this argument is strong enough to warrant their inclusion... especially when implementing events on most platforms is going to require a CV, a mutex and a boolean flag anyway. In other words, you can't get the benefit portably anyway.
The threads implementation is based on a well known and well understood solution.
Actually, Boost.Threads wasn't "based on" any particular solution. There have been numerous designs that started more "windows like" before they eventually came around to looking like POSIX. This happened because of analysis and research, not because of any POSIX influence. All that the results indicate is that POSIX was very well designed, which shouldn't be surprising since it's a standard.
If you asked me, I probably wouldn't have added events either, but my reasons have more to do with the portability than correctness.
The reason was never because of "correctness". It was because of "safety" (the same reason that scoped locks are used, for instance). Portability only applies if the goal of the event is to be optimally efficient. A portable event interface is trivial.
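To illustrate that last point, here is roughly what such a portable event interface could look like, sketched with C++11 primitives. The class name ManualResetEvent is mine, and this is a sketch of the idea, not the Boost.Threads API:

```cpp
#include <condition_variable>
#include <mutex>

// Hypothetical portable manual-reset event: exactly the "CV, mutex and
// boolean flag" trio mentioned above.
class ManualResetEvent {
public:
    void Set() {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_signaled = true;
        m_cond.notify_all();   // wake every waiter, as SetEvent does on a manual-reset event
    }
    void Reset() {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_signaled = false;
    }
    void Wait() {
        std::unique_lock<std::mutex> lock(m_mutex);
        // No lost wakeup: the flag is checked under the lock, and the
        // flag persists until Reset, so a Set before the Wait still counts.
        m_cond.wait(lock, [this] { return m_signaled; });
    }
    bool IsSet() {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_signaled;
    }
private:
    std::mutex m_mutex;
    std::condition_variable m_cond;
    bool m_signaled = false;
};
```

The interface is trivial; what you give up relative to a native kernel event is only raw efficiency, which is the portability trade-off described above.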
My problem is with the one solution for all problems. We have a standard hash_map that works well for most problems, but in my case, it was slow as a dog. Since I know how hash_maps work, I was able to rewrite my version that met my specific needs.
I don't get this. First of all, we don't have a standard hash_map. Secondly, are you claiming the design caused the implementation to be "dog slow", or only that the implementation you used was "dog slow"? The latter has nothing at all to do with standardization.
But all those errors are not the fault of the shared_ptr or auto_ptr. Sure it would be nice if shared_ptr and auto_ptr didn't allow us to use them in unsafe situations such as auto_ptr in collections. But we can never protect programmers from themselves.
So are you saying that you just don't like standards? I don't get your argument. We all know printf() can be misused, as can every call to anything in the C or C++ standards. But that's not a reason to not have standards. The logic doesn't fit. Nor does it make sense to not try and prevent as many possible misuses in the design of a standard library component as possible.
There are plenty of ways to screw up CVs. Just look at the laundry list of pitfalls listed in the monitor object paper from 79. How many programmers have failed to properly maintain their mutexes? How many programs have been created with a collection of worker threads processing events from a CV based event queue. Unfortunately the programmer never released the mutex while processing the event so all the events convoy and the event producers spend all their time waiting to place things on the queue? How many of them just to get rid of the problem unlock the mutex after coming out of the wait leaving the CV unprotected during access?
Again, I don't follow your argument. If I read everything you've said at face value, I'd have to conclude that we should simply stop programming altogether, since there are so many examples of misuses of library components. That doesn't make sense. What makes sense is to design for as few possible misuses as you can. That's why CVs were chosen instead of events. That's why Boost.Threads mutexes expose locking through ScopedLocks, and calls to boost::condition::wait() and its variants take a lock instead of a mutex. There are still areas where these can be abused, such as passing a try_lock that failed to lock (though we get an exception in this case), failing to put shared resources back into a valid state when exceptions are thrown, etc.
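The same safety-by-design idea survived into standard C++: the condition variable's wait takes the lock object rather than the bare mutex, so holding the lock is enforced by the interface itself. A minimal sketch using std:: primitives (not the Boost API; the names here are mine):

```cpp
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready = false;

// wait() can only be called with a lock object, so "forgot to hold the
// mutex" becomes a compile error rather than a runtime race.
void WaitForReady() {
    std::unique_lock<std::mutex> lock(m);   // scoped: released even if an exception is thrown
    cv.wait(lock, [] { return ready; });    // predicate re-checked under the lock on each wakeup
}
```

Passing the lock rather than the mutex is the design point being made about boost::condition::wait(): the API shape itself rules out one whole class of misuse.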
So even though I agree that events are harder to use, I don't see them as hard to use as you do. PThreads isn't idiot proof. You have to understand what you are doing to make a good pthreads program. However, the same holds true for events. In the case of something like an event queue, the pthread closes up two or three very important windows. But it also opens up a one or two different errors that programmers might make.
The difference is being able to easily prove the correctness of the code.
William E. Kempf
---
That depends on your interpretation of "functionally flawed". But that aside, this is where I still can't understand where you're coming from. We agree that CVs are easier to use correctly. We agree that CVs can be used in every case where you could safely use an event, as well as in the numerous cases where you couldn't. Given this, what reasoning would possibly lead you to be so upset with the exclusion of events in Boost.Threads? The *only* argument that I can think of is that events can give you a slight performance benefit in some cases, but the benefit is so slight that it will apply in only a very few cases, and typically those will be cases where you won't be writing portable code anyway. It hardly seems that this argument is strong enough to warrant their inclusion... especially when implementing events on most platforms is going to require a CV, a mutex and a boolean flag anyway. In other words, you can't get the benefit portably anyway.
1. I never said I was upset that events were not included in BOOST. I have no idea where you got that idea from. But I am concerned about the "least common denominator" problem. But my main argument is with the conjecture that events are functionally flawed and thus work just by "pure luck".
2. How many WIN32 resources do CVs require? I counted 4. If all you need is a simple signal when an operation has completed, why use a bunch of resources when an event will work just fine. If events are so evil, then CRITICAL_SECTIONS should be rewritten to use CVs and not an event.
3. The last part of your paragraph is the most interesting part. Why is it valid to require WIN32 to create CVs but not valid for other operating systems to hack together events?
hash_maps
Hash maps are very close to being in the standard. Everyone references the SGI version. As far as being dog slow, YES, they can be, because the developers made design decisions. They produced a version that works best for most applications, but by no means does that mean it works well for all. In my case it was wasting far too much time resizing the hash_map to keep the hash chains as short as it could. This ended up slowing down my application due to the highly volatile nature of my hash_map. By starting the hash map with a small fixed number of buckets and not allowing it to rehash, I was able to improve performance.
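For comparison, the container that eventually was standardized (std::unordered_map) exposes the same knobs. A sketch of the fixed-bucket-count approach described above; the bucket count of 64 is an arbitrary illustrative choice:

```cpp
#include <unordered_map>

// Sketch: pin the table at a small bucket count and accept longer chains,
// trading per-lookup speed for never paying the cost of rehashing a
// highly volatile map.
std::unordered_map<int, int> MakeFixedWidthMap() {
    std::unordered_map<int, int> m;
    m.rehash(64);                // start with a small fixed bucket count
    m.max_load_factor(1.0e6f);   // effectively disable growth-triggered rehashing
    return m;
}
```

This is exactly the vector-capacity point made below: the standard component picks a default policy, but gives you the methods to micromanage it when the default is wrong for your workload.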
So are you saying that you just don't like standards? I don't get your argument. We all know printf() can be misused, as can every call to anything in the C or C++ standards. But that's not a reason to not have standards. The logic doesn't fit. Nor does it make sense to not try and prevent as many possible misuses in the design of a standard library component as possible.
I never said I don't like standards. But I have serious questions about the direction that the C++ standard MIGHT be heading.
Are some of these standard facilities really making programming easier? Look at all the problems with auto_ptr. Take a look at Meyers' arguments about using algorithms and functors for correctness and speed. Too bad his example has an obscure bug and runs at least 2 times slower than doing it by hand. He even admits that even though algorithms and functors reduce bugs in theory, if taken too far you just end up introducing more problems into your code than they solve. Thus, doing it the old way and the new way are both perfectly valid, and both should be practiced when appropriate, even though algorithms and functors generally might be safer.
How many elements of the standard were added to make code easier to write, but all they ended up doing was trading one type of bug for another? Where are the studies that show that auto_ptr and/or shared_ptr make code more bug-free? Or are we just going with our gut feeling, riding the wave of generic programming without really testing our theories?
I have no problems at all with standards in general. But I am starting to have reservations about some of the things being added. I really question this one-size-fits-all approach to programming. Even SGI realized this with something as simple as the vector: if you don't like the doubling of the vector size when it needs to grow, you can use methods to micromanage the size.
Again, I don't follow your argument. If I read everything you've said at face value I'd have to conclude that we should simply stop programming all together since there are so many examples of misuses of library components. That doesn't make sense. What makes sense is to design for as few posssible misuses as you can.
Actually, you are the one who says that just because a methodology can be tricky to use, you shouldn't program with it. Meyers (?) had a great quote that basically said that pointers are a devil and an angel. Also take a look at the old goto arguments. Millions of programs have been written without pointers or gotos available to the programmer. However, when used properly, they are very powerful little features. But by your logic they shouldn't be used because we have developed safer and better (in theory) methods of doing the same thing.
The difference is being able to easily prove the correctness of the code.
Are you saying it is impossible to prove the correctness of code when events are used?
Tim Smith
---
I threw this together in the last half-hour. Been tested on a dual CPU system too.
#include "stdafx.h"
#include "process.h"

class CBoundedBuffer
{
public:
    CBoundedBuffer (int nSize) : m_nBegin (0), m_nEnd (0), m_nBuffered (0), m_nSize (nSize)
    {
        assert (nSize >= 0);
        ::InitializeCriticalSection (&m_cs);
        m_pBuffer = new int [nSize];
        m_hEventDequeue = ::CreateEvent (NULL, FALSE, FALSE, NULL);
        m_hEventQueue = ::CreateEvent (NULL, FALSE, FALSE, NULL);
    }

    ~CBoundedBuffer ()
    {
        if (m_pBuffer)
            delete [] m_pBuffer;
        if (m_hEventDequeue)
            ::CloseHandle (m_hEventDequeue);
        if (m_hEventQueue)
            ::CloseHandle (m_hEventQueue);
        ::DeleteCriticalSection (&m_cs);
    }

    void Send (int nValue)
    {
        while (true)
        {
            ::EnterCriticalSection (&m_cs);
            bool fPlaced = m_nSize != m_nBuffered;
            if (fPlaced)
            {
                m_pBuffer [m_nEnd] = nValue;
                m_nEnd = (m_nEnd + 1) % m_nSize;
                ++m_nBuffered;
            }
            ::LeaveCriticalSection (&m_cs);
            if (!fPlaced)
                ::WaitForSingleObject (m_hEventDequeue, INFINITE);
            else
            {
                ::SetEvent (m_hEventQueue);
                break;
            }
        }
    }

    int Receive ()
    {
        int nValue;
        while (true)
        {
            ::EnterCriticalSection (&m_cs);
            bool fGotOne = m_nBuffered != 0;
            if (fGotOne)
            {
                nValue = m_pBuffer [m_nBegin];
                m_nBegin = (m_nBegin + 1) % m_nSize;
                --m_nBuffered;
            }
            ::LeaveCriticalSection (&m_cs);
            if (!fGotOne)
                ::WaitForSingleObject (m_hEventQueue, INFINITE);
            else
            {
                ::SetEvent (m_hEventDequeue);
                break;
            }
        }
        return nValue;
    }

protected:
    int *m_pBuffer;
    int m_nBegin;
    int m_nEnd;
    int m_nBuffered;
    int m_nSize;
    HANDLE m_hEventQueue;
    HANDLE m_hEventDequeue;
    CRITICAL_SECTION m_cs;
};

CBoundedBuffer g_sBuffer (2);

#define NUM_THREADS 32
#define NUM_TRIES 10000

unsigned int __stdcall Sender (LPVOID pValue)
{
    int nStart = (int) pValue * NUM_TRIES;
    for (int n = nStart; n < nStart + NUM_TRIES; ++n)
    {
        g_sBuffer.Send (n);
        printf ("Sent: %d\n", n);
    }
    g_sBuffer.Send (-1);
    return 0;
}

unsigned int __stdcall Receiver (LPVOID pValue)
{
    int nThread = (int) pValue;
    int n;
    do
    {
        n = g_sBuffer.Receive ();
        printf ("Received: %d on r(%d)\n", n, nThread);
    } while (n != -1);
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    HANDLE hHandles [NUM_THREADS * 2];
    unsigned int dwSendThreadID;
    unsigned int dwRecvThreadID;
    for (int i = 0; i < NUM_THREADS; i++)
    {
        hHandles [i] = (HANDLE) _beginthreadex (NULL, 0, Sender, (LPVOID) i, 0, &dwSendThreadID);
        hHandles [i + NUM_THREADS] = (HANDLE) _beginthreadex (NULL, 0, Receiver, (LPVOID) i, 0, &dwRecvThreadID);
    }
    ::WaitForMultipleObjects (NUM_THREADS * 2, hHandles, TRUE, INFINITE);
    for (int j = 0; j < NUM_THREADS * 2; j++)
        ::CloseHandle (hHandles [j]);
    return 0;
}
(Hopefully none of the code gets chewed up by the message board.)
Problems with this implementation:
1. Order is not preserved. Just because thread A was the first to get to the send queue, it doesn't mean that thread B might not end up adding to the queue first. (The post wait, pre lock window)
2. There is a slight chance that a single receive thread might end up having to process more than one event if all threads were caught between releasing the lock and waiting, while 2 sends were placed in the queue. (The post lock, pre wait window)
3. The same problem can happen with the sends. However, once one of the send threads is able to place a value into the buffer, the event will be re-triggered at dequeue and the other send thread will be released. In theory, a send might be starved and unable to place items on the queue if there are enough send threads pounding the queue. I don't know how much of a problem this would be, given that I haven't studied the priority system in WIN32 for a long time. I do remember that the VMS priority system would allow a starved thread's priority to float up and thus sooner or later get the CPU. That is, unless it is totally being locked out by a high-priority thread. If that is the case, you really have a problem with your priority design. (Also the post lock, pre wait window)
However, none of these flaws are fatal.
Now, the important point is that these are the EXACT problems that CVs fix well. However, that doesn't mean you can't produce an event version that works.
Tim Smith
---
This solution has the potential for a "lost wakeup" and deadlock.
int Receive ()
{
    int nValue;
    while (true)
    {
        ::EnterCriticalSection (&m_cs);
        bool fGotOne = m_nBuffered != 0;
        if (fGotOne)
        {
            nValue = m_pBuffer [m_nBegin];
            m_nBegin = (m_nBegin + 1) % m_nSize;
            --m_nBuffered;
        }
        ::LeaveCriticalSection (&m_cs);
        if (!fGotOne)
            ::WaitForSingleObject (m_hEventQueue, INFINITE);
        else
        {
            ::SetEvent (m_hEventDequeue);
            break;
        }
    }
    return nValue;
}
The small, contrived example is likely never to trigger this race condition, so you can run your test a thousand times and not find the bug. But it's there, waiting for the stars to align and the moon to be in Jupiter and your system to be running in a critical situation in which the deadlock will cost you millions of dollars or even human lives. Yes, I'm exaggerating, but I'm doing so to make a point. This isn't a correct solution, and at least it suggests the dangers inherent in the event synchronization model.
And again, this bug was explained in the paper by Douglas Schmidt on implementing CVs on Win32. That's why I pointed you at that article.
William E. Kempf
---
*BOW*
Let it never be said that I don't take a point gracefully.
But what does this prove? It proves that events can be tricky little bastards, even to people who have been doing them for a long time.
My downfall was using your termination method, which I didn't like and never would have implemented like that myself. The lost wakeup only raises its head when a -1 falls into that post lock/pre wait window. In all other cases you get a temporary convoy.
But I state again, the implementation was flawed. But it can easily be fixed.
(edit)
Oh, I just want to make sure you have read up on the Schmidt article. You do know that the lost wakeup problem Schmidt talks about is specific to PulseEvent and poor handling of manual-reset events set via SetEvent? SetEvent with auto-reset events doesn't have this problem, as long as you don't go around calling ResetEvent. But it does have the problem that multiple signals might only release one thread if the event is set multiple times prior to any thread reaching the wait. Schmidt doesn't even seem to cover this.
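The "signals aren't counted" behaviour is easy to demonstrate with a toy model of an auto-reset event's state machine (a hypothetical class, C++11, for illustration only; a real event would block in the wait rather than return):

```cpp
#include <mutex>

// Toy model of a Win32 auto-reset event's state. Set() makes the flag
// true; it is not a counter, so setting an already-set event is a no-op.
// That is the source of "multiple signals might only release one thread".
class AutoResetEvent {
public:
    void Set() {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_signaled = true;           // a second Set before any wait is absorbed
    }
    // Returns true if a wait would be satisfied immediately; a satisfied
    // wait consumes (auto-resets) the signal.
    bool TryWait() {
        std::lock_guard<std::mutex> lock(m_mutex);
        bool fWas = m_signaled;
        m_signaled = false;
        return fWas;
    }
private:
    std::mutex m_mutex;
    bool m_signaled = false;
};
```

Two Set calls before anyone waits leave exactly one pending signal, so the first waiter gets through and the second must block: this is the pitfall that the work-queue drain loop has to compensate for.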
Tim Smith
---
Ok, fixed. I replaced that very haphazard -1 termination with a better-defined termination. The -1 approach depended on the number of sending threads being equal to the number of receiving threads. Otherwise, you run a great risk of not stopping all your receiver threads, or of some events not getting processed.
#include "stdafx.h"
#include "process.h"

class CBoundedBuffer
{
public:
    CBoundedBuffer (int nSize) : m_nBegin (0), m_nEnd (0), m_nBuffered (0), m_nSize (nSize)
    {
        assert (nSize >= 0);
        ::InitializeCriticalSection (&m_cs);
        m_pBuffer = new int [nSize];
        m_hEventDequeue = ::CreateEvent (NULL, FALSE, FALSE, NULL);
        m_hEventQueue = ::CreateEvent (NULL, FALSE, FALSE, NULL);
        m_hEventTerm = ::CreateEvent (NULL, TRUE, FALSE, NULL);
    }

    ~CBoundedBuffer ()
    {
        if (m_pBuffer)
            delete [] m_pBuffer;
        if (m_hEventDequeue)
            ::CloseHandle (m_hEventDequeue);
        if (m_hEventQueue)
            ::CloseHandle (m_hEventQueue);
        if (m_hEventTerm)
            ::CloseHandle (m_hEventTerm);
        ::DeleteCriticalSection (&m_cs);
    }

    void Send (int nValue)
    {
        while (true)
        {
            ::EnterCriticalSection (&m_cs);
            bool fPlaced = m_nSize != m_nBuffered;
            if (fPlaced)
            {
                m_pBuffer [m_nEnd] = nValue;
                m_nEnd = (m_nEnd + 1) % m_nSize;
                ++m_nBuffered;
            }
            ::LeaveCriticalSection (&m_cs);
            if (!fPlaced)
                ::WaitForSingleObject (m_hEventDequeue, INFINITE);
            else
            {
                ::SetEvent (m_hEventQueue);
                break;
            }
        }
    }

    bool Receive (int *pnValue)
    {
        while (true)
        {
            if (ReceivePeek (pnValue))
                return true;
            HANDLE ahHandles [2] = { m_hEventQueue, m_hEventTerm };
            DWORD dwWho = ::WaitForMultipleObjects (2, ahHandles, FALSE, INFINITE);
            if (dwWho == WAIT_OBJECT_0)
            {
                ;
            }
            else if (dwWho == WAIT_OBJECT_0 + 1)
            {
                return ReceivePeek (pnValue);
            }
            else
                ;
        }
    }

    bool ReceivePeek (int *pnValue)
    {
        ::EnterCriticalSection (&m_cs);
        bool fGotOne = m_nBuffered != 0;
        if (fGotOne)
        {
            *pnValue = m_pBuffer [m_nBegin];
            m_nBegin = (m_nBegin + 1) % m_nSize;
            --m_nBuffered;
        }
        ::LeaveCriticalSection (&m_cs);
        if (fGotOne)
            ::SetEvent (m_hEventDequeue);
        return fGotOne;
    }

    void Terminate ()
    {
        ::SetEvent (m_hEventTerm);
    }

protected:
    int *m_pBuffer;
    int m_nBegin;
    int m_nEnd;
    int m_nBuffered;
    int m_nSize;
    HANDLE m_hEventQueue;
    HANDLE m_hEventDequeue;
    HANDLE m_hEventTerm;
    CRITICAL_SECTION m_cs;
};

CBoundedBuffer g_sBuffer (2);

#define NUM_THREADS 32
#define NUM_TRIES 100

unsigned int __stdcall Sender (LPVOID pValue)
{
    int nStart = (int) pValue * NUM_TRIES;
    for (int n = nStart; n < nStart + NUM_TRIES; ++n)
    {
        g_sBuffer.Send (n);
        printf ("Sent: %d\n", n);
    }
    return 0;
}

unsigned int __stdcall Receiver (LPVOID pValue)
{
    int nThread = (int) pValue;
    int n;
    while (g_sBuffer.Receive (&n))
    {
        printf ("Received: %d on r(%d)\n", n, nThread);
    }
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    HANDLE hHandles [NUM_THREADS * 2];
    unsigned int dwSendThreadID;
    unsigned int dwRecvThreadID;
    for (int i = 0; i < NUM_THREADS; i++)
    {
        hHandles [i] = (HANDLE) _beginthreadex (NULL, 0, Sender, (LPVOID) i, 0, &dwSendThreadID);
        hHandles [i + NUM_THREADS] = (HANDLE) _beginthreadex (NULL, 0, Receiver, (LPVOID) i, 0, &dwRecvThreadID);
    }
    ::WaitForMultipleObjects (NUM_THREADS, hHandles, TRUE, INFINITE);
    printf ("Terminating\n");
    g_sBuffer.Terminate ();
    ::WaitForMultipleObjects (NUM_THREADS, hHandles + NUM_THREADS, TRUE, INFINITE);
    for (int j = 0; j < NUM_THREADS * 2; j++)
        ::CloseHandle (hHandles [j]);
    return 0;
}
This should fix the termination problem, which would only have been an issue with multiple receive threads. Now we are back to having just the temporary convoy problem.
(edit)
Oh, I know you can end up with a lost wakeup problem here if you terminate the receiver threads improperly. However, that was covered in my limitations section.
(edit 2)
So we don't waste time: if you think you see an error, please provide a timeline graph. If you don't, I will just ask you to anyway.
Tim Smith
---
But what does this prove. It proves that events can be tricky little bastards to even people who have been doing them for a long time.
But Tim, that's the argument that started this whole thing.
But I state again, the implementation was flawed. But can easily be fixed.
Then please do! This isn't meant as a challenge to further this argument, because I think you've already agreed to the only thing I've asserted from the beginning. I ask because, if you can solve this, then the chances are very great you've also solved how to implement CVs on Win32, and likely in a way that's cleaner than the currently known best solution. That would be beneficial to me, at least.
---
The argument was that events work by "pure luck", which is a statement William made. For some reason people seem to think that I am arguing that events are just as good as or better than CVs. I never made such a statement. People have been making totally illogical statements that events don't function. They work just fine, but they can be tricky little bastards.
As one person said (not an exact quote), "events are not suitable for resource contention or waiting." Of course, if you look at WIN32 critical sections, events are being used for resource contention AND waiting, all in this small little package, thus totally disproving the statement. After all, when someone says "X cannot be used for Y", you simply disprove this by providing one example where X is used for Y. The logical fallacy people seem to be falling into is trying to use that same single example to settle a claim of the form "for all problems X, solution Y provides greater coverage than solution Z". Showing that there exists a problem that Z doesn't cover in no way proves that statement. The really funny part is that I haven't been arguing that events do everything CVs do; I openly admit they don't, to try to get back to the original argument that events are functionally flawed.
As far as implementing CVs on WIN32, that has been done already; I have made no claim that I can, and I see little point in trying.
Take a look at my fixed code and find a hole in it. I think I might have discovered why some people see a lost wakeup where there is none. Due to poor reading of the Schmidt (sp?) article, people have come away with the invalid idea that events in general have a lost wakeup problem. It seems that when people read ::SetEvent, they think ::PulseEvent. As we all should know, ::PulseEvent won't wake anybody if nobody is waiting. However, this isn't a problem at all with ::SetEvent. Unless you play stupid games with ::ResetEvent, any event will remain set, and thus a later wait will be satisfied.
However, since ::SetEvent calls aren't counted, you do have to take care to make sure that, if the events are being used to manage something like a work queue, all work entries are executed prior to waiting again. This problem results from the window between releasing the lock on the event queue and waiting for the event. I covered this pitfall, which can result in temporary convoys but will never fail unless the active worker thread is improperly terminated. Just like CVs have their implementation problems, this is a well known pitfall with events.
Now, events do fail totally if, instead of an event queue, you just have a single value being managed and each change in the value must be seen by a worker thread. However, I am not 100% sure that CVs handle this case if all worker threads are busy when a new value comes in. Of course, if the worker thread never releases the mutex while processing the changed value, then you will never get concurrent worker thread processing and will in effect have just implemented a depth-1 event queue, which of course is basically just one step better than doing the work inline.
(FYI: If, in a specific implementation, temporary convoys could have a significant effect on the application, then the code can be modified slightly to trade temporary convoys for extra false wakes.)
Tim Smith
---
From section 5 of his paper:
The SignalObjectsAndWait solution in Section 3.4 is a good approach if fairness is paramount. However, this approach is not as efficient as other solutions, nor is it as portable. Therefore, if efficiency or portability are more important than fairness, the SetEvent approach described in Section 3.2 may be more suitable. Naturally, the easiest solution would be for Microsoft to simply provide condition variables in the Win32 API.
Tim Smith
---
The Stanley Lippman thing only made me suspicious, but after reading this article, if MS got Sutter, then I finally believe they are serious about C++.
I'm really looking forward to seeing what Herb can do to help MS with beefing up the VC++ compiler and helping them contribute to Boost.
CodeGuy
The WTL newsgroup: over 1300 members! Be a part of it. http://groups.yahoo.com/group/wtl
---
With Herb and Lippman, Microsoft is going toward the light.
Oh, right now I'm feeling the Force grow stronger in Standard C++.
Cheers,
Joao Vaz