|
Gary R. Wheeler wrote: I always wondered: what did he-who-shall-not-be-mentioned-even-by-his-initials have against critical sections? If you are synchronizing threads within a process, critical sections are much more lightweight than a mutex. IIRC, they don't even require a kernel transition most of the time, unlike mutexes.
I asked that once. The answer (please put down any liquids and sit down; I will not be held responsible for food or drink spewed onto expensive monitors, laptops, cell phones, or the like... or for fainting).... The answer is that Critical Sections are an invention of Microsoft and are not portable. Only mutexes are Unix-based and therefore perfect.
_________________________
John Andrew Holmes "It is well to remember that the entire universe, with one trifling exception, is composed of others."
Shhhhh.... I am not really here. I am a figment of your imagination.... I am still in my cave so this must be an illusion....
|
El Corazon wrote: The answer is that Critical Sections are an invention of Microsoft and are not portable.
Oh Judas H. Priest. Wrap the freakin' thing in its own class: compile it with mutexes on Unix and with critical sections on Windows. On a bad day (migraine included) I could have such a class working in five minutes, single-handed.
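For example, a minimal sketch of such a wrapper (class names hypothetical, error handling omitted) -- one class that compiles to a `CRITICAL_SECTION` on Windows and a pthread mutex everywhere else, plus an RAII guard on top:

```cpp
// Hypothetical portable lock: CRITICAL_SECTION on Windows, pthread mutex
// elsewhere. Error handling omitted for brevity.
#ifdef _WIN32
#include <windows.h>

class PortableLock {
public:
    PortableLock()  { InitializeCriticalSection(&cs_); }
    ~PortableLock() { DeleteCriticalSection(&cs_); }
    void lock()     { EnterCriticalSection(&cs_); }
    void unlock()   { LeaveCriticalSection(&cs_); }
private:
    CRITICAL_SECTION cs_;
};
#else
#include <pthread.h>

class PortableLock {
public:
    PortableLock()  { pthread_mutex_init(&mtx_, 0); }
    ~PortableLock() { pthread_mutex_destroy(&mtx_); }
    void lock()     { pthread_mutex_lock(&mtx_); }
    void unlock()   { pthread_mutex_unlock(&mtx_); }
private:
    pthread_mutex_t mtx_;
};
#endif

// RAII guard so a caller can't forget to unlock.
class ScopedGuard {
public:
    explicit ScopedGuard(PortableLock& l) : lock_(l) { lock_.lock(); }
    ~ScopedGuard() { lock_.unlock(); }
private:
    PortableLock& lock_;
};
```

Five minutes, give or take the migraine.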
|
Both Linux and Windows support OpenMP, and OpenMP supports critical sections. OpenMP is not a Microsoft invention.
Logic never works with him.
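For the record, a sketch of the OpenMP construct in question (function name hypothetical); compile with `-fopenmp` on GCC/Clang or `/openmp` on MSVC. In real code a `reduction` clause would be far faster here -- the critical section is just to show the construct:

```cpp
// Sums 1..n with the additions serialized through an OpenMP critical
// section. If compiled without OpenMP support, the pragmas are ignored
// and the loop simply runs serially with the same result.
long parallel_sum(int n) {
    long total = 0;
    #pragma omp parallel for
    for (int i = 1; i <= n; ++i) {
        // Only one thread at a time may execute this statement.
        #pragma omp critical
        total += i;
    }
    return total;
}
```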
_________________________
John Andrew Holmes "It is well to remember that the entire universe, with one trifling exception, is composed of others."
Shhhhh.... I am not really here. I am a figment of your imagination.... I am still in my cave so this must be an illusion....
|
Fortunately, he is dead to us now...
|
Just one question. How many people work on your code base?
|
It varies, but we have a group of a dozen maintaining four products, with a couple branches each. Why do you ask?
|
Gary R. Wheeler wrote: Why do you ask?
Because it is *relatively* easy to get multithreading right if you own the code base, but if multiple people are introducing changes it becomes much harder. And if somebody introduces a race condition that shows up only once a week on specific hardware - well, good luck debugging that.
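For what it's worth, the classic form of that bug can be sketched in a few lines (names hypothetical). Unsynchronized, `counter += 1` is a read-modify-write and updates get lost -- often only under load, on particular hardware:

```cpp
#include <mutex>
#include <thread>

// Two threads incrementing a shared counter. With the lock_guard the
// result is deterministic; remove it and increments are silently lost
// on some runs -- the once-a-week race described above.
long increment_in_parallel(int per_thread) {
    long counter = 0;
    std::mutex m;
    auto work = [&] {
        for (int i = 0; i < per_thread; ++i) {
            std::lock_guard<std::mutex> g(m);  // delete this line to reintroduce the race
            counter += 1;
        }
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter;
}
```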
|
Nemanja Trifunovic wrote: if there are multiple people introducing changes it becomes much harder
We, as a team, seem to have a good combination of division of labor and cooperation. We've had very few thread synchronization issues arise because of multiple people being involved in a single code base.
|
I too have been doing this for a long time. One reason is that my home machine has had at least two cores since my first dual-processor Pentium board in 1995. I started my paying job in 1997, and some of my first projects required a multithreaded design. Thirteen years later I am still doing multithreaded designs, but for the most part not as low-level as when I started. I tend now to use thread pools more often, and other libraries that do most of the heavy lifting for me.
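As a sketch of that "let the library do the heavy lifting" style (function name hypothetical): hand chunks of work to `std::async` and let the runtime schedule them, instead of managing threads by hand. (Implementations vary -- some draw on a thread pool, some spawn a thread per task.)

```cpp
#include <algorithm>
#include <future>
#include <numeric>
#include <vector>

// Splits the data into chunks, sums each chunk on its own async task,
// then combines the partial sums. No explicit thread management.
long parallel_chunk_sum(const std::vector<int>& data, std::size_t chunks) {
    std::vector<std::future<long>> futures;
    std::size_t step = data.size() / chunks + 1;
    for (std::size_t begin = 0; begin < data.size(); begin += step) {
        std::size_t end = std::min(begin + step, data.size());
        futures.push_back(std::async(std::launch::async, [&data, begin, end] {
            return std::accumulate(data.begin() + begin, data.begin() + end, 0L);
        }));
    }
    long total = 0;
    for (auto& f : futures) total += f.get();  // join and combine
    return total;
}
```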
John
|
John M. Drescher wrote: I tend to now use thread pools more often and other libraries that do most of the heavy lifting for me.
I'm enjoying that feature of .NET. I've got a significantly multi-threaded application in C# that I've been working on for the last 18 months, and only recently did I add the first 'real' thread. The rest of it has been done through the BeginInvoke/EndInvoke constructs, which use the thread pool under the covers for you.
|
I have to disagree (does that label me "cargo cult" now?)
It's ok if it's your bread and butter, and you live in an environment where multithreading is everywhere.
However, adding multithreading to a pool of techniques and skills is hard. It has so many side effects and artifacts; e.g., it affects interface design, since that determines whether external locking is necessary, and preventing deadlocks requires global knowledge of the application. That's a lot of burden for libraries with "Unknown" reuse.
|
peterchen wrote: (does that label me "cargo cult" now?)
Certainly not.
peterchen wrote: That's a lot of burden for libraries with "Unknown" reuse.
Agreed. I believe most library authors shouldn't bother with multithreading concerns, unless it's an expected attribute of the environment in which the library is going to be used. It's nice when the author documents any threading concerns for use in that environment.
As a matter of fact, when an author makes a library thread-safe it can actually make it more difficult to use it in a multithreaded environment, since the user no longer has control over the thread synchronization mechanisms used.
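As a sketch of that point (class names hypothetical): the library class does no locking of its own, and a caller who needs thread safety wraps it with whatever mechanism the application already uses, instead of fighting locks the library chose internally:

```cpp
#include <mutex>
#include <vector>

// Hypothetical library class: deliberately not thread-safe, and
// documented as such. Single-threaded users pay nothing.
class EventLog {
public:
    void add(int event)      { events_.push_back(event); }
    std::size_t size() const { return events_.size(); }
private:
    std::vector<int> events_;
};

// Caller-side synchronization: the application, not the library,
// controls the mechanism (here a std::mutex, but it could be anything).
struct SharedEventLog {
    EventLog log;
    std::mutex m;
    void add(int event) {
        std::lock_guard<std::mutex> g(m);
        log.add(event);
    }
};
```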
Software Zen: delete this;
|
I discovered that MFC doesn't support parallel code. I had some C++ code that was fully decoupled, no sharing of memory, etc., but it was using Microsoft's STL, and the performance was actually WORSE on a 4-core machine (real cores) than on a single-core machine. I ended up launching each task as a separate PROCESS and voila, I suddenly achieved 100% CPU utilization. Pathetic, in my opinion, that a "modern" language backed by a "modern" framework doesn't actually work on a multicore machine. I have yet to try something similar with C#; I assume it's not backed by the same memory management schemes.
Marc
|
MFC or STL ???
2 bugs found.
> recompile ...
65534 bugs found.
|
To be honest Marc, I would have expected you to track down the bottleneck and post the reasons here.
|
Andre xxxxxxx wrote: To be honest Marc, I would have expected you to track down the bottleneck and post the reasons here.
I did, as much as I needed to for the time investment: Microsoft's STL and the allocations it was doing through MFC. A simple threaded test app confirmed that was the problem. And actually, I posted about this about 8 months ago, when I was first trying to figure out the problem.
Marc
|
Marc Clifton wrote: but it was using Microsoft's STL
Well.... I use vectors in threaded operations all the time. Your lists and trees will bog down in threading, but a vector will only kill you on an expansion. I control the expansion, thus preventing any issues with STL vectors, making them the easiest and fastest to use. I have used some of the others, but you really have to know how STL stores and accesses the information to know what you have to do to use it in parallel.
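A sketch of what "controlling the expansion" looks like (function name hypothetical): reserve the expected capacity up front so the buffer never moves mid-run. This counts how many times the vector's storage relocates while filling it:

```cpp
#include <cstddef>
#include <vector>

// Pushes n elements after reserving reserve_hint slots, and returns how
// many times the underlying buffer was reallocated (i.e. moved). With a
// big enough reserve, the answer is zero -- no reallocs for threads to
// trip over.
std::size_t count_reallocations(std::size_t n, std::size_t reserve_hint) {
    std::vector<int> v;
    v.reserve(reserve_hint);
    std::size_t moves = 0;
    const int* base = v.data();
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        if (v.data() != base) {  // buffer was reallocated and moved
            ++moves;
            base = v.data();
        }
    }
    return moves;
}
```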
|
El Corazon wrote: but a vector will only kill you on an expansion.
Well, that's exactly what was going on. And unfortunately, as this was an analysis routine that analyzes switch topologies for failure conditions, the modus operandi of the algorithm is expanding various vectors, maps, etc.
Marc
|
Marc Clifton wrote: as this was an analysis routine that analyzes switch topologies for failure conditions, the modus operandi of the algorithm is expanding various vectors, maps, etc.
Not enough memory to reserve beforehand?
|
El Corazon wrote: Not enough memory to reserve beforehand?
Ah, the problem is, it's impossible to figure out beforehand, though ballpark estimates would definitely be doable. I'll have to look into that.
Marc
|
Marc Clifton wrote: Ah, the problem is, it's impossible to figure out beforehand, though ballpark estimates would definitely be doable. I'll have to look into that.
If you have the memory, ballpark estimates and even overestimates will help. Most STL implementations add 50% more storage when you run out of reserved storage (though a few double it). So if you can get past most of the reallocs on the small end, you may only realloc a few times... and I know it is a sin to use more memory than you need... but if you must get it done fast, over-estimate if you have the memory to spare.
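You can watch the growth policy directly (function name hypothetical; the exact factor -- commonly 1.5x or 2x -- varies by implementation). This records each capacity jump while pushing into an unreserved vector:

```cpp
#include <cstddef>
#include <vector>

// Pushes n elements into an unreserved vector and records the capacity
// after each reallocation, exposing the implementation's growth factor.
std::vector<std::size_t> capacity_history(std::size_t n) {
    std::vector<int> v;
    std::vector<std::size_t> caps;
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        if (caps.empty() || v.capacity() != caps.back())
            caps.push_back(v.capacity());  // a reallocation happened here
    }
    return caps;
}
```

Each entry beyond the first is a realloc you would have skipped by reserving past it.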
|
El Corazon wrote: and I know it is a sin to use more memory than you need... but ... if you must get it done fast, over-estimate if you have the memory to spare.
That doesn't bother me at all. The number of combinations of failure cases that have to be analyzed is in the billions, so there's no way to hold all that in memory anyway, but there's a lot of state information that does fit in memory easily, and of course every iteration of a failure analysis has to clear various working lists. So thank you, I'll have to give this a try!
Marc
|
C++ isn't modern, and neither is MFC.
OTOH, I've used threading without problems in MFC apps - the one thing you should not try on Windows anyway is a multithreaded UI.
|
I am using the most primitive and most horrible way of implementing parallelism: manual thread creation/synchronization/communication. I've been doing it for 10 years now, and the more I know, the more I dislike it.
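That style, sketched (names hypothetical): one producer, one consumer, hand-rolled mutex and condition variable. Every piece -- creation, locking, signaling, shutdown -- is the programmer's responsibility, which is exactly why it gets tiresome:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Producer pushes 1..n into a queue; a manually created consumer thread
// drains it and accumulates the total. Shutdown is signaled with a flag
// set under the same lock.
long sum_via_worker(int n) {
    std::queue<int> work;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    long total = 0;

    std::thread consumer([&] {
        std::unique_lock<std::mutex> lk(m);
        for (;;) {
            cv.wait(lk, [&] { return !work.empty() || done; });
            while (!work.empty()) {   // drain everything available
                total += work.front();
                work.pop();
            }
            if (done) return;         // queue is empty and producer finished
        }
    });

    for (int i = 1; i <= n; ++i) {
        { std::lock_guard<std::mutex> g(m); work.push(i); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> g(m); done = true; }
    cv.notify_one();
    consumer.join();
    return total;
}
```

All of that boilerplate to move a few integers between two threads.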
|
I prefer the GPU over the CPU for parallel operations. My favorite is NVIDIA CUDA, and it's much, much faster.
|