|
Did this post[^] inspire yours? I was thinking more along your lines when I read it.
|
We lost power at the house yesterday (I'm working from home during the Kung Flu panic), and my UPS simply died (thank god for the auto-save feature in Visual Studio). We've lost internet connectivity twice during the week, as well.
I've already ordered another UPS, but the infrastructure seems to be pretty freakin' weak around here...
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
WTF?
Have the utility engineers decided to reduce the risk of catching the coronavirus by staying at home and putting their feet up with a beer?
I wanna be a eunuchs developer! Pass me a bread knife!
|
Mark_Wallace wrote:
staying at home and putting their feet up with a beer?
Sounds like a good plan!!
CQ de W5ALT
Walt Fair, Jr.PhD P. E.
Comport Computing
Specializing in Technical Engineering Software
|
Dr.Walt Fair, PE wrote: staying at home and putting their feet up with a beer? Sounds like a good plan!!
I'm not so sure it's such a great idea to annoy John.
Your only hope is that he misses with the first shot, because he doesn't like shooting twice.
I wanna be a eunuchs developer! Pass me a bread knife!
modified 21-Mar-20 12:28pm.
|
Mark_Wallace wrote: because he doesn't like shooting twice.
Don’t confuse my reluctance with outright refusal.
I think it’s humorous that you think I’d miss...
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
You got to know when to fold em
I'm hiding from exercise...I'm in the fitness protection program.
JaxCoder.com
|
He picked a fine time to leave us...
If you can't laugh at yourself - ask me and I will do it for you.
|
It's funny - I write a lot of multithreaded code, but I'm awful at it. I can never seem to get my locking right in most of my projects. I get it "mostly" right, which is even worse, because it's that much harder to track down the occasional deadlock than it is to track down something that always causes failure.
Most of the time, I've taken to creating a copy of all the data a second thread needs to do its work, passing it off, and then not synchronizing at all. I do this wherever possible, and I'm surprised I've found a way to make it work in so many scenarios.
Frankly, I think multithreading is a giant hack in computer programming generally, because it turns a logic machine non-deterministic. Non-determinism doesn't destroy logic, but it works against the logical flow of the app. If I never needed it, I'd be a happy lil monster.
/rant
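The copy-and-hand-off pattern described above can be sketched in a few lines; this is a C++ illustration (the names NoteBatch, sum_notes, and run_copy_handoff are made up for the example), not anyone's actual project code:

```cpp
#include <string>
#include <thread>
#include <vector>

// Hypothetical data; stands in for whatever the worker thread needs.
struct NoteBatch {
    std::vector<int> notes;
};

// The worker receives its own copy by value, so no locking is needed:
// nothing it touches is shared with the producing thread.
int sum_notes(NoteBatch batch) {
    int total = 0;
    for (int n : batch.notes) total += n;
    return total;
}

int run_copy_handoff() {
    NoteBatch original{{1, 2, 3, 4}};
    NoteBatch copy = original;          // deep copy of everything the thread needs
    int result = 0;
    std::thread worker([copy, &result] { result = sum_notes(copy); });
    original.notes.clear();             // safe: the worker only ever sees the copy
    worker.join();                      // result is visible after the join
    return result;
}
```

The price is the copy itself; the payoff is that no synchronization primitive appears anywhere in the worker's code path.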
Real programmers use butterflies
|
So what kind of objects are you creating, and how are you doing it?
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
In my MIDI project I just copy all the data I need to play a preview of it into a separate object. Then I take that separate object and I pass it to the thread creation function. That way I avoid having to use any synchronization primitives.
if (null != _previewThread)
{
    // Tear down any preview that is still running before starting a new one.
    _previewThread.Abort();
    _previewThread.Join();
    _previewThread = null;
}
PreviewButton.Text = "Stop";
// _ProcessFile() returns a copy of the MIDI data; the new thread owns it outright.
var f = _ProcessFile();
_previewThread = new Thread(() => { f.Preview(0, true); });
_previewThread.Start();
Here _ProcessFile() performs a number of operations on the MIDI file passed in (in memory, not physically backed by a file, but in the MIDI file format), and what it returns is always a copy, never the original.
That copy is then passed to the thread's creation function. The thread works on it, and other than that the copy is never touched again.
Real programmers use butterflies
|
Everyone is awful at writing multithreaded code. I never thought of locking as a hack, but that's actually a fair characterization given that it's usually a workaround for fundamental flaws in the scheduler and/or platform. Hence this[^], which really triggered some sheep who downvoted without comment.
|
Excellent reading. Thanks!
Real programmers use butterflies
|
Don't get distracted. You've got some critical regions to find.
|
Now, where can I find that in C#?
|
Jörgen Andersson wrote: Now, where can I find that in C#? I think @code-witch could whip it up by tomorrow.
|
A comment, and a good/sad memory, related to your P/V discussion:
As a university freshman, I picked up Brinch Hansen's "Operating System Principles", and was truly fascinated by the analysis of concurrency and the use of P/V to protect shared resources; I think I could recite chapters of the book by heart. During my sophomore year, "The Architecture of Concurrent Programs" arrived in the university bookstore, and we were a big gang of students rushing to buy it, to read more fascinating discussions of concurrency problems and the careful programming required to handle them.
The second book was sort of a letdown. All the problems were solved; there were none left. Built on top of P/V were critical regions, monitors, and queue mechanisms that made concurrent programming safe and simple (at least compared to the P/V level)! The job was done. No more problems to be solved. At least that is how it felt at the time.
This was in the pre-*nix days, or rather: *nix had not yet run down all other OS ideas and discussions, but it was about to. So while we were expecting monitors and critical regions to become The Standard, we were instead told that "There is but one process in an address space, and there is but one address space per process!" That address space was closed, statically determined. Coming from other worlds where activities/threads and address spaces were more or less separate, independent concepts, we shook our heads. But for many years, we had to struggle with the existence of a file as one, in our eyes completely crazy, implementation of a (binary, queueless!) P/V semaphore. There simply was no room for what we had learned about regions, monitors, and queues.
We had one relief: the CHILL programming language (Z.200, 1980) provided regions, monitors, and queues as first-class language elements. So was its process concept; we would call them threads today, because they were (very) lightweight and independent of address-space concepts. But CHILL never made it outside the telecom world (the reason we were involved was that our university had taken part in its definition and implementation).
Even today, there are very few signs of "The Revenge of the Monitor". Or of the critical region. A lot of *nix-based subsystems still treat concurrency protection as "optional", and my impression is that even when data structures are protected, in most cases it is by an onion-skin wrapper on top of P/V. High-level constructs are essentially unsupported by most languages and unknown to most programmers. Obviously, we can dig up Brinch Hansen's book (or any other from the same period that discusses them) and hand-craft the high-level mechanisms, but few do. Few know how to do it.
The solutions are there. That's the good thing. Very few use them. That's the sad thing.
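Hand-crafting a monitor today is indeed possible; here is a minimal sketch in C++ (the class name BoundedBuffer is invented for the example) of the Brinch Hansen idea: all shared state is private, every public operation holds the monitor lock, and waiting/signalling happens inside, so callers never touch a semaphore directly:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

// A hand-rolled monitor guarding a bounded buffer. The condition
// variables play the role of the monitor's internal wait queues.
class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t capacity) : capacity_(capacity) {}

    void put(int value) {
        std::unique_lock<std::mutex> lock(m_);
        not_full_.wait(lock, [this] { return q_.size() < capacity_; });
        q_.push(value);
        not_empty_.notify_one();
    }

    int take() {
        std::unique_lock<std::mutex> lock(m_);
        not_empty_.wait(lock, [this] { return !q_.empty(); });
        int v = q_.front();
        q_.pop();
        not_full_.notify_one();
        return v;
    }

private:
    std::mutex m_;                     // the monitor lock
    std::condition_variable not_full_, not_empty_;
    std::queue<int> q_;
    std::size_t capacity_;
};
```

As the post says: the pieces are all there in the standard library, but the discipline of hiding them behind the object's interface is up to the programmer.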
(For those who have noticed another post of mine, where I am arguing for autonomous objects communicating through messages: such an object strongly resembles the original single-threaded, isolated *nix process, doesn't it? Yes, in many ways it does. But it wasn't used that way. A process was not considered an object instance, cooperating with other objects in a composite application the way we use OO today. *nix processes were generally not written as message-driven state machines. The basic thinking was based on sequential code.)
|
Thanks for the interesting retrospective!
It's interesting that Windows has monitors and critical sections, not just semaphores and mutexes.
But my favorite thing in Windows is SetEvent, which I believe would have to be implemented with a condition variable on other operating systems. C++20 will add wait, notify_one, and notify_all to atomic_flag, which will finally provide something as simple as SetEvent.
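That "implemented with a condition variable" remark can be made concrete; here is a portable C++ sketch (the class name ManualResetEvent is invented for the example) of a SetEvent/WaitForSingleObject-style event built on exactly that:

```cpp
#include <condition_variable>
#include <mutex>

// A manual-reset event in the Win32 style, implemented the way the
// post suggests it must be on other platforms: a flag guarded by a
// mutex, plus a condition variable to wake the waiters.
class ManualResetEvent {
public:
    void set() {                       // analogous to SetEvent
        { std::lock_guard<std::mutex> lock(m_); signaled_ = true; }
        cv_.notify_all();              // wake every waiter, like a manual-reset event
    }
    void reset() {                     // analogous to ResetEvent
        std::lock_guard<std::mutex> lock(m_);
        signaled_ = false;
    }
    void wait() {                      // analogous to WaitForSingleObject(h, INFINITE)
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return signaled_; });
    }
    bool is_set() {
        std::lock_guard<std::mutex> lock(m_);
        return signaled_;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;
};
```

C++20's atomic_flag::wait/notify_all collapses all of this into a single member, which is presumably why the poster is looking forward to it.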
|
I am just the opposite. I like and enjoy working with threads quite a bit. I work with CUDA also and CPUs and GPUs have significant differences when it comes to multithreading. We had an interesting discussion yesterday at work about a key processing loop in an algorithm we are converting for use on GPUs in CUDA. My colleagues were baffled when I told them in the GPU code there will be no for loop because the thread scheduler implements it for us. We just tell it how many threads to fire. It took them a few minutes to grasp the concept but then the lights came on and they all went, "oh, well @#$%" because it dawned on them how simple it really is. Threading with CPUs is not quite as simple but I have never had much of a problem with it.
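The "no for loop, just tell it how many threads to fire" idea can be imitated on a CPU; this is a hedged C++ analogy (the names square_all and kernel are invented, and a real GPU launch is far cheaper per thread than std::thread), not the actual CUDA code under discussion:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// CPU analogy of a GPU kernel launch: the "kernel" contains no loop
// over the data; each worker handles exactly the one element its index
// selects, the way a CUDA thread derives its element from threadIdx/blockIdx.
std::vector<int> square_all(const std::vector<int>& in) {
    std::vector<int> out(in.size());
    auto kernel = [&](std::size_t i) { out[i] = in[i] * in[i]; };

    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < in.size(); ++i)  // the "launch": fire N threads
        workers.emplace_back(kernel, i);
    for (auto& t : workers) t.join();
    return out;
}
```

On a GPU the launch loop disappears too - the scheduler fires the requested number of threads itself - which is the "oh, well @#$%" moment described above.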
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
The strong focus on threads indicates that our minds are basically rooted in the sequential, one-at-a-time mindset. Which also means that we have not really grasped object orientation. The sooner we detach our minds from this ordered, first-things-first-then-the-next way of coding, the easier parallelism becomes.
In my early life as a programmer, I became familiar with two concepts that are essentially forgotten today:
An APL "workspace" is like a sandpit where you add your objects - data arrays, functions, ... - dynamically. Or remove them. The objects interact, like real-world objects. In APL, the interaction is "classical", by calling other functions in the traditional way. But the complete autonomy and independence of the objects when there is no interaction is only partially carried over into the OO languages of today.
The other concept came with GUIs, popularized by the classic (1984) MacOS and Windows: First, that everything that happens is in the form of atomic events. Handling of an event is (conceptually) instantaneous and brings the object from one consistent, well defined state to another consistent, well defined state.
Second, interaction between objects is by message passing rather than by function calling. All normal messages are queued, non-blocking, asynchronous. While APL objects are by definition completely independent of each other, messages may (if used throughout) provide a similar run-time independence. You completely avoid deadlocks: sure, object A may wait indefinitely for a reply message from object B after sending a request, but that in no way prevents object A from handling all other kinds of messages. You need no locking; shared data resources are modelled as objects receiving request messages, each either causing changes in the object's internal data structures or a reply message with data values. The message queue provides the sequential order of atomic accesses.
What use are threads, really? If every object interacts with every other object through messages, has an autonomous right to determine how it will handle them, handles them one by one in an atomic fashion, and is responsible for its own transition between consistent states, the need for threading within each object is very limited. If everything is modelled as objects and all interaction as messages, there is little need for thread mechanisms outside the objects, too.
Each independently executing object must "run on a thread"; it has its own life. But that doesn't require threads as we use them today, with synchronization and resource ownership and lots of bells and whistles, in some thread models also under the management of a higher-level "process", which may imply a more complex, two-level resource-ownership model. If all resources are objects, each owning itself, you don't need it.
When an object is waiting for the next message, all state information is preserved in its data members. Its runtime stack is empty, its program counter well defined - think of it as similar to an interrupt handler! The message-handling system activates the handler by delivering a message (with an appropriate queueing mechanism). To utilize several CPU cores, the message handler may run one low-level thread per core, and dispatch messages to as many objects as it has cores/threads. You can do the same using higher-level threads, but at a higher cost and with maybe less flexibility (e.g. if there is an ownership relationship from thread to object). Note that the total stack requirement is limited by the number of messages actively being processed simultaneously, one per core, each no deeper than what is needed for processing a single message as an atomic operation.
This is not how we were taught to code in the pre-threading, pre-OO days, and neither threads nor OO has fundamentally changed our mental models: we still have not learnt to see objects as autonomous, and although we have multiple sequences of operation, they are still well ordered, with loops, but essentially still sequential.
Yet you can create quite autonomous objects in most OO languages. You can encapsulate data in objects. Your "message" mechanism may be implemented as a function call (the only public one offered by the object) that returns nothing and conceptually returns immediately. The object may alternate among consistent, well defined states. Your thread worries would vanish. You would never again see a deadlock. Some resource requirements (e.g. for stack space) would go down significantly.
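A minimal version of such an autonomous object can be sketched in C++ (the class name Counter and its members are invented for the example; a production actor would need shutdown and error handling this sketch omits): all interaction goes through post(), messages are handled one at a time on the object's own thread, and the object's state is touched only from that thread, so no external locking is needed:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// An "autonomous object": a private message queue plus one worker
// thread that drains it, handling each message as an atomic step
// from one consistent state to the next.
class Counter {
public:
    Counter() : worker_([this] { run(); }) {}
    ~Counter() {
        post([this] { done_ = true; });   // a poison-pill message ends the loop
        worker_.join();
    }
    // Non-blocking, asynchronous "message send": conceptually returns at once.
    void post(std::function<void()> msg) {
        { std::lock_guard<std::mutex> lock(m_); queue_.push_back(std::move(msg)); }
        cv_.notify_one();
    }
    void increment() { post([this] { ++count_; }); }
    // Safe when called from inside a posted message (i.e. on the worker thread).
    int count() const { return count_; }
private:
    void run() {
        while (!done_) {
            std::function<void()> msg;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !queue_.empty(); });
                msg = std::move(queue_.front());
                queue_.pop_front();
            }
            msg();   // one atomic state transition at a time
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> queue_;
    bool done_ = false;
    int count_ = 0;
    std::thread worker_;   // started last, after all state is initialized
};
```

Note that only the queue itself is locked; count_ never needs protection, because every touch of it happens on the object's own thread, in message order.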
I am trying to be a realist, though: there are no signs of either autonomous objects or message-/event-driven models entering programming courses; they are too busy teaching students how to create deadlocks and race conditions using threads.
|
What you describe sounds a lot like the way the QNX operating system was designed.
Real programmers use butterflies
|
Adding to that: I've considered using message-passing models, and I actually do, kind of, in .NET - using ISynchronizeInvoke implementations on System.Windows.Forms.Control when I'm synchronizing threaded operations against a UI.
I've strongly considered implementing a message-passing system over ISynchronizeInvoke that works more generally, but I don't know how yet, and even if I did I'm not sure how I would get it to work in a platform-agnostic way.
The other issue is finding a good implementation of a concurrent deque in .NET.
For some reason in C++, despite doing a lot of ISAPI development, I never needed such a system.
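For what a "good enough" concurrent deque might look like, here is a coarse-grained C++ sketch (the class name ConcurrentDeque is invented; this is one lock around std::deque, not a lock-free design like a work-stealing deque would use):

```cpp
#include <deque>
#include <mutex>
#include <optional>

// A coarse-grained concurrent deque: every operation takes one mutex.
// Not lock-free, but correct, simple to reason about, and often fast enough.
template <typename T>
class ConcurrentDeque {
public:
    void push_front(T v) { std::lock_guard<std::mutex> l(m_); d_.push_front(std::move(v)); }
    void push_back(T v)  { std::lock_guard<std::mutex> l(m_); d_.push_back(std::move(v)); }

    // Pops return std::nullopt instead of blocking when the deque is empty,
    // so "empty?" and "pop" are a single atomic step (no check-then-act race).
    std::optional<T> try_pop_front() {
        std::lock_guard<std::mutex> l(m_);
        if (d_.empty()) return std::nullopt;
        T v = std::move(d_.front()); d_.pop_front(); return v;
    }
    std::optional<T> try_pop_back() {
        std::lock_guard<std::mutex> l(m_);
        if (d_.empty()) return std::nullopt;
        T v = std::move(d_.back()); d_.pop_back(); return v;
    }

private:
    std::mutex m_;
    std::deque<T> d_;
};
```

The same shape translates to .NET almost mechanically (a lock statement around a LinkedList or array-backed deque), which may be all the poster actually needs.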
Real programmers use butterflies
|