The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious or whatever) will result in your account being removed.
A comment, and a good/sad memory, related to your P/V discussion:
As a university freshman, I picked up Brinch Hansen's "Operating System Principles" and was truly fascinated by the analysis of concurrency and the use of P/V to protect shared resources; I think I could recite chapters of the book by heart. During my sophomore year, "The Architecture of Concurrent Programs" arrived in the university bookstore, and we were a big gang of students rushing to buy it, to read more fascinating discussions of concurrency problems and the careful programming required to handle them.
The second book was sort of a letdown. All the problems were solved, there were none left. Built on top of P/V were critical regions, monitors, and queue mechanisms that made concurrent programming safe and simple! (At least compared to the P/V level!) The job was done. No more problems to be solved. At least that is how it felt at the moment.
This was in the pre-*nix days, or rather: *nix had not yet run down all other OS ideas and discussions, but it was about to. So while we were expecting monitors and critical regions to become The Standard, we were instead told that "There is but one process in an address space, and there is but one address space per process!" That address space was closed, statically determined. Coming from other worlds where activities/threads and address spaces were more or less separate, independent concepts, we shook our heads. But for many years, we had to struggle with the existence of a file as one - in our eyes completely crazy - implementation of a (binary, queueless!) P/V semaphore. There simply was no room for what we had learned about regions, monitors and queues.
We had one relief: The CHILL programming language (Z.200, 1980) provided regions, monitors and queues as first-class language elements. As was the process concept - we would call them threads today, because they were (very) lightweight and independent of address space concepts. But CHILL never made it outside the telecom world (the reason why we were involved was that our university had been involved in its definition and implementation).
Even today, there are very few signs of "The Revenge of the Monitor". Or of the critical region. Still, a lot of *nix based subsystems indicate that concurrency protection is "optional", and my impression is that even when data structures are protected, in most cases it is by an onion-skin wrapper on top of P/V. High level constructs are essentially unsupported by most languages and unknown to most programmers. Obviously we can dig up Brinch Hansen's book (or any other from the same period that discusses it) and hand-craft the high-level mechanisms, but few do. Few know how to do it.
The solutions are there. That's the good thing. Very few use them. That's the sad thing.
(For those who have noticed another post of mine, where I am arguing for autonomous objects communicating through messages: Such an object strongly resembles the original, single-thread, isolated *nix process, doesn't it? Yes, in many ways it does. But it wasn't used that way. A process was not considered an object instance, cooperating with other objects in a composite application solution the way we use OO today. *nix processes were generally not written as message driven state machines. The basic thinking was based on sequential code.)
It's interesting that Windows has monitors and critical regions, not just semaphores and mutexes.
But my favorite thing in Windows is SetEvent, which I believe would have to be implemented with a condition variable in other operating systems. C++20 adds wait, notify_one, and notify_all to std::atomic_flag, which will finally provide something as simple as SetEvent.
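For what it's worth, a minimal sketch of what that looks like, assuming a C++20 compiler (the names event_flag and worker are just for illustration):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic_flag event_flag;           // default-constructed clear since C++20

void worker()
{
    // Block until the flag is set - roughly like waiting on a Windows event.
    event_flag.wait(false);
    std::cout << "worker woke up\n";
}

int main()
{
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    event_flag.test_and_set();         // the SetEvent-like part
    event_flag.notify_one();           // wake one waiter (notify_all for all)
    t.join();
}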
I am just the opposite. I like and enjoy working with threads quite a bit. I work with CUDA also and CPUs and GPUs have significant differences when it comes to multithreading. We had an interesting discussion yesterday at work about a key processing loop in an algorithm we are converting for use on GPUs in CUDA. My colleagues were baffled when I told them in the GPU code there will be no for loop because the thread scheduler implements it for us. We just tell it how many threads to fire. It took them a few minutes to grasp the concept but then the lights came on and they all went, "oh, well @#$%" because it dawned on them how simple it really is. Threading with CPUs is not quite as simple but I have never had much of a problem with it.
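Not the CUDA code itself, of course, but the same mental shift can be sketched in standard C++ with the parallel algorithms: you describe the per-element work once and the runtime decides how to spread it across threads, so there is no hand-written index loop of your own (assumes C++17 <execution> support; with GCC that typically means linking against TBB):

#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main()
{
    std::vector<float> data(1'000'000);
    std::iota(data.begin(), data.end(), 0.0f);

    // In CUDA the analogue is launching a kernel with "this many threads";
    // here the parallel algorithm plays the role of the thread scheduler.
    std::for_each(std::execution::par_unseq, data.begin(), data.end(),
                  [](float &x) { x = x * x + 1.0f; });
}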
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
The strong focus on threads indicates that our minds are basically rooted in the sequential, one-at-a-time mindset. Which also means that we have not really grasped object orientation. The sooner we detach our minds from this ordered, first-things-first-then-the-next way of coding, the easier parallelism becomes.
In my early life as a programmer, I became familiar with two concepts that are essentially forgotten today:
An APL "workspace" is like a sand pit were you add your objects - data arrays, functions, ... - dynamically. Or remove them. The objects interact, like real-world objects. In APL, the interaction is "classical", by calling other functions in a traditional way. But the complete autonomy and independence of the objects when there is no interaction is only partially continued in OO languages of today.
The other concept came with GUIs, popularized by the classic (1984) MacOS and Windows: First, that everything that happens is in the form of atomic events. Handling of an event is (conceptually) instantaneous and brings the object from one consistent, well defined state to another consistent, well defined state.
Second, interaction between objects is by message passing, rather than by function calling. All normal messages are queued, non-blocking, asynchronous. While APL objects are by definition completely independent of each other, messages may (if used throughout) provide a similar run-time independence. You completely avoid deadlocks: sure, object A may wait indefinitely for a reply message from object B after sending a request, but that in no way prevents object A from handling all other kinds of messages. You need no locking; shared data resources are modelled as objects receiving request messages, each of which either causes changes to the object's internal data structures or produces a reply message with data values. The message queue provides the sequential order of atomic accesses.
What use are threads, really? If every object interacts with every other object through messages, has an autonomous right to determine how it will handle them but does so one by one in an atomic fashion, and is responsible for its own transitions between consistent states, the need for threading within each object is very limited. If everything is modelled as objects, and all interaction as messages, there is little need for thread mechanisms outside the objects, too.
Each independently executing object must "run on a thread"; it has its own life. But that doesn't require threads as we use them today, with synchronization and resource ownership and lots of bells and whistles, in some thread models also under the management of a higher level "process", which may imply a more complex, two-level resource ownership model. If all resources are objects, each owning itself, you don't need it.
When an object is waiting for the next message, all state information is preserved in its data members. Its runtime stack is empty, its program counter well defined - think of it as similar to an interrupt handler! The message handling system activates the handler by delivering a message (with an appropriate queueing mechanism). To utilize several CPU cores, the message handler may run one low-level thread per core, and dispatch messages to as many objects as it has cores/threads. You can do the same using higher-level threads, but at a higher cost, and maybe with less flexibility (e.g. if there is an ownership relationship from thread to object). Note that the total stack requirement is limited by the number of messages being processed simultaneously, one per core, each no deeper than needed for processing a single message as an atomic operation.
This is not how we were taught to code in the pre-threading, pre-OO days, but neither threads nor OO has essentially modified our mental models: we still have not learned to see objects as autonomous, and although we have multiple sequences of operation, each is still well ordered - with loops, but essentially sequential.
Yet, you can create quite autonomous objects in most OO languages. You can encapsulate data in objects. Your "message" mechanism may be implemented as a function call (the only public one offered by the object) that returns nothing and conceptually returns immediately. The object may alternate among consistent, well defined states. Your thread worries would vanish. You would never see a deadlock again. Some resource requirements would go down significantly (e.g. for stack space).
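A minimal sketch of that style in plain standard C++ (the names Counter, Dispatcher and post are illustrative, not from any library): messages are posted as non-blocking calls, a single dispatcher thread handles them one at a time, and the object's own data therefore needs no locks. A real system would run one such dispatcher per core and make sure a given object is only ever served by one of them at a time, as described above.

#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

struct Counter {
    int value = 0;                        // only touched from the dispatcher thread
    void handle_increment() { ++value; }  // one consistent state to the next
};

class Dispatcher {
public:
    // "Sending a message": enqueue and return immediately, never block.
    void post(std::function<void()> msg) {
        {
            std::lock_guard<std::mutex> lk(m_);
            queue_.push_back(std::move(msg));
        }
        cv_.notify_one();
    }
    void stop() { post([this] { running_ = false; }); }

    // The dispatcher's one low-level thread: handle exactly one message,
    // then the stack is empty again until the next one arrives.
    void run() {
        while (running_) {
            std::function<void()> msg;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return !queue_.empty(); });
                msg = std::move(queue_.front());
                queue_.pop_front();
            }
            msg();
        }
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> queue_;
    bool running_ = true;                 // only changed on the dispatcher thread
};

int main() {
    Counter c;
    Dispatcher d;
    std::thread t(&Dispatcher::run, &d);

    for (int i = 0; i < 1000; ++i)
        d.post([&c] { c.handle_increment(); });

    d.stop();
    t.join();
    std::cout << c.value << '\n';         // 1000, with no lock on Counter itself
}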
I am trying to be a realist, though: there are no signs of either autonomous objects or message (/event) driven models entering into programming courses; they are too busy teaching students how to create deadlocks and race conditions using threads.
Adding to that: I've considered using message passing models, and I actually do, sort of, in .NET, using ISynchronizeInvoke implementations on System.Windows.Forms.Control when I'm synchronizing threaded operations in a UI.
I've strongly considered implementing a message passing system over ISynchronizeInvoke that works more generally, but I don't know how yet, and even if I did, I'm not sure how I would get it to work in a platform-agnostic way.
The other issue is finding a good implementation of a concurrent deque in .NET.
For some reason, in C++, despite doing a lot of ISAPI development, I never needed such a system.
SIMD is fine though, and I wish that was the main direction that parallelism took. It looks hard at first but you get used to it, whereas threads look easy at first but turn out to be harder and harder the more you learn.
I blame ASDA* for having an "end of range clearance" offer on Taylors Hot Lava Java in bean form. I bought quite a few packets - this was before COVID-19 and stockpiling - but my old hand grinder was really never up to grinding as fine as you need for espresso, so the espresso machine has sat neglected in a corner of the kitchen for seven years.
Until I bought a new electric grinder and found it has several Espresso settings ... and a thorough clean, purge, and test later - BOOM - fresh ground beans into fresh made Espresso, the way it should be.
I'd forgotten how good it could be ... and how useful a wake up device it is ...
* For your colonial cousins, think Walmart - they own ASDA, but it's nowhere near as naff
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
I am trying to work from home but have an issue with MS Teams. If I use it over a VPN to my work machine it works fine; voice calls, however, don't, and I need to attend a voice meeting over Teams this Tuesday. If I try to install Teams on my desktop I get error code 135011 and a "Contact the Administrator" message. This I did, and after being fobbed off once with a "Well, you have gone outside your licence", I phoned again and got someone slightly more helpful who escalated it to second line - "they will contact you by the end of the day". They didn't. I am now looking to try to install it on an old Win 7 box I have.
Aren't they all?
I'd be hard pressed to think of half a dozen sequels that were as good as the original, much less better. I think it's the way Hollywood works: the sequel must be the same as the original, but with more explosions. For number three they can get creative, because number two flopped so badly and they want the franchise to continue ...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!