Why does Windows run so many threads? For example, I have some 2600 threads on one machine and 1200 on another, according to Task Manager.
No, I don't want to look at them in Process Explorer, and I've tried googling the question, but what I'm looking for is just some high-level (it doesn't need to be technical) explanation of what the OS is doing. With nothing open except Task Manager, I have 1144 threads.
Any links, wisdom, or bad warp and woof weaving puns?
Look at how many services you have running. Each of those has two threads minimum; most will be in the range of 5 to 10 threads, with some having more and some having less. My virus-scanning services have 300+ threads.
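If you want to see where those numbers come from yourself, here is a minimal sketch (Win32/C++, using the Toolhelp snapshot API; the program structure and names are mine) that counts threads per process - roughly what Task Manager's thread column adds up to:

```cpp
// Minimal sketch: snapshot every thread in the system and count them
// per owning process, then report the system-wide total.
#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>
#include <map>

int main()
{
    // Snapshot all threads currently running on the machine.
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    std::map<DWORD, int> perProcess;   // PID -> thread count
    THREADENTRY32 te{ sizeof(te) };    // dwSize must be set before use
    if (Thread32First(snap, &te))
    {
        do { ++perProcess[te.th32OwnerProcessID]; }
        while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);

    int total = 0;
    for (auto& [pid, count] : perProcess) total += count;
    std::printf("%zu processes, %d threads total\n", perProcess.size(), total);
}
```

Summing over all processes should land close to the totals Task Manager shows.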
It's just easier to code tasks that need to wait on things as single-threaded code than it is to use some sort of async programming model. After all, a lot of that Windows code was written back in the late '90s.
Marc Clifton wrote:
bad warp and woof weaving puns
Just an amusing observation. Got discussing weaving the other day with a friend, only to have my dog join in. His comments were surprisingly relevant.
The original Windows programming model didn't use threads. Everything was event driven: the entire application waited for something to happen, and every happening arrived as a new element in the message (/event) queue. Each message was then processed in classic event fashion, that is: conceptually instantaneously.
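For those who never saw it, a minimal sketch of that model (plain Win32/C++; the window and class names are mine) - one loop, one callback, every piece of work delivered as a message and handled at once:

```cpp
#include <windows.h>

// Every user action arrives here as a message; each case completes
// "instantaneously" and returns, so the queue never backs up.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_LBUTTONDOWN: MessageBeep(MB_OK); return 0;  // an event, handled at once
    case WM_DESTROY:     PostQuitMessage(0); return 0;
    }
    return DefWindowProcW(hwnd, msg, wp, lp);
}

int WINAPI wWinMain(HINSTANCE inst, HINSTANCE, PWSTR, int show)
{
    WNDCLASSW wc{};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = inst;
    wc.lpszClassName = L"DemoWindow";
    RegisterClassW(&wc);

    HWND hwnd = CreateWindowW(L"DemoWindow", L"Event driven", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                              nullptr, nullptr, inst, nullptr);
    ShowWindow(hwnd, show);

    // The whole application blocks here, waiting for the next event.
    MSG m;
    while (GetMessage(&m, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&m);
        DispatchMessage(&m);   // routes the message to WndProc
    }
    return 0;
}
```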
Telecommunication people have programmed this way, using FSMs and state diagrams, since Day 1 of communication protocols. Other programmers never caught on to that way of thinking (although I have been using a disassembler written as an FSM: recognizing an instruction prefix, an addressing mode or whatever was modelled as an event).
Event/FSM modelling and programming is a completely different programming discipline. I haven't seen it taught in universities for at least twenty years. Nowadays, I don't think the lecturers are aware of it at all - not even in telecommunication courses. It really is a pity; event driven programming does have a lot of advantages.
Threads came in as an alternative, with all their problems. But people (especially those with a *nix background) embraced them anyway. Every now and then I am itching to implement a complex protocol in a pure FSM manner, just to demonstrate how clean it could be. Even if other programmers might say "Yes, that looks great!", I have no hope of making them do things that way themselves, so I have never spent significant resources on it.
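To make the style concrete, here is a rough sketch of what I mean, for a hypothetical framed byte protocol (C++; the states, events and names are all invented for illustration). Every handler is a micro-operation that completes at once, so there is nothing to lock and nothing that can deadlock:

```cpp
#include <cstdio>
#include <string>

enum class State { Idle, InFrame, Escaped };
enum class Event { StartByte, DataByte, EscapeByte, EndByte };

struct FrameFsm
{
    State state = State::Idle;
    std::string frame;

    // One micro-operation per (state, event) pair; each completes
    // immediately, conceptually in zero time.
    void onEvent(Event ev, char byte)
    {
        switch (state)
        {
        case State::Idle:
            if (ev == Event::StartByte) { frame.clear(); state = State::InFrame; }
            break;                                   // anything else: ignore
        case State::InFrame:
            if      (ev == Event::EscapeByte) state = State::Escaped;
            else if (ev == Event::EndByte)   { deliver(); state = State::Idle; }
            else if (ev == Event::DataByte)   frame += byte;
            else                              state = State::Idle;  // illegal event: reset
            break;
        case State::Escaped:
            frame += byte;                           // escaped byte is plain data
            state = State::InFrame;
            break;
        }
    }

    void deliver() { std::printf("frame: %s\n", frame.c_str()); }
};
```

The whole protocol state lives in one structure, and you can read the state diagram straight off the switch.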
In the original Windows programming model, everything had to be event driven because there were no threads. It was also all based on the Windows messaging model, which was closely linked to having a window.
When threads were added, they were predominantly for background tasks, and as such it was a major pain to give them a Windows message pump so you could write them event driven (you had to give them a hidden window). So, for that kind of background processing, there was a shift toward just writing each task as straightforward blocking code on its own thread... unless you needed gobs of threads, in which case you used some sort of thread pooling (which, IIRC, had no library support at first).
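The pattern, roughly (a minimal Win32/C++ sketch; the worker and its job are placeholders of mine): the task is written as plain sequential blocking code and simply handed to its own thread, with no message pump anywhere.

```cpp
#include <windows.h>
#include <cstdio>

// Straight-line blocking code: wait, work, report, done.
DWORD WINAPI Worker(LPVOID param)
{
    const char* job = static_cast<const char*>(param);
    Sleep(1000);                        // stands in for "waiting on something"
    std::printf("finished: %s\n", job);
    return 0;
}

int main()
{
    HANDLE h = CreateThread(nullptr, 0, Worker,
                            const_cast<char*>("convert file"), 0, nullptr);
    if (h)
    {
        WaitForSingleObject(h, INFINITE);  // the caller can block too
        CloseHandle(h);
    }
}
```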
I seem to recall this being the recommended practice from the teacher back when I took my Win32 programming class at MS... but that was a lot of years ago, so maybe I misremember it all...
My point was that this was the prevailing best practice when a lot of that system code was written, and I doubt it's been rewritten since.
The classic Mac OS was the same way - purely event driven. (I learned the philosophy of event driven programming by reading about Mac OS several years before I encountered Windows in practice!)
In the classic Win32 model, events were routed to an entity called a 'Window', because it almost always had one. But it didn't have to! Today, a 'process' serves the same role: just as a window entity handled a number of events, and the processing of them, the process handles a number of threads. The main difference is that while the processing of an event (i.e. a message) always completed within a short time, a thread may stop midway, waiting for 'something' - and the 'something' is not modeled as an event/message, but as something else.
Note that in *nix (as well as many other OSes), the 'process' is an actively executing entity. Threads are just subdivisions of the activity within a process. A Windows process is not by itself executing; it is a container for one or more threads - just like a win32 window didn't by itself do any processing: Each message handling procedure did.
It could have been modeled as an event/message - the main difference is that the thread model allows the thread to keep its local data across those waits. In an event model, a 'window'/process maintains a data structure which is modified by micro-operations (i.e. individual event procedures) completing in finite time (conceptually: zero time). There is never any deadlock, never any wait for someone else to complete so that the data structure is 'released'; no data is ever 'reserved'. The FSM may indicate that in a given state, a given event is illegal and should be treated as an error, but that never causes a deadlock.
You could think of the classical processing of an event as a one-shot thread: it is fired up, does its job (in an atomic manner) and terminates. Introducing the pausing of a thread midway, so that it does more than an atomic micro-operation, made threads grow fat. Lots of data were hidden inside threads, and many threads were packed into one process, so processes grew fat too, frequently encompassing the entire application. In Win32, you would instead dedicate one data structure to one 'window', as an FSM over that data structure (say, the state info for a protocol), making more 'windows' for other data structures. Your application would be organized as a number of cooperating FSMs, or 'windows', the way networking people have been doing it for ages.
I never saw an introductory college textbook promoting this design philosophy (not even in the days of classic Mac OS and Win32) - you would learn it only in advanced, telecom-oriented courses, long after your basic approach to programming had been molded into the multiple-threads-and-locked-up-data-structures way of thinking.
So the FSM / event concepts were lost. And few people are aware of what was lost; they never knew that it existed.
How long ago do you define "original"? I was doing multithreaded apps (not just events or multitasking) in VB6 back in the last half of the 1990s in Windows. C++ developers were writing multithreaded apps even earlier.
Yes, in the second half of the 1990s it started - not as an OS-supported thread concept, but with "helper functions" which, at the user level, set up a separate message loop where the "thread" (technically, the main-loop code processing an earlier message) could hang, waiting for messages to be delivered to that supplementary message loop. The only message forwarded to this loop (aside from broadcast-type messages, of course) was the event on which the "thread" was blocked. Other messages were routed to the ordinary main loop and processed as soon as the "thread" had lined up to wait for its "special delivery" message.
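In today's terms, such a helper amounts to a nested message loop. A rough sketch of the idea (C++; WM_APP_DONE is a hypothetical completion message of mine, not any real API):

```cpp
#include <windows.h>

// Assumption for illustration: some other code posts this message
// when the awaited work is finished.
const UINT WM_APP_DONE = WM_APP + 1;

void WaitForCompletion()
{
    // A nested message loop: this handler blocks here until its
    // "special delivery" message arrives, while still dispatching
    // everything else to the ordinary window procedures.
    MSG m;
    while (GetMessage(&m, nullptr, 0, 0) > 0)
    {
        if (m.message == WM_APP_DONE)
            return;                // the event this "thread" was blocked on
        TranslateMessage(&m);
        DispatchMessage(&m);       // everything else: normal routing
    }
}
```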
I didn't start programming Win32 until the arrival of XP, and do not know if the OS-supported thread concept was present in Windows NT from Day 1 - it may have been. As far as I can remember, Win16 did not have any OS thread support. If you were an advanced Win95/98 programmer, you may have been linking in the Win32 DLL to use the subset of NT functionality that could be invoked from Win16 - but then you had sidetracked into Win32.
For all practical purposes, "original" Windows is Win16.
if you had to do some heavy processing, like converting .wav into .mp3, the best way to do it was multithreading. windows 95 had support for it; it wasn't something NT-specific.
not every win32 application needed a window, but if you were making a GUI application then you would register a window and a callback procedure with the win32 API. your callback procedure would receive win32 events from the OS in your event loop, which ran in the primary (usually the only) thread of your application. those events would mostly be mouse clicks on the buttons of your app's GUI, key presses in the text areas, and the like.
if you had a button that kicked off some time-consuming task, like encrypting, and your primary (and only) thread not only handled the button-press message (event) but also took care of the whole encryption, then your whole app window would hang: there would be nothing left to process future events while your primary thread was busy encrypting.
this is where you would fire up another thread to do the time-consuming processing, so that your primary thread could keep handling GUI events, for instance minimizing your app window.
the other way to do this is somewhat similar to what they do in game programming and cooperative multitasking: you process a small chunk of the data in your primary thread and then stop, peek for events, process them, and then either proceed with the data processing (win32) or return to the OS (win16) and continue on the next turn, when your primary thread gets active again via your window callback procedure. sort of... a sketch of the win32 flavour is below.
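something like this (win32/C++; ProcessOneChunk is a made-up stand-in for the heavy work):

```cpp
#include <windows.h>

// hypothetical worker: does one small slice and reports whether more remains
static bool ProcessOneChunk()
{
    static int left = 100;
    Sleep(10);               // stands in for a slice of the heavy work
    return --left > 0;
}

void RunLongTaskCooperatively()
{
    for (bool working = true; working; )
    {
        working = ProcessOneChunk();

        // drain pending messages so the gui stays responsive between chunks
        MSG m;
        while (PeekMessage(&m, nullptr, 0, 0, PM_REMOVE))
        {
            if (m.message == WM_QUIT) return;
            TranslateMessage(&m);
            DispatchMessage(&m);
        }
    }
}
```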
Having written FSMs using event-driven programming, I 100% agree. In communications domains, FSMs are far easier to debug, and easier to prove correct once debugged. I find it interesting that every single blog I've read about high-speed IO in Windows and .NET ends up implementing some form of an FSM and/or a pure event-driven architecture.
For the OP's question: every service has at least two threads - one that monitors for service control commands (start/stop/pause/resume) and one that does the work. Many applications, including MS Word and Excel, implement some form of rudimentary memory and resource management, and they use background threads to assist with cleanup. All Java- and .NET-based applications run several threads for resource management. However, most of these threads sit in a wait state, waiting for either an OS-level or an application-level event trigger, so the overall CPU time is low.
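A skeletal illustration of those two service threads (Win32/C++; the service name and timings are placeholders of mine, and a real service would report richer status to the SCM): the dispatcher thread fields control commands while a separate worker does the job.

```cpp
#include <windows.h>

SERVICE_STATUS_HANDLE g_handle;
HANDLE g_stopEvent;

void Report(DWORD state)   // minimal status reporting to the SCM
{
    SERVICE_STATUS s{};
    s.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
    s.dwCurrentState     = state;
    s.dwControlsAccepted = (state == SERVICE_RUNNING) ? SERVICE_ACCEPT_STOP : 0;
    SetServiceStatus(g_handle, &s);
}

void WINAPI CtrlHandler(DWORD ctrl)   // called back on the dispatcher thread
{
    if (ctrl == SERVICE_CONTROL_STOP) SetEvent(g_stopEvent);
}

DWORD WINAPI Worker(LPVOID)           // the thread that does the actual work
{
    while (WaitForSingleObject(g_stopEvent, 5000) == WAIT_TIMEOUT)
    { /* periodic work here */ }
    return 0;
}

void WINAPI ServiceMain(DWORD, LPWSTR*)
{
    g_handle = RegisterServiceCtrlHandlerW(L"DemoSvc", CtrlHandler);
    g_stopEvent = CreateEventW(nullptr, TRUE, FALSE, nullptr);
    Report(SERVICE_RUNNING);
    HANDLE worker = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);
    WaitForSingleObject(worker, INFINITE);   // block until told to stop
    Report(SERVICE_STOPPED);
}

int main()
{
    SERVICE_TABLE_ENTRYW table[] = {
        { const_cast<LPWSTR>(L"DemoSvc"), ServiceMain },
        { nullptr, nullptr }
    };
    StartServiceCtrlDispatcherW(table);      // blocks for the service lifetime
}
```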
I'm pretty sure I would not like to live in a world in which I would never be offended.
I am absolutely certain I don't want to live in a world in which you would never be offended.
Freedom doesn't mean the absence of things you don't like.
only to have my dog join in. His comments were surprisingly relevant.
Of course... Dogs are pretty intelligent :P
If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.