I just read it. It's interesting that you managed your own thread pool. The more I try to work with the built-in one, the more I want to go back to my own thread pool management like I was doing before. I'll take a look at that SmartThreadPool class to see how it works; I'm interested in that.
I'm already using Invoke/BeginInvoke for my UI updates, but that's not the focus of my project, which relies on message queuing.
What I wonder about is, since Task.Run() uses the ThreadPool, whether I'll have to somehow intercept or wrap it so that it uses mine. I don't know if that's possible.
This was actually easier to do without having to consider Task or ThreadPool. Funny, that.
Real programmers use butterflies
I think the TPL manages the thread pool for you.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
I figured out some more stuff. Instead of using my own thread pool, I simply changed the way Task items get scheduled, so it queues items once they exceed my own quota. That way it still uses the system thread pool, but in a way that makes sense. The relevant class is under the System.Threading.Tasks namespace and it's called TaskScheduler. It's kinda neat, but not very well documented.
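For reference, here's a minimal sketch of the idea, modeled loosely on the LimitedConcurrencyLevelTaskScheduler sample in Microsoft's docs. The class name ThrottledTaskScheduler and the quota parameter are invented for illustration; the real versions differ in the details.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Sketch of a TaskScheduler that caps concurrency at a quota and queues
// the rest, while still running everything on the system ThreadPool.
public class ThrottledTaskScheduler : TaskScheduler
{
    private readonly LinkedList<Task> _pending = new LinkedList<Task>();
    private readonly int _quota; // max dispatch loops running at once
    private int _active;         // dispatch loops currently running

    public ThrottledTaskScheduler(int quota)
    {
        if (quota < 1) throw new ArgumentOutOfRangeException(nameof(quota));
        _quota = quota;
    }

    public override int MaximumConcurrencyLevel => _quota;

    protected override void QueueTask(Task task)
    {
        lock (_pending)
        {
            _pending.AddLast(task);
            if (_active < _quota)
            {
                _active++;
                // Still the system ThreadPool, just never more than
                // _quota of these dispatch loops at a time.
                ThreadPool.UnsafeQueueUserWorkItem(_ => Dispatch(), null);
            }
        }
    }

    private void Dispatch()
    {
        while (true)
        {
            Task next;
            lock (_pending)
            {
                if (_pending.Count == 0) { _active--; return; }
                next = _pending.First.Value;
                _pending.RemoveFirst();
            }
            TryExecuteTask(next);
        }
    }

    // Keeping it simple: never inline, so the quota is always honored.
    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued) => false;

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        lock (_pending) return new List<Task>(_pending);
    }
}
```

You'd then hand it to a TaskFactory, e.g. `new TaskFactory(new ThrottledTaskScheduler(4)).StartNew(work)`, and everything scheduled through that factory obeys the quota.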
Real programmers use butterflies
Thanks!
This is how I got around it - I can still use it, but I can throttle the number of concurrent tasks.
Customizing the TaskScheduler: Queue Your Task Work Items to Run When You Want Them To[^]
It doesn't seem very popular, maybe because I lifted a lot of the code from a Microsoft example in their docs, but unlike them I explained it (and I modified it).
ETA: The advantage over using a custom thread pool is that you can still use tasks with it. To use a custom thread pool like Stephen's, I'd still need a custom task scheduler similar to what I wrote above to launch tasks on that thread pool.
Real programmers use butterflies
Maybe it's just my NIH proclivity, but following all this just reinforces why I have no time for languages and frameworks that provide higher-level threading abstractions. They're usually beyond clueless. Hell, even most of the OSes are clueless.
I usually don't care for it either, although using async/await and letting the compiler build the necessary state machine to make it work is pretty cool.
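A tiny example of what that buys you: the await below reads sequentially, but the compiler splits the method into a hidden state-machine class in which everything after the await becomes a continuation that resumes when the task completes. The method name and values here are made up for illustration.

```csharp
using System;
using System.Threading.Tasks;

public class AwaitDemo
{
    // The compiler rewrites this into a state machine: the method
    // returns to its caller at the await, and the code after it runs
    // later as a continuation, without blocking a thread in between.
    public static async Task<int> SlowDoubleAsync(int n)
    {
        await Task.Delay(100); // suspension point
        return n * 2;          // continuation
    }

    static async Task Main()
    {
        Console.WriteLine(await SlowDoubleAsync(21)); // prints 42
    }
}
```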
Real programmers use butterflies
honey the codewitch wrote: I expect more like a few at most for most applications, and a well written app shouldn't have more CPU bound operations than there are CPU cores (or hardware threads) when possible, so why so many? Threads fortunately aren't limited to the number of CPU cores; that'd be quite unusable. Instead of having a thread for each core, we bind them to functionality that may or may not be suspended.
Even in the days of single-core desktops my WinForms apps would spawn multiple threads (as do most applications). If the user clicked anything, I'd spawn a thread so that the UI would stay nicely responsive and keep drawing, and the user would have immediate feedback on his action (as opposed to a SQL Server update, which doesn't show a window, so you have to launch Task Manager just to check whether it is still running).
Also take into account that the number of threads in the pool might not be "per application".
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
The docs say it's per process, and what I'm saying has to do with performance.
Sure, if your UI thread is being tied up, move some of that to background work, but the bottom line is that if you've got more threads than hardware to run them, you're wasting CPU time on context switching.
My statement, again, was purely about designing an app for maximum throughput.
I guess there would be reasons to spawn more threads than that, but all you're doing at that point, IMO, is using threads as an abstraction.
Real programmers use butterflies
honey the codewitch wrote: The docs say it's per process, and what I'm saying has to do with performance. If each .NET app spawned 2k threads, you'd notice it.
honey the codewitch wrote: My statement again, was purely about designing an app for maximum throughput.
I guess there would be reasons to spawn more threads than that, but all you're doing at that point IMO is using threads as an abstraction. Most applications have a UI and events; I rarely see an application where only processing is involved; in that case you'd break the application up to run decentralized, instead of mucking with threads.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
I said in the article I wrote about this (and perhaps in the OP, I don't remember) that the threads might be cutouts.
I'm simply reporting the value of GetAvailableThreads() vs. GetMinThreads() and GetMaxThreads().
If Microsoft's thread pool is lying to me, that's on them.
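For anyone who wants to check on their own machine, querying all three is only a few lines (the values vary by machine and runtime):

```csharp
using System;
using System.Threading;

class PoolReport
{
    static void Main()
    {
        ThreadPool.GetMinThreads(out int minWorker, out int minIo);
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
        ThreadPool.GetAvailableThreads(out int availWorker, out int availIo);

        // "Available" counts down from "Max" as pool threads pick up work,
        // so on an idle pool Available and Max typically report the same.
        Console.WriteLine($"Min:       worker={minWorker}, I/O={minIo}");
        Console.WriteLine($"Max:       worker={maxWorker}, I/O={maxIo}");
        Console.WriteLine($"Available: worker={availWorker}, I/O={availIo}");
    }
}
```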
Real programmers use butterflies
I don't think it is lying, just missing context.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
The UI loop is typically spun on a single thread, with events firing out of the loop it's spinning. I'm not sure if you're implying that they typically fire those events from other threads, but in my experience, they don't.
Real programmers use butterflies
I do; as explained, for almost every user action. There was an example project from MS that implemented MSN Messenger in .NET, and it did the same; almost everything that took longer than 250 ms ran on a background thread.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
250 ms is a quarter of a second, so that's "long running" in my book, in that you can't tie up the UI thread for that long.
I guess if you're doing a ton of those all the time, you may be spawning more threads to handle it, but I don't see how being able to move the window around while the UI lags its update by 250 ms or more is much of an improvement.
Personally, I don't know enough about the app to say otherwise, but if I found I was running into that, I'd look at my design.
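The trade-off is easy to demonstrate without any UI at all. This console sketch fakes a 250 ms handler pushed onto a pool thread while the calling thread (standing in for a UI message loop) keeps spinning; the timings and names are arbitrary:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ResponsiveDemo
{
    static void Main()
    {
        // The "slow handler": a quarter second of simulated work,
        // moved off the calling thread onto the ThreadPool.
        Task<int> work = Task.Run(() =>
        {
            Thread.Sleep(250); // stand-in for real CPU-bound work
            return 42;
        });

        // Meanwhile the calling thread (think: UI message loop) stays free.
        int pumped = 0;
        while (!work.IsCompleted)
        {
            pumped++;          // stand-in for pumping paint/input messages
            Thread.Sleep(10);
        }

        Console.WriteLine($"result={work.Result}, loops while waiting={pumped}");
    }
}
```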
Real programmers use butterflies
honey the codewitch wrote: I guess if you're doing a ton of those all the time, you may be spawning more threads to handle it, but I don't see how being able to move the window around even as the UI is lagging its update by 250ms or more all the time is much of an improvement. If the UI isn't updated for some time, Windows gives you that nice option to close your non-responsive application. Also, I usually have a thread loading grids in the background while synchronizing a progress bar at the same time. That gives a much better user experience than normal databinding.
MSDN states that since .NET 4 it "depends" on multiple factors. MS being helpful as always, but at least it proves Google wrong. Querying the values in a new project yields the results below on my machine:
GetMinThreads - workerThreads 6, completionPortThreads 6
GetMaxThreads - workerThreads 2047, completionPortThreads 1000
GetAvailableThreads - workerThreads 2047, completionPortThreads 1000
Somehow I doubt that Max and Available are the same, but that's indeed what the framework reports.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
Those are the numbers I get too. The minimums are reasonable, but the maximums are ridiculous, IMNSHO.
The only reason I can think of for Microsoft to set those figures so high is to virtually guarantee that CPU-bound "async" tasks are never completed synchronously. That doesn't seem right to me, though. Like I said, I miss working at Microsoft, where there was someone I could shoot an email to who might know and respond.
Real programmers use butterflies
It may be that "max" is equal to "available"; that doesn't mean "active", but that'd be horrendously confusing.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
Especially considering I am using GetAvailableThreads() and it's the same count as GetMaxThreads().
Stephen Toub, whom I trust as an expert, has been relying on his own thread pool implementation in .NET for years, and I think the rant in the OP is one of the reasons.
Real programmers use butterflies
honey the codewitch wrote: The docs say it's per process, And how many? Do you have a link? Google claims this:
What is the default value for thread pool size?
5
The Max Thread Pool Size parameter specifies the maximum number of simultaneous requests the server can handle. The default value is 5. When the server has reached the limit or request threads, it defers processing new requests until the number of active requests drops below the maximum amount.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
That may be what Google reports, but it's not what my actual code reports.
ETA: Are you sure that's talking about the .NET ThreadPool class?
Real programmers use butterflies
honey the codewitch wrote: ETA: Are you sure that's talking about the .NET ThreadPool class? Not quite, because I can't find the equivalent on MSDN, which I'd expect to be correct. It implies the .NET thread pool, since the other answers are about it too.
honey the codewitch wrote: That may be what google reports, but it's not what my actual code reports. I seriously dislike it when code and documentation don't give the same answer. What I did find was another "max":
MSDN claims: There is one default thread pool per process, including Svchost.exe. By default, each thread pool has a maximum of 500 worker threads. Now, a maximum of 500 sounds reasonable, given that the average UI only uses 20 or so BackgroundWorkers, and perhaps 2 to 4 threads for processing data and async reading/writing.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
Yeah, 500 would be reasonable, just because you have all kinds of library code and framework code running along with your app code, and they need threads too.
When I design an app for throughput I do try to take that into consideration, but it can only be found with profiling. Consequently, how many long-running tasks I should be serving varies.
But otherwise I try to design as if I'm the only app running. The reason is that if it's about moving and chunking bits fast, it's either a game, some heavy app, or a server of some sort - although those are mostly I/O bound, except for things like game servers. Either way, the end user is probably aware of its requirements and won't expect it to run well with lots of other stuff going on.
The OS does a good job of scheduling its own threads, so in practice I ignore OS stuff. It tends to background itself when an app is cranking anyway.
This is all just experiential. I used to write a lot of ISAPI code for work, and I had to think about what my threads were doing and manage a "proper" thread pool. By "proper" I mean it was directly tied to how many incoming I/O-bound requests it was expected to serve - so it's a bit of a different animal, but with concepts similar to, say, a game or a computationally intensive app like an MPEG encoder, in that you had to be very careful with how you were using the threads you had.
If I were writing something both I/O bound and CPU bound, like a video player, I still wouldn't need a lot of threads - in the neighborhood of three, depending on the system: one for I/O (so it's not tying up the CPU), one for decoding (CPU bound), and one for rendering, usually tied to the UI (CPU and/or GPU bound).
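That three-thread split can be sketched with a couple of bounded queues between the stages. The stage names and the fake integer "frames" below are invented for illustration; a real player would obviously push decoded video, not strings:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Three-stage pipeline like the video-player split described above:
// one reader (I/O), one decoder (CPU), one renderer. Bounded queues
// keep a fast stage from running away from a slow one.
class Pipeline
{
    static void Main()
    {
        var rawFrames = new BlockingCollection<int>(boundedCapacity: 4);
        var decoded = new BlockingCollection<string>(boundedCapacity: 4);

        var reader = new Thread(() =>
        {
            for (int i = 0; i < 10; i++) rawFrames.Add(i); // stand-in for disk reads
            rawFrames.CompleteAdding();
        });

        var decoder = new Thread(() =>
        {
            foreach (int frame in rawFrames.GetConsumingEnumerable())
                decoded.Add($"frame-{frame}"); // stand-in for CPU-bound decode
            decoded.CompleteAdding();
        });

        var renderer = new Thread(() =>
        {
            foreach (string frame in decoded.GetConsumingEnumerable())
                Console.WriteLine(frame);      // stand-in for presenting the frame
        });

        reader.Start(); decoder.Start(); renderer.Start();
        reader.Join(); decoder.Join(); renderer.Join();
        // prints frame-0 through frame-9, in order
    }
}
```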
I guess, thinking about it, I haven't run into a lot of middle ground with respect to app performance. Usually my apps either have to be casual about threading and simply reserve it for things that would lock up the UI, or they have to go full bore and do as much work as they can in the time given.
Even my parser generators were casual, despite sometimes being very long running - the LALR algorithm, for example, is murder on the CPU and is resistant to being parallelized.
So maybe it's a dearth of experience on my part, wherein I would need more of a "medium" app which might consistently need a lot of threads due to executing many tasks that are short-lived but still too long for the UI. I can imagine scenarios, but nothing that doesn't seem contrived. Again, perhaps my own tunnel vision, due to not finding one I've had to write.
Real programmers use butterflies
honey the codewitch wrote: Yeah 500 would be As long as it doesn't start all 500 in the pool for each process. It would explain, though, why the performance of my modern machine can't keep up with the UI of a Commodore Amiga. By the time the SQL Server upgrade UI appears, I've clicked the icon 10 times to "start" the damn application. Older machines give instant feedback in the UI, yet modern ones need a few seconds to load their libraries and put up a "loading" screen.
honey the codewitch wrote: But otherwise I try to design as if I'm the only app running. Haha, very different environments and designs. If you are the only app running, or have a dedicated machine, then it'd be wasteful not to use every resource there is. I usually have to assume that the user has at least one instance open of every Office application, a mail client, an IM client, and a ton of browsers, with the user expecting a smooth experience even if they load half a gigabyte of crap into a grid.
honey the codewitch wrote: which might consistently need a lot of threads due to executing a lot of short lived but still too long for the UI tasks. I can imagine scenarios, but nothing not contrived seeming. Not just short-lived; for any communication using TCP, or any disk operation, I start a BackgroundWorker. There are quite a few "professionals" who read/write in a UI event (on mouse click or whatever), blocking the UI and making you go through all of the forms' events just to get an idea of the flow.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
Eddy Vluggen wrote: any communication using TCP, or any disk-operation, I start a backgroundworker.
That's an I/O bound thread, not a CPU bound thread. I'm purely talking about CPU bound worker threads here.
I/O bound threads follow different rules because they spend most of their time asleep waiting for input.
ETA: I think the context got lost in the fray, but in the OP I'm talking about CPU-bound worker threads, not I/O-bound threads - you may have many more of those than you have cores, and the thread pool keeps a separate pool for them.
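The difference is easy to see: while an I/O-bound operation is awaiting, it holds no thread at all. Here a Task.Delay stands in for a pending I/O completion (the counts are arbitrary); a thousand of these in flight at once never needs anywhere near a thousand pool threads:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class IoBoundDemo
{
    static async Task Main()
    {
        // 1,000 "I/O-bound" operations in flight simultaneously. While
        // each is awaiting, it occupies no thread; a timer callback
        // (standing in for an I/O completion) resumes it later on a
        // pool thread.
        Task<int>[] ops = Enumerable.Range(0, 1000)
            .Select(async i => { await Task.Delay(100); return i; })
            .ToArray();

        int[] results = await Task.WhenAll(ops);
        Console.WriteLine($"completed={results.Length}"); // prints completed=1000
    }
}
```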
Real programmers use butterflies