Just bringing this idea up to see whether it holds any merit for developers or is just plain crazy. I've realized for a long time now that no matter how many CPUs you have, you are still limited by software developers incorporating multi-threading for whatever number of cores.
My question is: what if the OS, which already knows how many cores are present, distributed the running processes across the cores on different clock cycles, in a serial round-robin fashion? Everyone knows data is streamed to the processor serially every clock cycle. If the OS knew the number of CPUs and had a method of switching between them, handing data to each one every clock cycle, it seems to me the more cores you had, the faster the information could be processed, whether or not the programs were written to take advantage of multiple cores.
Alisaunder wrote: plain crazy
it is.
Each thread executing on a core has state (in the CPU registers, on the stack, etc.); instructions from one thread have to remain on the same core to work with that state. Switching a thread to another core is a rather expensive kernel operation, one you would typically like to avoid (that is what "thread affinity" is about).
Furthermore, cores have caches that vastly improve performance, and these caches are most often local to the core, so having the data in one core's cache isn't going to help the thread when it executes on a different core.
And finally, CPU performance isn't the only bottleneck; there are other limitations, such as memory bandwidth and disk and network performance. Therefore, adding a few threads to an app may be good; adding a large number generally is counterproductive.
All in all, a very bad idea.
Luc Pattyn [My Articles] Nil Volentibus Arduum
Fed up by FireFox memory leaks I switched to Opera and now CP doesn't perform its paste magic, so links will not be offered. Sorry.
OK, I'm not a CPU engineer, so I only have a basic understanding of how they work, but aren't we seeing similar functionality on video cards using SLI, which distributes screen information between two, three, or four video cards tied together? Instead of CPUs they are GPUs, which are basically the same thing.
By the way, this is a hypothetical discussion. I wasn't referring to current threading technology; I was referring to a new way of trying to implement and utilize the extra cores inside a processor using the OS, sort of like SLI does with a device driver and GPUs.
In a GPU things are quite different: it is dealing with streaming data, performing "simple" operations on a sequence of pixels, without much state involved, with tailored memory ports, and in a highly predictable way. More cores means a smaller screen area per core, hence faster. Video processing is inherently a candidate for parallel processing; your average web browser, spreadsheet, or whatever isn't.