Quote:
I'm not an expert on multiprocessing in Intel systems, but the simplest way to look at it is that one core cannot execute two instructions at the same time. That isn't strictly accurate, since the hardware may split an instruction into separate stages and execute different stages simultaneously, but it is a useful model. If you run two threads on one core, there is some overhead in switching between them, which tends to make the whole job slower.

So if the job is pure computation (say, calculating Pi to a large number of decimal places), it will finish fastest with one active thread per core. However, if the job has pauses where no instructions are being executed (for example, waiting for input, or for data being read from disk), the operating system may be able to fill those pauses in one thread with useful work from another. That speeds up the job if the time recovered outweighs the thread-switching overhead.

For example, if you run the job in a single thread on a single core and Task Manager shows the CPU as 75% busy, the best you could hope for from multiple threads is to make the CPU 100% busy (100/75 ≈ 1.33, about a 33% improvement).
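A minimal sketch of the "filling the pauses" effect, in Python, using `time.sleep` as a stand-in for a real blocking wait (I/O, user input, etc.) — the task names and pause length are made up for illustration:

```python
import threading
import time

def io_bound_task(pause_s):
    # Simulates a task that spends most of its time waiting
    # (e.g. for input or a disk read) rather than executing instructions.
    time.sleep(pause_s)

start = time.perf_counter()
threads = [threading.Thread(target=io_bound_task, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The four 0.2 s waits overlap, so total wall time stays close to 0.2 s
# rather than the 0.8 s a purely sequential run would take: while one
# thread is paused, the scheduler runs another.
print(f"elapsed: {elapsed:.2f} s")
```

If the tasks were pure computation instead of waits, running four of them on one core would gain nothing and would add switching overhead, which is the distinction the quoted answer is making.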
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)