Introduction
Timing fluctuation can limit the usefulness of timing experiments. The idea here is to place the algorithm inside a loop and count how many iterations complete between two ticks of the clock.
The reason for using the time function rather than the clock function is that clock can be poorly implemented and "miss ticks"; accumulated over many loops, this can lead to an inaccuracy of up to 50%. Using the time function and counting iterations between its ticks reduces that inaccuracy.
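For contrast, here is a minimal sketch of the single-shot measurement this approach replaces; the array B and its length N are assumed to be defined as in the example below:

#include <algorithm>  // sort
#include <ctime>      // clock, clock_t, CLOCKS_PER_SEC
using namespace std;

// Single-shot timing: when the run time is close to one tick,
// the +/- 1 tick quantization error alone can approach 50%.
clock_t begin = clock();
sort(B, B + N);       // the algorithm being timed
clock_t stop = clock();
double elapsedMs = 1000.0 * (stop - begin) / CLOCKS_PER_SEC;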
Example
#include <algorithm>  // copy, sort
#include <ctime>      // time, time_t
#include <vector>
using namespace std;

const int N = 10000;     // array length
const size_t reps = 10;  // number of tick intervals to sample
double A[N], B[N];       // A: unsorted input, B: working copy

time_t start = time(NULL), finish;
vector<long> iterations;
iterations.reserve(reps);
while (iterations.size() < reps)
{
    long count = 0;
    do
    {
        ++count;
        copy(A, A + N, B);  // restore the unsorted data
        sort(B, B + N);     // the algorithm being timed
        finish = time(NULL);
    } while (start == finish);  // spin until the clock ticks over
    iterations.push_back(count);
    start = finish;  // the new tick begins the next interval
}
This can then be repeated several times, so even if the duration between ticks varies, we can select the most representative value.
Output
The reported result is the median obtained by sorting an array of 10,000 doubles across 10 repetitions.
The result is calculated by sorting the iterations vector, retrieving its middle value (the median), and dividing 1000.0 by it, which converts iterations per one-second tick into milliseconds per sort.
1000.0/iterations[reps/2]
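Putting the pieces together, here is a minimal sketch of that calculation, assuming the iterations vector and reps filled in by the example above (msPerSort is a name introduced here):

#include <algorithm>  // sort (already included in the example above)

// Sort the per-tick counts and take the middle one: the median is
// robust against an occasional short or long tick interval.
sort(iterations.begin(), iterations.end());
double msPerSort = 1000.0 / iterations[reps / 2];  // one tick = 1 s = 1000 ms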