|
Hello everyone,
we want to run benchmark tests on our testing PC.
We were surprised that the benchmark sometimes shows strong deviations (sometimes only 20% of the best benchmark value, across several Windows versions).
I cannot explain what is happening there.
We even made an image of the PC's hard drive that we restore every time before we run the (two) benchmarks.
Still, we see those deviations.
The next time I do some testing, I will check that the following services are not running:
- the indexing service
- the defragmentation service
- the antivirus service
Do you have any ideas what else has to be stopped, changed, or run in addition to the above to make the benchmark more consistent?
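Besides stopping services, it can help to quantify the spread across repeated runs before and after each change, so you can see whether a change actually reduced the deviations. A minimal Python sketch, assuming the scores come from whichever benchmark tool you use (the numbers below are purely illustrative):

```python
import statistics

def summarize(scores):
    """Mean, standard deviation, and coefficient of variation (CV)
    for a list of benchmark scores. The CV is the relative spread,
    e.g. 0.05 means the runs scatter by about 5% around the mean."""
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    return mean, stdev, stdev / mean

# Example: scores from repeated runs of the same benchmark.
scores = [980, 1010, 995, 1002, 990]
mean, stdev, cv = summarize(scores)
print(f"mean={mean:.1f} stdev={stdev:.2f} CV={cv:.1%}")
```

Running this once with services enabled and once with them stopped gives you a number to compare instead of an impression.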
I hope this is the right forum for this.
Regards
Sascha
|
|
|
|
|
S. Becker wrote: to make the benchmark more consistent.
My guess would be that your benchmark is either just wrong or you do not understand what it is measuring.
I should note as well that attempting to measure a PC/OS without understanding the real business use (generally an application) is an exercise that is doomed to fail.
|
|
|
|
|
We did the benchmark tests with two different benchmark tools and both showed those strong deviations, so I do not think it is a problem of the benchmark tools.
The way the benchmark is run is defined in a Word document.
Even if we did not understand what is being measured: if the input is the same, the output should also be the same.
My job now is to find out what went wrong, because if the (benchmark) PC does not always produce (more or less) the same result, how can our software be measured?
Thank you.
Regards
Sascha
|
|
|
|
|
S. Becker wrote: We did the benchmark tests with two different benchmark tools
Maybe you should talk to the people who provided the tools. Benchmarking computers is a difficult activity at the best of times because there are so many variables to be taken into account, particularly with multi-tasking operating systems.
One of these days I'm going to think of a really clever signature.
|
|
|
|
|
S. Becker wrote: how can our software be measured?
By running your software and exposing it to various loads.
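To expand on that: measuring your own software can start with something as simple as timing a representative operation repeatedly and looking at the spread. A sketch in Python, where `workload` is a hypothetical stand-in for whatever your application actually does:

```python
import timeit

def workload():
    """Hypothetical stand-in for a representative operation
    of the software under test."""
    sum(i * i for i in range(10_000))

# Repeat the measurement several times; the minimum is usually
# the most stable figure, and the spread shows the run-to-run
# noise contributed by the rest of the system.
times = timeit.repeat(workload, number=100, repeat=5)
print(f"best={min(times):.4f}s worst={max(times):.4f}s")
```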
|
|
|
|
|
In the end it was a mix of a too-small sample for calculating the relative variance and some unnecessary processes (the upgrade service, for example).
Now, with the bigger sample and the stopped processes, the results are more consistent.
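The sample-size effect is easy to demonstrate with simulated scores (purely synthetic numbers, not from any real benchmark): the estimated coefficient of variation itself fluctuates far more when it is computed from only a handful of runs.

```python
import random
import statistics

random.seed(42)  # reproducible synthetic data

def cv_of_sample(n, mean=1000.0, sigma=30.0):
    """Coefficient of variation estimated from n simulated runs
    drawn from a normal distribution (true CV here is 3%)."""
    scores = [random.gauss(mean, sigma) for _ in range(n)]
    return statistics.stdev(scores) / statistics.mean(scores)

# Repeat the estimate many times per sample size and show how
# much the estimates themselves scatter.
for n in (3, 10, 100):
    estimates = [cv_of_sample(n) for _ in range(200)]
    print(f"n={n:3d}  CV estimates range "
          f"{min(estimates):.3f} .. {max(estimates):.3f}")
```

With n=3 the estimated CV can land almost anywhere, while with n=100 it clusters tightly around the true value, which matches the experience described above.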
Thank you for your help.
Regards
Sascha
|
|
|
|