Is it possible to predict the execution time for a method or block of code ?
Sergey Alexandrovich Kryukov 21-Dec-15 10:45am
Generally not. Only in some special cases. But this is an interesting question, my 5.
—SA

Solution 1

On Windows, no. Windows is not a real-time operating system and is a shared system that does preemptive multitasking. You have no way of knowing how long it's going to be before Windows puts your thread back into execution and you have no way of knowing how long the thread is going to be allowed to execute. This makes it impossible to know ahead of time how long any piece of code is going to take to execute.
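The variability is easy to observe. A minimal Python sketch (the thread is about .NET, but the point is language-independent): timing the exact same code several times gives different wall-clock results, because the scheduler can preempt the thread at any point.

```python
import time

def busy_work(n=200_000):
    # A fixed, deterministic amount of computation.
    total = 0
    for i in range(n):
        total += i * i
    return total

# Time the identical workload five times with a high-resolution clock.
samples = []
for _ in range(5):
    start = time.perf_counter()
    busy_work()
    samples.append(time.perf_counter() - start)

# The spread between runs is the scheduler's (and cache's) contribution.
spread = max(samples) - min(samples)
print(f"min={min(samples):.6f}s max={max(samples):.6f}s spread={spread:.6f}s")
```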

Why would you want to?

RhishikeshLathe 21-Dec-15 10:02am
Sergey Alexandrovich Kryukov 21-Dec-15 10:45am
That's not all. Please see Solution 2.
Really appreciate good "curiosity" questions. :-)
—SA
Sergey Alexandrovich Kryukov 21-Dec-15 10:44am

You could give a much stronger statement. In the general case, such prediction is theoretically impossible at all, that is, even on systems with perfectly deterministic timing.

—SA
Dave Kreskowiak 21-Dec-15 13:08pm
I craft answers in the time I have allowed, which ain't much now-a-days.

Solution 2

In the general case, such prediction is theoretically impossible. I mean, it is impossible even on systems where the timing of every logically deterministic sample is totally deterministic, such as a single-threaded, single-process system with known timing of each CPU instruction (in CPU cycles) and a known CPU clock frequency.

So, on such systems, you can perfectly predict timing in special (simple) cases. But why not in the general case? Because of conditional operators and, more generally, flow-control statements such as "if". Even with such statements, timing prediction is still possible in special (simple) cases. But, again, why not in general?

Because even the logical behavior cannot be predicted in the general case. I can prove it mathematically, based on another mathematically proven statement: that the halting problem is unsolvable. The generalized statement is known as Rice's theorem. Please see:
https://en.wikipedia.org/wiki/Halting_problem#Sketch_of_proof,
https://en.wikipedia.org/wiki/Rice%27s_theorem.

I could base the proof on Rice's theorem, but for simplicity it's enough to use its special case, the fact that the halting problem is not solvable. In the general case, you cannot even predict whether some not-so-trivial algorithm will ever halt. That means the program may execute for some, possibly long, time and eventually halt, or it may never stop.

Apparently, if you cannot even tell from the algorithm whether the time is infinite or finite, you cannot predict the time. :-)
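A concrete illustration of this, in Python for brevity: the Collatz process below is fully deterministic, yet whether the loop terminates for every positive integer is a famous open problem, so no general time bound can be stated for it. The only known way to learn the step count for a given input is to run it.

```python
def collatz_steps(n):
    # Repeatedly apply: n -> n/2 if even, n -> 3n+1 if odd, until n == 1.
    # Whether this halts for *every* positive n is the open Collatz conjecture.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, discovered only by running the loop
```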

[EDIT]

Can we face, or build, an algorithm which always halts, whose timing is finite but still theoretically unpredictable? Yes, by Rice's theorem. If some property value is non-deterministic, you can write something like `if (myProperty) LongMethod() else QuickMethod()`.

One simple practical case is the use of a pseudo-random number generator. Yes, its algorithm is itself deterministic, but its values won't be deterministic if you seed the algorithm with, for example, a value taken from the real-time clock device.

Another trivial case is an interactive program. :-)

Now, even on multi-threaded systems, you can still predict timing in special cases, but it will be "hockey time", not "football time". Perhaps you know that in hockey the clock is stopped during all breaks, to count only the "pure" time of the game. You can precisely evaluate the timing for a single thread, not counting the time spent on interrupts and thread switching. For real time ("football time") this gives you the infimum of the possible real-time durations.
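The "hockey time" vs "football time" distinction is directly measurable. A Python sketch: per-thread CPU time excludes sleeps and time spent switched out, while wall-clock time includes them, so CPU time is the "pure" game clock.

```python
import time

start_wall = time.perf_counter()   # "football time": real elapsed time
start_cpu = time.thread_time()     # "hockey time": CPU time of this thread only

time.sleep(0.2)                    # the thread is off the CPU here
sum(i * i for i in range(100_000)) # a little actual CPU work

wall = time.perf_counter() - start_wall
cpu = time.thread_time() - start_cpu
print(f"wall={wall:.3f}s cpu={cpu:.3f}s")  # cpu is far smaller than wall
```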

But this estimate is valid only for a fixed CPU model and a fixed CPU clock frequency. The tick times are available from the CPU specs, so you can sum up the tick counts of all instructions. But remember that your code may also run on a similar, compatible CPU with different timing. Generally, it all boils down to what you mean by "code" and "prediction". If the code is in a high-level programming language, different compilers may generate different CPU instructions.

"Prediction" is also not a well-defined category. Say you are calculating a generalized Fibonacci sequence using the recurrence. The algorithm is perfectly deterministic, but what would the prediction be, especially if the initial data set and the required length of the sequence are taken from input data? Each time, you can predict the behavior of the algorithm perfectly, but there is only one method of such prediction: perform all the calculations from beginning to end with the exact same data set. Is that a prediction or not? :-)
Just some food for thought…

—SA


Solution 3

In the limited sense that you can use a high-resolution .NET timer to time the method/code, after you pre-JIT the code by executing it at least once before timing it ... a qualified "yes."
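The same warm-up-then-time pattern exists outside .NET. A Python analogue as a sketch: call the code once first (to warm caches and any lazy initialization, analogous to pre-JITting), then measure with a high-resolution timer.

```python
import timeit

def method_under_test():
    return sorted(range(1000, 0, -1))

method_under_test()  # warm-up call, excluded from the measurement

# timeit uses the highest-resolution clock available (time.perf_counter)
elapsed = timeit.timeit(method_under_test, number=100)
print(f"100 calls took {elapsed:.6f}s")
```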

I suggest you review the excellent links and information in this post: [^] to familiarize yourself with .NET timers, their quirks, and best practices for using them.

But, keep in mind:

How do you know you have "exercised the code" with the input data that makes it run slowest? How do you know you have timed the many possible modal use-cases?
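The worst-case-input caveat is easy to demonstrate. A Python sketch counting element moves in insertion sort: the identical routine does no work on sorted input and quadratic work on reversed input, so a timing run only tells you about the inputs you happened to try.

```python
def insertion_sort_swaps(data):
    # Returns the number of element moves, which drives the running time.
    data = list(data)
    swaps = 0
    for i in range(1, len(data)):
        j = i
        while j > 0 and data[j - 1] > data[j]:
            data[j - 1], data[j] = data[j], data[j - 1]
            swaps += 1
            j -= 1
    return swaps

n = 100
print(insertion_sort_swaps(list(range(n))))         # 0 moves on sorted input
print(insertion_sort_swaps(list(range(n, 0, -1))))  # n*(n-1)//2 = 4950 worst case
```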

Dave and Sergey do a good job in their answers here of describing the factors inherent in Windows that make timing results "approximate," and unreliable beyond a certain threshold.

However, relative timing data can still be valuable! There are expensive third-party tools for .NET (from PostSharp, RedGate, Telerik, etc.), and some free ones from Microsoft and various open-source projects, that attempt to "profile" application performance: [^].

Search here on CodeProject on "Profiler."

On a practical level, the best use of timing may be A/B testing: comparing two different implementation techniques, or two different class designs. Also on the practical level, analyzing memory consumption is often as important as, or more important than, execution time.
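An A/B comparison can be sketched in a few lines of Python: time two implementations of the same task under identical conditions, and trust the relative ordering more than either absolute number.

```python
import timeit

def concat_plus(n=1000):
    # Implementation A: repeated string concatenation.
    s = ""
    for _ in range(n):
        s += "x"
    return s

def concat_join(n=1000):
    # Implementation B: build once with join.
    return "".join("x" for _ in range(n))

# Same machine, same moment, same call count for both candidates.
a = timeit.timeit(concat_plus, number=200)
b = timeit.timeit(concat_join, number=200)
print(f"+= : {a:.4f}s   join: {b:.4f}s")
```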