The Big-Oh notation has a well-defined mathematical meaning. Saying that a function f of a variable N is big-Oh of some function g of N means that f(N) is bounded above by C.g(N) for all N (except possibly the first few values), where the coefficient C is called the "hidden constant".
http://en.wikipedia.org/wiki/Big_O_notation[^]
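As a minimal sketch of that definition in Python, take the hypothetical function f(N) = 3N + 10: it is O(N) because, choosing for instance C = 4 and ignoring the values below N0 = 10, f(N) never exceeds C.g(N). The names f, g, C, and N0 are illustrative, not from the text above.

```python
def f(n):
    # hypothetical example function: f(N) = 3N + 10
    return 3 * n + 10

def g(n):
    # the bounding function: g(N) = N
    return n

# f(N) <= C * g(N) must hold for all N beyond some threshold N0;
# here C = 4 works once N >= 10 (3N + 10 <= 4N whenever N >= 10).
C, N0 = 4, 10
assert all(f(n) <= C * g(n) for n in range(N0, 100_000))
```

Note that the bound fails below the threshold (f(1) = 13 > 4), which is exactly why the definition excludes the first few values of N.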
For instance, saying that a sorting algorithm has running time T(N) = O(N.Log(N)), where N is the number of elements to be processed, means that the running time grows no faster than N.Log(N).
(Keep in mind that you need to scale these values by the hidden constant, which depends on how carefully the code in the innermost loops is written and on how well the compiler does its optimization job.)
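One way to see an O(N.Log(N)) sort concretely, without the noise of wall-clock timing, is to count element comparisons. The sketch below is a plain merge sort (an assumed example algorithm, not one named in the text) instrumented with a comparison counter; for N elements the count stays below N times the ceiling of Log2(N), i.e. the hidden constant here is about 1.

```python
import math
import random

def merge_sort(a):
    """Merge sort that also counts element comparisons.
    Returns (sorted_list, comparison_count)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, c1 = merge_sort(a[:mid])
    right, c2 = merge_sort(a[mid:])
    merged, comps = [], c1 + c2
    i = j = 0
    while i < len(left) and j < len(right):
        comps += 1                      # one element comparison per step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comps

n = 10_000
data = [random.random() for _ in range(n)]
out, comps = merge_sort(data)
assert out == sorted(data)
# comparisons are bounded by roughly N.Log2(N), the linlog bound
assert comps <= n * math.ceil(math.log2(n))
```

Replacing merge sort with a quadratic method such as insertion sort would push the comparison count toward N^2, which is how the hidden constant and the growth rate show up in practice.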
When an algorithm has linear time behavior O(N), solving the problem is no more complicated than merely stating the problem: just reading the input data already takes time O(N). Algorithms with linlog behavior, O(N.Log(N)), are also considered fast. Things get worse with polynomial times such as O(N^2).
[options="header"]
|===
| N | N.Log(N) | N^2
| 10 | 33 | 100
| 100 | 664 | 10000
| 1000 | 9966 | 1000000
| 10000 | 132877 | 100000000
| 100000 | 1660964 | 10000000000
| 1000000 | 19931569 | 1000000000000
|===
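The figures above can be reproduced with a few lines of Python (assuming, as is usual in this context, that Log means the base-2 logarithm; the ratios between rows would be the same for any base):

```python
import math

# Recompute the growth table: N, N.Log2(N) rounded to the nearest
# integer, and N squared, for each row of the table above.
for n in (10, 100, 1000, 10_000, 100_000, 1_000_000):
    print(n, round(n * math.log2(n)), n * n)
```

The jump from the N.Log(N) column to the N^2 column in the last rows is the practical difference between a sort that finishes in a blink and one that does not finish at all.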