You really want to read this Wikipedia article: Computational complexity theory.
The meaning of input size depends on the concrete algorithm that is being analysed.
For the definitions of "best case, worst case, average case" please see the referenced article.
Short example:
Appending a node at the end of a list of length n.
Space complexity: n (We need space for n + 1 elements; the constant 1 is dropped in big O notation.)
Time complexity: n (If we don't have a pointer to the end of the list we have to traverse the whole list thus n)
Time complexity: 1 (If we do have a pointer to the end of the list (like a queue for instance) we can go there directly and need constant time denoted by 1)
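To make the difference concrete, here is a minimal sketch of a singly linked list (the class and method names are just illustrative) with both append strategies, one without and one with a tail pointer:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None  # pointer to the last node

    def append_without_tail(self, value):
        # O(n): traverse from the head to find the last node
        node = Node(value)
        if self.head is None:
            self.head = self.tail = node
            return
        current = self.head
        while current.next is not None:
            current = current.next
        current.next = node
        self.tail = node

    def append_with_tail(self, value):
        # O(1): the tail pointer takes us to the end directly
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node
```

Both methods produce the same list; only the time spent per append differs.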
Example 2: Worst case
Inserting values into an unbalanced binary tree can produce a degenerate tree (effectively a linked list) when the inserted values were already sorted.
The time complexity goes from O(log₂ n) to O(n) in the worst case.
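You can see the degeneration directly by comparing tree heights. This sketch (helper names are my own) inserts the same 15 values once in sorted order and once in a level-by-level order:

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Standard unbalanced binary search tree insertion.
    if root is None:
        return TreeNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def height(root):
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

# Sorted input: every value goes to the right child -> a chain of height n.
degenerate = None
for v in range(1, 16):
    degenerate = insert(degenerate, v)

# The same 15 values inserted in a balanced order -> height log2(16) = 4.
balanced = None
for v in [8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15]:
    balanced = insert(balanced, v)

print(height(degenerate))  # 15
print(height(balanced))    # 4
```

A lookup in the degenerate tree must walk the whole chain, hence O(n) instead of O(log₂ n).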
The purpose of big O notation is to categorize the computational complexity of algorithms and make them comparable performance-wise. Sometimes an algorithm can be tuned by trading one complexity off against another. Let's say an algorithm with constant space has a runtime of O(n²); it is sometimes possible to get a sub-quadratic runtime by increasing the space complexity of the algorithm.
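A classic instance of that trade-off is the "pair with a given sum" problem (function names here are just for illustration): the O(1)-space version compares every pair in O(n²) time, while spending O(n) space on a set of seen values brings the time down to O(n):

```python
def has_pair_with_sum_quadratic(values, target):
    # O(n^2) time, O(1) extra space: check every pair.
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] + values[j] == target:
                return True
    return False

def has_pair_with_sum_linear(values, target):
    # O(n) time, O(n) extra space: remember the values seen so far.
    seen = set()
    for v in values:
        if target - v in seen:
            return True
        seen.add(v)
    return False
```

Both functions return the same answers; the second simply buys speed with memory.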
Hope I could shed some light on this for you.
Regards,
Manfred