
Comments and Discussions


Subtitle / Summary:
Determining Big O performance is just the first step in choosing the size/performance trade-off.
Not every O(1) algorithm is faster than every O(N) algorithm.

I like your article. It stimulated a lot of thought.
The one misunderstanding I wish to comment on has to do with preferring an O(1) algorithm over any O(N) algorithm without comparing the actual implementations.

Generally it is true: given two algorithms, one O(1) and the other O(N), the O(1) will probably outperform the O(N). But not always.
You picked the best example: the O(1) look-up of a precomputed result is constant time. It trades space for time and performs quite well when using an array or vector.
Unfortunately, other O(1) constant-time algorithms usually exist, and one of them might be slower than both the O(1) look-up AND the O(N) algorithm.
Example:
I have recently explored several alternatives for computing the Fibonacci series.
Please consider the following implementations:
Sum of previous results:
The mathematical statement for Fibonacci is probably considered O(N^2), because the function is invoked twice for each value in [0..N], and the recursion spins down to fib(0) and fib(1). It is usually written something like "fib[N] = fib[N-1] + fib[N-2]".
Loop:
A commonly occurring loop implementation of the Fibonacci series, found in multiple places on the net, is O(N).
Analytic:
The following uncommon Fibonacci computation is also O(1):
fib(N) = lround((pow(0.5 + (0.5 * sqrt(5.0)), N)
               - pow(0.5 - (0.5 * sqrt(5.0)), N)) / sqrt(5.0))
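For reference, here is a compilable sketch of the closed form above; the wrapper name `fib_analytic` and the use of llround (rather than lround, whose `long` return is only 32 bits on some targets) are my assumptions, not part of the original comment:

```cpp
#include <cmath>
#include <cstdint>

// Binet's closed form, as quoted above. Exact only while double
// precision can represent the result (roughly N <= 70); llround is
// used so the result is not truncated to 32 bits on any platform.
uint64_t fib_analytic(uint32_t N)
{
    const double sqrt5 = std::sqrt(5.0);
    return (uint64_t)std::llround((std::pow(0.5 + 0.5 * sqrt5, N)
                                 - std::pow(0.5 - 0.5 * sqrt5, N)) / sqrt5);
}
```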
On my older Dell, using "posix gcc version 4.8.1 (Ubuntu 4.8.1-2ubuntu1~12.04)", and averaging over millions of invocations of the algorithm in a second ...
O(N) Loop Fib ~96 nsec per avg Fib(n), 0 <= n <= 94 (base line)
O(1) Analytic Fib ~180 nsec per avg Fib(n), 0 <= n <= 94 (~87% slower than Loop)
O(1) Look Up Fib ~3 nsec per avg Fib(n), 0 <= n <= 94 (~3300% faster than Loop)
And yes, the N squared implementation can be quite slow.
O(N^2) Fib ~37 msec (i.e. 37,000,000 nsec)
It is easy enough, and therefore worth your while, to compare the running algorithms' actual time performance.
Determining big O performance is just the first step in choosing the size/performance trade-off.
Douglas O. Moen
modified 9Oct13 22:45pm.





(Corrected my statement from saying O(1) to O(N))
DOUGLAS O MOEN wrote: The mathematical statement for Fibonacci is probably considered O(N^2), because the function is invoked twice for each value [0..N]. It is usually written something like "fib[N] = fib[N-1] + fib[N-2]"
No, actually it is considered O(N) to calculate the series out to N places using the normal process. "fib[N-1] + fib[N-2]" is an O(N) process because 2*N will consistently increase the same amount for each step. When the speed slows down consistently and evenly with an increase in size, it is O(N).
There are 4 steps to do this, but the O(N) steps increase or decrease evenly with the value of N (value number, not O notation).
The steps:
1. fib[N-1] = O(N)
2. fib[N-2] = O(N)
3. fib[N-1] + fib[N-2] = O(1)
4. fib[N] = fib[N-1] + fib[N-2] = O(1)
DOUGLAS O MOEN wrote: The following uncommon Fibonacci computation is also O(1):
fib(N) = lround((pow(0.5 + (0.5 * sqrt(5.0)), N)
               - pow(0.5 - (0.5 * sqrt(5.0)), N)) / sqrt(5.0)))
That is definitely an O(1) process. (Keep track of your parentheses: as quoted, the right-hand side has 9 left parentheses and 10 right, so one closing parenthesis is extra.)
I was taught but forgot the basic definition and then found this in the series of links on it:
http://library.thinkquest.org/27890/applications1.html
The formulas (yours and the web's) do mathematically match, but I'm rather concerned about the accuracy of the formula on the web. The second term is a negative number raised to a power; it will alternate between positive (even N) and negative (odd N). I'm guessing they intended the absolute value before subtracting; I'll check.
1 1 2 3 5 8 13 so 5 is fifth and 13 is seventh. Even though this is O(1), it behooves you to reduce the calculations you make even here:
sqrt5=2.2360679774997896964091736687313;
hlfrt5p1=1.6180339887498948482045868343657;
hlf1mrt5=0.61803398874989484820458683436565;
fib(N) = (int)((pow(hlfrt5p1, N) - pow(hlf1mrt5, N))/sqrt5 + .5);
To me this is easier to read AND more efficient!
Splitting the formula would add more to the efficiency:
if (N < 3) fib(N) = 1;
else if (N > 5) fib(N) = (int)(pow(hlfrt5p1, N)/sqrt5 + .5);
else fib(N) = (int)((pow(hlfrt5p1, N) - pow(hlf1mrt5, N))/sqrt5 + .5);
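As an aside, the N > 5 split above may not even be needed: since |(1 - sqrt(5))/2| is below 1 and 1/sqrt(5) is below 0.5, the dropped second term never reaches 0.5 in magnitude for any N >= 0, so rounding the first term alone always works. A sketch (my illustration with renamed constants, not KP Lee's code):

```cpp
#include <cmath>
#include <cstdint>

// Rounding phi^N / sqrt(5) alone: the omitted term psi^N / sqrt(5)
// has magnitude below 0.5 for all N >= 0, so the rounding absorbs it.
uint64_t fib_short(uint32_t N)
{
    const double sqrt5 = std::sqrt(5.0);
    const double phi = 0.5 + 0.5 * sqrt5;   // hlfrt5p1 in the post above
    return (uint64_t)std::llround(std::pow(phi, N) / sqrt5);
}
```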





Thanks for the feedback. Sorry about my lack of clarity; I was unnecessarily terse about the mathematical statement, and perhaps about the other two implementations I compared.

Mathematical Statement Notes:
A mathematical statement is not an implementation, it is an equality that, in this case, describes some aspect of the results.
I had not thought about my offering until your comment explained how it confused you, so perhaps the mathematical statement should have been presented with subscripts as in the wikipedia http://en.wikipedia.org/wiki/Fibonacci_number
Hmmm. Let me try that with this editor: F(N) = F(N-1) + F(N-2) (Clumsy, but the result is good.) Yes, that probably would have been better.
I agree that filling out the previous array elements gets us to O(N).

To clarify what I was thinking, the C++ function (or method attribute) that perhaps most closely resembles the mathematical statement might be something like:
uint64_t fib(uint32_t N)
{
    uint64_t retVal = N;
    if (N > 1)
        retVal = fib(N - 1) + fib(N - 2);
    return (retVal);
}
I consider this to be O(N^2). It uses no memory array, but a little bit of stack. And not surprisingly, it is quite inefficient.
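To see just how inefficient, one can instrument the recursion with an entry counter; the counter and the name `fib_counted` are my additions, but the function body is the one above:

```cpp
#include <cstdint>

// Naive recursion with an entry counter added for measurement.
static uint64_t entries = 0;

uint64_t fib_counted(uint32_t N)
{
    ++entries;                       // count every invocation
    if (N > 1)
        return fib_counted(N - 1) + fib_counted(N - 2);
    return N;
}
```

Running it for N = 10 should reproduce the 177-entry count reported further down in this thread.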

Analytic Implementation Notes:
My implementation of fib(uint32_t N) uses the formula provided (with the correct number of parentheses, of course) and works correctly for 0 <= N <= 46. I did not investigate why it does not work for 47 to 93 because I do not care; I am interested in comparing performance. N in [0..46] is adequate to determine that it is 246% slower than the loop implementation.

Loop Implementation Notes:
The loop implementation you can find on the web (and now here) is probably something like:
uint64_t fib(uint32_t N)
{
    uint64_t rslt = N;
    if (N > 1)
    {
        uint64_t fib_nMinus2 = 0;
        uint64_t fib_nMinus1 = 1;
        for (uint32_t n = 2; n <= N; n += 1)
        {
            uint64_t sum = fib_nMinus2 + fib_nMinus1;
            fib_nMinus2 = fib_nMinus1;
            fib_nMinus1 = sum;
        }
        rslt = fib_nMinus1;
    }
    return (rslt);
}
This works for 0 <= N <= 93, uses very little memory, all of it on the stack.
The disappointing summary still holds: not all O(1) algorithms are faster than all O(N) algorithms.
modified 9Oct13 22:21pm.





One of the defining characteristics of O(N^2) is that if you double N, you quadruple the time to execute. (See the Bubble sort times on my chart. I'm betting the OS got involved and kicked the thread off of the CPU on the last one.) If you make N = 1,000,000, the actual number of loops you make would be about 1,999,996; with N = 1,000, about 1,996. With O(N^2), the first should be a million times slower than the second; with the number of loops taken, it should be a fraction more than a thousand times slower, which is precisely the expectation of an O(N) process.
Remember, with O(1), whether you execute 1 or 10 O(1) processes inside one O(1) process, it always returns in the same amount of time. That is the characteristic of O(1).
A corollary: if you execute 1 or 10 O(N) processes inside one O(N) process, it always returns in a proportional amount of time.
If you combine an O(1) process with an O(N) process, the combination is O(N). If the O(1) part takes a sizable amount of time, you would have to subtract it from the first time to get the predicted increase of the N portion and then add back in the O(1) time. Basically, if you increase N by 10 times, the new elapsed time gets closer to 10 times longer each time you execute it with that kind of increase.
As a developer, you should always be on the lookout for ways to improve your performance; cutting your elapsed time in half is nothing to sneeze at. Change your routine to have two parameters: N would be the first fib number to find, and len would be the size of the array to return, so you only call it once to get both numbers. (That's assuming you didn't have the O(1) formula to get to N directly.)
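That two-parameter idea can be sketched like this (the names `fib_range` and `len` are mine, purely illustrative): fill `len` consecutive Fibonacci numbers starting at index N in a single linear pass, instead of calling an O(N) routine once per requested value.

```cpp
#include <cstdint>
#include <vector>

// One O(N + len) pass returns a whole window of the series.
std::vector<uint64_t> fib_range(uint32_t N, uint32_t len)
{
    std::vector<uint64_t> out;
    uint64_t a = 0, b = 1;          // fib(0), fib(1)
    for (uint32_t n = 0; n < N + len; ++n)
    {
        if (n >= N)
            out.push_back(a);       // a holds fib(n) at the top of the loop
        uint64_t t = a + b;
        a = b;
        b = t;
    }
    return out;
}
```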





All my fib() functions count method entries.
With respect to my implementation of this recursive function:
retVal = fib(N - 1) + fib(N - 2);
fib(10) reports the result 55, with 'entry count' of 177.
fib(30) reports the result 832,040, and 'entry count' of 2,692,537.
fib(30)'s entry count is much greater than 9x fib(10)'s entry count ... (2,692,537 / 177 is about 15,212)...
I think this is still considered O(N^2), but it's been a while since I studied this topic, and it is remarkably inefficient, so I no longer care.





Interesting. I created an Excel form to see how many calls it came up with, and if you double the last call, it is exactly 1 higher than the entry count you got for 30. On the other hand, the Times-executed columns shouldn't care what the numbers are except at the end. "22"'s total (123) is loop 9; "21" (1 for loop 10) is called 55 times; added together, you get 178, exactly 1 higher than your count without doubling it.
It certainly increases faster than I "intuitively" expected. Then it hit me: each time it is called, it doubles the calls it makes, i.e. 2^N. However, 2^30 is in the billion range, not millions, so that isn't a correct estimate. I don't know how to mathematically describe this.
The first Times column is the number of executions; the next two are the number of calls made.
N Times N-1 Times N-2 Times Total
30 1 29 1 28 1 3
29 1 28 2 27 1 4
28 2 27 3 26 2 7
27 3 26 5 25 3 11
26 5 25 8 24 5 18
25 8 24 13 23 8 29
24 13 23 21 22 13 47
23 21 22 34 21 21 76
22 34 21 55 20 34 123
21 55 20 89 19 55 199
20 89 19 144 18 89 322
19 144 18 233 17 144 521
18 233 17 377 16 233 843
17 377 16 610 15 377 1364
16 610 15 987 14 610 2207
15 987 14 1597 13 987 3571
14 1597 13 2584 12 1597 5778
13 2584 12 4181 11 2584 9349
12 4181 11 6765 10 4181 15127
11 6765 10 10946 9 6765 24476
10 10946 9 17711 8 10946 39603
9 17711 8 28657 7 17711 64079
8 28657 7 46368 6 28657 103682
7 46368 6 75025 5 46368 167761
6 75025 5 121393 4 75025 271443
5 121393 4 196418 3 121393 439204
4 196418 3 317811 2 196418 710647
3 317811 2 514229 1 317811 1149851
2 514229 1 832040 0 0 1346269
1 832040 0 0 1 0 832040
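For what it's worth, the totals in the table satisfy calls(N) = calls(N-1) + calls(N-2) + 1, which closes to calls(N) = 2*fib(N+1) - 1: the call count grows like the Fibonacci numbers themselves, about phi^N with phi roughly 1.618, so it is exponential, just with a smaller base than 2. A quick check against the counts reported above (helper names are mine):

```cpp
#include <cstdint>

// Iterative fib, used only to evaluate the closed form for the
// naive recursion's call count: calls(N) = 2*fib(N+1) - 1.
uint64_t fib_iter(uint32_t N)
{
    uint64_t a = 0, b = 1;
    for (uint32_t n = 0; n < N; ++n)
    {
        uint64_t t = a + b;
        a = b;
        b = t;
    }
    return a;
}

uint64_t naive_call_count(uint32_t N)
{
    return 2 * fib_iter(N + 1) - 1;
}
```

This reproduces both the 177 entries for fib(10) and the 2,692,537 entries for fib(30) quoted earlier in the thread.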





DOUGLAS O MOEN wrote: uint64_t fib(uint32_t N)
{
    uint64_t retVal = N;
    if (N > 1)
        retVal = fib(N - 1) + fib(N - 2);
    return (retVal);
}
I consider this to be O(N^2). It uses no memory array, but a little
bit of stack. And not surprisingly, it is quite inefficient. I'll say! I don't know about C++ on a managed system, but C# on a managed system would create a code copy for every recursive call made, absolutely horrible code performance wise.
OK, I made a stab at it, starting out at 94 (Apologies read part of missive and not all):
static ulong fib(byte N)
{
    ulong ret = N;
    if (N > 1)
    {
        ulong f1 = 1, f2 = 1, pt = 2, f3;
        while (pt < N)
        {
            f3 = f2 + f1;
            f1 = f2;
            f2 = f3;
            ++pt;
        }
        ret = f2;
    }
    return ret;
}
...
ulong retrn = ulong.MaxValue;
DateTime strt = DateTime.Now;
retrn = fib(94);
TimeSpan spn = new TimeSpan(DateTime.Now.Ticks - strt.Ticks);
... In debug mode I stopped after I set retrn to get the first value and then just after spn was created
retrn 18446744073709551615 ulong
retrn 1293530146158671551 ulong
+ spn {00:00:00.0030002} System.TimeSpan
Whoa, that's quite a difference, rewrote the call:
ulong[] values = new ulong[100];
DateTime strt = DateTime.Now;
for (byte i = 1; i <= 100; i++)
    values[i - 1] = fib(i);
...
+ spn {00:00:00.0010000} System.TimeSpan
[89] 2880067194370816120 ulong
[90] 4660046610375530309 ulong
[91] 7540113804746346429 ulong
[92] 12200160415121876738 ulong
[93] 1293530146158671551 ulong
[94] 13493690561280548289 ulong
Index 92 is fib(93) and the last value you can accurately calculate, 94 is too big for ulong. (Not running in strict mode, overflowing just overwrites bits, creating garbage values after N>93.)
Calling the function 100 times only cost one millisecond.
OK, NOW I've read your full missive and understand. Yes, your first version is an N^2 application.
Even 94 squared loops is less than 9,000. The looping isn't causing the problem; writing to memory and restarting a new function nearly 9,000 times is. Be careful with recursive calls.
DOUGLAS O MOEN wrote: The disappointing summary still holds: All O(1) algorithms are not faster than all O(N).
Well, you haven't given an example of O(1) processing, but I will grant you that at certain values of N, with some routines, an O(1) process can be slower than an O(N) one. That isn't the point. The point is, as you increase N, the time linearly increases for an O(N) process, while the time taken never increases for an O(1) process, because of the nature of the process. (The OS may take a hand in slowing it down.)
Since you don't have any error checking and you are using a 64-bit integer, enter 4 billion and then 8 billion into your fast fib version's N and measure the time for each. I bet it takes twice as long for 8 billion. It is an O(N) process.





Man, I've got to learn to read better; yes, you did have O(1) code, just not its source in this routine.
I don't know why it fails at 47, but fib(47) would be the first value to exceed the maximum of a signed 32-bit int, so that is a clue.
It can't be related to double:
"By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally."
That should start failing to be precise around N = 73.
I can't remember the formula right now, but it was doing some complex stuff, and the O(N) version was only doing 92 loops with extremely simple logic; offhand, it isn't surprising the simple logic is faster.





Article title should be "Reply to Justin's article on big O notation"





Member 10314175 wrote:
Article title should be "Reply to Justin's article on big O notation" 1. I attempted to address O(LogN) and O(NLogN), he covered O(1) and O(N), neither of us tackled O(N^2)
2. If you have a problem with his article, reply to him. Oh wait, you CAN'T, that link doesn't allow comments.
3. I saw what I thought was a mistake in his article, but it is possible his wording needs work.(See 2. about correcting him.)
4. I tried to keep him out of my article as much as possible but all 5 of these notations are interrelated.
5. Because of space I took out O(N^2) processing in the article but gave two versions of the bubble sort in my code.
6. The Quicksort article link came from reviewer comments, so I decided to keep it out of my article. I hadn't used it as my source to build my binary sort routine. I did note to myself the article said that the best a comparison sort could do was N! processing.
My alternate bubble sort does quite a few fewer loops than that, but the actual comparison count is close enough that they are essentially correct. With 20 items, that would make 210 comparisons, and my version would make 196. (Almost double the 107 loops made; the inner loop makes 98 of them.)





Wow... what?
1. So?
2. I don't have a problem with his article. I have a problem with yours. What makes you think I have a problem with his?
3. I don't have a problem with his article, I have a problem with yours. (See 2. about me not having a problem with his article, but having a problem with yours)
4. ...? Again, I don't care what this other guy said.
5. I wasn't criticizing you for leaving it out, I just threw it in there as a further example for anyone reading. My comment would have been incomplete without it.
6. .... I'm starting to wonder if you accidentally replied to the wrong comment?
Okay yeah that must be it.
Edit: okay maybe not, I guess you were kind of replying to two of my posts at once. But did you see this one yet? here:
(the parts in quotes are quotes of you...)
"Say you have an O(1) process that takes exactly 1 second to run. If you execute that O(1) process 10 times without using a threading process, it will take 10 seconds to finish if it takes one billionth of a second to retrieve each item in the array. If you run it against a million items it will take over 11.57 days to run. Oh yea, be sure to add that millisecond to retrieve all those items from the array."
Okay, let's remove all big O notations from your above paragraph, and here are the facts I can gather:
- P1 is a process that takes 1 second to run on an input of size one
- If you run it against a million items it will take over 11.57 days to run.
Correct me if the above is not what you meant, but if it is, then P1 IS NOT O(1). If P1 takes 1 second on an input size of 1, and 11.57 days on an input size of a million, well, 11.57 days is equivalent to about a million seconds, which is linear time complexity, THEREFORE YOUR ALGORITHM IN P1 IS O(N), NOT O(1).
(Edit: ) If what you really mean is you run P1 a million times on the same input, and it takes a second each time, then P1 could be O(1), but if you run it a million times that doesn't mean it 'becomes' O(N). When you run the same algorithm (P1) on the same input one million times, you have effectively created a NEW algorithm (let's call it P2) with a NEW time complexity. Your new algorithm P2 simply repeats P1 N times, and therefore the new algorithm P2 has O(N) time complexity (assuming P1 is O(1) for any size of its input).
Here is the difference between the two algorithms.
P1 (x):
    doSomething
P2 (N, x):
    for i = 1 to N,
        call P1 (x)
If P1 is O(1), then P2 is O(N). So when you say the complexity of P1 is O(1) and then "becomes" O(N) when you run it a million times, that is false.
>>"The confusion is that Paul won't(can't?) understand what I was saying."
No, the confusion is you do not understand big O notation. Please do your research.
>> "Any more examples of my "errors"?"
I could go on and on. Every time you reply, you make more mistakes because you still refuse to learn the subject matter.





Hi KP Lee,
From your recent comments, I think I've discovered where your main confusion lies. I know I keep saying I won't try to explain it, but it turns out I don't have much to do today, so here goes.
CONSTANT TIME
O(1) means constant time complexity. If you run an O(1) algorithm on an input of size N=1 or N=1million, it will perform similarly either way.
An example of an O(1) algorithm is retrieving a single value by index from an array (when we know which index we want). Whether the array is size 1 or 1million, the time complexity is the same.
When we say a process takes O(1), it does not mean N=1, it means that the algorithm has constant time complexity, meaning the performance of the algorithm does not depend on the size of the input.
LINEAR TIME
O(N) means linear time, which means there is a linear relationship between the size of the input and the time performance of the algorithm. An algorithm that takes one second on an input of size 1, two seconds on an input of size two, and a million seconds on an input of size a million is a linear time algorithm.
An example of an O(N) algorithm is finding the minimum value in a randomly distributed array.
QUADRATIC TIME
O(N^2) means quadratic time complexity, which means that there is a quadratic relation between the input size and the time performance of the algorithm. If running an O(N^2) algorithm on an input of size 1 takes 1 second, running it on an input of size 2 should take about four seconds, and on N=1million it should take about eleven million days (a million times a million seconds).
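The three classes described above can be sketched side by side; these helpers are illustrative examples of mine, not code from the thread:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>
#include <algorithm>

// O(1): retrieving one element by index; cost ignores v.size().
uint64_t lookup(const std::vector<uint64_t>& v, std::size_t i)
{
    return v[i];
}

// O(N): one pass over the data to find the minimum.
uint64_t minimum(const std::vector<uint64_t>& v)
{
    uint64_t m = v[0];
    for (uint64_t x : v)
        m = std::min(m, x);
    return m;
}

// O(N^2): bubble sort; nested passes, roughly N^2/2 comparisons.
void bubble_sort(std::vector<uint64_t>& v)
{
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        for (std::size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1])
                std::swap(v[j], v[j + 1]);
}
```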





Member 10314175 wrote: CONSTANT TIME O(1) means constant time complexity. If you run an O(1) algorithm on an input of size N=1 or N=1million, it will perform similarly either way.
First off, there are very few cases where an O(1) process would be given an input of a million items and could process it in an O(1) manner. I came up with a trivial one myself: get the minimum or maximum value from a sorted array. Another one: send an array and ask for the N'th value.
In the article, I said something like "If you apply an O(1) process a million times it would be O(N)" Maybe I said it wrong. If I replaced "apply" with any one of these: "execute"/"call"/"loop through" would my meaning become clearer? Maybe I should also change "a million" to "from one to a million"?
I didn't realize the term for O(N^2) was quadratic time complexity, but its effects on processing have always been very clear; because of quadratic equations, the term makes sense. Both of the bubble sort routines I supplied are O(N^2). The loops executed were actually N^2 and N!/2, so the latter approximately looped N^2/4. (You get an improvement in speed, but it isn't even twice as fast. A regular bubble sort should process 150M items in about 2.5 years.)





KP Lee wrote: In the article, I said something like "If you apply an O(1) process a million times it would be O(N)"
The actual text from the article is "If you apply O(1) to a million items, it becomes O(N)."
KP Lee wrote: Maybe I should also change "a million" to "from one to a million"?
You could have said something like this:
A process that runs an O(1) process N times is O(N).
KP Lee wrote: First off there are very few times that an O(1) process would be given an input of a million items and it could process in an O(1) manner.
Not sure what you're trying to say here. If what you're saying is that there aren't very many O(1) algorithms, that's definitely not true. If what you're saying is that in practice, there aren't many different situations in which O(1) algorithms are usefully applied, I don't think that's true either. But if you're just trying to say it's easier to think of useful practical examples of O(N) algorithms than it is for O(1) algorithms, then yeah, you're right.
KP Lee wrote: A regular bubble sort should process 150M items in about 2.5 years.
You mean on your machine? Okay if you say so.
Not sure what your point is with this except your second paragraph; I guess you're just making statements.





First off, thanks for not flaming me; that seems to be happening a lot lately. Maybe I've lost the art of communicating civilly, so they are replying in kind? Your rewording of O(1) becoming O(N) was great, better than what I came up with.
Astrochimp wrote: But if you're just trying to say it's easier to think of useful practical examples of O(N) algorithms than it is for O(1) algorithms, then yeah you're right.
That's pretty much it. There are a lot of multiple-item things the system does that are O(1) that I take for granted. Creating an array, for instance, mainly because I don't think of it as created until it is also populated, and that, at best, is an O(N) process.
It's pretty impressive it can do that as an O(1) process, mainly because you can reset blocks of memory very quickly. (Apparently you can force the GC to run because there is no delay between dropping 150M of memory and reusing it.)
Astrochimp wrote: You mean on your machine? Okay if you say so.
Sorry, I thought that was implied by the statement itself. On my machine, 200K items are sorted in about 2.3 minutes. 150M is 750 times bigger than 200K, so the estimated total on my machine is 140.418 * 750 * 750 seconds (using one of my two cores). I didn't even try to think of a method to split up the task, because I can get a valid comparison when running on a fairly idle machine.





If I try to understand something and it turns out I am frustrated due to author errors in the article (from guessing, half-recollection, or never having understood it in the first place), I resent the author for this.





My main reason for writing this article was to explain LogN and NLogN big O notations and show the performance advantages of picking an NLogN process over an N^2 process if you have the chance to do so.
Is there ANYTHING about those processes that I wrote about that was incorrect?
I knew that e had a rational reason for being such an irrational value, but I didn't really care what it was, and I still don't. On the other hand, it is a little satisfying to understand where it does come from. (A graph that was extremely useful when computers couldn't get the same value in a fraction of a second. An understanding of a mathematical process. Immaterial in projecting response times. Not important enough to me to be sure I'll still remember where it comes from a year from now.)
OK, I used a lower-case o to indicate an O process. That's a critical error? Really??
There has been criticism of my comments about O(1). Basically, I was saying that if you toss an O(1) process into an O(N) process, that process remains O(N). That comes straight from other Big O articles and is basic common sense.





>>" My main reason for writing this article was to explain LogN and NLogN big O notations and show the performance advantages of picking an NLogN process over an N^2 process if you have the chance to do so."
Well, you try to go into a lot more than just that. It's pretty obvious that LogN or NlogN is faster than N^2, but you start rambling on and putting your foot in your mouth, revealing how little you know about the subject. It seems like you have a little mathematical background (except that you had trouble with derivatives) but you don't have the computer science background.
>>"Is there ANYTHING about those processes that I wrote about that was incorrect?"
People have pointed out several things, but when you reply, you say several more things that are incorrect, so it's very time consuming to argue with you. What you really need is a 4thyear university computer science course on algorithmic complexity. We can't teach you all this stuff just through comments.
>>"OK, I used a lower case o to indicate an O process. That's a critical error? Really??"
Yeah, it's quite a critical error when your article is about Big O notation. Anyone who has taken a course on algorithmic complexity has learned about the distinction between little o and big O, so when you try to pass yourself off as an expert by writing an article about big O, and then readers see that you don't even know the difference between big O and little o, your credibility goes right down the tubes. However, it wasn't your most serious mistake.
>>"There has been criticism of my comments about O(1) Basically I was saying that if you toss an O(1) process into an O(N) process, that process remains O(N) That comes straight from other Big O articles and is basic common sense."
That's not what you said. You said "If you apply O(1) to a million items, it becomes O(N)", which is not the same thing. Yes of course if you have a process that takes O(N), and then you do something extra in that process that is O(1), the process remains O(N). But what you originally said was basically that if you have an algorithm that is O(1), and you run that on an input size of N, then it becomes O(N) which is false.





This is an example of one of my errors:
Paul Abrams wrote: Here's a quote from your article that demonstrates just how confused you are:
"If you apply O(1) to a million items, it becomes O(N)"
This is a contradiction. He goes on to show how an individual item can be retrieved from an array in the same time no matter what the size of the array or where the item is located in it.
So, what does that have to do with the price of beans in China?
Say you have an O(1) process that takes exactly 1 second to run. If you execute that O(1) process 10 times without using a threading process, it will take 10 seconds to finish if it takes one billionth of a second to retrieve each item in the array. If you run it against a million items it will take over 11.57 days to run. Oh yea, be sure to add that millisecond to retrieve all those items from the array.
OK, make it a millisecond, a thousand items would take a second and a million would take 16 minutes, 40 seconds for the loops plus 1 millisecond to retrieve the item from the array.
The confusion is that Paul won't(can't?) understand what I was saying.
OK, that was one of my "errors" that I stubbornly refuse to believe is an error. Do you still think it is my error?
Any more examples of my "errors"?
Was my charting of data I gathered running the process help?
So, you are frustrated. First off, no one knows everything; that includes me and my reviewers. That is normal for everyone when they are learning something new. Instead of resenting me, how about thinking of me as an opportunity to expand your knowledge? Or think of me as such an AH that you don't want to talk to me anymore.





"Say you have an O(1) process that takes exactly 1 second to run. If you execute that O(1) process 10 times without using a threading process, it will take 10 seconds to finish if it takes one billionth of a second to retrieve each item in the array. If you run it against a million items it will take over 11.57 days to run. Oh yea, be sure to add that millisecond to retrieve all those items from the array."
Okay, let's remove all big O notations from your above paragraph, and here are the facts I can gather:
- P1 is a process that takes 1 second to run on an input of size one
- If you run it against a million items it will take over 11.57 days to run.
Correct me if the above is not what you meant, but if it is, then P1 IS NOT O(1). If P1 takes 1 second on an input size of 1, and 11.57 days on an input size of a million, well, 11.57 days is equivalent to about a million seconds, which is linear time complexity, THEREFORE YOUR ALGORITHM IN P1 IS O(N), NOT O(1).
(Edit: ) If what you really mean is you run P1 a million times on the same input, and it takes a second each time, then P1 could be O(1), but if you run it a million times that doesn't mean it 'becomes' O(N). When you run the same algorithm (P1) on the same input one million times, you have effectively created a NEW algorithm (let's call it P2) with a NEW time complexity. Your new algorithm P2 simply repeats P1 N times, and therefore the new algorithm P2 has O(N) time complexity (assuming P1 is O(1) for any size of its input).
Here is the difference between the two algorithms.
P1 (x):
    doSomething
P2 (N, x):
    for i = 1 to N,
        call P1 (x)
If P1 is O(1), then P2 is O(N). So when you say the complexity of P1 is O(1) and then "becomes" O(N) when you run it a million times, that is false.
>>"The confusion is that Paul won't(can't?) understand what I was saying."
No, the confusion is you do not understand big O notation. Please do your research.
>> "Any more examples of my "errors"?"
I could go on and on. Every time you reply, you make more mistakes because you still refuse to learn the subject matter.
modified 4Oct13 15:31pm.





No, I am not frustrated. I glanced at your article and then read the comments (which I always do) in order to verify reliability. I started extensive algorithm courses and was just browsing ahead elsewhere, and had an allergic reaction to your attitude and assumptions ...





An interesting article on "Bubble Sorts."
A better article than its subject: Order of Magnitude, which is a useless theory in mathematics that 'success is guaranteed to every organization centered upon a goal based upon the amount of work that was required formerly to achieve results by brute force methods,' and the curve that this represents.
When O((2r+1)^2) can become O(1), then you know it's time to sober up.





Voted 1 because I can't vote 0. Not only doesn't the author know what he's talking about, but instead of educating himself when commenters point out some of the errors, he obstinately says "nobody taught me that" or "I don't remember", instead of looking it up.





Thanks for the vote. With a total of 5 votes and two known to be 1's, 3.78 isn't a bad score. Wait, the math doesn't work: 17/5 is 3.4. Even giving myself a 5 for getting published, 22/6 is 3.67. ??
You are right, I should do better at looking up resources. I'm kind of a shoot first, ask questions later type person and I need to watch myself about that. I really should look up how they get that score, probably won't.
I don't take myself seriously, maybe you shouldn't either?





If you don't take yourself seriously and you don't want others to take you seriously, maybe you shouldn't be posting articles. Or, have a disclaimer at the top of the article: "I don't take myself seriously and neither should you. Don't bother reading this article because I have no idea what I'm talking about."







