I'm taking a Coursera course on time complexity. I came across this in a video and I think something is wrong: "So suppose that we have an algorithm whose runtime is roughly proportional to n and we want it to run it on a machine that runs at about a gigahertz. How large an input can we handle such that we'll finish the computation in a second?" (1 GHz CPU, 10^9 cycles per second)

"Well if it runs at about size n, you can handle about a billion sized inputs, before it takes more than a second" (10^9 / 10^9)

"If it runs like n squared, it's a lot worse. You can only handle inputs of size about 30,000 before it starts taking more than a second." n = 30000, (30000)^2/10^9 = 0.9 sec ~ 1 sec

"If the inputs are of size 2 to the n, it's incredibly bad, you can only handle inputs of size about 30 in a second" 2^30/10^9 ~ 1.07 sec

With n log(n) complexity, if the input size n is 10^9, then it should take 10^9 * log(10^9) / 10^9 = 9 seconds, right? Somehow the video shows 30 seconds. It also says that if n = 10^7.5 the runtime is 1 second, which also disagrees with my calculation.

I calculated every case with the n, n^2 and 2^n complexities and everything checked out; only the n log(n) case is off.

Here is the photo for more details: https://i.upanh.org/2022/04/11/nlogn.jpg[^]

Linked the course video (3:40 to 5:00 is the part): https://www.coursera.org/lecture/algorithmic-toolbox/asymptotic-notation-zI8dH[^]

I edited the question so that it matches the course video I watched.

What I have tried:

I calculated every case with the n, n^2 and 2^n complexities and everything checked out; only the n log(n) case is off.
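Here is my per-case arithmetic as a short script (assuming, as in the video, one operation per cycle on a 1 GHz machine):

```python
import math

CLOCK_HZ = 1e9  # assumption from the video: 1 GHz, one operation per cycle

def seconds(ops):
    """Naive runtime estimate: total operations divided by operations per second."""
    return ops / CLOCK_HZ

print(seconds(1e9))                # O(n),   n = 10^9   -> 1.0 s
print(seconds(30_000 ** 2))        # O(n^2), n = 30,000 -> 0.9 s
print(seconds(2 ** 30))            # O(2^n), n = 30     -> ~1.07 s

n = 1e9                            # my n log(n) attempt, taking log base 10
print(seconds(n * math.log10(n)))  # -> 9.0 s, not the 30 s shown in the video
```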
Updated 11-Apr-22 12:30pm

Quote:
If the input size n is 10^9, then it should take 10^9 * log(10^9) / 10^9 = 9 second, right?

No! You don't calculate runtime from Big O notation, at least not by itself.
Big O notation tells you the complexity class of an algorithm, not the runtime; at least not directly.
Once you have a measured runtime for a given data size, the Big O complexity lets you estimate the runtime for another data size.
Your timing of 1 second for data sizes from 20 to 10^6 means that you need to switch to millisecond timings to get meaningful measurements.
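That extrapolation can be sketched in a few lines (the 2-second baseline measurement and the function names here are made up for illustration):

```python
def extrapolate_runtime(t_measured, n_measured, n_target, complexity):
    """Scale a measured runtime to another input size using its Big O class.
    The unknown constant factor cancels out in the ratio."""
    return t_measured * complexity(n_target) / complexity(n_measured)

# Hypothetical measurement: an O(n^2) algorithm took 2.0 s on n = 10,000.
t = extrapolate_runtime(2.0, 10_000, 20_000, lambda n: n * n)
print(t)  # doubling n quadruples an O(n^2) runtime -> 8.0 s
```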
   
Comments
Hoang Minh Quang FX15045 11-Apr-22 10:36am
   
Sorry, I just watched the video again. There is this line: "So suppose that we have an algorithm whose runtime is roughly proportional to n and we want it to run it on a machine that runs at about a gigahertz. How large an input can we handle such that we'll finish the computation in a second?" I also updated the question with some lines from the course video.
Big O doesn't give the exact time of execution. See, for instance, Big O notation - Wikipedia[^] (Properties section, multiplication by a constant).
   
To add to what the others say, you can't equate clock cycles directly with execution time anyway: some machine operations take one cycle to execute, others take many. On top of that, modern processors are pipelined, so some instructions may start while others are still executing; they are cached, so execution time may depend on how long ago an instruction was last executed and whether its data is available in the L1, L2 or L3 cache; it also depends on the availability of free cores to execute on, plus a host of other details.

You can't just say "my clock speed is X, so my execution time is X * Y" - there are far too many other factors that will influence it.
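A quick experiment illustrates the gap (a sketch only: the 1 GHz figure is assumed, and a Python loop adds heavy interpreter overhead on top of the hardware effects described above, so the measured number will vary from machine to machine):

```python
import time

N = 1_000_000
naive_estimate = N / 1e9  # "N operations at 1 GHz" -> 0.001 s

start = time.perf_counter()
total = 0
for i in range(N):  # roughly N additions, plus loop and interpreter overhead
    total += i
elapsed = time.perf_counter() - start

# The measured time will not match the naive cycle count: interpreter
# overhead, caching, pipelining and scheduling all get in the way.
print(f"naive estimate: {naive_estimate:.6f} s, measured: {elapsed:.6f} s")
```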
   
30 seconds is about right if it's log2(n): log2(10^9) = 3 * log2(10^3) ~= 3 * 10 = 30, so 10^9 * log2(10^9) / 10^9 ~= 30 seconds.

EDIT: O(n log(n)) algorithms often recursively divide the problem in half, solve each half separately, and then combine the solutions, which is why log base 2 is usually implied. Mergesort is a good example.
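Redoing the question's arithmetic with log base 2 reproduces the video's numbers (using the question's assumption of 10^9 operations per second):

```python
import math

OPS_PER_SEC = 1e9  # the question's 1 GHz assumption

def nlogn_seconds(n):
    """Estimated runtime of an n * log2(n)-operation algorithm."""
    return n * math.log2(n) / OPS_PER_SEC

print(nlogn_seconds(1e9))      # ~29.9 s: the "30 seconds" from the video
print(nlogn_seconds(10**7.5))  # ~0.79 s: roughly the "1 second" input size
```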
   
Comments
Hoang Minh Quang FX15045 11-Apr-22 14:22pm
   
OMG thank you!! I just searched on Google: in programming, log(a) usually means log2(a), not log10(a) as I learned in high school.
Big O notation only tells you how the execution time will change when the "size" of the problem is changed; it does not provide an actual estimate for a specific case.

In other words, there is an unknown factor, so for example O(n^2) could mean

time_needed = constant * n * n

where nobody tells you the value of the constant, hence there is no concrete value for time_needed.

The example only tells you that a problem twice as large will take four times as long to solve.
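That cancellation can be checked directly: whatever the constant is, doubling n multiplies the result by four (the constants below are arbitrary):

```python
def quadratic_time(n, constant):
    """time_needed = constant * n * n, with the constant unknown in practice."""
    return constant * n * n

# Whatever the constant is, doubling n quadruples the time.
for c in (0.5, 3.0, 42.0):
    print(quadratic_time(2000, c) / quadratic_time(1000, c))  # 4.0 every time
```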
   

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


