I'm taking a Coursera course on time complexity. I came across this in a video and I think something is wrong: "So suppose that we have an algorithm whose runtime is roughly proportional to n and we want to run it on a machine that runs at about a gigahertz. How large an input can we handle such that we'll finish the computation in a second?" (1 GHz CPU, i.e. 10^9 cycles per second)

"Well if it runs at about size n, you can handle about a billion-sized inputs before it takes more than a second." (10^9 / 10^9 = 1 second)

"If it runs like n squared, it's a lot worse. You can only handle inputs of size about 30,000 before it starts taking more than a second." n = 30,000: (30,000)^2 / 10^9 = 0.9 sec ≈ 1 sec

"If the inputs are of size 2 to the n, it's incredibly bad, you can only handle inputs of size about 30 in a second." 2^30 / 10^9 ≈ 1.07 sec
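Here's a quick sanity check of those three cases in Python, assuming (my simplification) that one "operation" costs exactly one cycle on the 10^9 cycles/second machine:

```python
# Assumed machine speed from the video: 10^9 cycles per second,
# and (my assumption) one operation per cycle.
CYCLES_PER_SEC = 10**9

# Linear: n = 10^9 operations
print(10**9 / CYCLES_PER_SEC)        # -> 1.0 second

# Quadratic: n = 30,000, so n^2 operations
print(30_000**2 / CYCLES_PER_SEC)    # -> 0.9 seconds

# Exponential: n = 30, so 2^n operations
print(2**30 / CYCLES_PER_SEC)        # -> ~1.074 seconds
```

All three match the video's claims, which is why I'm confused about the remaining case.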

With n log(n) complexity, if the input size is n = 10^9, then it should take 10^9 * log(10^9) / 10^9 = 9 seconds, right? Somehow the video shows 30 seconds. It also says that n = 10^7.5 gives a runtime of 1 second, which also disagrees with my calculation.
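For reference, here's my n log(n) arithmetic in Python. I used log base 10 in my hand calculation; out of curiosity I also tried base 2 (I'm not sure which base the video intends):

```python
import math

CYCLES_PER_SEC = 10**9  # assumed machine speed from the video

n = 10**9
# My calculation, with log base 10:
print(n * math.log10(n) / CYCLES_PER_SEC)  # -> 9.0 seconds

# Same calculation with log base 2:
print(n * math.log2(n) / CYCLES_PER_SEC)   # -> ~29.9 seconds

# The video's other claim: n = 10^7.5 finishes in about a second
m = 10**7.5
print(m * math.log2(m) / CYCLES_PER_SEC)   # -> ~0.79 seconds
```

So base 10 gives my 9 seconds, while base 2 is close to the video's figures. Is that the discrepancy, or am I missing something else?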

I checked every case with the n, n^2, and 2^n complexities and everything matched; only the n log(n) case is off.

Here is a photo for more details:

https://i.upanh.org/2022/04/11/nlogn.jpg

Link to the course video (3:40 to 5:00 is the relevant part):

https://www.coursera.org/lecture/algorithmic-toolbox/asymptotic-notation-zI8dH

I edited the question so that it matches the course video I watched.

**What I have tried:**
I recalculated every case by hand; the n, n^2, and 2^n results all agree with the video, but the n log(n) case does not.