...has the signature of a function declaration, but the name and arguments of your class constructor.
I'm a bit confused about where it could be "located", as you've shown your class declaration with a proper constructor declaration and the proper beginning of the constructor implementation. Maybe it's just needless?
If the brain were so simple we could understand it, we would be so simple we couldn't. — Lyall Watson
That is a function declaration which takes two parameters and returns an RTMDialog object. Firstly, is that what you wanted, and secondly, what does this have to do with your original question? Where exactly in your code is this declaration?
I started to answer this and then realized what you were saying
Yes, you can't do O(log n) on an infinite, unsorted sample; you can't divide the list.
I assume they mean O(n) time, which is the only way it makes sense, or else the sample has to be sorted, which you didn't state. You can search a dictionary or a phone book in O(log n) because they are organized in order.
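For what it's worth, the dictionary point is the whole story: O(log n) only works because ordering lets you throw away half the remaining entries per compare. A minimal sketch of binary search on a sorted array (my own names, not from the OP's code):

```cpp
#include <cassert>
#include <vector>

// Minimal binary search over a sorted vector: O(log n) because each
// compare halves the remaining range. Returns the index of `target`,
// or -1 if it is not present. Only valid if `v` is sorted ascending.
int binarySearch(const std::vector<int>& v, int target)
{
    int lo = 0, hi = static_cast<int>(v.size()) - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   // avoids overflow of lo + hi
        if (v[mid] == target)
            return mid;
        else if (v[mid] < target)
            lo = mid + 1;               // discard the lower half
        else
            hi = mid - 1;               // discard the upper half
    }
    return -1;                          // not found
}
```

On an unsorted sample neither `discard` step is justified, which is exactly why the precondition matters.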
I should actually be more careful: you can't do it with a single program.
You could set up more and more tasks/threads/processors to run your one program.
You see that with very big processor cluster farms, etc.
Having an infinite amount of stuff like that doesn't really make sense. Unlimited, perhaps, in the sense that there is no a priori constant limit that you can abuse to make all algorithms collapse to constant time. But infinite, no way. Even if it's sorted, it could start with infinitely many zeroes, which you therefore could never finish counting, nor could any higher element ever be reached. It just doesn't work.
Pi goes to infinite decimal places... creating the infinite output is easy, and such things exist. Any irrational number is an example, the element involved being a digit.
Even in your case, a computer will get through thousands or millions of single-digit compares per second.
If I asked you to count the occurrences of the digit 7 in the output of pi, you could easily do it, and you could carry the count until you ran out of memory, disk space, etc. to hold it. You only need the count; you don't have to store the thing.
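A rough sketch of that point, assuming some digit source (the string below just stands in for a stream of pi digits, e.g. from a spigot routine); only the running count is kept, never the digits themselves:

```cpp
#include <cassert>
#include <string>

// Counting a digit in a stream needs only the running count, not the
// stream itself. The string argument stands in for a digit source;
// each digit is examined once and then discarded.
long countDigit(const std::string& digits, char target)
{
    long count = 0;
    for (char c : digits)       // one pass, O(1) extra storage
        if (c == target)
            ++count;
    return count;
}
```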
Describing a procedure that produces infinite output is easy, yes. But you can never have all of its output, and no, you cannot count the number of occurrences of the digit 7 in the decimal representation of pi. You can start to count, but you would never have the answer. That would technically not even be an algorithm, because an algorithm is an effective method, which means it must terminate in a finite number of steps.
I can't and won't agree; you can terminate any infinite process on a different criterion.
Pi to the 100,000th place is a classic example, as is counting.
So even infinite processes can terminate in a finite number of steps; you just don't get a full-precision solution.
Rounding, image analysis, smoothing, infinite searches, and a pile of other fields all have solutions from infinite algorithms that, if left to run, would never end. However, you conclude them when you have sufficient accuracy for your needs, and they are most definitely normally called algorithms. I even know several of them are patented as algorithms, even though under your definition they aren't, because they would never end except that they pick a point to terminate the processing.
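A sketch of that "terminate on accuracy" idea, using Newton's method for a square root as a stand-in (the tolerance value is my own choice, nothing from the thread): the refinement could run forever, and we simply cut it off when the answer is good enough.

```cpp
#include <cassert>
#include <cmath>

// Newton's method for sqrt(x): the iteration refines forever, so we
// terminate on an accuracy criterion rather than on "completion".
double newtonSqrt(double x, double eps = 1e-12)
{
    double guess = x > 1.0 ? x / 2.0 : 1.0;   // crude starting point
    while (std::fabs(guess * guess - x) > eps)
        guess = 0.5 * (guess + x / guess);    // average guess with x/guess
    return guess;
}
```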
As a bit of humor, I am not sure the field of fractals knows it can't have algorithms, because every calculation in the field goes to infinity; you have to force a bailout.
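That forced bailout, sketched with the classic Mandelbrot escape-time iteration (`maxIter` below is the arbitrary cut-off I mean; everything else is standard):

```cpp
#include <cassert>
#include <complex>

// Mandelbrot escape-time: iterate z = z*z + c. Points inside the set
// never escape, so a maxIter bailout is forced onto an otherwise
// endless calculation. Returns how many iterations actually ran.
int escapeTime(std::complex<double> c, int maxIter)
{
    std::complex<double> z(0.0, 0.0);
    for (int i = 0; i < maxIter; ++i)
    {
        if (std::abs(z) > 2.0)   // provably escapes once |z| > 2
            return i;
        z = z * z + c;
    }
    return maxIter;              // never escaped: forced bailout
}
```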
Given there are multiple definitions of algorithm, I will leave it up to the OP to work out with his teacher which definition of algorithm they want to use. I am quite happy to call what they are doing an algorithm even if you are not, and it will just become a pointless argument.
You are trying to duck the problem: some algorithms go on to infinity.
Some fractals never terminate; they are designed to spit out differences, and they run until you turn them off. They are in every sense of the word infinite. You could also add machine-learning AI: it never stops learning.
It does stuff in the meantime; termination is not a requirement of getting usable output.
At no point in either of these concepts do I need to specify a termination or need finite things.
You could claim they might run out of resources, but that is also the case for the question as posed by the OP.
You are just applying a physical-world restriction to bring in a termination. However, it's an algorithm long before it ever terminates, and as a concept it does run forever; it would never end of its own choice.
To me you are adding significance to infinity/finiteness, which are just concepts. Abnormal or selected terminations are just a byproduct of the real world; to the algorithm they can be meaningless.
I doubt we will agree on this stuff; we are sort of differing around physical vs. conceptual infinity.
You are wanting to take a much harder, more literal approach than I would in programming an algorithm.
All that ends up happening is that things I will call an algorithm, you won't; you would call them something else: a process, a system, or the like.
I'm not bringing in any real-world things. The definition of an algorithm says that it must terminate; that's not just something random I'm throwing in. That's just some distracting naming though... infinite processes then, sure. And we could define an infinite process that starts to count the occurrences of a specific value in an endless list, but unlike some other examples you have mentioned, it does not make any progress. It does not approximate a goal. It has nothing to output until it is done, which it never will be. Counting occurrences in some prefix was not the question.
The occurrences of an item in an infinite list fundamentally cannot be counted, except by a supertask, which is an interesting concept but not allowed in any reasonable model of computation.
Speaking about doing it in O(log n), as the OP does, is even more pointless; there isn't even an input size to set n to. There is no question about whether that's possible or not; the entire question cannot even be asked.
Both run forever, and we rate them in MIPS-years because we expect them to never end.
In fact, I know several maths departments have had them running for years, with prime numbers slowly being added to a file list as they are found.
This one has been going 9 years and hasn't output a number in over a year, and it may never put out another number (only the PRIME GODS know): Great Internet Mersenne Prime Search - PrimeNet[^]
You will note the weird today-stats on the right-hand side just to know it is still running, and you may get a laugh from the YouTube video about the notification failing and the discovery date being 3 months late.
Wikipedia and everyone I know calls them an algorithm... except they fail your definition that they be finite and terminate.
You claim such a thing can't be done and we shouldn't talk about it... yet it exists.
I keep answering because you keep making out you have some sort of authoritative answer; you insist on some definition, but you never say by whom, on what authority. I have shown you a number of computer-programming fields that don't agree with that definition, and things you say are impossible exist, so I am never going to agree with you. You are wasting time trying to argue authority in such a situation. I don't doubt your definition may be true in your field and you really truly believe it, but that doesn't work universally.
No, you're just confusing GIMPS with the algorithms it uses. Primality testing can trivially be done in finite time, even with trial division. The Lucas-Lehmer test that GIMPS spends most of its time doing also runs in finite time. Even factoring can be done in finite time with trial division. QS and GNFS do not need infinite sieving; they need an amount of sieving that depends on the smoothness bound B, which (while typically analyzed heuristically) is definitely finite (for example, it doesn't need to be any higher than the number being factored; that's a terrible bound, but it makes B definitely finite). GIMPS as a whole doesn't have an end in sight because they can just keep looking for more Mersenne primes, but it's a project, not an algorithm.
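The finiteness of Lucas-Lehmer is easy to see: for exponent p it is exactly p - 2 squaring steps, no more. A sketch for small exponents only (plain 64-bit arithmetic, so this assumes p < 32; GIMPS of course uses FFT-based big-number multiplication instead):

```cpp
#include <cassert>
#include <cstdint>

// Lucas-Lehmer test for Mersenne numbers M = 2^p - 1 (p an odd prime).
// Start s at 4, then square-and-subtract-2 mod M exactly p-2 times;
// M is prime iff the final s is 0. The loop count is fixed up front,
// so the algorithm terminates after exactly p-2 steps.
bool lucasLehmer(unsigned p)
{
    const std::uint64_t m = (std::uint64_t(1) << p) - 1;  // requires p < 32
    std::uint64_t s = 4 % m;
    for (unsigned i = 0; i < p - 2; ++i)
        s = (s * s % m + m - 2) % m;  // +m guards the -2 from wrapping
    return s == 0;
}
```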
You have not shown that anything I said does not exist actually does; you have called various things infinite that aren't, and named an example of an infinite process that isn't an algorithm. The one thing you could have mentioned that is slightly troubling in the face of the finiteness of algorithms is the class of Las Vegas algorithms, which seems to defy the definition. But they terminate in finite time with probability 1.
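A toy example of the Las Vegas flavor (my own construction, not a standard routine, and it assumes the target is actually present): probe random indices until the target is hit. There is no fixed bound on the number of probes, but the loop ends with probability 1, and whatever it returns is always correct.

```cpp
#include <cassert>
#include <random>
#include <vector>

// Toy Las Vegas search: guess random indices until the target is found.
// Unbounded worst case, termination with probability 1, output always
// correct. Precondition: `target` occurs somewhere in `v`.
int lasVegasFind(const std::vector<int>& v, int target, unsigned seed = 42)
{
    std::mt19937 rng(seed);
    std::uniform_int_distribution<std::size_t> pick(0, v.size() - 1);
    while (true)
    {
        std::size_t i = pick(rng);
        if (v[i] == target)        // only a correct hit ever escapes
            return static_cast<int>(i);
    }
}
```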
As for the definition, it's on Wikipedia, among other places. See also effective method. Not authoritative enough? Check the sources.
Interesting I need to give that some more thought to see if it works for me.
Most of us mathematicians and programmers are obviously just lazy and call it the Mersenne primes algorithm; we know there are a couple of small sub-algorithms running under it. I can honestly say I had never thought much about it until this question.
The problem I have is that if I take it out to computability like you have (and your link), Mersenne Primes is not an algorithm; it's not an effective method (it doesn't terminate); it's not anything, and you ended up calling it a project, which I now understand. A project to me is meaningless, and while to you it's just a formal definition, I lose working meaning, and I am not sure I want to go that far just to get a definition.
What is common to all of the things that are causing problems is that they all emit intermediate results, and it is the intermediate results that are more important than the "project" terminating (using your terms).
I see where you are going, but I need time to think about it, because this is tricky with this class of problems; I am even needing to look carefully at the definition of computability. A couple of the fractal situations are probably classed as not computable.