|
Imagine you'd bought the beer instead - you wouldn't be able to count to zero, let alone from zero.
|
|
|
|
|
OriginalGriff wrote: wasted instructions add up fast
Which is why I detest animated widgets.
|
|
|
|
|
That may have made sense in the 1960s, but not really anymore. A simple addition and possibly a shift takes very little time relative to all the other processing happening in a computer.
|
|
|
|
|
You don't know what else the computer is doing so don't assume that there are plenty of cycles. If every app wastes 10% of the "extra" cycles it doesn't take long to bog the whole thing down.
|
|
|
|
|
That is assuming that the developers of system code have not already optimized the code where it can be of the most benefit. Also, it assumes that the compiler is not smart enough to do some optimizations.
|
|
|
|
|
Not at all. I'm saying that I could have a gazillion apps running.
|
|
|
|
|
Relative to system calls, and library calls, the code in the application usually takes up little of the resources unless you have an application that is heavily processor bound.
|
|
|
|
|
And let's keep it that way.
|
|
|
|
|
I thought it was to save electrons.
“Education is not the piling on of learning, information, data, facts, skills, or abilities - that's training or instruction - but is rather making visible what is hidden as a seed” “One of the greatest problems of our time is that many are schooled but few are educated”
Sir Thomas More (1478 – 1535)
|
|
|
|
|
I thought that it was to save the dark energy used in between the two calculation steps.
Just joking.
|
|
|
|
|
Actually, it doesn't simplify the calculation of the address. It just aids the ability to interchangeably address the whole array and the address of its first element (e.g. in C, &array and &array[0] both give the same address). In languages like FORTRAN (from 1957 to current day) which start indices from 1, the formula is
element(n) = (address_of_array - size_of_the_element) + n * size_of_the_element
which is just as simple as starting from 0 as (address_of_array - size_of_the_element) is a compile time constant value. Considering the millions of errors over the decades that starting from zero has perpetuated (e.g. forgetting to add one to the bounds when declaring the array or getting the end point wrong in loops or wasting space by deliberately skipping element 0), doing the compile time calculation would have been a small price to pay.
|
|
|
|
|
jsc42 wrote:
element(n) = (address_of_array - size_of_the_element) + n * size_of_the_element
which is just as simple as starting from 0 as (address_of_array - size_of_the_element) is a compile time constant value.
(address_of_array - size_of_the_element) is not a compile time constant value in any but the simplest of cases, i.e. where the array is at a fixed memory location every time the program is run, which is almost never.
|
|
|
|
|
You are correct! What I meant (and I admit that the language I used was far from rigorous) was that the relative offset is unvarying. If, for example, the start of the memory allocated for the array is at offset 100 from some location (relative addressing / stack-frame relative / absolute / whatever) and each element takes up 4 address units (e.g. bytes), then all the compiler does for 1-based indices is use 96 + n * 4 (because 100 - 4 = 96) instead of 100 + (n - 1) * 4 in all of its address calculations for the array; this is just as simple as using 100 + n * 4 for 0-based indices. This can be extended to arrays with specifiable lower bounds: in this example, if lwb is the lower bound, the 0-based index offset would be 100 + (n - lwb) * 4 [or, more optimally, (100 - lwb * 4) + n * 4], and the 1-based index offset would be (100 - (lwb + 1) * 4) + n * 4 = (96 - lwb * 4) + n * 4.
|
|
|
|
|
Introduction of the lower bound would definitely negate any advantage of 0-based indexing. With no specified lower bound, the cost at the instruction level would depend on the instruction set and processor, but with the instruction sets I am familiar with, 0-based indexing was definitely less expensive. Very simple case using x86 32-bit instructions, assuming the array base is in ESI and the array index in EBX: accessing a 32-bit array value can be done in a single instruction in both cases:
mov eax, [esi + 4*ebx] ; 0 based indexing
mov eax, [esi + 4*ebx + 4] ; 1 based indexing
On even the most modern x86 processor, the instruction with an offset takes more memory than the one without. On older x86 processors it would also cost CPU cycles, and on more primitive instruction sets you wouldn't be able to encode the address calculation into a single instruction at all. One interpretation of your same-cost argument might be that, in the one-based case, you simply start with esi pointing 4 bytes before the array. That would be true, but you still have to count the cost of adjusting esi, though in some cases that calculation could be amortized across many array accesses.
|
|
|
|
|
Also - the first non-negative number in any number system is zero.
Another way to state it: all such number systems start counting at zero.
|
|
|
|
|
It was also handy back in the days when only single array indexes existed and you wanted dual indexes, because everything matched up mathematically without extra steps.
for (i = 0; i < 10; i++) // Not at all like the format of older languages!
{
    for (j = 0; j < 10; j++)
    {
        ind = i + j * 10;
        val[ind] = some_value;
    }
}
Or the other way around, find the two indexes from the current index:
for (ind = 0; ind < 100; ind++)
{
    j = ind / 10;
    i = ind - (j * 10);
}
|
|
|
|
|
|
Seriously? Clean this up or it'll get deleted pretty quick: at present it is meaningless garbage.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
nils illegitimus carborundum
me, me, me
|
|
|
|
|
|
|
|
|
|
|
I don't trust it.
Two of us have been working on the same file today, so we agreed he would check his changes in first and I would update mine, merge, and commit the merge. I get the go-ahead to update; update correctly tells me that the exact same file we've worked on is conflicted. I look at the file: no "mine" or "r.6271" markers anywhere to be seen, but a load of interface implementations are missing. I update again, to no avail. I stubbed out the interface implementations, compiled, ran all my tests and tried to commit, only to be told that my file is out of date and needs updating before I can commit. So I update again and, hey presto, a handful of marked conflicts in the same file. Nobody else has committed anything in that time. WTF?
Last week our build server failed to get the externals we'd set up on a repository, even though a local update got them all. Cue hours of digging around instead of actually being productive.
I really hate SVN.
|
|
|
|