The Lounge is rated PG. If you're about to post something you wouldn't want your
kid sister to read, then don't post it. No flame wars, no abusive conduct, no programming
questions, and please don't post ads.
Some early BASIC versions allowed up to 26 numeric variables; i through n, if memory serves (echoing Fortran's implicit-typing convention), were integers, everything else floating point. It's a reflection of the generosity of 26 variables that in practice most of us only ever needed i, j, k as integers; we really were spoiled for choice.
My first BASIC experience was with 286 variables: A-Z, A0-A9, B0-B9, ... and 26 string variables: A$ to Z$. I never saw a BASIC so tiny that it had only 26 numeric variables - maybe it existed, but I doubt that any real-world problems were solved with that compiler.
Also, the first Basic compiler I worked with didn't distinguish between integer and float - that was quite common in the early Basic days. I believe that in the Univac 1100 mainframe series Basic, every variable was born an integer, but as soon as it was assigned a float literal or the result of a non-integer expression, its type was changed on the fly. (So I think it really was an interpreter, not a compiler system.) No Fortran-style implicit typing by first letter.
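For anyone who never met such a system: a rough Python analogy (not Univac Basic, just an illustration of the same on-the-fly retyping) would look like this:

```python
# A variable is "born" an integer, and silently becomes a float
# the moment it holds the result of a non-integer expression -
# roughly the behavior described for the Univac 1100 Basic above.
n = 7
print(type(n).__name__)  # int

n = n / 2                # result of a non-integer expression
print(type(n).__name__)  # float
print(n)                 # 3.5
```

The difference, of course, is that Python decides this per value at run time by design, whereas in that Basic it was a property the interpreter tracked per variable.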
This style works for me. It's hip, like texting acronyms. You could use i d k instead, or f y i.
My personal favorite was back when I used VB
Dim g As String... It made my programming style quite revealing.
Consistency is a blessing. I have my own names for common things that go back almost 20 years...they are in finger memory and great for copy/pasting. You could reasonably argue that the names a programmer uses for common items become a fingerprint of sorts.
I get the Fortran background of 'i', 'j', and 'k', but I've always disliked single-character variable names since we've been allowed (since the '80s) to be a bit more descriptive... so I use a lot of two-letter names instead! Like you said though, whatever works!
BTW, where UI items are concerned, it irritates me to see developers who are content with Textbox1, Textbox2, etc.
Developers who are content with Textbox1, Textbox2 irk the crap out of me, never mind merely irritate.
"'Do what thou wilt...' is to bid Stars to shine, Vines to bear grapes, Water to seek its level; man is the only being in Nature that has striven to set himself at odds with himself."
I also used i, j, k for indices; why does it bother you?
Although someone did ask me once to use more descriptive names, so I indulged him and refactored my index to "variableIndiceForArrayIndexFrom0ToListCount", which, granted, is much more descriptive and easy to read!
Well, another muscle-memory reason for many...
Before FORTRAN, many learned BASIC (back in the day when a school would have 2 or 3 TRS-80s or similar computers). The first version (Model I with 4K RAM) only had single-character variables A..Z. Later the Model II (with a massive 16K) allowed two letters, AA..ZZ.
Anyhoo, it was actually the Programming Guide (probably inspired by FORTRAN) that suggested:
I, J, K, L... for "general" integers (in particular FOR loops), (also ref: I for iterator)
S, T, U for general strings.
"Important" variables used A, B, C (effectively the global variables)
It also suggested sticking to single letters for compatibility with the Model I. Some versions of FORTRAN also had that two-letter limit.
"That way you could better determine what any variable was for/about."
Mock it if you will, but given the naming limitations of the time, at least some were already invested enough to come up with common coding styles.
- Nowadays i, j as iterators/offsets even make appearances in mathematics,
- when you see "for (i = 0; ..." you already know the intent (unless you or the programmer are idiots), and that's even if it's someone else's code,
-- and that almost makes it better to keep using i, j
... unless you're some sort of purist 'style wanker' who says 'the code may be misunderstood'
..... (and let's face it: such comments are nearly always a reflection of the lack of ability of the idiot quoting them).
But of course. The shorter the scope, the shorter the name. The longer the scope, the longer the name.
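That rule of thumb is easy to sketch; here's a small Python illustration (the names and the word-length task are mine, purely for demonstration):

```python
# "The shorter the scope, the shorter the name":
# a one-letter index for a loop counter that lives for three lines,
# a descriptive name for the value that outlives the loop.
words = ["alpha", "beta", "gamma"]

total = 0
for i in range(len(words)):        # i's whole life is this loop
    total += len(words[i])

average_word_length = total / len(words)   # read far from here, so say what it is
print(average_word_length)
```

The index would gain nothing from a longer name, while `average_word_length` may be read dozens of lines away from where it was computed.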
Waaayyy back in a Pascal class I took in the mid-80s (on a PDP-11), for one assignment I chose to use the number of syllables of nonsense words to indicate the usage of the variables. Try typing "tafimadiddle" many times on a VT-100 (no cut/paste).
Though being rather young, I do have the same habit. I guess I picked it up in my first C class nine years ago (longer than it feels), and I just stuck with it. I don't use it as often in C#, where most of the stuff is LINQ (x => x), but it's there - or I just use foreach.
I only have a signature in order to let @DalekDave follow my posts.
Mathematicians used i, j and k as indexes long before Babbage's Analytical Engine. It did not come from Fortran.
When I was a student, the professors insisted on long, descriptive names in our programming hand-ins. I looked over the shoulder of one of my classmates: his integer variables were named I01, I02, ... and his float variables F01, F02, ... I shook my head: Prof xxx will explode into small pieces when he sees that code! My classmate smiled back: Oh no, of course I do a global substitute of I01 with NumberOfFruitBaskets, I02 with NumberOfApplesPerBasket, F01 with AverageWeightPerApple and so on, but I can't be bothered with typing those long names while I develop the program! (He was the brightest kid in class, and certainly had the mental capacity to keep the association between F01 and the average apple weight.)
In my current job, one group revising our coding standards suggested that code lines be restricted to at most 72 characters (they were serious about that!). My project asked for an exception, as we had rules for the naming of cross-module #define constants that in some cases led to identifiers exceeding the 72-character limit. ... I think that is going a little too far in the other direction.
I recall that in calculus we would use i, j and k as the iterator variables for summation operators. The exception seemed to be t when it represented time, but that was more physics applications than pure math.
I spent my high school senior year in the US as an exchange student. The physics teacher in my Norwegian high school had stressed that one good reason for using 'v' as a symbol for speed and 'u' as a symbol for voltage is that it makes it simpler to do complex calculations as "pure math", without being confused by any physical interpretation of the (partial) expressions.
My US physics teacher stressed exactly the opposite view: we use 'v' for velocity to make you conscious of the meaning of this value within a complex formula! ... He was surprised that we did not use the initial letters of the Norwegian words for those concepts. How can you know what you are calculating if you ignore the semantics?
I do understand his reasoning, but it goes only so far. At some point, you must detach the calculations from the physical interpretation of each value. To take one example: In electronics, I learned to calculate filters, handling both the values and the units, and saw it coming out as a value in seconds (or more commonly: microseconds) describing the filter. I know how I got there, but it never got under my skin how the value ended up as a time. I cannot associate a filter with a time span, the way I can see what a car's speed or mass represents. I can only handle it by treating it as pure math, and learn that the resulting value comes out as microseconds, without grokking it.
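For what it's worth, the units do come out right on paper even when intuition lags. Taking a simple RC low-pass filter as an example (my example, not necessarily the filters described above), the time constant falls straight out of ohms times farads:

```latex
\tau = R\,C, \qquad
[\Omega] \cdot [\mathrm{F}]
  = \frac{\mathrm{V}}{\mathrm{A}} \cdot \frac{\mathrm{A}\cdot\mathrm{s}}{\mathrm{V}}
  = \mathrm{s}
```

The volts and amps cancel and only seconds remain, which is exactly the "it comes out as a time, trust the math" experience described above.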
That takes me back to my year as an Electrical Engineering major. The math that would fall out of circuit diagrams was pretty intense. In the end though it felt like hocus-pocus and I couldn't relate so became a Mechanical Eng. major, then a physics major but somehow ended up as an actuary writing programs. "i" is still an oft used loop variable so long as interest rates are not in the context.
In the early 1980s I read an analysis of the time consumption of CDC (mainframe systems) compilers. One of them spent about 60% of the total running time on getting the next source file character to the tokenizer.
In the 35+ years since then I have never learnt anything that could fully explain that figure; maybe they had no buffering at any level between the getc() and the physical disk. That would be crazy, but how else can you explain the observations? That compiler certainly cannot have spent many resources on, say, fancy optimizations!
The claim has some substance. I worked with Tcl/Tk in the early days, when the source code was directly interpreted: if a loop was executed a million times, the same source code statements were tokenized and parsed a million times. Whitespace and comments were skipped over a million times. Symbols were looked up a million times. So there were preprocessors removing all comments and unnecessary whitespace, to make the code run significantly faster. Some of these preprocessors also replaced longer variable names with short ones, but that was risky due to the extremely dynamic nature of Tcl (you can build a character string at run time and then execute it as a statement; if that string refers to a variable by its original name, it won't find the shortened one). When Tcl introduced bytecodes, speed increased by a significant factor.
I have heard similar stories from other developers, using other interpreted languages, and sometimes they argue in favor of short names to speed up interpretation. Today, that is mostly an old myth that won't die: Bytecode compilation, or at least some sort of pre-processing, has become the norm in anything that is called "interpreted" languages. (With no pre-processing, they are commonly called "scripting" languages nowadays.)
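The effect is easy to demonstrate in any language that exposes both paths. A small Python sketch (my own illustration, using the standard `compile`/`eval`/`timeit` machinery, not Tcl) shows the cost of re-parsing the same source on every pass versus compiling to bytecode once:

```python
# Re-tokenizing/parsing a source string on every evaluation,
# versus compiling it to a bytecode object once and reusing it.
# Absolute numbers vary by machine; the ratio is the point.
import timeit

src = "sum(x * x for x in range(10))"
code = compile(src, "<expr>", "eval")   # tokenize + parse + compile, once

reparsed    = timeit.timeit(lambda: eval(src), number=20_000)   # parses every call
precompiled = timeit.timeit(lambda: eval(code), number=20_000)  # runs bytecode only

print(f"re-parsed: {reparsed:.3f}s  precompiled: {precompiled:.3f}s")
```

Both expressions compute the same value; only the per-iteration parsing overhead differs, which is essentially what the Tcl preprocessors (and later the bytecode compiler) were eliminating.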
I have a degree in physics, where we used single letters for every variable, going to Greek for lack of letters, and i,j,k were universally used for counting in equations, usually corresponding to the three dimensions x,y,z. FORTRAN was used for scientific calculations and naturally adopted what scientists use.