The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
Mathematicians used i, j and k as indexes long before Babbage's Analytical Engine. The convention did not come with Fortran.
When I was a student, the professors insisted on long, descriptive names in our programming hand-ins. I looked over the shoulder of one of my classmates: his integer variables were named I01, I02, ... and his float variables F01, F02, ... I shook my head: Prof xxx will explode into small pieces when he sees that code! My classmate smiled back: Oh no, of course I do a global substitution of I01 with NumberOfFruitBaskets, I02 with NumberOfApplesPerBasket, F01 with AverageWeightPerApple and so on, but I can't be bothered with typing those long names while I develop the program! (He was the brightest kid in class, and certainly had the mental capacity to keep the association between F01 and the average apple weight.)
In my current job, one group revising our coding standards suggested that code lines be restricted to at most 72 characters (they were serious about that!). My project asked for an exception, as we had rules for the naming of cross-module #define constants that in some cases led to identifiers exceeding the 72-character limit. ... I think that is going a little bit too far in the other direction.
I recall that in calculus we would use i, j and k as the iterator variables under summation operators. The exception seemed to be t when the quantity was time, but that was more in physics applications than in pure math.
I spent my high school senior year in the US as an exchange student. The physics teacher in my Norwegian high school had stressed that one good reason for using 'v' as a symbol for speed and 'u' as a symbol for voltage is to make it simpler to do complex calculations as "pure math", without being confused by any physical interpretation of the (partial) expressions.
My US physics teacher stressed exactly the opposite view: We use 'v' for velocity to make you conscious of the meaning of that value within a complex formula! ... He was surprised that we did not use the initial letters of the Norwegian words for those concepts. How can you know what you are calculating if you ignore the semantics?
I do understand his reasoning, but it only goes so far. At some point, you must detach the calculations from the physical interpretation of each value. To take one example: in electronics, I learned to calculate filters, handling both the values and the units, and saw the result come out as a value in seconds (or more commonly: microseconds) describing the filter. I know how I got there, but it never truly sank in why the value ended up as a time. I cannot associate a filter with a time span the way I can see what a car's speed or mass represents. I can only handle it by treating it as pure math, and learning that the resulting value comes out as microseconds, without grokking it.
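The units really do force it, though. Taking an RC low-pass filter as a standard illustration (my example here, not necessarily the filter type from those courses), ohms times farads reduces to seconds:

    \tau = R \cdot C, \qquad
    \Omega \cdot \mathrm{F}
      = \frac{\mathrm{V}}{\mathrm{A}} \cdot \frac{\mathrm{A}\cdot\mathrm{s}}{\mathrm{V}}
      = \mathrm{s}

So a 10 kOhm resistor with a 100 nF capacitor gives tau = 1 ms: a time constant, whether or not that time span "means" anything to you intuitively.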
That takes me back to my year as an Electrical Engineering major. The math that would fall out of circuit diagrams was pretty intense. In the end though it felt like hocus-pocus and I couldn't relate so became a Mechanical Eng. major, then a physics major but somehow ended up as an actuary writing programs. "i" is still an oft used loop variable so long as interest rates are not in the context.
In the early 1980s I read an analysis of the time consumption of compilers for CDC mainframe systems. One of them spent about 60% of its total running time on getting the next source-file character to the tokenizer.
In the 35+ years since then, I have never learnt anything that could fully explain that figure; maybe they had no buffering at any level between the getc() and the physical disk. That would be crazy, but how else can you explain the observation? That compiler certainly cannot have spent many resources on, say, fancy optimizations!
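To make the buffering guess concrete, here is a minimal Python sketch (my own illustration, nothing to do with the actual CDC code) of the two extremes for fetching one character at a time:

    import os

    def count_chars_unbuffered(path):
        # one read() system call per character: every fetch goes all
        # the way down to the OS, and potentially to the disk
        fd = os.open(path, os.O_RDONLY)
        count = 0
        while os.read(fd, 1):
            count += 1
        os.close(fd)
        return count

    def count_chars_buffered(path):
        # the file object reads large blocks into memory; almost every
        # single-character read is then served from that buffer
        count = 0
        with open(path, "rb") as f:
            while f.read(1):
                count += 1
        return count

On any modern system the buffered version is dramatically faster. If that compiler really paid something like a system call (or worse) per character, the 60% figure stops looking quite so mysterious.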
The claim has some substance. I worked with Tcl/Tk in the early days, when the source code was directly interpreted: if a loop was executed a million times, the same source code statements were tokenized and parsed a million times. Whitespace and comments were skipped over a million times. Symbols were looked up a million times. So there were preprocessors removing all comments and unnecessary whitespace, to make the code run significantly faster. Some of these preprocessors also replaced longer variable names with short ones, but that could backfire due to the extremely dynamic nature of Tcl: you can build a character string at run time and then execute it as a statement, and if you build a string referring to a variable by its original name, it won't find the shortened name. When Tcl introduced bytecodes, speed increased by a significant factor.
I have heard similar stories from other developers using other interpreted languages, and sometimes they argue in favor of short names to speed up interpretation. Today, that is mostly an old myth that won't die: bytecode compilation, or at least some sort of pre-processing, has become the norm in anything that is called an "interpreted" language. (With no pre-processing, they are commonly called "scripting" languages nowadays.)
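You can demonstrate the principle the bytecode change exploits even in modern Python (a toy sketch of the idea, not Tcl itself): evaluating a source string re-tokenizes and re-parses it on every call, while compiling once reuses the code object:

    SRC = "total + i * i"  # a small expression kept as source text

    def run_reparsed(n):
        # eval() on a string tokenizes, parses and compiles the
        # expression on every single iteration
        total = 0
        for i in range(n):
            total = eval(SRC, {"total": total, "i": i})
        return total

    def run_precompiled(n):
        # compile once up front, then reuse the code object
        code = compile(SRC, "<expr>", "eval")
        total = 0
        for i in range(n):
            total = eval(code, {"total": total, "i": i})
        return total

The arithmetic is identical in both versions; only the repeated tokenizing and parsing disappears, and the length of the variable names stops mattering entirely.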
I have a degree in physics, where we used single letters for every variable, resorting to Greek when we ran out of letters, and i, j, k were universally used for counting in equations, usually corresponding to the three dimensions x, y, z. FORTRAN was used for scientific calculations and naturally adopted what scientists use.
My thought, when someone claims that there is something 'wrong' with that, is that the problems we are actually having - customers being down, failure to deliver requirements that are even relevant to users, a continuing stream of bugs delivered into production, etc., etc. ...
...have absolutely nothing to do with what variable I use in a for loop.
And micro-managing coding styles just demonstrates that its proponents have spent zero time studying the actual impacts on process quality. Often (maybe always) they are the same people who think the newest technology is going to solve all those problems, even while they are only 10% into implementing the last technology that was supposed to solve all of them.
(not that this has ever kept me from posting before):
Lately, on Windows 10 or Server 2016, a lot of files I've downloaded directly from MS refuse to install because, it turns out, Windows apparently thinks the digital signature is not valid. I've seen this for some of the monthly cumulative updates for Windows 10 or Server 2016, cumulative updates for SQL Server, and SQL Server Management Studio. The same files, according to a Windows 7 machine, show valid signatures, and I have no problem installing those updates there (where applicable - obviously, I can't install a Win10 CU on 7).
I have at least 4 VMs (and a physical machine), running either Win10 or Server 2016, showing this bogus "bad cert" problem. The files have been re-downloaded many times, and the hashes match every time. The system clocks vary by maybe up to 3 minutes across all of my machines (and they're all set to the correct time zone), so a 3-minute drift shouldn't result in this sort of cert check failure.
Where does one even begin to diagnose such a thing?
If you had installed man-in-the-middle stuff like Fiddler, you may need to purge your Windows keystore of invalid certs, etc.
This has started happening on a bunch of completely independent machines, both physical and virtual. I generally don't d*ck around with the cert store, and I know for a fact I haven't installed Fiddler on any of them.
That's where I get them from - at least the Win10/Server 2016 cumulative updates. I hadn't tried looking up the SQL Server CUs there, but I'm doing that now. Also - I don't think something like SQL Server Management Studio ever gets posted to that site, as it's not an "update" per se, but rather a full product.
This is insane.
I just re-downloaded SQL Server 2016 SP2 CU5 from the Update Catalog, rather than the usual MS download site. Even though the filenames are slightly different, the sizes match and the hashes are identical. Yet if I right-click on each, select Properties > Digital Signatures, select one of the listed signatures, and click Details - the file from catalog...* shows it's OK, even though (and I've just re-confirmed) the file from downloads...* is still showing as invalid.
This makes zero sense, given that the file hashes match.
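For reference, this is roughly what I mean by "the hashes are identical" - a comparison along these lines, sketched in Python (the file names below are placeholders, not the real ones):

    import hashlib

    def sha256_of(path):
        # stream in 1 MB chunks so a large installer never has to
        # fit in memory all at once
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    a = sha256_of("catalog-copy.exe")        # placeholder name
    b = sha256_of("download-site-copy.exe")  # placeholder name
    print(a, b, "MATCH" if a == b else "DIFFER")

Identical hashes mean the bytes of the two files are identical, so whatever Windows dislikes has to come from something outside the file contents (which a content hash doesn't cover), or from how that particular machine validates the signature chain.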
Just a thought, based on something that happened to me years ago: are you doing extensive port blocking? In the case I had, MS changed their validation methodology and were using a port that was blocked, so validation failed with no indication at all that the blocked port was the problem.