The Lounge is rated PG. If you're about to post something you wouldn't want your
kid sister to read then don't post it. No flame wars, no abusive conduct, no programming
questions and please don't post ads.
I was trying a little piece of Python code and was a bit surprised by the "positional" indentation requirements - just like older programming languages such as COBOL. (I'm not sure whether any other recent programming language enforces this.)
if x is a:
    do_something() // indented & works
is different from
if x is a:
do_something() // non-indented & throws an IndentationError
It's funny - doesn't this seem rudimentary and a bit annoying?
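A quick way to see the rule in action without crashing a script - using compile(), which parses a snippet without running it (the variable names here are just placeholders):

```python
# Python enforces indentation at parse time; compile() lets us check a
# snippet without executing it.
good = "if flag:\n    result = 1\n"
bad = "if flag:\nresult = 1\n"

compile(good, "<good>", "exec")  # parses fine

try:
    compile(bad, "<bad>", "exec")
    raised = False
except IndentationError:  # a subclass of SyntaxError
    raised = True

print(raised)  # True
```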
Python's version is worse: you don't need to declare the variable using "var", or "Dim", or anything - just pick a name and assign a value. So if you misspell it ... that's a new variable.
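A minimal sketch of that hazard - the typo quietly creates a second variable instead of raising an error (in practice a linter would usually flag the unused name):

```python
# Assignment binds names on the fly, so a misspelled name quietly
# becomes a brand-new variable instead of an error.
total = 0
for n in (1, 2, 3):
    totl = total + n  # typo: 'totl' instead of 'total'

print(total)  # 0 - the accumulator was never updated
print(totl)   # 3 - the typo variable only ever holds 0 + n
```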
Just like FORTRAN was back in the day (1977). And that caused a probe to miss the planet Venus completely, so what it does in a banking app, or a Boeing 737 Max is not something you want to think about too hard, really.
Strongly typed variables should be mandatory in all languages used in the 21st century!
I agree, but you can still do quite a bit of damage with var.
In what language?
In C# it's still a typed variable, so it's exactly the same thing as explicitly using the type name except instead of the type name you use var. So no damage there, except to readability.
var processedPerSecond = processedCount / elapsedTime * 1000;
There, you probably get an int where you expected a decimal?
However, changing var to decimal will still give the wrong output: the int result is silently converted to a decimal, but that doesn't change the outcome of the division, which has already happened in integer arithmetic.
The fix I think you're looking for is:
var processedPerSecond = (decimal)processedCount / elapsedTime * 1000;
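The same truncation trap can be sketched in Python (the numbers are hypothetical; Python's // truncates like C#'s int/int division for positive operands, while / always produces a float, playing the role of the decimal cast):

```python
# Hypothetical values for illustration.
processed_count = 7
elapsed_time = 2000  # milliseconds

# Truncating division first: 7 // 2000 == 0, so the * 1000 multiplies zero.
wrong = processed_count // elapsed_time * 1000

# Floating-point division (analogous to casting to decimal first)
# preserves the fractional rate.
right = processed_count / elapsed_time * 1000

print(wrong)  # 0
print(right)  # 3.5
```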
Now here is a fun bug, which I've seen go wrong multiple times...
In this piece of code (something like var query = databaseContext.SomeTable.Where(x => x.Id == id);), the query variable is of type IQueryable.
That means the Where that is executed is actually Queryable.Where(IQueryable, Expression).
The actual database call, with the WHERE clause, happens at the ToList call (or, more precisely, when the enumerator is iterated).
Now someone comes and changes the code, the first line becomes var query = databaseContext.SomeTable.ToList();
Suddenly, the type of query has changed to IEnumerable!
The database query WITHOUT the WHERE clause is executed and Enumerable.Where(IEnumerable, Func) is executed to filter the collection in-memory.
You now get thousands of records instead of tens, while the in-memory Where is probably more expensive than the database WHERE.
All in all a nice performance hit...
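The deferred-versus-eager distinction has a rough analog in Python generators, which makes the shape of the bug easy to see - materialize too early and everything gets pulled before any filtering happens. This is only an analog (a generator can't push the filter to the database the way IQueryable can), and the table/row counts are invented:

```python
# Rough analog: a deferred pipeline vs. materializing too early.
import itertools

pulled = 0  # counts how many "rows" the fake table hands out

def table_scan():
    """Stand-in for the database table."""
    global pulled
    for i in range(10_000):
        pulled += 1
        yield i

# Deferred: the filter streams, and we stop after five matches,
# so only nine rows ever leave the "table".
lazy = (r for r in table_scan() if r % 2 == 0)
first_five = list(itertools.islice(lazy, 5))
lazy_pulled = pulled

pulled = 0
# Eager: list() pulls all 10,000 rows first; filtering happens in memory.
all_rows = list(table_scan())
first_five_eager = [r for r in all_rows if r % 2 == 0][:5]
eager_pulled = pulled

print(first_five, lazy_pulled)         # [0, 2, 4, 6, 8] 9
print(first_five_eager, eager_pulled)  # [0, 2, 4, 6, 8] 10000
```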
That is actually the only bug I've ever encountered while working with var, and it's pretty specific.
And the only reason it goes wrong is that IQueryable and IEnumerable have pretty much the same extension methods, save that one set takes an Expression and the other a Func - and a lambda converts implicitly to either.
If I weren't using a strongly typed language, and had an a**hole for a coworker who needed a lesson, I certainly would! Send him a copy in which a variable name has Unicode changes but looks the same: "It works for me, I can't understand why you are having problems!"
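For the record, the prank works in Python: identifiers may contain non-ASCII letters, and NFKC normalization does not merge Latin and Cyrillic, so these are two distinct names that render identically in most fonts:

```python
# Two look-alike variables that are completely different identifiers.
a = "latin"     # ordinary Latin letter 'a'
а = "cyrillic"  # Cyrillic 'а' (U+0430)

print(a)   # latin
print(а)   # cyrillic
```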
Right. That cost me a couple of days of work, back when Fortran, as well as hardcopy code, was still around. (Yes, I know there are languages called Fortran today, but they look completely different!) The loop went through once (as Fortran always does), but wouldn't repeat. I didn't discover the problem until I sat down with a printout of the disassembled binary code. Fortunately, the disassembler made use of the symbol definitions in the debug info, and fortunately, the printer (as opposed to the screen) used serif fonts. Only then could I distinguish an "l" from an "I": a conditional jump out of the loop used a different variable from the intended one.
There are several such "false friends". I should have known, because in my student days, one of my friends, a TA in the elementary Fortran programming class, caused an "incident": the professor insisted on Norwegian variable names, so instead of "QUEUE" the variable should be named "KØ". "Ø" isn't recognized in Fortran, but ø is commonly transcribed into English as oe, which makes it "KOE". Because we have Ø as a letter, we do not slash zeroes here, and depending on the typeface, KOE and K0E may look almost perfectly identical. They did on our screens. When Jon discovered it, he exclaimed loudly, so that everybody around could hear, to a very good-looking 18-year-old female freshman: "Look, here you are not saying k-oh-e, you are saying k-null-e!" Zero is "null" in Norwegian, and "knulle" means f**k... Jon didn't realize what he was "suggesting" until the word had been pronounced, and he got so ashamed that he wanted to flee. But the girl did accept his apologies.
Actually, it doesn't take national characters like ø to confuse 0 and O - even in English-speaking environments, non-slashed zeroes are common. Some typefaces don't distinguish much between 1 and l either - I have seen trouble caused by that pair as well.
During my first student year, we actually did use punch cards! Or rather, we filled in coding sheets and handed them in for punching by trained typists (especially well trained in mistyping...), and the turnaround time could be 36-48 hours in rush periods. Interactive terminals on 300 bps serial lines were reserved for graduate students. During the second year, we got three 16-bit minis, each handling 16 screens at 9600 bps, and a full-screen editor, which made the big and proud Univac 1100 mainframe look like such a clunky old wreck that we were ordered to reduce the line speed of the minis' terminals to 1200 bps, so as not to bring too much shame upon the great mainframe. (The TAs - I was one - managed those three minis more or less ourselves.)
Then (fall of 1979) Fortran was replaced by Pascal as the language for the "101 Basic Programming" course for half of the students. Some departments refused to switch - no one in their branch of engineering would ever program anything but Fortran! Other departments accepted Pascal only conditionally: the EE department accepted it only if the Computer Science students (like myself) were required to take a course covering e.g. 3-phase power distribution networks...
The incident with the I/l mixup happened around 1985, after I had completed my degree and was working at the University Computer Center. The guy with the problem was in the Mechanical Engineering department; they still clung to Fortran as The Only Language at that time. But they did accept screens by then! (Besides, I don't think a punched-card reader was ever delivered for the 32-bit supermini series they were using.)
A few years ago, a studymate of mine, working at the University supercomputer center, surprised me by telling me that for FEM, weather forecasting and similar typical supercomputer applications, Fortran is still the language of choice! It is no longer Fortran IV, though... Strangely enough, I cannot find on the Internet who said, during the discussions of the proposed language extensions that became Fortran 77: "I don't know what programming languages will look like in the year 2000, but they will be called Fortran" - one of the big gurus of the time, maybe Hoare or Dijkstra. When I paraphrased that statement in my earlier post, it was intended as a historical reference, but it seems the quote has not survived history. Whoever said it was perfectly right: Fortran 2003 is called Fortran, but looks nothing like Fortran IV.
It's interesting to see in this discussion that one of Python's more attractive features is seen as a disadvantage 8)
Once again, the debate here about whether the language is 'good' or not seems to have nothing to do with whether it actually is good or otherwise, but simply amounts to 'does it fit my idea of how a language should work'.
If I've learnt anything from over 40 years of software development, it is that (with a few domain specific exceptions) the skill of writing good software - don't mistake me here, I'm not saying that I write good software! - has almost nothing to do with the language you write it in, and everything to do with domain knowledge and the skill of the programmer(s).
Every language has rules; every language has flaws. Being aware of both and using/avoiding them so you produce good code is what it's all about.
The behaviour cited above as being 'problematic' is exactly why I use Python for certain types of project.
has almost nothing to do with the language you write it in, and everything to do with domain knowledge and the skill of the programmer(s).
I am glad you used the word "almost", because using Windows batch scripting for some devops work I have been doing has resulted in me having to work around the language and do things in a manner that I would not see as particularly elegant.
Of course I could rewrite it in PowerShell, which is on my list, but when you are working on business-critical systems, a large refactor is not always possible, largely due to the cost to the business.
It never ceases to amaze me how complex a process could be constructed with DOS batch scripts - often by relying on obscure side effects of small .COM files and the like. I've always liked PowerShell (although I rarely use it!) because it seemed a genuine attempt to provide a full batch environment for Windows with 'proper', controlled access to the entire system.
I do find the verbosity of the naming convention somewhat off-putting though (as indeed is the case for some other languages) - ridiculously long and complex variable names can be as difficult to manage as i, ii, iii and iiii etc!
(I once spent over a day hunting down a bug in some C++ code where a stupidly long descriptive variable name had been used in multiple places with two different internal spellings via a macro such that the compiler didn't 'notice' the variants!)
But a var (as in C#, say) isn't at all the same thing as an untyped (or rather, dynamically-typed) variable (as in Python). When you write var x = 5;, x is still an int just the same as if you'd written int x = 5;, you're just helping yourself save some keystrokes and letting the compiler infer the type (ok, you're not saving any keystrokes in that example, but not every type name is as succinct as int ). You can't then turn around and write x = "snuggles"; or the compiler will throw a fit, unlike Python where it would be perfectly legal. More and more people might be using var, but let's not confuse that with "no one will be using proper typed variables any more!"
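The contrast in a short Python sketch - rebinding a name to a value of a different type is perfectly legal, which is exactly what a C# var would reject at compile time:

```python
# In Python a name carries no type; the object it refers to does.
x = 5
print(type(x).__name__)  # int

x = "snuggles"           # legal rebinding to a different type
print(type(x).__name__)  # str
```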
Supporting and encouraging good (consistent) formatting is a good thing, but requiring it isn't.
(In my opinion) one of the strengths of C-like languages is that whitespace is pretty much unimportant. With C and C# -- unless you have directives -- each sequence of whitespace can be reduced to a single SPACE without affecting the meaning of the code.
When I write C, I try to put all the directives in their own files, away from the actual C code.