All good rules of thumb, although the if/switch one gets sticky, so that's a rule to be bent a LOT.
Still, the idea is sound - don't use conditionals where you can use polymorphism -
although in .NET the runtime still does work to cast.
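To illustrate, a minimal C# sketch (the Shape/Circle/Square types here are mine, purely illustrative): the switch on a type tag disappears because virtual dispatch does the branching.

    using System;

    abstract class Shape { public abstract double Area(); }
    class Circle : Shape { public double R; public override double Area() => Math.PI * R * R; }
    class Square : Shape { public double Side; public override double Area() => Side * Side; }

    class Demo
    {
        static void Main()
        {
            // Instead of: switch (shape.Kind) { case Kind.Circle: ...; case Kind.Square: ...; }
            Shape shape = new Circle { R = 2.0 };
            Console.WriteLine(shape.Area()); // virtual dispatch does the branching for you
        }
    }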
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
---
Marc Clifton wrote: Don't write an if unless you know what the else does and why. I have seen programmers consistently adding "else /* do nothing */;" after every "if" that doesn't have a natural "else".
I think that clutters up both the logic and the code. Lots of operations are conditional; you perform them when the conditions are met. Otherwise you don't, but you don't have to say that explicitly; it is obvious! It follows from the very idea of a conditional operation.
Besides: I prefer to turn the warning level up to maximum while coding. When I take over code with zillions of empty "else"s, I get a zillion warnings about "possibly unintended empty statement".
What is the real difference between an "if" and, say, a "while"? You suggest that you should always indicate an alternative action when the condition is not met. So how do you specify the alternative action when the "while" condition is not met? If you do not, why not? Isn't that a very similar situation? Why is it less important to know what to do when a "while" condition is false than when an "if" condition is false?
Actually, I have used one language quite a bit where you could specify what to do when a "while" fails: you could conditionally break out of "for" loops, and distinguish between breakout and completion of the loop:

    for listElement in listHead:nextpointer do
        while listElement.key <> wantedKey;
    exitfor
        output ("sorry, the wanted key is not in the list");
    exitwhile
        output ("found! I can process this list element for you");
    endfor;

This is actually a very nice flow control construction, which is a mess to duplicate in C if you want the "exitfor" and "exitwhile" clauses to execute within the context of the loop (e.g. with access to loop-local variables), which is one of the essential points.
I honestly miss that flow construction. Why can't other languages offer it?
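The closest emulation I know of in C-family languages is hoisting the loop variable out of the loop so both outcomes can still see it. A minimal C# sketch, with a hypothetical Node list class:

    // Hypothetical singly linked list node, just for illustration.
    class Node { public int Key; public Node Next; }

    static class ListSearch
    {
        public static void Find(Node listHead, int wantedKey)
        {
            Node element = listHead;                 // hoisted so both outcomes can see it
            while (element != null && element.Key != wantedKey)
                element = element.Next;

            if (element == null)                     // the "exitfor" case: list exhausted
                System.Console.WriteLine("sorry, the wanted key is not in the list");
            else                                     // the "exitwhile" case: key found
                System.Console.WriteLine("found! I can process this list element for you");
        }
    }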
---
I try to avoid adding a comparison to a comparison. And if I do, I don't do Yoda conditions.
---
Heh. The Yoda conditionals were hammered into me in the late 80s/early 90s.
The multiple comparisons are a necessary evil, as TKey isn't directly comparable. You have to use its IComparable<T> interface. Oh, how I wish .NET would let you declare a contract on operators. You can't. It's a limitation of .NET's generic types.
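To make the limitation concrete, a minimal sketch (the method name is mine): a generic constraint can demand IComparable<TKey>, but there is no constraint that demands operator <.

    using System;

    static class Sketch
    {
        // "where TKey : IComparable<TKey>" is as close as .NET generics get to a
        // contract; there is no constraint that requires TKey to implement '<'.
        public static bool IsBefore<TKey>(TKey a, TKey b) where TKey : IComparable<TKey>
        {
            // return a < b;            // does not compile for a generic TKey
            return a.CompareTo(b) < 0;  // negative result means "a sorts before b"
        }
    }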
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
---
If you remember to put the constant on the left side, you would also remember to double the equals sign.
And why the h*** do we have to double the equals sign? This thread makes me miss Pascal so much!
In Norwegian, "yoda" sounds like "joda...", usually pronounced with a sigh, meaning "yes, but...": I hear what you say, but I am certainly not sure that you are right. So "yoda" may be an appropriate term.
Pascal, and several other languages from that period, were designed by experts on formal languages, parsing etc. C is based on a collection of scraps left over from an early-days space-invasion game implementation. OK, those students were certainly clever, but they were not experienced language designers.
---
It's not just about remembering, it's about typos. A better argument is that compilers these days catch accidental assignment, but some of us have just had certain practices drummed into us for years, and they stick.
The double equals sign is necessary in the C family of languages because assignment and equality testing are distinct operations, and each needs its own operator.
And you may find the C language family inelegant, but there's a reason it carried the day and Pascal, well... didn't.
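As an aside, a minimal C# sketch of what "compilers these days catch" looks like in practice (variable names are mine):

    int x = 0;
    // if (x = 5) { }          // compile error in C#: int doesn't convert to bool,
    //                         // so the classic C typo can't even happen here...
    bool done = false;
    if (done = true)           // ...but with bools it still compiles (warning CS0665)
        System.Console.WriteLine("oops - assigned instead of compared");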
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
---
The big mistake was to use the single equals sign for assignment.
Many languages, from Algol to Pascal to Ada, use := for assignment. APL has a special assignment character. Lisp uses keywords. Classic Basic uses LET. The real problem is: why does C use the equals operator for assignment?
Pointing to that is an explanation for why a double == is needed for equality tests, but not an excuse.
If you try to suggest that C squeezed out Pascal because C is "better", you suggest (with great force) that your main field of expertise is not formal language design.
VHS won the market because it was better, didn't it? And MP3 won over SACD/DVD-A because it was better? TCP/IP won over the OSI protocol stack because it was better? Well, that depends on the criteria. If your only criterion is "degree of market penetration", all of these were "best". But please don't pretend that this is the only imaginable criterion.
---
Better is subjective. I'm saying more people found it usable, which speaks to its versatility.
Perhaps it would have been better for C-family languages not to use equals as an assignment operator.
But it's also not the first thing about the language I'd change, nor does it say much to me about formal language design.
As someone who has written plenty of parsers and parser generators that accept formal grammars, I can tell you C's biggest sin is that type declarations need to be fed back into the lexer to resolve grammar constructs (the classic case: whether "A * B;" declares B as a pointer or multiplies A by B depends on whether A has previously been declared as a type). This breaks the separation of lexer and parser. It's not quite as bad as Python's significant whitespace, but it's a pretty ugly thing to have to hack together in a parser.
But then, I'm not Niklaus Wirth. I'm just someone who writes code.
That being said, I don't holy-roll. I use what works. Pascal doesn't. There just aren't modern tools for it. It's not quite as dead as Latin, but it's catching up.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
---
Think twice, write once.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
---
Only twice? I find a bout of analysis paralysis followed by headdesking a few times is really the way to go.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
---
It's a metaphorical twice, as in more than once.
Although I have found that sometimes just taking a shot in the dark can be useful, if you can learn from failure.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
---
I hear you. I just did that. But I learned from success. I mean, I was working from some sample code, in C++, on implementing B+ trees, but I ported it to C# and then rewrote it using .NETisms and adding features.
Then I realized it was almost pointless without a little database system to go with it, because it only optimizes situations where nodes are directly tied to disk access.
On the other hand, I did the same thing with the regular B-tree and it worked flawlessly, and it is useful as an in-memory auto-balancing tree structure (inserts and deletions are slow; searches are very fast and consistent - every search takes the same number of comparisons).
So, woo.
But I guess someone has already implemented one here. Not sure how mine stacks up, but it works.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
---
When thinking fails, code - and fail - to move forward.
---
Similar to the woodworking saying - measure twice, cut once.
---
If you can't find a way to keep your logic nested <= three levels deep, find another profession or project, because you certainly don't want to be the one to debug that sucker. Extracting the nested logic into a function is an acceptable solution.
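A minimal C# sketch of the function escape hatch (the Order/Ship names are hypothetical): guard clauses turn three levels of nesting into one.

    using System.Collections.Generic;

    class Order   // hypothetical, just to make the sketch compile
    {
        public List<string> Items = new List<string>();
        public bool IsPaid;
    }

    static class Shipping
    {
        // Nested version - three levels deep before any work happens:
        //   if (order != null)
        //       if (order.Items.Count > 0)
        //           if (order.IsPaid)
        //               Ship(order);

        // Extracted and flattened with guard clauses - same logic, one level deep.
        public static void ShipIfReady(Order order)
        {
            if (order == null) return;
            if (order.Items.Count == 0) return;
            if (!order.IsPaid) return;
            Ship(order);
        }

        static void Ship(Order order) { /* hypothetical stub */ }
    }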
---
I think it depends on the logic, and the rule should be amended to non-trivial logic, because I wouldn't count things like null checks - validation, that sort of thing - unless they're convoluted. But that's me, and it has served me well enough. Usually my debugging problems are complicated. I almost never actually debug. I Ctrl+F5 in Visual Studio and either get the expected result, or I usually know where I went wrong, because I develop very iteratively.
When I was growin' up, I was the smartest kid I knew. Maybe that was just because I didn't know that many kids. All I know is now I feel the opposite.
---
Something similar: Never compare floats for equality. It may bite sooner rather than later.
---
Yep. Been there, done that, got the scars on my back from self-flagellation for trying it.
Software Zen: delete this;
---
That is also one of the mantras I preached when teaching programming. But even though we had been teaching the kids about limited precision, it was very difficult for them to understand that "if ((1.0/3.0)*3.0 == 1.0)" could fail. (Except that if you really used constants, or compile-time-evaluated expressions, an optimizing compiler might remove the entire "if".)
Students often have a vague understanding of terms like "integer" and "float" (or "real"). So I preferred to refer to them as "counts" and "measurements". That made it a lot easier for them to understand how both integers and floats behave in the computer.
One of the great details of the APL language is the environment variable quadFUZZ (if my memory of the name is correct): when comparing floats, if the difference is less than quadFUZZ, the values are treated as equal. (I believe that the fuzz was actually scaled by the actual float values, so it was a relative, not absolute, tolerance, but I am not sure - APL is too long ago!)
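A minimal C# sketch in the same spirit (the name and default tolerance are mine): scale the tolerance by the operands' magnitude, as the relative fuzz described above does.

    using System;

    static class FuzzCompare
    {
        // Relative tolerance, in the spirit of APL's comparison fuzz.
        public static bool NearlyEqual(double a, double b, double relTol = 1e-9)
        {
            double scale = Math.Max(Math.Abs(a), Math.Abs(b));
            return Math.Abs(a - b) <= relTol * scale;
        }
    }

    // 0.1 + 0.2 == 0.3 is false in IEEE doubles (the sum is 0.30000000000000004),
    // but FuzzCompare.NearlyEqual(0.1 + 0.2, 0.3) is true.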
---
Oddly enough, I can't remember any problems of the kind you're talking about from my own experience. And I am not even a programmer by trade; I studied physics, and programming was a side gig at first.
To me, integer numbers are exact, and floats are approximations, as it's impossible to represent arbitrary numbers with discrete values. They may be good enough for daily use, but they may fail, and when they do, they fail. Maybe that's why I didn't have any problems; the concept of approximations is deeply nested in a physicist's mind.
Well, that, and I've recently built a system which used integers for its measurement values (mostly because the sensor returns integers in units of 0.01 °C). So your vocabulary would have failed me spectacularly.
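A minimal C# sketch of that scheme (the reading and threshold values are made up): keep the raw hundredths-of-a-degree integers for storage and comparison, and convert to float only for display.

    int rawHundredths = 2371;                  // hypothetical sensor reading: 23.71 °C
    int threshold = 2500;                      // 25.00 °C - exact integer comparison
    bool tooWarm = rawHundredths > threshold;  // no float equality anywhere

    double celsius = rawHundredths / 100.0;    // convert to float for display only
    System.Console.WriteLine($"{celsius:F2} °C, too warm: {tooWarm}");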
---
Students insist that when you measure out 3 kg of flour for your bread, that is a count of the number of kilograms. Their body height is a count of centimeters.
It goes the other way, too: they may use a float for the number of students in the class, arguing that when they increase the number by 1.0 for each newcomer, the float still represents the count of students. And, the more advanced ones argue, with a float you can count any number of units. A plain int can't even count the number of living humans!
Sure, most of these problems come with students who have been playing with computers in their bedrooms since they were ten, all self-taught, having picked up one little bit here and one there, with no trace of discipline whatsoever. But frequently, these become class heroes: other students learn "smart tricks" from them, and "how real programmers do it, not the way that silly lecturer tells us to". So they can have a large influence on otherwise "innocent" students.
This is mostly a problem with students. With professional programmers, the problem is with those who do not fully realize that e.g. a comparison does NOT return an integer (-1, 0, 1) but "less", "equal", "greater", and you should NOT compare it to numerical values. If you declare non-numeric, yet ordered, values as an enum, and create an array of, say, weather[january..december], you canNOT index this array with an integer: "because May is the fifth month, I can use 5 as an index... no, wait, I have to use 4, because it is zero-based!"
One specific example: in my own C code, I used to define "ever" as ";;" so that an infinite loop is made explicit as "for (ever) {...}" (inspired by the CHILL language, where "for ever" is recognized by the compiler). I used this in one of the code modules I was responsible for at work. It was discovered by one of the young and extremely self-confident programmers, who was immensely provoked by it: he promptly replaced it with the "proper" way of writing an infinite loop, "while(1){...}". He then searched through our entire codebase for other places where I had committed similar sins, adding a very nasty remark in the SVN log for each and every occurrence, requesting that everybody in the future refrain from such inappropriate funniness - we should do our programming in a serious manner.
Oh, well - I didn't care to argue. Why should I? Readable, easily comprehensible code is most essential when it will be read by people who are not into embedded systems code. Or rather: to a developer of embedded C code, it is far easier to recognize "while(1)" as an infinite loop than "for (ever)".
---
Don't compare datetimes for equality, either, particularly if they don't all come from the same 'source'.
---
That, on the other hand, may work just fine. It depends on the language/runtime library and the source of the dates, but when I want to know if some date is today, equality works.
Again, assuming the runtime library helps and you know what you're doing. There's a reason why TheDailyWTF has a couple of stories on mishandling time stamps.
I suppose we could add "Don't roll your own date/time handling" to the list of useful mantras. There are heaps of ways to get it wrong, and even if you test all you can, it may still fail when a leap year occurs.
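For the "is it today" case, a minimal C# sketch (the timestamp is made up): compare calendar dates, not raw timestamps, so the time-of-day part can't spoil the equality.

    using System;

    DateTime stamp = new DateTime(2024, 4, 3, 19, 0, 0); // hypothetical timestamp
    // DateTime.Date truncates the time-of-day part, so date equality is exact.
    bool isToday = stamp.Date == DateTime.Now.Date;
    Console.WriteLine(isToday);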
---
Pure dates are generally not a problem, as long as you (or the coder whose output you are getting the date from) don't or didn't do something very stupid. Even here, however, timezone issues can cause problems. I once had an issue caused by the author of a device's firmware recording what should have been a pure (midnight) datestamp as the corresponding local datetime. Since I am in the GMT-5 timezone, midnight on April 4 became 7 pm on April 3!
Trying to compare datetimes for simultaneity, however, is almost always a severe PITA when the source clocks are not both perfectly synchronized and using the same basic internal representation for clock time.
---
Storing dates in UTC internally solves almost all issues. Not all of them, but almost all. Comparing time stamps in milliseconds for equality may work or may fail, depending on the context. In a scientific context, all clocks involved are precise down to milliseconds at worst, and way more precise at best. That, and time differences of a few milliseconds make huge differences.
But it all boils down to context. And yeah, I've seen some very stupid date handling myself. My point is: while comparing floats for equality is a horrible idea by default and always, comparing dates for equality may work very well depending on the circumstances. Well, that, and dates are like encryption: there are heaps of ways to get it wrong, many of them very subtle but still destructive, and only a few (if not just one) ways to get it right.
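A minimal C# sketch of the store-in-UTC discipline, reusing the GMT-5 anecdote from above (the concrete date is made up):

    using System;

    // The trap described above: a "midnight" datestamp stored as local time
    // can land on the previous calendar day once time zones enter the picture.
    DateTime midnightUtc = new DateTime(2024, 4, 4, 0, 0, 0, DateTimeKind.Utc);
    DateTime asLocal = midnightUtc.ToLocalTime();  // 7 pm on April 3 in GMT-5

    Console.WriteLine(midnightUtc.Date);           // 2024-04-04
    Console.WriteLine(asLocal.Date);               // 2024-04-03 when running in GMT-5

    // Moral: store and compare the UTC value; convert to local time only for display.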