1. Try to write bug-free code by using the tools the software gods have given us
2. Accept that it still won't always work
3. Make sure that I can almost always find out what happened if something goes wrong in the field
If you have a high likelihood of being able to post-mortem field issues and figure out what went wrong, you have an enormously better chance of minimizing the number of errors that actually exist. If you can mostly only figure out errors that happen during development, those field errors can remain Heisenbugs for years and years.
I try to write code that is bug-free, and I diligently test and verify proper function before releasing it to the wild. That said, I recognize that all code, including mine, has bugs that will need to be corrected. I therefore write code that is easy to maintain and correct.
I do get some of that (I do the UI's in our products), but with us most bugs arise from machinery behaving in unexpected or undocumented ways. We also have the problem of 'surfing the wave' of continual change in interfaces to that machinery, and maintaining compatibility with older hardware still in the field.
Software development is a constant war against the universe. The developer tries to create bug-free code, and the universe creates dumber users to break the applications. So far, the universe is winning.
Jokes aside, you nailed it. One must remain realistic. No one can write perfect software that never gets hit by bugs.
If something has a solution... why worry about it? If it has no solution... what reason is there to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
There is a classical study of bugs in the OS/360 operating system - I guess lots of you have seen that sawtooth curve of how the bug count increases with every release, until they do a thorough cleanup, which gives a great drop in bugs, until it rises again.
One interesting point is that even at the low points, the number of known bugs that hadn't been fixed was around 1000. Anyone may claim to be a lot better than that (for similarly complex software), but I won't ever believe them.
(And for fixes: Somewhere I read that IBM has, or had, an official term for a patch: a PTF, a "Program Temporary Fix". But within IBM, this was usually read as a "Permanent Temporary Fix".)
Over the years one is burned by various bugs - more often than not because users do unpredictably stupid things - causing problems that no sane path would ever lead to.
So - since I consider the user "the enemy" (we wouldn't have to code defensively if they weren't; one defends against enemies) - I try as hard as possible to outsmart them. This entails a combination of preventing hard-learned painful lessons from recurring and anticipating their boundless "ignorance mitigated by stupidity".
Alas, as interactions (with on-screen components) become more complex, new avenues of attack become available. You take your best shot at anticipation but, admittedly, they have the advantage.
A new plot for Star Wars! The Force, the Dark Side, and the Stupid Side, which can easily overpower them both.
I agree with your philosophy that the user is "the enemy". They ultimately do things that you would never expect. For instance, instead of entering a number somewhere, they will try entering their name. Why? Because they are just plain stupid, and often they even try to break your program.
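The "name instead of a number" case above is a classic input-validation problem. Here is a minimal sketch of defensive parsing for such a field; the function name, the default, and the messages are all hypothetical, not anything from the posts above:

```python
def read_quantity(raw: str, default: int = 1) -> int:
    """Parse a user-supplied numeric field, tolerating junk input.

    Hypothetical example: the field expects a number, but users may
    type their name, leave it blank, or paste extra whitespace.
    """
    raw = raw.strip()
    if not raw:
        return default  # blank field: fall back to a sensible default
    try:
        value = int(raw)
    except ValueError:
        # Reject "Alice" and friends with a message the user can act on
        raise ValueError(f"expected a number, got {raw!r}")
    if value < 1:
        raise ValueError(f"quantity must be positive, got {value}")
    return value
```

The point is not the three lines of parsing but the posture: every value crossing the user boundary is treated as hostile until proven otherwise, and rejections produce a message rather than a crash.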
So, here is a true, yet very funny story.
Back when I was in college in the early '80s, we used to joke about what would happen in 2000, when a year such as 00 would mean 1900 instead of 2000. Yeah, we had one computer back then that still used punch cards too. A newer computer they had just installed had terminals, but languages still made use of 2-digit years even on it. Thus, as programmers, we were already well aware of the Y2K problem long before it ever hit the news media.
In 1978 I began working with genealogy. In genealogy there is no such thing as a 2-digit year. Your great-grandfather was born in 56. Is that 1756 or 1856? Nowadays it could even be 1956. Thus I got used to always using 4-digit years very quickly. Back then, genealogy records were all on paper, so we didn't have to worry about computers.
Now move ahead to 1994. By then I had my own business and released a program for Windows 3.1 that was designed for and used a touch screen. No one even knew what a touch screen was back then, but that is a different issue. A gentleman bought the program from me in late 1994. In January 1995 he called me up and said that he had found a bug that crashed the program. I looked at where he said the bug was but couldn't duplicate it. I then had him walk through and tell me exactly what he was doing, letter for letter, number for number what he was entering and click by click what he was clicking on. I followed on my screen what he was entering. Sure enough, there was a problem! Then I realized what he was doing. The dialog box he was on where the bug occurred required entering a date. The software automatically entered the current date in the date field when the dialog box was opened, which is the date you would use most of the time when using the software.
So what was wrong? He was replacing the date with the date for the following day. He was entering it as 01/25/95. See the problem yet? He entered the year as 2 digits. I was so used to always using 4-digit years, due to my genealogy experience, that it didn't occur to me that someone would use 2 digits to enter the year in that field. The default date that was entered had a 4-digit year as well. I explained to him that if he entered a 4-digit date it would work just fine. His immediate response was "well that's dumb, no one ever uses 4 digit dates like that". Later that day I was able to send him an update so that he could enter a 2-digit year. The fix was such that if you entered a 2-digit year between 00 and 90, it would assume you meant 2000 and not 1900. The software was Y2K compatible well before 2000 because of the way I wrote it originally. I had begun working on the design of it back in 1991 (targeting DOS at first, but I switched to Windows 3.0 and 3.1 soon after).
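The fix described above is a classic pivot-window expansion of 2-digit years. A minimal sketch, using the 00-90 window from the story (the function name and the validation are my own; real systems choose the pivot to suit their data):

```python
def expand_year(two_digit: int, pivot: int = 91) -> int:
    """Expand a 2-digit year using a pivot window.

    As in the story above: years 00..90 are taken to mean 20xx,
    and years 91..99 to mean 19xx. The pivot of 91 matches that
    post; it is not a universal constant.
    """
    if not 0 <= two_digit <= 99:
        raise ValueError("expected a two-digit year (0..99)")
    return 1900 + two_digit if two_digit >= pivot else 2000 + two_digit
```

So an entry of 95 in January 1995 still expands to 1995, while 25 expands to 2025. The window trades a far-future ambiguity for a near-past one, which is exactly the right trade for dates typed "today".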
It wasn't long after this that the whole Y2K issue hit the mainstream news media and everyone had to get used to the idea of using 4-digit years. I almost felt like calling this guy back up and asking what he thought about my software requiring a 4-digit year, even before Y2K became widely known. I never did, but I have always thought it would have been funny to have done so.
So, yes, the user is the "enemy". My software was perfect, and Y2K compatible, long before it became fashionable. It was the "enemy" who thought that it wasn't.
I am happy that you put "enemy" in quotes, because I can't think of it that way, except when joking about it. I am happy when I realize: "Oh, so that is what you need! Sure, I think I can make that for you." Understanding the real needs, replacing my initial, and incorrect, assumptions. That is not an enemy's work; that comes from a helper.
Sometimes, I must help the user to help me. E.g. I must guide the user to, or teach him, a certain formalism required for identifying the needs, to make the customer understand his own needs better. (Not a formalism for how to build a solution; that is something different.) Usually, the customer is thankful for getting a better understanding of what he is doing, computerized solution or not.
I see the user as my friend. Unquoted.
To follow up your Y2K story: I guess I beat you there... In 1981, I was an intern with a company with its own proprietary CPU architecture, proprietary OS and proprietary binary date format. I was working late one night when the date stamps suddenly went crazy on all new files created, on all computers - but only when viewed through some subsystems. We were working at a rather low layer, so I quickly spotted the problem: the (binary format) time stamps had expanded into the most significant bit. The subsystems showing completely crazy dates had declared the time stamp as a signed value, treating the MSB as a sign bit. I called the chief OS architect, and he called the entire OS group in for a panic action; they sat all through the night analyzing all the core systems to see if any of them would crash. Fortunately, only user-level applications had declared the time stamp as signed, and nothing crashed. We had to send out messages to all customers the next day, and within a couple of weeks, updated versions of all the affected applications were sent out. Most of our customers were technically qualified and understood the problem, so we were mostly "excused" for the bug.
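The failure mode above is easy to reproduce: the same bit pattern flips meaning the instant the counter reaches the most significant bit, depending only on whether a declaration said signed or unsigned. A sketch assuming a 32-bit time stamp (the post doesn't say how wide the proprietary format actually was):

```python
import struct

def as_signed32(word: int) -> int:
    """Reinterpret a 32-bit unsigned value as signed (two's complement),
    the way a subsystem that declared the time stamp 'signed' saw it."""
    return struct.unpack("<i", struct.pack("<I", word))[0]

# The moment the counter crosses into the most significant bit,
# code that declared it signed sees a huge negative number:
before = 0x7FFFFFFF   # last tick before the MSB is set
after  = 0x80000000   # first tick with the MSB set

print(as_signed32(before))  # 2147483647  -> still a sane value
print(as_signed32(after))   # -2147483648 -> "completely crazy" dates
```

This is the same shape of bug as the Unix 2038 problem: nothing overflows in storage, only in interpretation.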
As a solo dev with a number of desktop business apps, early on I found that the most important features I would ever write were program updaters and support tools. Combined with an automated deployment system it means that I can sometimes find and fix a problem/bug, recompile, deploy, and patch a customer's system all in a 15 minute phone call/remote. More often than not, bugs are fixed the same day as reported, depending on complexity.
My current software apps have no reported bugs. My customers are great at testing!
Well, that's a sign of caring for your users. I'd push the "good developer" part a bit further.
A good developer generally (we all make mistakes and have errors of judgement) writes code that is fault-tolerant and also lessens dependencies, so the bug fix is generally pretty easy and doesn't have huge downstream effects.
Therefore my code will be resilient to them.
Those that can't be prevented will be handled.
Those that can't be handled will be rendered harmless.
Those which remain will be fixed when the time comes.
GCS d--(d+) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
In reality, a balance between good code and error handling is essential. Too sloppy with your code and there will be lots of errors; no matter how good your handling of them, your code will be buggy. Too enthusiastic with your error handling and you could be doing that for eternity. Prevention is always better than cure. Whatever the balance, there are still times you just have to ensure the program fails gracefully when the highly unlikely and unanticipated event happens. The zone of perfectionism is not achievable and will waste money, time and resources, so don't try for it. Try for "better than good enough"; by all means, try for excellent. But perfect? No.
Also, a note for pedants like myself: in a perfect world I make no mistakes, my colleagues make no mistakes, and the compiler makes no mistakes. Meanwhile, the environment never fails, always works as advertised, and connections never, ever drop. The question does not assume such a perfect world; it assumes the imperfect one we have.
A couple of days ago I read about some ornithologists' worries about the extremely mild January we are having in Norway this year. Yesterday, the temperature was up to 13 °C in my town; a more typical temperature this time of year would be minus 13 °C. The problem is that if spring comes so early that bugs and larvae of all sorts wake to life, and turn into grown butterflies and what have you, before the migratory birds come back from their winter "vacation" in the south, there may be very little left to feed their small ones with.
So, some bugs are great! We want them. We want the bird song.