From the Wikipedia entry it seems "defensive programming" is more about avoiding security holes (buffer overflows, etc.). The phrase "continuing function of a piece of software in spite of unforeseeable usage of said software" is a bit vague. To me that sounds like using a spreadsheet program to do video editing.
Ok, we're talking about software here aren't we?
So what can be less foreseeable than developing a piece of code?
At the time of development you may have a precise idea of what it is meant to do, with strictly defined data input, but it will almost certainly change.
It may take a day, a month, a year but if the software itself lives, your code will have to handle some "unexpected" scenarios, and this is where your "defense" will be put to the test.
Businesses change, people change, everything changes, so every piece of code you write must adapt, even if just by handling errors and reporting them correctly to the IT department.
Being prepared for errors won't consume more development time if it's implemented by design. Think about it right from the beginning and it will feel natural, not like a hack.
Hmm, if you do this for a living you work to a written and agreed specification. If the customer changes his mind or wants more (they always do) he has to write it down, agree the details with you and, of course, give you more money.
You can never guess in advance all the things that can go wrong, though after a few years you get a pretty good idea, and do things like checking input data even if you sent it yourself from the previous module. If you know your customer you can anticipate future changes and put comments in - "comment this bit back in for southern hemisphere".
Like you said, customers change their minds, either because they just do or because the business itself did, but things change whether they're written in a 1000-page spec or not.
At the end of the day the customer wants it changed, and even agrees to pay for it; it's up to you to decide how hard the change will be for you.
So what I think is that the software we write must be prepared to handle change in more ways than just the user input; we must also prepare it for business change, for scalability, extensibility, etc.
Usually this doesn't mean a whole bunch of extra work, especially if it's designed that way right from the beginning.
If the design is right you can start to think of software changes as a way of earning extra money instead of just losing it!
Our customers expect our SW to grow with them, so it is always in the spec that it has to be extensible and so on. They don't specify what that means, or how you test it (you have to be able to test any requirement). We always have that in mind, and it really helps if you assume it from the start.
Anyway, we are supposed to be product based now, so we have to plan from the start for the possibility of selling it to someone else: things like putting button labels in a file instead of hard-coding them, so you can change the language.
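A minimal sketch of that idea in Python. The key=value file format, file name, and label keys here are all made up for illustration; real projects would more likely use `gettext` or a proper resource framework:

```python
import os
import tempfile

def load_labels(path, fallback=None):
    """Parse a simple key=value label file into a dict.

    Blank lines and '#' comments are skipped; entries override
    any values given in the optional fallback dict.
    """
    labels = dict(fallback or {})
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, sep, value = line.partition("=")
            if sep:  # only accept well-formed key=value lines
                labels[key.strip()] = value.strip()
    return labels

# Write a sample language file and load it back.
d = tempfile.mkdtemp()
path = os.path.join(d, "labels_en.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("# English UI labels\nok_button=OK\ncancel_button=Cancel\n")

labels = load_labels(path)
print(labels["ok_button"])  # prints "OK"
```

Swapping `labels_en.txt` for, say, an Arabic label file then changes every button caption without touching the code.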
(I ported a project to Arabic once, that was such fun!)
What I consider to be defensive programming seems somewhat different from some descriptions floating around the internet. If I had to give a short description of what I consider defensive programming I would say "distrust, check, and if checks fail they do so loudly".
I think of defensive programming as two rules to follow while programming:
- A block of code (e.g. a function) should never handle external data (e.g. function parameters, data returned from other functions) without strictly checking that the data it receives are known good values. If the checks fail they should fail loudly, and the block of code should not touch the data at all.
- A block of code should have its behaviour checked (e.g. checking return/result data, unit testing). If the checks fail, again they should do so loudly.
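Both rules can be sketched in a few lines of Python; the function name and the 0-100 range here are invented for the example:

```python
def set_discount(percent):
    """Rule 1: distrust external data; accept only known good values,
    and fail loudly (raise) instead of silently clamping or guessing."""
    if not isinstance(percent, (int, float)) or isinstance(percent, bool):
        raise TypeError(f"percent must be a number, got {type(percent).__name__}")
    if not 0 <= percent <= 100:
        raise ValueError(f"percent must be in [0, 100], got {percent}")
    return percent / 100.0

# Rule 2: check the block's behaviour with loud assertions (a tiny unit test).
assert set_discount(25) == 0.25
try:
    set_discount(150)
except ValueError:
    pass  # the bad value was rejected loudly, not silently accepted
```

The point is that `set_discount(150)` never returns a half-plausible number for downstream code to trip over later; the failure happens at the boundary, where it is cheapest to diagnose.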
I'm a strict follower of the first rule, but I confess that I'm a bit lax in relation to the second, especially regarding unit testing.
We make safety critical stuff so we have to, and it gets checked. Having said that, you can still write rubbish, but it comes out in the test and integration.
I like to try and write foolproof code. It's a bit of added fun to what could otherwise be boring. I am a badass and dreaded tester, because I have a rough idea where the bodies are going to be buried, having interred a few myself over the last decades. And yes, I have found a few nasties in my own code ... oops
The theory is there, but full application of said theory is impossible, IMHO.
I always try to work out everything the user could possibly do wrong, and code to catch it.
I consider that approach to be bad, security wise. Trying to enumerate and block wrong behaviour/data is a flawed strategy that fails far too frequently. A better approach is to accept only known good behaviour/data. Yes, sometimes some good behaviour/data may be blocked, but bad behaviour/data will almost certainly be blocked too.
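The contrast is easy to show in Python; the username rule below is a made-up example of an allowlist:

```python
import re

def sanitize_blocklist(username):
    """Blocklist approach (flawed): strip the bad characters we
    happened to think of. The list is always incomplete."""
    return username.replace(";", "").replace("--", "")

# Allowlist approach (better): define what good input looks like
# and reject everything else. Here: 1-32 word characters.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{1,32}")

def accept_username(username):
    if not USERNAME_RE.fullmatch(username):
        raise ValueError(f"invalid username: {username!r}")
    return username

assert accept_username("alice_01") == "alice_01"
```

The blocklist version happily passes `"DROP TABLE users"` through untouched; the allowlist version rejects it, along with anything else nobody thought to forbid.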
I use defensive programming:
- by proper unit testing while developing a particular module;
- by proper comment lines (for the behaviour of each function/method) and regions;
- and of course, by meaningful naming conventions (so one can easily tell what something is used for) all over the application.
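Putting the unit-testing and naming points together, a small sketch with Python's standard `unittest` module; `parse_port` and its tests are hypothetical, but the descriptive names make each test's intent obvious at a glance:

```python
import unittest

def parse_port(value):
    """Hypothetical helper: convert a config string to a TCP port number."""
    port = int(value)  # raises ValueError on garbage, loudly
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTests(unittest.TestCase):
    """Unit tests written alongside the module, as suggested above."""

    def test_accepts_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_rejects_out_of_range_port(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_port("http")

program = unittest.main(argv=["tests"], exit=False, verbosity=0)
```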