Like you said, customers change their minds, either because they simply do or because the business itself changed; things change whether or not they're written down in a 1000-page spec.
At the end of the day the customer wants it changed, and may even agree to pay for it; it's up to you to decide how hard it will be for you.
So what I think is that the software we write must be prepared to handle change in more ways than just the user input; we also must prepare it for business change, for scalability, extensibility, etc.
Usually this doesn't mean a whole bunch of extra work, especially if it's designed that way right from the beginning.
If the design is right you can start to think about software changes as a way of earning extra money instead of just losing it!
Our customers expect our SW to grow with them, so it is always in the spec that it has to be extensible and so on. They don't specify what that means, or how you test it (you have to be able to test any requirement). We always have that in mind, and it really helps if you assume it from the start.
Anyway, we are supposed to be product based now, so we have to plan from the start for the possibility of selling it to someone else. That means things like putting button labels in a file instead of hard-coding them, so the language can be changed.
(I ported a project to Arabic once, that was such fun!)
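The idea of keeping labels out of the code can be sketched roughly like this (a minimal Python sketch; the file name `labels_en.json` and the label keys are made up for illustration):

```python
import json
import os
import tempfile

# Hypothetical English label set; a real project would ship one
# such resource file per supported language.
LABELS_EN = {"ok_button": "OK", "cancel_button": "Cancel"}

def load_labels(path):
    """Read UI labels from a resource file instead of hard-coding them."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Write the English labels out, then load them back as the UI would.
path = os.path.join(tempfile.gettempdir(), "labels_en.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(LABELS_EN, f)

labels = load_labels(path)
```

Switching to Arabic (or any other language) then means swapping the file, not touching the code.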
What I consider to be defensive programming seems somewhat different from some descriptions floating around the internet. If I had to give a short description of what I consider defensive programming I would say "distrust, check, and if checks fail they do so loudly".
I think of defensive programming as two rules to follow while programming:
- A block of code (e.g. a function) should never handle external data (e.g. function parameters, data returned from other functions) without strictly checking that the data it receives are known good values. If the checks fail they should fail loudly and the block of code should not touch the data at all.
- A block of code should have its behaviour checked (e.g. check return/result data, unit testing). If the checks fail, again they should do so loudly.
I'm a strict follower of the first rule, but I confess that I'm a bit lax in relation to the second, especially regarding unit testing.
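The first rule might look something like this in practice (a minimal Python sketch; `apply_discount` and its parameters are invented for illustration):

```python
def apply_discount(price, percent):
    """Refuse to touch the data unless it is a known good value."""
    # Distrust and check: reject anything that is not a known good price.
    if not isinstance(price, (int, float)) or price < 0:
        raise ValueError(f"bad price: {price!r}")        # fail loudly
    # Same for the discount percentage.
    if not isinstance(percent, (int, float)) or not 0 <= percent <= 100:
        raise ValueError(f"bad percent: {percent!r}")    # fail loudly
    return price * (1 - percent / 100)
```

The point is that a bad value never reaches the calculation: the function either works on checked data or blows up where the problem can be seen.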
We make safety critical stuff so we have to, and it gets checked. Having said that, you can still write rubbish, but it comes out in the test and integration.
I like to try and write foolproof code. It's a bit of added fun to what could otherwise be boring. I am a badass and dreaded tester, because I have a rough idea where the bodies are going to be buried, having interred a few myself over the last decades. And yes, I have found a few nasties in my own code ... oops
The theory is there, but full application of said theory is impossible, IMHO.
I always try to work out everything the user could possibly do wrong, and code to catch it.
I consider that approach to be bad, security wise. Trying to prevent wrong behaviour/data is a flawed strategy and far too frequently fails. A better approach is to only accept known good behaviour/data. Yes, sometimes some good behaviour/data may be blocked, but bad behaviour/data will almost certainly also be blocked.
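The difference between the two strategies can be shown with input validation (a minimal Python sketch; the username rule here is just an example policy, not a recommendation):

```python
import re

# Whitelist: describe exactly what good input looks like,
# instead of trying to enumerate every possible bad input.
USERNAME_OK = re.compile(r"[A-Za-z0-9_]{3,20}")

def is_valid_username(name):
    """Accept only known good input; everything else is rejected."""
    return isinstance(name, str) and USERNAME_OK.fullmatch(name) is not None
```

A blacklist ("reject `<`, `>`, quotes, ...") misses whatever the attacker thinks of next; the whitelist rejects it automatically because it was never on the known good list.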
I use defensive programming:
- by proper unit testing while developing a particular module;
- by proper comment lines (describing the behaviour of each function/method) and regions;
- and of course, meaningful naming conventions (so it's easy to tell what something is used for) all over the application.
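Unit testing while developing a module can be as small as this (a Python sketch using the standard `unittest` module; `clamp` is a hypothetical module function invented for illustration):

```python
import unittest

def clamp(value, low, high):
    """Hypothetical module function under test: pin value into [low, high]."""
    return max(low, min(high, value))

class ClampTests(unittest.TestCase):
    def test_within_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_clamps_below_and_above(self):
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(42, 0, 10), 10)
```

Tests like these double as documentation of the intended behaviour, which fits nicely with the naming and commenting habits above.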