We make safety-critical stuff, so we have to, and it gets checked. That said, you can still write rubbish, but it comes out in test and integration.
I like to try to write foolproof code. It's a bit of added fun in what could otherwise be boring work. I am a badass and dreaded tester, because I have a rough idea of where the bodies are going to be buried, having interred a few myself over the last few decades. And yes, I have found a few nasties in my own code ... oops.
The theory is there, but full application of said theory is impossible, IMHO.
I always try to work out everything the user could possibly do wrong, and code to catch it.
I consider that approach to be bad, security-wise. Trying to prevent wrong behaviour/data is a flawed strategy that fails far too frequently. A better approach is to accept only known-good behaviour/data. Yes, some good behaviour/data may occasionally be blocked, but bad behaviour/data will almost certainly be blocked too.
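The difference between the two strategies can be sketched in a few lines. This is a hypothetical username check, not anyone's production code: the block-list tries to enumerate "bad" characters, while the allow-list accepts only a pattern known to be safe.

```python
import re

# Block-list approach (flawed): reject characters we think are bad.
BAD_CHARS = re.compile(r"[;'\"<>]")

def blacklist_ok(username: str) -> bool:
    return not BAD_CHARS.search(username)

# Allow-list approach: accept only what we know is good.
GOOD_USERNAME = re.compile(r"[A-Za-z0-9_]{1,32}")

def whitelist_ok(username: str) -> bool:
    return GOOD_USERNAME.fullmatch(username) is not None

# The block-list misses inputs it never anticipated:
print(blacklist_ok("bob\x00--drop"))   # True  -- null byte slips through
print(whitelist_ok("bob\x00--drop"))   # False -- rejected: not known-good
print(whitelist_ok("alice_42"))        # True
```

The point is the failure mode: the block-list is only as good as the attacker's imagination you managed to anticipate, while the allow-list fails closed on anything you didn't think of.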
I use defensive programming:
- by proper unit testing while developing each module;
- by proper comments (describing the behaviour of each function/method) and regions;
- and, of course, by meaningful naming conventions throughout the application, so that a name makes its purpose obvious.
I had a boss like that at one time. His code constantly crashed in production. It took the team over a month, with overtime, to fix one of the projects he had developed using his leet and fast programming skills.