Lately I have begun to question how far an application should go to allow users to address possible errors. This may be answering my own question, but it dawned on me while writing that it comes down to this: at what point does error checking and correction begin to affect application build costs and performance?
What do others think?
how far an application should go to allow users to address possible errors?
Depends on what the application does.
at what point does error checking and correction begin to affect application build costs and performance?
I have never seen a user-facing (human) application where validation impacted performance, at least not where the problem wasn't traced to some basic misuse of something (perhaps overuse or incorrect use of regexes).
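To make the regex-misuse point concrete, here is a minimal sketch (the function names and pattern are my own, purely for illustration): a pattern with nested quantifiers that backtracks catastrophically on non-matching input, next to a plain loop that validates the same thing in linear time.

```python
import re
import time

# Nested quantifiers like (a+)+ force the regex engine to try
# exponentially many ways to split the input before failing.
# Each extra 'a' in the payload roughly doubles the running time.
BAD_PATTERN = re.compile(r'^(a+)+$')

def validate_with_regex(value: str) -> bool:
    return BAD_PATTERN.match(value) is not None

def validate_with_loop(value: str) -> bool:
    # A plain character check does the same job in linear time.
    return len(value) > 0 and all(ch == 'a' for ch in value)

if __name__ == '__main__':
    # A non-matching input: all 'a's followed by one 'b'.
    payload = 'a' * 22 + 'b'

    start = time.perf_counter()
    validate_with_loop(payload)
    print(f'loop check:  {time.perf_counter() - start:.6f}s')

    start = time.perf_counter()
    validate_with_regex(payload)  # catastrophic backtracking on failure
    print(f'regex check: {time.perf_counter() - start:.6f}s')
```

This is the kind of thing I mean by "basic misuse": the validation itself is cheap, but a careless pattern turns it into a denial-of-service hazard.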
Presumably by "build costs" you mean development cost. Since most places don't rigorously track costs I doubt there is an objective answer to that. However I have seen complex error scenarios be deferred because they would take too much development time. Not sure that the extreme cases matter anyways because they often reflect possible rather than likely user errors.
In my experience, input validation and logging reduce development and maintenance time and cost. Error correction, retries, best guesses, and the like are trickier; when you provide them, make sure you have an easy way to disable them, otherwise your basic (or unit) testing may become very cumbersome.
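As a sketch of that "easy way to disable them" point (the names and the RETRIES_ENABLED switch are my own, not from any particular library): a retry decorator that tests can turn off, so a failing call surfaces its real error on the first attempt instead of retrying with backoff.

```python
import time
from functools import wraps

# Hypothetical module-level switch; a real project would read this from
# configuration or an environment variable. Tests set it to False so a
# failing call fails immediately.
RETRIES_ENABLED = True

def with_retries(attempts: int = 3, delay: float = 0.5):
    """Retry a flaky operation, unless retries are globally disabled."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            tries = attempts if RETRIES_ENABLED else 1
            for attempt in range(1, tries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == tries:
                        raise  # out of attempts: surface the real error
                    time.sleep(delay * attempt)  # simple linear backoff
        return wrapper
    return decorator

@with_retries(attempts=3)
def fetch_profile(user_id: int) -> dict:
    # Placeholder for a flaky network or database call.
    raise TimeoutError(f'could not load profile {user_id}')
```

In a test suite you would flip RETRIES_ENABLED to False (or monkeypatch it), so an assertion on the error path runs in milliseconds rather than waiting out the backoff schedule.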