Never wrote those.
trønderen wrote:test procedures for environmental conditions (including file system and network), verifying data structure consistency etc.
They don't sound like technical tests, though; more like a utility that checks whether a user's system is running well.
I don't really see why you need partial classes for such tests though.
These are (unit) tests in my book, i.e. the kind of test code I was talking about, the kind you don't put in partial classes.
trønderen wrote:procedures for testing hundreds of borderline cases, or the complete cartesian product of the values of five different parameters
You don't ship unit tests to customers.
It would clutter up the public API with lots of weird methods.
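To sketch what I mean (Python here purely for illustration, and `clamp` is a made-up function, not anything from the thread): the full cartesian product of parameter values can live in the test project, so none of it has to leak into the shipped API.

```python
import itertools

# Hypothetical function under test: clamp a value into [lo, hi].
def clamp(value, lo, hi):
    return max(lo, min(value, hi))

# Exhaustively walk the cartesian product of a few parameter values.
# This loop belongs in the test suite, not in the class being shipped.
values = [-2, -1, 0, 1, 2]
bounds = [0, 1, 2]
for value, lo, hi in itertools.product(values, bounds, bounds):
    if lo > hi:
        continue  # skip invalid ranges rather than testing them here
    result = clamp(value, lo, hi)
    assert lo <= result <= hi
```

The same idea applies in C# with parameterized tests (e.g. test-case sources), without any partial classes on the production side.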
I'd say linters do a better job at checking syntax.
trønderen wrote:For returning to the subject line contents: Especially with interpreted languages that are not even syntax checked at build time, coverage is essential.
You shouldn't need 100% coverage just to check syntax (and even then, it isn't really checking syntax, just checking that the syntax doesn't cause errors when executed).
I agree with you there.
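For what it's worth, here's the kind of thing I mean (a hypothetical Python sketch): the problem below is not a syntax error, the file loads fine; it only surfaces when the line actually runs, which is exactly what executing (covering) the code catches in interpreted languages.

```python
def rarely_called():
    # Not a syntax error: the module imports without complaint.
    # The NameError only appears when this line actually executes,
    # i.e. when some test finally covers it.
    return undefined_variable

try:
    rarely_called()
    error_surfaced = False
except NameError:
    error_surfaced = True
```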
trønderen wrote:Coverage tools sometimes give you surprises: If you are not familiar with them, you might not believe the figures they report, the first time you use them. Most of us has a lot of code that has never been tested. For compiled languages, you at least can assume that the syntax is correct, but you forget to test all the 'else' clauses, several of the switch alternatives, etc.
If your tests make sense, coverage can add some insight.
I don't think coverage is a very good metric on its own, though.
Very often, you don't need 100% coverage.
My coverage is probably about 0.01%, as I don't test by default and only add tests when I think some code could easily break or is difficult to test otherwise.
The methods I test do have a 100% coverage though.
Basically, I've gone from a "test unless..." to a "test if..." approach, and in that scenario coverage is a useless metric, too.
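To illustrate the untested-branch surprise trønderen mentions: a test suite can pass cleanly while never once entering an else clause, and only a coverage tool flags it. A minimal sketch (Python, with a made-up `parse_flag` function):

```python
def parse_flag(text):
    if text.lower() in ("yes", "true", "1"):
        return True
    else:
        # A bug on this path survives every test that only feeds in
        # truthy inputs; a coverage report would show this branch as
        # never executed.
        return False

# These tests all pass, yet the else branch stays unexercised:
assert parse_flag("yes") is True
assert parse_flag("TRUE") is True
assert parse_flag("1") is True
```

That's the surprise the first coverage run tends to deliver, even when every test is green.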