Unless you’ve been living under a rock, you’ll have heard about code coverage in relation to test-driven development or unit tests, depending on whether you write the tests first or not.
Now, a lot of people who evangelize TDD will often talk in the same breath about code coverage, asking how you can be sure your code is correct if you don’t know which code your tests are actually exercising. I have also had conversations with people who believe in TDD but think that code coverage is just a figure and you shouldn’t pay special attention to it.
What I have found interesting is that in recent months there seems to have been a real move towards people saying that you have to have 100% code coverage, and if you don’t, well, shame on you, as you can’t possibly be a good developer, can you?
What prompted this post
I’ve recently been working on a project where we had aimed for 100% coverage but, for a couple of reasons, hadn’t managed it:
- it doesn’t make sense – why test a DTO that only holds state/data and has no behaviour?
- it is not easy to do – as an example, we’re using EF4 (not code first), and testing entities with behaviour is hard unless you have a database to exercise them against; yes, it can be done, but is it worth the effort?
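To illustrate the first point, here is a minimal sketch (in Python rather than the project’s C#, and with hypothetical field names) of the kind of DTO we mean – an object that only carries state between layers:

```python
from dataclasses import dataclass

# A hypothetical DTO: it only holds state/data and has no behaviour.
# A unit test for it could only restate the field assignments that
# the language already guarantees, which adds no real confidence.
@dataclass
class CustomerDto:
    customer_id: int
    name: str
    email: str
```

Any test you wrote for a class like this would just assert that assigning a value and reading it back works, which tells you nothing you didn’t already know.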
So instead of aiming for 100% coverage across the entire solution, we aimed for 100% coverage on the code we could test, e.g., controllers, the model, etc., and since we had dependency injection, we could mock out repositories to help us with the hard-to-test interactions (which is in itself a whole other discussion).
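As a rough illustration of that pattern (again in Python with hypothetical names, not the actual project code), dependency injection lets a test hand the controller a fake repository instead of one backed by a real database:

```python
from unittest.mock import Mock

# Hypothetical controller that receives its repository via the constructor.
class ProductController:
    def __init__(self, repository):
        self._repository = repository

    def get_price(self, product_id):
        product = self._repository.find(product_id)
        if product is None:
            raise KeyError(f"no product {product_id}")
        return product["price"]

# In a test, the database-backed repository is replaced with a mock,
# so the controller's logic can be exercised without a database.
def test_get_price_uses_repository():
    repo = Mock()
    repo.find.return_value = {"price": 9.99}
    controller = ProductController(repo)
    assert controller.get_price(42) == 9.99
    repo.find.assert_called_once_with(42)
```

The same idea applies whatever mocking framework your platform uses; the point is that the hard-to-test dependency is swapped out behind an interface the controller already depends on.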
So we had our 100% coverage, the tests ran in moments, and we all felt very pleased with ourselves, right up until the moment somebody else tested the software by using it the way it was supposed to be used. No boundary or corner-case testing, just normal functionality, and it broke!
So we went back to our tests to find out what we had missed. How was it possible that they hadn’t caught the bug? We had our tests and they exercised the code entirely, our code coverage told us so, but when the code ran as a complete system, it failed because we hadn’t explicitly catered for certain circumstances which were now, of course, patently obvious.
The reason was simple: data. The number of possible combinations of data was so large that testing every combination would have meant writing tests for days, which just isn’t feasible, so we catered for the boundaries and some of the values we knew were in the data set.
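To make that concrete, here is a contrived Python sketch (hypothetical names, not the project code) of how 100% line coverage can coexist with a data bug: the tests below execute every line of the function, so a coverage tool reports 100%, yet one value that real data can contain still breaks it:

```python
# Hypothetical order calculation: orders of 10+ items get a 10% discount,
# and we also report the average price paid per item.
def order_summary(unit_price, quantity):
    total = unit_price * quantity
    if quantity >= 10:
        total *= 0.9  # bulk discount branch
    average = total / quantity  # breaks when quantity == 0
    return round(total, 2), round(average, 2)

# These two cases run every line of order_summary (both branches of the
# if), so coverage is 100% -- but neither exercises quantity == 0, which
# a real data set might well contain, and which raises ZeroDivisionError.
def test_order_summary():
    assert order_summary(5.00, 2) == (10.00, 5.00)    # normal path
    assert order_summary(5.00, 10) == (45.00, 4.50)   # discount path
```

Coverage only tells you which lines ran; it says nothing about the input values those lines were never asked to handle.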
So what do you do?
Remember, TDD or unit tests only show that the code you have written does what you expect it to; they don’t make it correct. Ideally, you want to add some sort of functional testing via Selenium/Watin/Project White to exercise the system end to end. Even then, it is unlikely that you’ll cover all the scenarios for using the software, but you should at least be testing the normal usage of the system.
But what if you don’t have time to write these additional tests? This is where, as one of my old managers used to say, the “Mark 1 Eyeball” comes into play. You need to run the software, look at it, and try to break it. If you are lucky enough to have a tester or test team, give it to them; they’ll soon find ways to break it :)
100% is only the starting point
I’m not a TDD zealot, but I do believe you want tests for your code so you can be happy that it is doing what you expect. And yes, the tests should cover that code 100%, but, and it is a big but, do not rely on those tests alone. You need to back them up with something that actually exercises the entire system the code belongs in, including the data it will work with.
Quite simply, as the title of this article says, 100% coverage is not enough.