Writing software is hard. Writing bug-free software all the time is virtually impossible. There are many reasons for this; anyone who has written a nontrivial piece of software knows how easily a human can miss a tiny detail, and how the slightest omission can cause a spectacular crash, most likely at the most awkward moment.
Unfortunately, the task of writing software promises to be no easier in the future. While software technologies advance, so does the complexity of the software we write. There is no magic trick on the horizon that will make writing bug-free software simple and straightforward. Still, not all hope is lost - there are things we can do to substantially improve the quality of the software we write.
The development of new features for a software product goes in phases. Whatever the methodology, the developer usually starts with the requirements, proceeds to design, then writes the code, debugs it, and finally checks it into the version control system. At this point, the feature is considered done. But how does the developer know that the code actually works?
In many smaller companies - and in companies that are not really software houses but still develop software for their own use - the developer spends a day or so testing the feature, simply running the application and trying out the functionality with different inputs while observing that the outputs come out as designed. After completing this testing, the developer has some confidence that the code works the way it is supposed to. But every developer who has ever written code for an actual product knows the nagging feeling of uncertainty. How good was the test coverage? Did I test every single feature in the application - otherwise, how can I be sure that the new code did not accidentally break some other, apparently unrelated, piece of functionality? What if new code written by somebody else two weeks later adversely affects this feature?
Using developer time for manual ad hoc testing is not a cost-effective way of ensuring software quality. Developer time is expensive and is better spent on writing software rather than test-running it ad nauseam. This kind of testing reveals whether the new code affected some other piece of functionality only if the developer tests every single feature in the application. Further, it gives no guarantee whatsoever that the feature will still work two months down the road, after more features have been added to the system.
Some companies have a more or less formal quality assurance process with a team dedicated to testing. The team may have test case scripts that it goes through to manually test the software. This activity is carried out after every internal release of the software. Customers will see a new version of the software only after the tests pass and the QA team leader has signed it off.
This kind of testing is better than ad hoc testing, but it is far from optimal. Human testers can and will make mistakes. While hired tester time is cheaper than developer time, the savings are offset by the fact that problems are uncovered relatively late in the development cycle. Every time QA acceptance tests fail, the developers are notified; they fix the problems and hand a new version of the software to QA. Time and energy are wasted on communication as the software bounces back and forth between the developers and the QA team. In addition to the wasted time and energy, this thrashing can have side effects such as creating a 'ghetto' mentality among QA personnel.
Quality assurance teams may also have automated testing systems. This is more effective than manual testing, but still not optimal. Test scripting systems tend to be expensive. Writing good test scripts and maintaining them is something that a tester hired for a summer job is unlikely to be able to do. Hiring someone who can write and maintain tests will likely cost as much as hiring a developer in the first place.
Automated Unit Tests
I am not going to describe how to write unit tests in this article. There are already articles on this site that can get you started. Suffice it to say that writing cost effective unit tests is not trivial. It is just like writing any other type of software: it takes time, effort and skill. For now, I’m just going to take the easy way out and list some very basic guidelines for writing and running unit tests.
A unit test is a piece of code that exercises another piece of code. There are essentially two flavors of unit tests: point unit tests exercise a small piece of functionality, e.g., a single method or a few methods of a class. End-to-end tests exercise one feature across many, sometimes all, layers of an application. An ideal end-to-end test works like an end user story: "the user logs in, invokes the monthly revenue report, sets the date range of the report from July 1st to July 31st, runs the report and observes that all the numbers come up as expected and add up". This is a fairly liberal definition of a unit test; for the purposes of this article, anything that you would write as, e.g., an NUnit test case is a unit test.
When implementing a feature, a developer should write the code together with a unit test that ensures that the feature actually works. In practice this means writing code that e.g., calls an entry point method of the feature and then verifies that the code did what it was supposed to. For example, if you have a method which transfers money from one account to another, the unit test would call that method with accounts with known balances and then verify that the balances come out the way they are expected to.
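The money-transfer example above can be sketched as a test case. The article targets NUnit on .NET; the sketch below uses Python's unittest purely for brevity, and the Account class and transfer function are hypothetical stand-ins, not code from any real system:

```python
import unittest

class Account:
    """Minimal illustrative account (a hypothetical stand-in)."""
    def __init__(self, balance):
        self.balance = balance

def transfer(source, target, amount):
    """Move `amount` from source to target, rejecting overdrafts."""
    if amount > source.balance:
        raise ValueError("insufficient funds")
    source.balance -= amount
    target.balance += amount

class TransferTests(unittest.TestCase):
    def test_balances_after_transfer(self):
        # Arrange: accounts with known balances.
        a = Account(100)
        b = Account(50)
        # Act: call the entry point of the feature.
        transfer(a, b, 30)
        # Assert: verify the balances come out as expected.
        self.assertEqual(a.balance, 70)
        self.assertEqual(b.balance, 80)

    def test_overdraft_is_rejected(self):
        a = Account(10)
        b = Account(0)
        with self.assertRaises(ValueError):
            transfer(a, b, 30)
        # A failed transfer must leave both balances untouched.
        self.assertEqual(a.balance, 10)
        self.assertEqual(b.balance, 0)
```

Note that the test pins down not just the happy path but also the error path; both are part of "the feature actually works".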
When the feature and the unit tests are complete, the developer checks the source code - including the unit test code - into the version control system. It is vital that the unit test code be checked into version control too. This allows integrating the unit tests with the build. It also allows other developers to check out the unit tests on their local machines and run them at will.
As a rule, before a developer checks in any code into the main source control repository, she should get the latest versions of the unit tests from the version control on her development machine and run all the tests locally. Only after all unit tests pass should the developer check in new code. This ensures that the source code in the version control system remains healthy at all times and can be used to rebuild the system at any time.
In order to get the full benefit from unit tests, the test suite should be run as part of the build process - this is what I mean by 'automated' unit tests. The build, along with the unit tests, should be scheduled to run automatically once or twice a day. If you have unit tests but do not run them as part of a scheduled build, you are not getting the full benefit from the tests. By running the tests as part of a scheduled build, you test early and often. This again ensures that the source code in the main repository remains healthy. In real life, developers may accidentally check in code that does not work. The daily build should be your main line of defense against these bugs. There are some wonderful and cheap tools for implementing build and test systems; e.g., if you are developing for .NET, you can use the NAnt and NUnit frameworks. How to implement a build process is again a fairly large topic; resources on this site and elsewhere on the web can get you started.
Writing unit tests is extra effort compared to not writing tests at all, and this should not be ignored. In my experience, writing unit tests adds roughly 10-30% to the time it takes to complete a feature. End-to-end tests tend to be the most time-consuming type of test to write. Then again, when writing end-to-end tests, a small framework usually emerges very quickly as the common code in the tests is factored out. Thus, writing end-to-end tests becomes easier after the first few test cases have been written.
Both types of tests - end-to-end and point tests - are needed for the best coverage. End-to-end tests find different kinds of problems because they exercise the many components that participate in the implementation of a feature. Point unit tests are also needed for the most important components because they verify a component's functionality more thoroughly.
Benefits of an Automated Unit Test Suite
An automated unit test suite brings a number of important, tangible advantages compared to other testing strategies.
First, unit tests find problems early in the development cycle. The common wisdom in software development is that the earlier problems are found, the cheaper it is to fix them. An automated unit test suite finds problems effectively as early as possible, long before the software reaches a customer, and even before it reaches the QA team. Most of the problems in new code are already uncovered before the developer checks the code into source control.
Second, an automated unit test suite watches over your code in two dimensions: time and space. It watches over your code in the time dimension because once you've written a unit test, it guarantees that the code you wrote works now and in the future. It watches over your code in the space dimension because unit tests written for other features guarantee that your new code did not break them; likewise, it guarantees that code written for other features does not adversely affect the code you wrote for this feature.
Third, developers will be less afraid to change existing code. Over time, software systems become more and more change resistant because developers are reluctant to change old code. This is natural because when changing old code, there is always the risk of breaking it or some other part of the system through a side-effect.
The fear of changing code is bad because in one respect software is just like any physical system: it is subject to the law of entropy. Over time, as new features are added and bugs are fixed, the overall quality of the code degrades. This is even more so if the software is successful, since there will be pressure to add new features that do not quite fit the original architecture. No practical architecture can be designed to accommodate every conceivable feature, and in a changing world there is no way we can foresee all future requirements anyway. The only way to keep adding new features to software while retaining its internal quality and clean design over time is refactoring. Without occasional refactoring, the code grows more and more internally tangled until every class knows of every other class. Refactoring and cleaning up existing code is a really scary thing - unless you have automated unit tests.
Fourth, the development process becomes more flexible. Sometimes it may be necessary to fix a problem and to deploy the fix quickly. Despite best efforts, a bug may slip in and an important feature may stop working. The customers cannot purchase products, the users cannot work and your boss is breathing over your shoulder asking you to fix the problem immediately. Releasing quick fixes makes us feel uneasy because we are not certain what side-effects the changes might have. Running the unit tests with the fixes applied saves the day as they should reveal undesirable side-effects. Publishing hotfixes is something we hope we never have to do, and a unit test suite should already decrease the need for such things anyway. But if you ever have to publish a hotfix, a unit test suite improves your chances of doing so without introducing new problems.
Fifth, having a unit test suite improves your project's truck factor. The truck factor is the minimum number of developers who, if hit by a truck, would bring the project to a halt. If you have one key developer who works on a subsystem that no one else knows, then your truck factor is effectively "1". Obviously, a truck factor of "1" is a risk to the project.
A comprehensive unit test suite improves the truck factor because it makes it easier for a developer to take over a piece of code she is not intimately familiar with. A developer can start working on code written by others because the unit tests will guide her by pointing out when she makes an error. Losing a key developer for any reason just before a release is less of a catastrophe if you have the safety net of unit tests.
Sixth, an automated unit test suite reduces the need for manual testing. Some manual testing will always be needed because humans excel at discovering bugs that involve complex data and workflow processes. Writing a unit test for the most complex cases might be so prohibitively time consuming that it is no longer cost effective. The QA team can concentrate on discovering the hard-to-find bugs while the unit tests do most of the mundane testing.
The net effect of the benefits listed above is that software development will become more predictable and repeatable – in a word, a bit more like a real engineering discipline. The design and coding phases still have a fair amount of ‘art’ in them, and this will not go away. Once the coding is done, the build process builds and tests the software much like physical products are built on an assembly line. This removes much of the ad-hoc nature in software development which is the underlying reason for many of the problems that plague software projects.
In addition to the immediate and tangible benefits listed in the previous section, I foresee a future in which unit tests will help us answer pressures for change coming from two directions. At this point, I would like to point out that the following discussion applies to different kinds of software in varying degrees - it applies best to business software and applications, e.g., CRM, ERP and such. I do not see similar major changes taking place in the development of operating systems, device drivers, database management systems, etc.
The raison d’être of a software developer, and thus that of a software company, is to make the buyers and users of the software happy. In more precise terms, the developer should provide the end user with software that corresponds to the user’s real data processing needs in a reliable and predictable manner. This is a very difficult task to carry out and traditionally we software developers have not been very good at it.
I have seen an organization where developers became so frustrated with the end users’ constantly changing requirements that in the end they had the end users’ representative sign off the requirements document before they started to work on the implementation. Later, when the users complained that the software does not address their actual needs, the developers showed them the document: “look, this is what you signed off, and that is exactly what was implemented”. The developers got their moral victory but in the end both the developers and the users lost.
The way for both to win is to start by recognizing the fact that the world is constantly changing. A company's business needs are always changing, and a company that does not adapt will eventually die. To address this, we need to start developing software with what I would like to call Extreme Customer Oriented Rapid Application Development (E-CORAD). The end user's software will be a smart client with pluggable modules that can be updated independently. The developers will release a module early in the development cycle so that an end user dedicated to this task can try it out; the user will give constant feedback to developers, allowing them to refine the module through frequent, small incremental releases. This method of development ensures that the final product will correspond to end users' real needs. Automated unit tests will be the major enabler that allows this cycle to take place frequently without generating chaos. CORAD as such is not a new invention, but technological advances such as .NET combined with unit tests will allow us to take it to new extremes.
There are certainly plenty of technical and non-technical challenges in E-CORAD but I do believe the challenges are solvable.
The second direction from which there will be pressure to change the way we develop software is that of service-oriented architectures (SOA). Smart clients will pair with loosely coupled, service-oriented architectures on the server side (how's this for buzzwords in one sentence?). These new architectures will fundamentally change the way we design, develop, deploy and maintain software. Client- and server-side software will be developed and deployed independently of each other, using abstract contracts as points of contact. Loose coupling translates in practice into less type safety, and thus we will have less help from the compiler in catching those stupid and simple mistakes we make all the time. All this flexibility will result in utter chaos unless we do serious automated testing. A unit test can exercise a Web Service to ensure that it abides by its contract. A unit test can test a Web Service consumer by having it consume mock services that return predefined data.
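The mock-service idea can be sketched in a few lines. The names below (RevenueReportClient, MockRevenueService, get_revenue_rows) are invented for illustration; the point is only that the consumer is written against an abstract contract, so a mock returning predefined data can stand in for the live service:

```python
import unittest

class RevenueReportClient:
    """Hypothetical consumer of a revenue Web Service."""
    def __init__(self, service):
        # The service is injected through an abstract contract,
        # so a mock can stand in for the real endpoint.
        self.service = service

    def monthly_total(self, year, month):
        rows = self.service.get_revenue_rows(year, month)
        return sum(row["amount"] for row in rows)

class MockRevenueService:
    """Mock that returns predefined data instead of calling
    a live service over the network."""
    def get_revenue_rows(self, year, month):
        return [{"amount": 100.0}, {"amount": 250.0}, {"amount": 50.0}]

class ConsumerContractTests(unittest.TestCase):
    def test_consumer_sums_rows_from_service(self):
        client = RevenueReportClient(MockRevenueService())
        self.assertEqual(client.monthly_total(2004, 7), 400.0)
```

Because the test never touches the network, it runs fast and deterministically in the scheduled build, while a separate test can exercise the real service against its contract.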
We need automated unit tests in order to realize the promises of SOA. Also, with SOA, software needs to be highly adaptable, and that again means we will develop software in a frequently occurring cycle of small, incremental releases. This is only possible if we have the automatic assembly line with unit tests that will allow the cycle to take place often without compromising system integrity.
From these changes will emerge a ‘Brave New World’ of software; this is a topic on which I hope to write another article soon.
Unit Testing in the Real World
Despite all the benefits automated unit tests bring us, it has to be said that they are not the silver bullet that will rid us of buggy software for good. Trusting unit tests blindly - as in "if the unit tests pass, the code is ready for production" - is a recipe for shipping bugs. Even if we have a comprehensive unit test suite that is run frequently, bugs can still sneak in, and some amount of manual testing by humans is needed.
When a bug makes it into production despite a unit test suite, we need to do a post mortem analysis. Whenever this happened in the projects I've worked on, one of the following things turned out to be true:
- There were two bugs in the code: the original bug in the application code and another one in the test code. Usually when this happens, it is due to special circumstances in the code being tested. For example, in a web application project I worked on, there was a feature that allowed the user to reset her user account password without logging in. The unit test framework performed a login automatically in the beginning of each test case, which made perfect sense for 99% of the tests. For this particular feature, the automatic login hid a bug that manifested itself only when the user was not logged in. The "reset password" unit test ran happily while the actual feature failed miserably after I added some code that as a side-effect required the user to be logged in. The moral of the story is that while bugs may still creep into production code, it will take two bugs instead of one. The chances for the two bugs occurring together are slimmer than for the first bug occurring alone.
- The production environment is different from the test environment. For example, in the same web application mentioned above, there was a different version of a database driver in the production system than in the test environment. A bug in a query that produced seemingly sensible results in the test environment generated a syntax error in production. The obvious conclusion is that the test environment should reflect the real environment as closely as possible.
- The third possibility of course is that the feature in question did not have a unit test to begin with. This is an open invitation for bugs to come in and crash the party. Even the simplest smoke test that does basic verification is usually worth writing.
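The auto-login story in the first bullet can be made concrete. In the sketch below (Python unittest standing in for the article's web test framework, with all class and function names invented for illustration), the base fixture logs in automatically in setUp; the reset-password test must deliberately opt out of that login, or it would test a state the real user is never in:

```python
import unittest

class FakeSession:
    """Hypothetical session object for the sketch."""
    def __init__(self):
        self.logged_in = False
    def login(self):
        self.logged_in = True

def reset_password(session, username):
    """Hypothetical feature: must work for users who are NOT
    logged in. A realistic bug here would manifest only when
    session.logged_in is False - exactly the state an
    auto-login fixture would mask."""
    return f"reset link sent to {username}"

class WebTestCase(unittest.TestCase):
    """Base fixture that logs in automatically - the right
    default for 99% of the tests."""
    def setUp(self):
        self.session = FakeSession()
        self.session.login()

class ResetPasswordTests(WebTestCase):
    def setUp(self):
        # Deliberately skip the automatic login: this feature is
        # used by people who cannot log in, so the test must not
        # hide that state.
        self.session = FakeSession()

    def test_reset_without_login(self):
        self.assertFalse(self.session.logged_in)
        self.assertEqual(reset_password(self.session, "alice"),
                         "reset link sent to alice")
```

The lesson generalizes: a convenient fixture default is itself code, and it can carry the second bug that lets the first one through.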
Every time a bug makes it into production, we go back to the unit tests and add a test case to specifically verify that this bug never occurs again. This way the noose gets tighter and tighter on bugs and it becomes more and more difficult for the bugs to slip through.
There are developers who think that writing unit tests is not worth the time and effort. Sometimes this objection stems from equating unit tests with micro unit tests, e.g., tests that exercise a single get/set property of a class. Writing tests for trivial code is indeed not worth the time spent; it is more cost effective to write a test for a piece of functionality that would fail if the property did not work properly. As a side note, with .NET it is so easy to write a generic test that exercises the get/set properties of a given class through reflection that it actually makes sense to write it once and have it handle the trivial tests. Another typical objection is that writing more advanced unit tests requires skill, takes time and can be boring. To this I can only say that the unit test should be considered an integral part of a feature, and writing effective unit tests is like writing any kind of software: it requires skill and expertise.
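The "write the generic property test once" idea has a rough analogue outside .NET as well. This sketch uses Python introspection instead of .NET reflection (the helper name and the Customer class are made up for illustration): it walks an object's public attributes and checks that each one reads back the value last written to it.

```python
def check_properties(obj, sample_values=(0, 1, "x")):
    """Generic smoke test: every public, writable attribute of
    `obj` should read back the value last written to it."""
    for name in list(vars(obj)):
        if name.startswith("_"):
            continue  # skip private/internal attributes
        for value in sample_values:
            setattr(obj, name, value)
            assert getattr(obj, name) == value, \
                f"property {name} lost its value"

class Customer:
    """Trivial data holder - exactly the kind of code not worth
    hand-written micro tests."""
    def __init__(self):
        self.name = ""
        self.credit_limit = 0

# One call covers every property; no per-property test code.
check_properties(Customer())
```

A single helper like this retires the whole class of trivial get/set tests, freeing test-writing time for the functionality tests that actually catch bugs.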
The downside of having a unit test suite is that it needs to be maintained. If interfaces in the application code change, the unit tests that exercise that code will break. There is no easy solution to this; I can only argue that the benefits of the tests outweigh the extra effort of developing and maintaining them.
The recipe for better software with fewer people is simple: unit test early, unit test often, and refactor when needed. Unit tests find problems automatically and early, and they never grow tired of testing the same feature again and again.
A comprehensive unit test suite that runs together with the daily build is the heartbeat of a software project. It gives a sense of progress and stability. Email notifications of successful builds with unit tests can boost project morale. An email notification of a unit test failure, on the other hand, tells all project developers where to focus their attention in order to get the problem fixed.
Despite all the benefits that unit tests bring, some amount of manual ad-hoc testing is still needed. The developer needs to run the application and use her best judgment to see if the code really does what it is supposed to. A dedicated QA team is also needed in bigger projects. But with a carefully written unit test suite, the software is self-testing and the need for manual testing and separate QA personnel will be reduced. The benefits of automatic unit tests do outweigh the extra time and effort in writing and maintaining the tests.
Sami Vaaraniemi has been working as a software developer since 1990, primarily on Microsoft technologies. After 12 years of Win32 API and C++ he switched to .NET. He currently works as an independent consultant and can be contacted through his website at www.capehill.net.