Why use a Definition of Done in an Agile project?
If you want to learn to play the piano, it is going to be a tough endeavor when it takes 30 minutes for the piano to produce a sound after you press a key.
When you demonstrate your software just before the deadline, you can be sure the project won’t finish any earlier. When you demonstrate it every week and implement based on the product owner’s priorities, there is a good chance the product owner approves the application even before all requested features are implemented.
Feedback will help you to improve, learn and reach your goal more effectively.
It is important that you get feedback quickly and often; iterative development can facilitate this.
What you actually get feedback on is defined in the Definition of Done. The Definition of Done defines all steps necessary to deliver an increment of done with the best quality possible at the end of a sprint.
The more you do in your sprint, the more you get feedback on, the more you can improve and learn.
This introduces the first reason for using a Definition of Done:
It generates feedback and drives improvement.
When the Definition of Done is complete it will define all steps to deliver an increment of done and therefore creates feedback regarding the product but also regarding the process within the sprint.
With steps like the sprint demo, performance testing, acceptance testing etc., feedback on the product is generated. When the product owner tries out the application during the demo, he will give his feedback. The acceptance tests generate continuous feedback on the acceptance criteria, especially when all criteria are implemented with SpecFlow or any other specification by example framework.
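SpecFlow expresses such criteria in Gherkin for .NET; purely as an illustration, an acceptance criterion can also be written as a small executable example in Python (the cart domain and all names below are made up):

```python
# Hypothetical example: the acceptance criterion
# "Given a cart with items priced 10 and 5, the total is 15"
# expressed as an executable specification.

def cart_total(prices):
    """Illustrative domain code: total price of the items in the cart."""
    return sum(prices)

def test_cart_total_matches_acceptance_criterion():
    # Running this test every sprint gives continuous feedback
    # on the acceptance criterion.
    assert cart_total([10, 5]) == 15
```

Because the criterion is executable, it is re-verified automatically every sprint instead of only at the end of the project.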
With steps like peer review and deployment, feedback on the process is generated: are the deployment processes correct, are we coding the way we want to, etc.?
The more steps defined in the Definition of Done, the more feedback you will get.
Improves release planning.
The second reason for using the Definition of Done is that it improves release planning.
Typically when finishing a sprint, different items are still left undone.
Some bugs are still in the code, integration testing is not done, performance testing on a production-like environment is not done, the manual is not up to date etc. All this work is called undone work and has to be done at some point. The problem with this undone work is that it piles up every sprint; every feature that is added makes the undone work grow.
What happens in an agile release planning session is that a release is planned based on the number of user story points and the team’s velocity. For example, when the team has a velocity of 6 and 22 user story points need to be implemented, the release can take place after 4 sprints.
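The arithmetic behind this example can be sketched as follows (a minimal illustration, not a planning tool):

```python
import math

def planned_sprints(story_points: int, velocity: int) -> int:
    """Number of sprints needed to burn down the backlog at a given velocity."""
    return math.ceil(story_points / velocity)

# 22 user story points at a velocity of 6 points per sprint:
print(planned_sprints(22, 6))  # → 4
```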
The problem is that after 4 sprints this undone work is still there. Many teams solve this by introducing so-called hardening sprints or release sprints. These sprints are used, for example, to create the deployment packages, to solve some last bugs, to do some last testing etc.; everything to make the software ready for production.
The problem with these release sprints is that they are a bad agile practice. You are trying to time-box work that is unknown (last-minute tests can reveal all kinds of bugs), not planned and not estimated, while everything still has to be done in a fixed amount of time and before the release date.
On top of this, your release date no longer matches your release planning. Instead of being based on the sprints defined in the release planning, it is now based on the “planned” sprints plus one or more extra release sprints.
When the team defines a complete Definition of Done and applies it, all the undone work is done within the sprints and no release sprints are necessary.
Gives sense to burndown charts.
Applying a complete or ideal Definition of Done also gives sense to burndown charts.
A burndown chart shows the amount of work still to be done over time, represented by the green line.
This burndown chart is clearly visible in most teams, but it gives a “false” indication of when the software is production ready. When the Definition of Done is not applied well, undone work piles up every sprint, represented by the orange line, and this line is usually not visible in regular burndown charts.
The black line, composed of the “ideal” burndown line and the undone work line, represents the real burndown chart. This line is usually not shown, and the product owner is caught by surprise after 4 sprints when there is still work to be done even though the burndown chart indicated otherwise.
When no release sprints are used, the delta between the black line and the green line shows the risk that is being delayed. When this work is not picked up in a sprint, it will reveal itself in production; for example, when no performance testing is done in the sprints, there is a chance that a performance issue will occur in production later.
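A minimal sketch of this effect, with made-up numbers for the velocity and the undone work accumulating each sprint:

```python
# Illustrative only: the numbers below are invented to show how undone
# work hides the real remaining effort in a burndown chart.
velocity = 6            # story points finished per sprint (green line slope)
undone_per_sprint = 2   # points of undone work (tests, docs, ...) piling up
backlog = 22

for sprint in range(1, 5):
    visible = max(backlog - sprint * velocity, 0)   # green line
    undone = sprint * undone_per_sprint             # orange line
    real = visible + undone                         # black line
    print(f"sprint {sprint}: visible={visible}, undone={undone}, real={real}")
```

After sprint 4 the visible burndown reaches zero while the real remaining work does not, which is exactly the surprise the product owner runs into.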
Almost done is not done at all.
A typical conversation that most developers will recognize goes like this:
Product owner (PO): Is the software done?
Developer (Dev): Yes, almost.
PO: Can we go to production?
Dev: No, not yet.
PO: Why not?
Dev: Well, some bugs have to be solved, some integration tests still have to be run, release packages have to be updated etc.
PO: When can we go to production?
Dev: I don’t know…
To avoid this kind of discussion, there should be a common understanding of what is meant by done software. A Definition of Done creates more transparency about what the team is doing every sprint and what is delivered. When, for example, the Definition of Done doesn’t say anything about performance testing on a production-like environment because the organization is not yet able to accomplish this every sprint, the product owner is at least aware of this.
Definition of Done minimizes the delay of risk.
When the Definition of Done is complete, with all the steps necessary to deliver an increment of the best quality, you are minimizing the delay of risk. All steps in the Definition of Done are subject to feedback, and therefore risky items are inspected, adapted and improved at an early stage and as many times as there are sprints. In other words, risks are covered several times at a very early stage of the project.
The smaller the Definition of Done, the more undone work is likely to pile up after every sprint. This undone work is not subject to feedback but will reveal itself somewhere, sometime, in production.
A complete Definition of Done will minimize this undone work and therefore minimize the delay of risk.
Definition of Done represents the agility/quality/maturity of the team.
An ideal team is able to complete a (new) feature in one sprint and release it immediately to production, with all steps in the Definition of Done applied to guarantee the best quality.
The agility of the team lies in the fact that it can release a feature to production every sprint, while
the quality of the team is represented by the number of steps in the Definition of Done applied when releasing that feature to production.
How to put the Definition of Done into practice.
Start off with defining two versions of the Definition of Done, one ideal Definition of Done and one current Definition of Done.
The reasons for needing two versions are competence and maturity.
Competence, because not every team is capable of doing everything in one sprint in order to deliver a production-ready product, especially at the beginning of a project.
To deliver an increment of done in one sprint you need to automate many steps in the Definition of Done: for example, automate build processes, automate tests, automate deployment, maybe automate some documentation etc. This can be quite complex and time-consuming to set up.
Maturity is another reason why the Definition of Done may not be ideal yet; some teams are simply not ready to do all steps in one sprint. They feel it’s better to do the regression tests only at the end of the sprints, or to update the manual just before going to production, because they feel it is not necessary or takes too much time to do this every sprint. Those teams don’t have an agile mindset yet.
The ideal Definition of Done defines all steps necessary to deliver an increment of done from development till deployment on production. No further work is needed.
The current Definition of Done defines the steps the team is currently capable of doing in one sprint.
It is best to put both visibly on the wall, to make it transparent to the product owner what the team is delivering in the sprint and to create a common understanding of what is done.
It is important to understand that the product owner also carries responsibility if the team is not using an ideal Definition of Done.
He can decide that performance testing is not needed every sprint because it has never been an issue on the much faster production servers, and because the team has not yet automated performance testing, it takes too much time to do it every sprint. With this decision the product owner consciously delays the (potential) risk of a performance issue in production.
If the product owner wants more steps in the current Definition of Done, for example automated acceptance tests, he should give priority to creating a framework that facilitates the automation of these tests.
This can be done by giving the work item containing this framework a higher priority in the product backlog.
So putting two versions on the wall creates transparency for the product owner, represents the current capability of the team and shows what can be improved.
Try to regularly expand the current Definition of Done with steps from the ideal Definition of Done.
Expanding the Definition of Done will actually mean growing in quality/maturity.
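The gap between the two versions can even be made explicit; a small sketch with hypothetical step names (every team defines its own):

```python
# Hypothetical step names; every team defines its own lists.
ideal_dod = [
    "code peer reviewed",
    "unit tests automated",
    "acceptance tests automated",
    "performance tested on production-like environment",
    "deployment package created",
    "manual updated",
]

current_dod = [
    "code peer reviewed",
    "unit tests automated",
    "deployment package created",
]

# The gap between the two lists is the undone work the product owner
# consciously accepts, and the team's list of improvement candidates.
gap = [step for step in ideal_dod if step not in current_dod]
print(gap)
```

Each step in the gap is a candidate for the next expansion of the current Definition of Done.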
A good Definition of Done will help with:
Getting feedback and improving your product and process
Better release planning
Making burndown charts meaningful
Minimizing the delay of risk
Improving team quality/agility
Creating transparency towards stakeholders
Christian Vos, MCSD, has been an independent Microsoft developer since 1999 and specializes in distributed web applications based on the Microsoft .Net platform. He is the founder of Rood Mitek, which has developed applications for companies like Philips Electronics, ASML, Lucent Technologies, Amoco, AT&T etc.