Software Project Secrets: Why Software Projects Fail
- Why Software is Different
- Project Management Assumptions
- Case Study: The Billing System Project
- The New Agile Methodologies
- Budgeting Agile Projects
- Case Study: The Billing System Revisited
Chapter 7: Case Study: The Billing System Revisited
In the previous two chapters we've looked at a series of techniques that can help to solve the problems outlined in Part One. In Chapter 5 we saw techniques that can resolve the issues around software quality. In Chapter 6 we saw techniques that can constrain an agile project to a fixed deadline and budget.
In this chapter we'll look at the same techniques from a different perspective. We'll use another case study—one that addresses the same business initiative as the case study described in Chapter 4—to show how these techniques can be combined to produce a complete solution for the problems in software project management, and how our suggested approach can lead to a more successful outcome. At the end of the chapter we'll see how the new techniques helped the project team avoid the invalid assumptions that we identified in Chapter 3. This is where our journey ends.
The case study is a project to create a new, relatively small piece of software, so we'll see the following techniques from Chapter 6 in action:
- SWAT teams
- Feature trade-off
- Scoping studies
The team will employ techniques from two agile methodologies. A high-level structure of phases and iterations from the Rational Unified Process will be supplemented with lower-level practices from Extreme Programming. This is an unusual but perfectly valid approach. We'll be focusing on the interactions between the members of the team, so only the following practices will feature in our discussion:
- Testing: Manual acceptance test scripts will be used alongside the automated unit tests, because in this case it's less expensive than creating or purchasing a test engine for them.
- Pair programming: The developers will switch partners at the end of each iteration. Unlike most Extreme Programming teams, they must schedule their partner-swapping because they only have two pairs of developers.
- On-site customer: In this case the developers will be on-site at the customer's premises.
- Small releases: After each iteration the software will be released for customer testing and evaluation, but it will only go into production at the end of the project.
At Acme Inc. the accounting manager, Karen, has been under pressure to reduce costs, so she has proposed a new billing system to integrate the various financial applications used by the accounting team. The same data wouldn't have to be entered several times into different systems, and this would eliminate three full-time data-entry positions and save the company about $150,000 a year. Karen's boss, Salim, was keen on the idea, but he was also concerned about the cost of the new application. He wanted to keep it below $300,000 so that the project would have a two-year payback period.
Salim contacted People Co., who suggested that he hire an established team of four developers with a good mix of experience and skills. He also found an experienced project manager from within Acme's Operations department, Phil, who could devote at least half his time to managing the project.
When Salim asked Angela, the lead developer, to estimate the cost and duration of the project, she replied, "At this stage the requirements are still very imprecise. All I can say is that it'll probably take between one and 12 months to create this software. I suggest we first organize a two-week scoping study to firm up the requirements as much as possible before we plan the rest of the project. We can then decide how many more two-week iterations we'll need to complete the work."
Salim wanted to include Karen in the project team to define the requirements, but Angela disagreed. "Your data entry supervisor, Emily, knows much more about the accounting applications, and how the team uses them. Besides, we'll need this person on the team for the whole of the project, and I don't think that Karen can spare that much time. She would need to set aside at least 20 hours a week."
The project team was therefore organized as shown in Table 7-1.
Table 7-1. The Members of the Project Team
| Resource | Name | Specialty | Hourly Rate | Effort per Iteration | Cost per Iteration |
|---|---|---|---|---|---|
| Lead developer | Angela | Architecture | $120.00 | 80 hours | $9,600 |
| Senior developer | Govind | Networks | $85.00 | 80 hours | $6,800 |
| Senior developer | Rauna | Databases | $75.00 | 80 hours | $6,000 |
| Junior developer | Karl | User interfaces | $60.00 | 80 hours | $4,800 |
| End user | Emily | Business issues | $50.00 | 40 hours | $2,000 |
| Project manager | Phil | Client liaison | $100.00 | 40 hours | $4,000 |
| TOTAL | | | | | $33,200 |
The scoping study started off with a three-day requirements workshop in a conference room with a printing whiteboard. The focus was on mapping out exactly how the new application was going to work. The four developers sat around the table with Emily, and they helped her lay out the data fields and buttons for each of the new screens, and step through each of the use cases for the new system. They took turns writing up notes about the discussion.
On Thursday the developers gathered in the same room, this time without Emily, to create a high-level design for the system. As the expert on software architecture, Angela led the discussion, but she tried to include all of the developers by asking them questions related to their areas of expertise. The architecture for the system was sketched on the whiteboard in a series of UML diagrams.
Govind and Rauna then worked together to develop a small piece of functionality that passed some data all the way from the front-end user interface to a back-end web service. They expected to complete this work within a week, but the actual pace they achieved and the problems they encountered would provide valuable information as to how fast the remaining work could be expected to progress.
Meanwhile, Angela and Karl concentrated on writing up the results of the week's discussions. Karl wrote up the use cases as a series of documents, and created mock-up screens for the new application with the team's software development tools. Angela spent the time writing up the architecture document. She used a UML drawing program for the diagrams, and added some text that explained why the architecture was the way it was. There was also discussion of the main technical risks, which Angela had spent some time researching and, wherever possible, resolving.
She also broke down Emily's requirements into a list of 73 individual features, and gave initial estimates for each of these features of between one and five units of work. Each unit initially represented a day's work for a developer, but this conversion factor could be adjusted if the overall pace was found to be faster or slower. This meant that the estimates for individual features wouldn't have to be changed if the pace varied. Features that came out as larger than five units were broken down still further.
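Angela's unit scheme separates feature size from team pace: if the pace turns out to be faster or slower, only the conversion factor is adjusted, and the per-feature estimates stand. A minimal sketch of the arithmetic, with invented feature names and estimates:

```python
# Illustrative sketch of unit-based estimation: each feature gets an
# estimate in abstract units, and a single conversion factor maps
# units to developer-days. If the team's pace changes, only the
# factor is adjusted; the per-feature estimates stay the same.

features = {
    "enter invoice": 3,    # units (hypothetical features)
    "print statement": 2,
    "export to ledger": 5,
}

def total_effort_days(features, days_per_unit):
    """Convert unit estimates into developer-days of effort."""
    return sum(features.values()) * days_per_unit

print(total_effort_days(features, 1.0))   # 10.0 days at the initial pace
# If the team proves 25% slower, only the conversion factor changes:
print(total_effort_days(features, 1.25))  # 12.5 days
```
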
Project Planning Meeting
On Friday the developers got back together with Emily and Phil to go over the requirements and the estimates. After Karl demonstrated the mock application, the developers asked Emily and Phil to divide up the feature list into must-do, should-do, and could-do features.
"They're all must-do features," said Emily, "We need all of these features in the application."
"We define must-do a little differently from that," replied Rauna. "A must-do feature is one where if it's not there, then the application is of absolutely no use to anyone, and there's no point in deploying it."
"What about should-do and could-do?" asked Phil.
"A should-do feature is one where you can quantify, or at least point to, the feature's business value: it has to directly save you money. A could-do feature is one that has no business value of its own, but which helps you to use the features that do," said Rauna.
After some discussion, the team members were able to divide up the feature list as follows:
- Must-do: 29 features
- Should-do: 23 features
- Could-do: 21 features
- Total: 73 features
"Are you guys OK with the estimates I came up with?" asked Angela.
"I think the estimates for the web services need to be increased," replied Govind. "From the work we've been doing, it looks like we'll need to coordinate the database and web service updates with transactions. But web services don't support transactions yet, so we'll have to use a workaround where we create an 'undo' web service that is called whenever a transaction is aborted."
"How much is that going to increase the estimates by?" asked Angela.
"We'll have to create an undo web service for each functional web service, which will double the amount of work we have to do in that area," said Govind.
"OK. Is there anything else?" asked Angela.
"Well, some of the screens are wrong," said Emily. "I've marked the changes on these printouts. We'll have to add some fields and change the names of some others. Also, these two screens have to be combined into one."

"That doesn't look too major. Apart from reworking the bits that Govind and Rauna have already finished, we can probably do the rest in the same time as before," said Karl.
With these changes, the estimates came out as:
- Must-do: 83 units
- Should-do: 56 units
- Could-do: 58 units
- Total: 197 units
"Over the last few projects, the team has averaged 20 units per week—or perhaps half that during the Elaboration phase—so we're looking at about ten weeks' work here," said Angela. "Govind and Rauna completed features worth 11 units last week, so the estimated pace is about right for this project. We'll need an Elaboration iteration of about 20 units, and then maybe four Construction iterations of 40 units each. That adds up to 191 units. Can we trim a couple of features to reduce the scope by 6 units? Otherwise we'll need to allow for an extra iteration."
"I think that we can lose these two features. The users don't need to resize the data entry windows if they're sensibly laid out, and pop-up help is not essential if the users are properly trained," said Phil.
"I agree: they're not quite as important as the rest. I can go along with that, so long as we don't lose any more," said Emily.
"OK. That gives us a project plan that looks like this," said Angela as she passed around copies of Table 7-2 and Figure 7-1.
Table 7-2. The Duration, Scope, and Cost of the Project's Phases
| Phase | Iterations | Duration | Scope (units) | Cost |
|---|---|---|---|---|
| Scoping study | 1 | 2 weeks | 11 | $33,200 |
| Elaboration | 1 | 2 weeks | 20 | $33,200 |
| Construction | 4 | 8 weeks | 160 | $132,800 |
| Contingency | 2 | 4 weeks | | $66,400 |
Figure 7-1. The overall project plan
"Why do we need so much contingency?" asked Phil.
"We actually need more than this. At the product definition stage, estimates are only accurate to a factor of 2, so we should allow at least 50 percent contingency. But if we can make all of the could-do features optional, then that gives us another 37 percent contingency. Combined with the two extra iterations, that's a total contingency of 68 percent, which should be more than enough," said Angela (see Figure 7-2).
Figure 7-2. The shaded area represents the total contingency for the project. In addition to the time allocated to the essential must-do and should-do functionality, another 68 percent can be used for resolving problems.
"We still want you to include all of the could-do features," said Emily.
"Yes. That's why the contingency iterations are there. I'm confident that the estimates are within 30 percent of what they should be, so if there are any overruns, then we can still complete all the features with only one or two extra iterations. However, and I think this is very unlikely, if something does go badly wrong, then at least we'll be able to create software that still meets your most pressing needs," said Angela.
"I think I can get Salim to give us the go-ahead for that. The budget is very close to what he was looking for," said Phil.
"We'll see you on Monday then," said Angela.
The first task was to set up the team's office for pair programming (Figure 7-3). Instead of the usual corner desks, the team had requested straight worktables. They put two of these back to back, each with a workstation and two chairs for a pair of developers. The third table was placed end-on to these two, so Emily or Phil could work alongside the team whenever they needed to. The development team faced each other as they worked, which made their discussions easier.
Figure 7-3. The layout of the team's office
After that, they set up the development environment for the project, including a source code repository and an automated build script. Govind and Rauna merged their work and Karl's mock-up screens into this structure. The existing unit tests were included in the build script. The team also decided on a process configuration for RUP, but this didn't take long as they could reuse one they'd already used successfully on several projects of about the same size and duration.
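The build script's role can be sketched in miniature. A real script would also compile and package the application, and would discover every test in the repository rather than the single stand-in test case below:

```python
# Miniature sketch of the automated build step: run the unit tests
# and report the build as broken unless every test passes. The test
# case here is a stand-in for the team's real unit tests.

import unittest

class InvoiceTotalTest(unittest.TestCase):
    def test_total(self):
        self.assertEqual(sum([100, 250]), 350)

def run_build():
    """Run the unit tests; the build succeeds only if all of them pass."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(InvoiceTotalTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

print(run_build())  # True
```
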
For this iteration, Karl decided to work with Govind, and Rauna with Angela. Between them, they divided up some of the highest-risk, must-do features so that each pair of developers was assigned ten units of work. This work included the undo web service for Govind and Rauna's already completed web service. The remaining time was allocated to analyzing the effects of the inevitable change requests on the requirements and the high-level design.
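Govind's undo web service is essentially a compensating-action pattern: because the web services can't take part in a database transaction, every forward call is paired with an "undo" call that reverses it if the transaction aborts. A minimal sketch, with invented service names standing in for the real web service calls:

```python
# Sketch of the 'undo web service' workaround: each forward call is
# paired with a compensating call that reverses it on abort.

class ServiceError(Exception):
    pass

def run_with_compensation(steps):
    """Run (action, undo) pairs; on failure, undo completed steps in reverse."""
    completed = []
    try:
        for action, undo in steps:
            action()
            completed.append(undo)
    except ServiceError:
        for undo in reversed(completed):
            undo()  # invoke the paired 'undo' web service
        raise

log = []

def post_invoice():       log.append("post invoice")
def undo_post_invoice():  log.append("undo post invoice")
def update_ledger():      raise ServiceError("ledger update failed")
def undo_update_ledger(): log.append("undo update ledger")

try:
    run_with_compensation([(post_invoice, undo_post_invoice),
                           (update_ledger, undo_update_ledger)])
except ServiceError:
    pass

print(log)  # ['post invoice', 'undo post invoice']
```

Note that only the steps that completed are compensated; the failed step itself has nothing to undo.
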
In the meantime, Emily began working on a set of acceptance tests for the system, based on the use cases that Karl had written up. The first one that she wrote was for the functionality that Govind and Rauna had already completed. When she tried it out, she found that it didn't work the way she expected it to. She looked up from her screen and said, "Hey guys, this first screen doesn't work properly. I've just tried to copy some lines from a spreadsheet to paste them into this screen, but it doesn't put them in the right fields. It all ends up in the first field."
"Is that how it's supposed to work? I don't think we covered that in the use cases," said Karl.
"It has to work like that," replied Emily, "I can't copy over one number at a time. That'll take forever. The whole point of this software was to save us time."
"That's OK," said Angela. "We can put this in as another feature. Karl, can you work with Emily to update the use cases, and can you also estimate how much extra work will be required to put this feature into the system?"
"Sure. I can start on that right away," said Karl.
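The behavior Emily wants is straightforward to describe: spreadsheet clipboard data arrives as tab-separated columns and newline-separated rows, and the screen should split it into a grid of fields rather than dump the whole block into the first one. A minimal sketch of the parsing (illustrative, not the project's actual code):

```python
# Sketch of the paste feature Emily asked for: split spreadsheet
# clipboard data into rows and columns instead of treating it as
# one long string.

def parse_clipboard(text):
    """Split pasted spreadsheet data into rows of field values."""
    return [line.split("\t") for line in text.splitlines() if line]

pasted = "INV-001\t150.00\nINV-002\t275.50\n"
print(parse_clipboard(pasted))
# [['INV-001', '150.00'], ['INV-002', '275.50']]
```
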
As she worked her way through the acceptance tests, Emily found more and more "bugs" in the system, which were really features that hadn't been thought of yet. The developers documented each one carefully, and assessed the impact of each change. By the end of the first iteration, they had 13 new features that came to 34 units of additional work. They discussed this with Phil at the two-week iteration review meeting.
"We've got two options," said Angela, "We can either use the project's contingency to add another Construction iteration for this work, or we can trade off these new features against the lower-priority features that we identified."
"Why don't we do both?" asked Phil. "Why don't we trade off the new must-do and should-do features against the old could-do features, and then decide after the fourth Construction iteration whether we want to add a fifth iteration for the remaining could-do features? At that point we'll know whether we're running behind schedule, and whether we can afford to use up the contingency time."
"I can go along with that," replied Emily, "but only if you promise that we will do that extra iteration if we're not too far behind. We still need those features in the software."
"What was the progress for this iteration?" asked Phil.
"We finished 7 features that added up to 18 units," said Govind.
"That's a bit slow. We planned 20 units this iteration, didn't we?" asked Phil.
"Yes, but we'd expect some variation, because the estimate for each feature might be off by a few percent. We've done 29 units against a plan of 31 at this point, so I still think that our estimates are broadly correct," said Angela.
"Are there any other issues?" asked Phil.
"Well," said Govind, "I had some difficulties getting the default .NET web service interfaces to work with some of the accounting applications. They need their data formatted in a strange way. I could get it to work in the end, but I had to create the interfaces by hand."
"What does this mean for the project?" asked Phil.
"It'll take a bit longer to create the interfaces manually, but not too much. I suggest that we add two more units of work to the estimates. Also, I'm the only one who knows how to create these interfaces, so I'll have to pair with anyone who has to work on one of them. This might disrupt our pair programming rotation a bit," said Govind.
"We can work around that, though," said Angela.
The team continued to make steady progress (Table 7-3).
Table 7-3. Progress During the Project's Iterations
| Iteration | Expected Progress (units) | Expected Total (units) | Actual Progress (units) | Actual Total (units) |
|---|---|---|---|---|
| Construction 5 | | | 37 | 209 |
Phil was concerned when the team slipped behind schedule by 5 units in the first Construction iteration, and he became even more worried when he saw this trend increasing in the next iteration. He scheduled a private meeting with Angela to discuss the issue.
"What's the problem, Angela? Why aren't you meeting your targets?"
"To be honest, Phil, I'm not sure," she replied, "The guys haven't encountered any significant problems so far. It's possible that the estimates were just a little on the low side. Also, because we defined the features from a user perspective, the earlier features required more of the underlying infrastructure to be built, and that may be why this work is taking longer than expected."
"How are you planning to make up the time?"
"Phil, the figures we gave you were estimates—not targets," she said, "A realistic estimate is just as likely to be too small as it is to be too big. We made a commitment to develop this software in the most efficient way possible, and we assured you that the development wouldn't overrun its contingency. And it won't. At this rate we'll still be well within our overall budget."
"So what do you suggest we do?"
"We still have to see, but I think we're going to need that fifth iteration after all."
With each iteration, the software that Emily was testing became more and more complete. Whenever she got a new version, she ran through her acceptance tests to confirm that all of the new features worked as expected. She also spent time playing with the new system, trying a variety of tasks in different ways, and by doing so she uncovered a few small but significant bugs.
As before, most of the bugs she reported were actually changes to the requirements, but the major ones had already been identified and superficial changes could be accommodated within the existing estimates. The developers worked closely with Emily to ensure that each new feature that they tackled worked exactly the way she wanted it to.
Construction Iteration 5
At the close-out meeting for the fourth Construction iteration, the team had to decide whether to go ahead with a fifth iteration.
"How are we doing overall?" asked Phil.
"We've finished 66 features out of 72, and there are 6 features remaining, which come to 19 units in total," said Angela. "There are also a few outstanding bugs that Emily found, but most of them are quite minor. The formatting on some of the screens and reports can get stuffed up if the data items are too long, but that should only take a few hours to fix. Apart from that, the software passes all of our unit tests, and all of Emily's acceptance tests."
"It sounds like both the software and the project are in good shape. I think we can use up some of that contingency time. We had 36 units of could-do features that were traded off in the Elaboration phase. Do you think you can finish half of those, plus the remaining features, in one more iteration?" asked Phil.
"I'm confident that we can do at least some of them," replied Angela.
"Let's plan to do 32 units in this iteration—that seems safe in view of the pace so far—and then we'll see what else we can get done. Emily, you'll have to update the acceptance tests to include these new features. If we could have them by the end of next week, then that'll give us enough time to ensure that everything is working properly before the end of the iteration."
The burn-down chart displayed on the office wall was extended to show the new iteration and the additional scope (Figure 7-4).
Figure 7-4. The pace of work during the project
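The bookkeeping behind a burn-down chart like this is simple: after each iteration, plot the total scope minus the units completed so far. A small sketch with illustrative figures, not the project's actual data:

```python
# Sketch of burn-down chart data: the remaining work after each
# iteration is the total scope minus the units completed so far.

def burn_down(total_scope, completed_per_iteration):
    """Return the units of work remaining after each iteration."""
    remaining, points = total_scope, []
    for done in completed_per_iteration:
        remaining -= done
        points.append(remaining)
    return points

print(burn_down(100, [18, 20, 22]))  # [82, 62, 40]
```
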
The developers got through the planned 32 units quickly, so during the second week they asked Emily which additional features she would most like to see implemented. She picked out 2 more features that were worth 5 units together, bringing the iteration's total to 37 units of completed work.
At the close-out meeting for the fifth Construction iteration, Emily argued strongly for a sixth iteration to complete all of the remaining features. "These were requirements that we said we needed right at the beginning of this project. We still have time on hand. Why don't we just do them too? And there are a few more things that I've come up with that we should include too."
Phil disagreed. "I'm sorry, but I'd rather keep some time in hand in case there are any problems during the Transition phase. What happens if the beta test goes badly, and we need to fix a whole lot more bugs? No. We decided that these features were the least important ones, and I don't think we'll miss much if they're not included in the final version of the system."
"So what's the plan for the Transition phase?" he continued.
"Angela and Govind will spend the next few days closing out the remaining defects and putting everything in order," said Rauna. "Karl and I will be in charge of training. We'd like to train your help desk on Monday, the system administrators on Tuesday, and the data entry operators on Wednesday. We're planning to do a morning session and an afternoon session on each of these days, so you don't have to leave any of those areas unattended. We'll be taping each of these sessions, so anyone who's away or sick on those days can catch up. And also any new hires, of course."
"And after that?" asked Phil.
"The big show-and-tell is on Thursday," replied Angela. "I hope you remembered to invite all the bigwigs. Then, all going well, we take Friday off and deploy the new system into production over the weekend."
By Thursday, Angela and Govind had a zero-defect version of the software ready for the executive meeting. The demo went flawlessly. Both Salim and the company's CEO, Cathy, were impressed. They were happy to sign off acceptance for the new system.
The next week was the beta test period. There was nothing for the developers to do but answer queries about how to use the new system (there were lots of these on Monday), sort out problems, and wait for bug reports. There were quite a few bugs reported. Some were misunderstandings about how the software was supposed to work. Others were suggestions for new features and changes that were carefully documented; a few of these deserved further investigation. They eventually ended up with just two new bugs that needed fixing, and Govind and Karl volunteered to work on these.
Angela and Rauna took the Wednesday off, as they had to come back the following weekend to install the final version of the software. On Friday there was a project close-out meeting where the outstanding issues were aired— these were mainly suggestions for new features—and the project was declared a success. Phil opened a bottle of champagne to celebrate, and afterwards the developers went back to clean out their office.
After the project, Phil went over the financials one last time. The project had gone very smoothly, and only half the planned project management time had been used, which saved about $16,000. The project had come in early by two weeks, and that saved a further $33,200. The overall cost of the project was just over $250,000 (Figure 7-5), and the payback period was now just 20 months.
Figure 7-5. The savings from the original budget
The users were mostly happy with the new system, although there was a growing list of suggestions for additional features. Salim didn't see these as high priority, though, because most of them offered no direct financial benefit. Cathy was very pleased with the results of the project. "I think that there may be a bonus in this for you guys," she said to Salim and Phil.
Why did this case study succeed when the previous one in Chapter 4 failed? If we compare this case study to the list of invalid assumptions identified in Chapter 3, we can see that the techniques used by the team helped them to avoid these assumptions, and thereby achieve a better result:
- Scope can be completely defined.
- Scope definition can be done before the project starts.
The scope of the project was reevaluated and adjusted after each iteration. Moreover, the developers worked alongside the customer representative (Emily), and could ask her to clarify details whenever necessary. The team used triage and feature trade-off to ensure that the total quantity of work did not overwhelm the budget.
- Software development consists of distinctly different activities.
- Software development activities can be sequenced.
- Team members can be individually allocated to activities.
The team's development process combined design, construction, and testing, so the design could be refined as required and the software could be tested from day one. The developers collaborated on gathering requirements, defining the architecture, and producing estimates, so everyone had an opportunity to ask questions and make suggestions. Communication was abundant and effective.
- The size of the project team does not affect the development process.
The team was very small, and the developers were able to adopt a very informal and efficient development process. They used a Rational Unified Process configuration that had been customized for the size of the team, and adapted the Extreme Programming practices for their circumstances.
- There is always a way to produce meaningful estimates.
- Acceptably accurate estimates can be obtained.
- One developer is equivalent to another.
Using a SWAT team allowed the estimates to be based on the results of the team's previous projects, making them more accurate. The amount of contingency reserve—partly based on triage—was more than adequate for the degree of inaccuracy in the estimates. The scoping study also gave the developers an opportunity to check their expected level of productivity against the project's specific circumstances.
- Metrics are sufficient to assess the quality of software.
The developers continually assessed the quality of each other's code during pair programming. Automated unit testing helped the developers become aware of new bugs very rapidly, so the software maintained a low level of defects. Ongoing acceptance testing by the end user ensured that the usability and functionality of the software were also assessed.
Chapter 8: Afterword
A software project can fail even before it starts. It can fail just because of the way it has been organized and set up. Often it's impossible to find out whether a project has failed until just before it's due to end. Only when the software is ready for testing and deployment does its poor quality become apparent.
The advantage of iterative development is that each iteration provides another chance to find out what's going wrong, and another chance to put it right. However, to do iterative development properly, you must make dramatic changes to the way you manage your projects. Many project management best practices just won't work anymore.
Software is strange stuff: it's complex, abstract, and fluid. It helps to have a deep understanding of the peculiar nature of software when you're planning a software development project. Non-technical people—customers and project managers—often haven't had the kind of hands-on experience that's needed to achieve this level of understanding. However, the technical members of the team—the developers, architects, and analysts—often lack the business and management skills needed to successfully organize a project.
The conceptual gap between the technical and non-technical members of a software development team is the most obvious reason why software projects fail. The communication of requirements from customers to developers is a common source of problems, as is the communication from developers to customers of the repercussions of those requirements.
Developers often concentrate on technology to the exclusion of everything else, and they invariably propose technical solutions to non-technical problems. They need a detailed understanding of the business issues that the software is intended to address, and they should be asked to think about what they can do to help the project run more smoothly.
Developers can also bring essential input to the project planning process. All too often, developers are brought onto the team only when the project has been completely planned. Developers are the experts in software development. If their input has been ignored, then how realistic will the plan be?
The key to software development success is frequent, ongoing communication between the developers, the customer, and the project manager throughout the project, with regular opportunities to confirm understanding and give feedback. By making use of the techniques discussed in this book, you can improve your team's communication, and ensure that your software projects succeed.