The "Holy Triangle" seems to exist in all workplaces that use any kind of technology these days. Our agency administration (young, under 30) just put all the field staff (who are 80% over age 45) on the latest technology, and the 4 hours of training neglected to consider that most staff were not technology- or software-savvy...it was a nightmare. Communication with the consumer (the staff) prior to the development and purchase of new technology would have saved a lot of headaches...and money!
The opening paragraph of your article says 31% of projects will be cancelled; then, a few lines later, it says 23%.
You are in fact referring to two different Standish Group reports - one from 1994, and one that appears to have been conducted in 2000. As there appears to have been a marked improvement over those six years, you should probably point that out.
I also believe the Standish Group publishes yearly reports, so it would have been interesting to pull in figures from the very latest research instead of using figures that are over a decade old. A discussion of what has changed would be really interesting.
Although I voted this article a '4', I should have given it a '5'. The only thing holding me back was that the author didn't touch on a very important core reason behind the lack of skilled resources. This has to do with the drive to leverage low-cost labor. The primary sources of this low-cost labor are underdeveloped countries and college campuses. Yes, I am a 48-year-old programmer from way back, and through very hard work and a couple of guardian angels I have been able to keep my job at a Fortune 500 company. But the truth is that companies are continuously replacing highly paid, skilled programmers with low-cost 'alternatives'. That is THE reason that the work force is 'young'.
"Cost overrun is common in infrastructure, building, and technology projects. One of the most comprehensive studies of cost overrun that exists found that 9 out of 10 projects had overrun, overruns of 50 to 100 percent were common, overrun was found in each of the 20 nations and five continents covered by the study, and overrun had been constant for the 70 years for which data were available. For IT projects, an industry study by the Standish Group (2004) found that average cost overrun was 43 percent, and 71 percent of projects were over budget, over time, and under scope."
That paragraph would even seem to suggest that IT projects are better than average at only 43% overrun rather than 50-100%!
You might also find this BBC article, "Why Do Costs Overrun?", interesting. It discusses why big government projects (the Olympics, the Millennium Dome, the Eurofighter, the National ID Database, etc.) go over budget, and also whether it is even feasible to set a budget at all, given the large number of variables and the difficulty of knowing what might happen in the future - sound familiar?
I think the difference is that the software industry still hasn't learned that Fred Brooks was right when he said there were no silver bullets. Complexity happens, and just using the waterfall/CMM/Rational Unified Process/Agile/SCRUM/WhateverComesNext methodology will not avoid that complexity and guarantee you release on time, on budget, and with all the features.
So given that (a) we are no worse than big engineering projects, (b) complexity happens, and (c) there can be no silver bullet to remove it, why do we continue to pretend that it can be otherwise?
To quote the BBC article: "[Dr Will Jennings] believes the public need to be more "mature" in their attitude, accepting a certain amount of cost overrun as a price worth paying for something, such as the 2012 Olympics, that will bring social and economic benefits."
This does not discuss the main reason most software projects run into trouble. I'll quote Ken Schwaber in "Agile Software Development with Scrum" (pp. 24-25):
"I wanted to understand the reason why my customers' methodologies didn't work for my company, so I brought several systems development methodologies to process theory experts at the DuPont Experimental Station in 1995. These experts, led by Babatunde "Tunde" Ogunnaike, are the most highly respected theorists in industrial process control. They know process control inside and out. Some of them even taught the subject at major universities. They had all been brought in by DuPont to automate the entire product flow, from forecasts and orders to product delivery.
"They inspected the systems development processes that I brought them. I have rarely provided a group with so much laughter. They were amazed and appalled that my industry, systems development, was trying to do its work using a completely inappropriate process control model. They said systems development had so much complexity and unpredictability that it had to be managed by a process control method they referred to as "empirical." They said this was nothing new, and all complex processes that weren't completely understood required the empirical model. They helped me go through a book that is the Bible of industrial process control theory, Process Dynamics, Modeling, and Control [Tunde], to understand why I was off track.
"In a nutshell, there are two major approaches to controlling any process. The "defined" process control model requires that every piece of work be completely understood. Given a well-defined set of inputs, the same outputs are generated every time. A defined process can be started and allowed to run until completion, with the same results every time. Tunde said the methodologies that I showed him attempted to use the defined model, but none of the processes or tasks were defined in enough detail to provide repeatability and predictability. Tunde said my business was an intellectually intensive business that required too much thinking and creativity to be a good candidate for the defined approach. He theorized that my industry's application of the defined methodologies must have resulted in a lot of surprises, loss of control, and incomplete or just wrong products. He was particularly amused that the tasks were linked together with dependencies, as though they could predictably start and finish just like a well-defined industrial process.
"Tunde told me the empirical model of process control, on the other hand, expects the unexpected. It provides and exercises control through frequent inspection and adaptation for processes that are imperfectly defined and generate unpredictable and unrepeatable outputs. He recommended I study this model and consider its application to the process of building systems.
"During my visit to DuPont, I experienced a true epiphany. Suddenly something in me clicked and I realized why everyone in my industry had such problems building systems. I realized why the industry was in such trouble and had such a poor reputation. We were wasting our time trying to control our work by thinking we had an assembly line, when the only proper control was frequent and first-hand inspection, followed by immediate adjustments."
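The contrast Schwaber quotes can be sketched in a few lines of code. This is my own illustration, not anything from the book: `defined_process` runs an up-front plan to completion and inspects only at the end, while `empirical_process` builds one small increment per iteration and lets an `inspect` callback (here, any feedback function you supply) reshape the remaining backlog before the next iteration. All the names are hypothetical.

```python
def defined_process(tasks):
    # Defined model: execute the entire up-front plan; any inspection
    # happens only after everything has already been built.
    return [task() for task in tasks]

def empirical_process(backlog, inspect, iterations):
    # Empirical model: short iterations with inspect-and-adapt after each.
    done = []
    for _ in range(iterations):
        if not backlog:
            break
        increment = backlog.pop(0)()       # build one small increment
        done.append(increment)
        backlog = inspect(done, backlog)   # feedback reshapes remaining work
    return done

# Hypothetical usage: after the first demo, "export" turns out to be more
# urgent than "report", so the inspect callback reorders the backlog.
backlog = [lambda: "login", lambda: "report", lambda: "export"]
reprioritize = lambda done, remaining: sorted(remaining, key=lambda t: t() != "export")
print(empirical_process(backlog, reprioritize, iterations=3))  # ['login', 'export', 'report']
```

The point of the sketch is only that the empirical loop has a feedback path the defined model lacks; in the defined version there is no place for mid-course correction at all.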
My most successful project was implemented in 2-week iterations, with a fully-functioning demonstration at the end of every fortnight. The requirements were vague, and evolved by feedback from the demonstrations. We had no design spec up front, and with 2-week iterations, we could work out the design of the new blocks on a whiteboard and just get them done. With 2-week iterations with shippable code and a demonstration at the end of each, we were forced to keep on top of bugs. We were within 11 days of our 6-month deadline, which our Sales Manager later admitted he thought was impossible.
Many of the items in this article come out of the "software development is an assembly line" mistake that has been hurting our industry for decades. For example, items 10 and 11 assume the design and analysis can be done up front, whereas in reality you discover the design has to change as you go along, the market moves so the requirements change, and people don't really know what they want until they see it. To develop successfully, you need a process that still provides control but raises issues early, so they can easily be addressed.
My biggest failures came when I tried to follow the classic "analyse, design in blocks, integrate, test, deploy" model which this article seems to be advocating. My worst record here was 9 months late on a 6-month project.
I am not sure what caused the delay. What I am sure of is that it hurt Microsoft's credibility.
For all the other manufacturers (SW and HW), the impact was even more crucial in business terms.
The Vista case demonstrates the fragility the article presents, because it happened to a company with more than 60K employees.