CHAOS bloody CHAOS


I’ll admit that I’m one of the guilty parties when it comes to citing the statistics in the Standish Group’s annual CHAOS reports (did you know that the 2009 report downgraded the success rate of software projects to a mere 32% of all initiated projects, down from 34% in 2006?), but I’ve also been skeptical about the data behind the CHAOS reports.

When one considers the problems involved in early software estimating (inaccurate effort reporting on earlier projects, incomplete requirements, uncalibrated or inappropriate estimating models, statistics used without understanding, etc.; see related posts on this blog), I’ve often wondered how a project can be deemed successful when it “seemingly” finishes on time and on budget, measured against flawed estimates.

It always seemed akin to two wrongs making a right (finishing a project “on time” when “time” means a flawed estimate), so it was interesting to come across the following article challenging the very validity of the CHAOS reports: “The Rise and Fall of CHAOS Report Figures” by J. Laurenz Eveleens and Chris Verhoef of Vrije Universiteit Amsterdam, published in IEEE Software (Dec. 2009).

In it, the authors state:

“So, Standish defines a project as a success based on how well it did with respect to its original estimates of the amount of cost, time, and functionality.”

This would imply that changes to the original estimates (toward more accurate estimates as the project progresses) are not considered part of the success equation, which would effectively put every project with a change in scope, budget, or schedule into the “challenged” bucket. It leaves me to wonder: how on earth can software development achieve such a high percentage (32%) of projects that met their original (and often ignorant) target date and budget? I’m sure that there is more science involved in the CHAOS report (or I hope so), but the IEEE Software article leaves one questioning the numbers presented in such reports.
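
To make the implication concrete, here is a rough sketch (in Python, my own hypothetical illustration, not the Standish Group’s actual algorithm) of what classifying projects against their original estimates entails: any deviation in cost, schedule, or delivered functionality, however sensible the mid-course re-estimate, drops the project out of the “successful” bucket.

```python
# A hypothetical sketch of Standish-style project classification,
# based on the definition quoted above: success is judged against
# the ORIGINAL estimates of cost, time, and functionality.
# (My own illustration; not the Standish Group's actual method.)

def classify(original, actual):
    """Classify a project the way the quoted definition implies."""
    if not actual["completed"]:
        return "failed"        # cancelled or never delivered
    on_budget  = actual["cost"] <= original["cost"]
    on_time    = actual["months"] <= original["months"]
    full_scope = actual["features"] >= original["features"]
    if on_budget and on_time and full_scope:
        return "successful"
    return "challenged"        # ANY deviation from the first estimate

# A project re-estimated sensibly mid-stream still lands in the
# "challenged" bucket, because only the first estimate counts:
original = {"cost": 100_000, "months": 6, "features": 50}
actual   = {"completed": True, "cost": 120_000, "months": 8, "features": 50}
print(classify(original, actual))  # -> "challenged"
```

Under a definition like this, even a well-run project that was realistically re-estimated at the halfway point can never count as a success.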

Hmm… that suggests this isn’t so unusual for our industry. Isn’t there a research think-tank that used to issue predictions of trends with a certain level of feigned accuracy that never seemed to come true? (I remember reading something like “By 2007, 80% of measurement programs will be based on function points (prediction 90% accurate)” and anticipating escalating demand for FP training, which never happened.)

In IT we love numbers, predictions, and statistics about what will happen in our industry’s future, and somehow this passion for numbers often overrides good sense. It’s really just CHAOS bloody CHAOS, isn’t it?

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

For more information on northernSCOPE(TM) visit www.fisma.fi (English pages), and for upcoming training in Tampa, Florida, April 26–30, 2010, visit www.qualityplustech.com.
=======Copyright 2010, Carol Dekkers ALL RIGHTS RESERVED =======
