Apples and Oranges work in Fruit Salad, not S/W Measurement!


A colleague once observed at a professional conference that “Common sense is not very common” – and when it comes to the typical approach to software measurement, I have to agree.

Case in point: there are proven approaches to software measurement, such as the Goal/Question/Metric (GQM) approach developed by Victor Basili and colleagues, and Practical Software & Systems Measurement (PSM) out of the U.S. Department of Defense. Yet corporations often approach metrics haphazardly, as if they were making a fruit salad.  While a variety of ingredients works well in the kitchen, data that seem similar (but really are not) can wreak havoc in corporations.  Common sense should tell us that success with software metrics depends on having comparable data.
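To make the contrast concrete, here is a minimal sketch of what a GQM breakdown can look like. The goal, questions, and metrics below are hypothetical illustrations of the structure, not a prescribed template:

```python
# A minimal, hypothetical sketch of a Goal/Question/Metric (GQM) breakdown.
# Every entry here is an illustrative placeholder, not a prescribed template.
gqm = {
    "goal": "Improve the quality of delivered software",
    "questions": {
        "How many defects escape to production, relative to size?": [
            "defects found in production",
            "project size in function points",
            "defect density (defects per function point)",
        ],
        "Is quality improving from release to release?": [
            "defect density trend across releases",
        ],
    },
}

# Walk the hierarchy: every metric exists only to answer a question,
# and every question exists only to serve the stated goal.
print("Goal:", gqm["goal"])
for question, metrics in gqm["questions"].items():
    print("  Question:", question)
    for metric in metrics:
        print("    Metric:", metric)
```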

If only data were like fruit

If it were, it would be easy to pinpoint the mangoes, apples, oranges, and bananas in company databases and save millions of corporate dollars.

Most Metrics Programs don’t Intend to Lie with Statistics, but many do…

I do not believe that executives and PMOs (project management offices) have malicious intent when they start IT measurement and benchmarking initiatives.  (Sure, there are those who use measurement to advance their own agendas, but that is a topic for a future post.)

Instead, I believe that many people trivialize the business of measurement, assuming that measurement is easy to do once someone directs people to do it.

The truth is that software measurement takes planning and consideration to get it right.  While Tom DeMarco’s quote

“You can’t control what you can’t measure”

is often used to justify measurement start-ups, his later observations countered it.

In his 1995 essay “Mad About Measurement,” DeMarco states:

“Metrics cost a ton of money.  It costs a lot to collect them badly and a lot more to collect them well…Sure, measurement costs money, but it does have the potential to help us work more effectively.  At its best, the use of software metrics can inform and guide developers, and help organizations to improve.  At its worst, it can do actual harm.  And there is an entire range between the two extremes, varying all the way from function to dysfunction.”

It is easy to Get Started in the Wrong Direction with Metrics…

Years ago, I was working with a team to start a function-point-based measurement program (function points are like “square feet” for software) at a large Canadian utility company when an executive approached me.  “We don’t need function points in my group,” he remarked, “because we have our quality system under control just by tracking defects.” As he described what his team was doing, I realized that he was headed in the wrong direction, without a clue that he was doing so.

The executive and his group were tracking defects per project (not a bad thing in itself) and then interviewing the top- and bottom-performing teams about their defect levels.  Once the teams realized that those who reported high defect levels were scrutinized, the team leads discovered two “workarounds” that kept them out of the spotlight (without having to change anything they actually did):

1. Team leads discovered that there was no consistency across teams in what constituted a “defect” (an apples-to-oranges comparison), so every team reported defects differently.  Several “redefined” the term to match what they thought other teams were reporting, and their defect numbers promptly went down.

2. Team leads realized that the easiest way to reduce the number of defects was to subdivide a project into mini-releases.  Because project size is a contributing factor (larger projects produce more defects), smaller projects naturally yielded lower raw defect counts, and it was easy to shrink the numbers simply by shrinking the projects.

As the months went by, the executive observed that the overall number of defects reported per month went down, and he declared the program a grand success.  While measurement did cause behavioral changes, those changes were superficial and simply altered the reported numbers.  If the program had been properly planned, with goals, questions, and consistent metrics, it would have had a real chance of success using defect density (defects per unit of size, such as function points); the sketch below shows why.  Improvements to the processes in place across teams could then have made a positive impact on the work!
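To see why defect density would have resisted the mini-release workaround, here is a minimal sketch with hypothetical numbers (the 1,000-function-point project and its defect counts are invented for illustration):

```python
# Hypothetical numbers illustrating workaround #2: splitting a project into
# mini-releases lowers raw defect counts without improving anything, while
# defect density (a size-normalized metric) is not fooled.

def defect_density(defects, function_points):
    """Defects per function point -- defects normalized by project size."""
    return defects / function_points

# One 1,000-function-point project with 50 reported defects:
print(defect_density(50, 1000))    # 0.05 defects per function point

# The same work split into five 200-FP mini-releases, 10 defects each:
for _ in range(5):
    print(defect_density(10, 200))  # still 0.05 defects per function point

# Raw defects per release fell from 50 to 10 -- the number the executive
# watched -- but density is unchanged: the underlying process did not improve.
```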

Given solid comparable metrics information, the executive could have done true root cause analysis and established corrective actions together with his team.

Instead, the program evaporated with the executive declaring success and the workers shaking their heads at the waste of time.

This was a prime case of “metrics” driving (dysfunctional) behavior, and of dollars poorly spent.

Keep in mind that Apples and Oranges belong together in Fruit Salad

They do not belong together in software measurement programs.

Call me or leave a comment if you’d like more information about doing metrics RIGHT, or to have me stop by your company and talk with your executives BEFORE you start down the wrong measurement road!

Have a (truly) productive week!

Carol
