I confess, I am a software metrics ‘geek’… but I am not a zealot! I agree that we desperately need measures to make sense of what we are doing in software development and to find pockets of excellence (and opportunities for improvement), but it has to be done properly!
Most (process) improvement models, whether they pertain to software and systems, manufacturing, children, or people, attest to the power of measurement, including the CMMI® (Capability Maturity Model Integration) and SPICE (Software Process Improvement and Capability dEtermination) models.
But we often approach what seems to be a simple concept – “Measure the Work Output and divide it by the Inputs” – back asswards (pardon my French!).
Anyone who has been involved with software metrics, function points, or a CMMI/SPICE initiative gone bad can point to the residual damage of overzealous management (and their supporting consultants) leaving a path of destruction in their wake. I think that Measurement and IT are sometimes the perfect illustration of the term “Virtual Frenemies” (I’ll lay claim to that one!) when it comes to poorly designed software metrics programs. (The concepts can be compatible – but you need proper planning and open-minded participants! Read on…)
Wikipedia (yes, I know it is not the best source!) defines “Frenemy“ (alternately spelled “frienemy“):
is a portmanteau of “friend” and “enemy” that can refer to either an enemy disguised as a friend or someone who’s both a friend and a rival. The term is used to describe personal, geopolitical, and commercial relationships both among individuals and groups or institutions. The word has appeared in print as early as 1953.
Measurement as a concept can be good. Measure what you want to improve (and measure it objectively, consistently, and then ensure causality can be shown) and improve it.
IT as a concept can be good. Software runs our world and makes life easier. IT’s all good.
The problem comes in when someone (or some team) looks at these two “good” concepts, says “Let’s put them together,” makes the introduction, and then walks away. “Be sure to show us good results and where we can do even better!” is the edict.
Left to its own devices, measurement can wreak havoc and run roughshod over IT – the wrong things are measured (“just measure it all with source lines of code or FP and see what comes out”), effort is spent measuring those wrong things (“just get the numbers together and we’ll figure out the rest later”), the data doesn’t correlate properly (“now how can we make sense of what we collected?”), and misinformation abounds (“just plot what we have, it’s gotta tell us something we can use”).
In the process, the people working diligently (most of the time!) in IT get slammed by data they didn’t participate in collecting, and which often illustrates their “performance” in a detrimental way. Involvement in the metrics program design, on the part of the teams who will be measured, is often sparse (or an afterthought), yet the teams are expected to embrace measurement and commit to changing whatever practices the resultant metrics analysis says they need to improve.
This often happens when a single measure or metric is used across the board to measure disparate types of work (using function points to measure work that has nothing to do with software development is like using construction square feet to measure landscaping projects!).
Is it any wonder that the software and systems industries are loath to embrace and take part in the latest “enterprise-wide” measurement initiative? Fool me once, shame on you… fool me twice, shame on me.
What is the solution to resolving this “Frenemies” situation between Measurement and IT? Planning, communication, multiple metrics and a solid approach (don’t bring in the metrics consultants yet!) are the way.
Just because something is not simple to measure does not mean it is not worth measuring – and measuring properly.
For example, I know of a major initiative where a customer wants to measure the productivity of SAP-related projects to gain an understanding of how the cost per FP tracks on their projects compared to other (dissimilar) software projects and across the industry.
Their suppliers cite that Function Points (a measure of software functionality) do not work well for configurations (true) or integration work (also true), and that it can take a lot of effort to count FP for large SAP implementations (this can be true). However, that does not mean productivity cannot be measured at all! (If all you have is a hammer, everything might look like a nail.)
It will require planning and design effort to arrive at a measurement approach that tracks productivity equitably and consistently across these “unique” types of projects. While this is non-trivial, the insight and benefits to the business will far exceed the effort. Resistance on the part of suppliers to being measured (especially in anticipation of an unfair assessment based on a single metric!) is justified, but a good measurement approach (one that fairly sorts the types of effort into different buckets using different measures) is definitely attainable (and desired by the business).
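To make the “different buckets, different measures” idea concrete, here is a minimal sketch in Python. All bucket names, size units, and numbers below are hypothetical illustrations (not data from any real SAP program): the point is simply that each type of work gets its own size measure, and productivity is computed within a bucket, never across buckets.

```python
# Hypothetical project data: each bucket of work is sized in its own unit
# (new development in function points, configuration in configured objects,
# integration in interfaces) rather than forcing one metric on everything.
projects = [
    {"bucket": "new_development", "size": 120, "unit": "FP",         "effort_hours": 960},
    {"bucket": "configuration",   "size": 45,  "unit": "objects",    "effort_hours": 540},
    {"bucket": "integration",     "size": 12,  "unit": "interfaces", "effort_hours": 480},
]

def productivity_by_bucket(projects):
    """Return {bucket: (hours per size unit, unit)} -- lower is better."""
    totals = {}
    for p in projects:
        t = totals.setdefault(p["bucket"], {"size": 0, "hours": 0, "unit": p["unit"]})
        t["size"] += p["size"]
        t["hours"] += p["effort_hours"]
    return {b: (t["hours"] / t["size"], t["unit"]) for b, t in totals.items()}

for bucket, (rate, unit) in productivity_by_bucket(projects).items():
    print(f"{bucket}: {rate:.1f} hours per {unit}")
```

Comparing an 8-hours-per-FP number against a 40-hours-per-interface number would be meaningless; comparing this quarter’s configuration rate against last quarter’s, or against an industry benchmark in the same unit, is where the insight comes from.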
The results of knowing where and how the money is “invested” in these projects will lead to higher levels of understanding on both sides, and open up discussions about how to better deliver! The business might even realize where they can improve to make such projects more productive!
Watch my next few posts for further details about how to set up a fair and balanced software measurement program.
What do you think? Are measurement and IT doomed to be frenemies forever? What is your experience?
Have a good week!
- Estimating Before Requirements with Function Points and Other Metrics… Webinar Replay (musingsaboutsoftwaredevelopment.wordpress.com)
- Software Development Metrics that Matter (java.dzone.com)