Category Archives: Articles

Estimation Poker – Bluffing (and Winning) with Metrics


In May 2016, I presented a webinar for ITMPI on the topic of Estimation Poker, based on the broad topic of software project estimation – regardless of the development approach.  The webinar was well attended despite technical difficulties (I recorded it while in Italy and, suffice it to say, the internet connection at my site happened to be… less than optimal).  I re-recorded the webinar on my return (with far superior results) and the recording can be accessed at this link:  ITMPI Estimation Poker Webinar Re-Recording

A 10-minute teaser segment is on YouTube – Dekkers Estimation Poker teaser

I’ve also uploaded the full slide deck to ResearchGate – click Research Gate – Dekkers Slides to download.

Let me know what you think.  Note that this is different from Agile Estimation Poker (which, I had forgotten, was already established when I designed my webinar).

Have a great weekend!

Carol

 


QSM (Quantitative Software Management) 2014 Research Almanac Published this week!


Over the years I’ve been privileged to have articles included in compendiums with industry thought leaders whose work I’ve admired.  This week, I had another proud moment as my article was featured in the QSM Software Almanac: 2014 Research Edition along with a veritable who’s who of QSM.

This is the first almanac produced by QSM, Inc. since 2006, and it features 200+ pages of relevant, research-based articles drawn from the SLIM® database of over 10,000 completed and validated software projects.

Download your FREE copy of the 2014 Almanac by clicking this link or the image below.

[Image: 2014 QSM Software Almanac]

What Software Project Benchmarking & Estimating Can Learn from Dr. Seuss


Sometimes truth is stranger than fiction – or at least children’s stories – and I’m hoping you’ll relate to the latest blog post I published on the QSM blog last week.  I grew up on Dr. Seuss stories – and I think my four siblings and I shared the entire series (probably one of the first loyalty programs – buy the first 10 books, get one free…)

I’d love to hear your comments, and whether you agree with the analogy: in seeking to create precise software sample sets for benchmarking, we lose the benefits of the trends we could leverage with larger sample sets.  Read on and let me know!  (Click on the image below or here.)

Happy November!

Carol

[Image: Dr. Seuss]

IFPUG (News) Beyond MetricViews – FP for Agile / Iterative S/W Dev


With the support of QSM, Inc., I wrote and published this article on a new area of the International Function Point Users Group (IFPUG) website called “Beyond MetricViews.”

While IFPUG had already published guidelines in this area, the key points of this article include:

  • If you want to measure productivity (or anything else) consistently across two or more software development projects – where each was developed using a different approach (e.g., waterfall vs. agile) – you must be consistent in the definition and application of the measures (and metrics);
  • Function points are defined in terms of elementary processes, and agile methodologies deliver such functions iteratively (a function may not be complete within a single iteration) – posing challenges to the uninitiated;
  • Regardless of whether you measure productivity, defect density (quality), cost, or other aspects of software delivery – it is critical to make an “apples to apples” comparison.
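As a minimal sketch of the “apples to apples” point, the code below (project names and figures are hypothetical, for illustration only) normalizes a waterfall project and an agile project to the same unit – function points – before comparing productivity and defect density:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    approach: str         # e.g., "waterfall" or "agile"
    function_points: int  # functional size, counted the same way for both
    staff_months: float   # total effort
    defects: int          # defects counted using one shared definition

def productivity(p: Project) -> float:
    """Function points delivered per staff-month."""
    return p.function_points / p.staff_months

def defect_density(p: Project) -> float:
    """Defects per function point -- comparable across approaches."""
    return p.defects / p.function_points

# Hypothetical figures for illustration only.
projects = [
    Project("Billing rewrite", "waterfall", 400, 50.0, 120),
    Project("Customer portal", "agile", 300, 30.0, 90),
]

for p in projects:
    print(f"{p.name} ({p.approach}): "
          f"{productivity(p):.1f} FP/staff-month, "
          f"{defect_density(p):.2f} defects/FP")
```

The point of the sketch is that only because both projects are sized in the same units, with the same definitions of effort and defect, can the two numbers be compared at all.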

Here’s the article for your interest (click on the image).  (You can also visit the blog at www.qsm.com for details.)

[Image: IFPUG]

Comments and feedback are appreciated!

Latest installment of Ask Carol: No Matter What… in Project Management, Size Matters


Just wanted to share with you my latest installment on the QSM website blog – my “Ask Carol” advice column.  Enjoy!

[Image: Ask Carol – size matters]

Here is the link to the rest of the article:  http://www.qsm.com/blog/2013/ask-carol-no-matter-what-project-management-size-matters

Fundamentals of Software Metrics in Two Minutes or Less


 

To read more, click on the link:
http://www.qsm.com/blog/2013/fundamentals-software-metrics-two-minutes-or-less

If IT’s important – get a second (or third) opinion!


I’d like to share with you my latest post on the QSM (Quantitative Software Management) blog – let me know what you think!

-Carol

This is the first in my new series – here’s the URL:

http://www.qsm.com/blog/2013/ask-carol-if-its-important-get-second-opinion

A Lean Journey: 10 Characteristics of a Good Measure and 7 Pitfalls to Avoid


A Lean Journey: 10 Characteristics of a Good Measure and 7 Pitfalls to Avoid.

Common-sense Leadership: Respond not react…


A big benefit of teaching leadership and communication workshops to adult professionals is continuous learning: every time I teach a class, new revelations come into focus.

One such “aha” moment (where one realizes something that may not have been obvious before) is that Leadership is really about learning to Respond to a situation or stimulus instead of automatically Reacting.  Why is this important?  Responding is the thought-intensive process of actively listening, pausing, and then gathering one’s thoughts before speaking.  Gathering one’s thoughts engages the neocortex (the reasoning part of the brain), whereby we override the reptilian (instinctual) brain and the limbic (emotion-driven) brain, and hopefully create a response less prone to immediate, automatic reactions based on instinct or emotion.

Considering how eastern cultures (such as Japan’s) seem to habitually pause before asking questions at a conference or before coming to an agreement gave me “pause” to reflect on how this practice conveys power and respect – one often used by practiced politicians at press conferences.  The result is less “eating one’s premature words” and less damage control than when one speaks too hastily or without due thought.

This is a common-sense tip on how to practice better leadership in your own workplace, no matter your position: remember and practice active listening (if you are thinking of what you are going to say, you are not listening!), pause, gather your thoughts (perhaps even saying “please give me 15 seconds to gather my thoughts”), and then thoughtfully respond.

Food for thought – what do you think?  Could this be helpful in your workplace?

Carol

Apples and Oranges work in Fruit Salad, not S/W Measurement!


A colleague once observed at a professional conference that “Common sense is not very common” – and when it comes to the typical approach to software measurement, I have to agree.

Case in point – there are proven approaches to software measurement (such as Goal/Question/Metric, developed by Victor Basili and colleagues, and Practical Software & Systems Measurement out of the Department of Defense) – yet corporations often approach metrics haphazardly, as if they were making a fruit salad.  While a variety of ingredients works well in the kitchen, data that seem similar (but really are not) can wreak havoc in corporations.  Common sense should tell us that success with software metrics depends on having comparable data.

If only data were like fruit – it would be easy to pinpoint the mangoes, apples, oranges, and bananas in company databases and save millions of corporate dollars.

Most Metrics Programs don’t Intend to Lie with Statistics, but many do…

I do not believe that executives and PMOs (project management offices) have malicious intent when they start IT measurement and benchmarking initiatives.  (Sure, there are those who use measurement to advance their own agendas, but that is the topic of a future post.)

Instead, I believe that many people trivialize the business of measurement, thinking that measurement is easy to do once someone directs people to do it.

The truth is that software measurement takes planning and consideration to get right.  While Tom DeMarco‘s quote

“You can’t control what you can’t measure”

is often used to justify measurement start-ups, his later observations countered it.

In his 1995 essay, “Mad About Measurement,” DeMarco states:

“Metrics cost a ton of money.  It costs a lot to collect them badly and a lot more to collect them well…Sure, measurement costs money, but it does have the potential to help us work more effectively.  At its best, the use of software metrics can inform and guide developers, and help organizations to improve.  At its worst, it can do actual harm.  And there is an entire range between the two extremes, varying all the way from function to dysfunction.”

It is easy to Get Started in the Wrong Direction with Metrics…

Years ago, I was working with a team to start a function point-based measurement program (function points are like “square feet” for software) at a large Canadian utility company, when an executive approached me.  “We don’t need function points in my group,” he remarked, “because we have our quality system under control just by tracking defects.”  As he described what his team was doing, I realized that he was swimming upstream in the wrong direction, without a clue that he was doing so.

The executive and his group were tracking defects per project (not a bad thing) and then interviewing the top- and bottom-performing teams about their defect levels.  Once the teams realized that those who reported high defect levels were scrutinized, the team leads discovered two “workarounds” that would keep them out of the spotlight (without having to change anything they did):

1. Team leads discovered that there was no consistency in what constituted a “defect” across teams (an apples-to-oranges comparison).  Several “redefined” the term to match what they thought others were reporting, so that their team’s defect numbers would go down.

2. Team leads realized that the easiest way to reduce the number of defects was to subdivide the project into mini-releases.  Smaller projects naturally resulted in fewer raw defects.  With project size being a contributing factor (larger projects = more defects), it was easy to reduce defect numbers by reducing project size.

In the months that ensued, the executive observed that the overall number of defects reported per month went down, and he declared the program a grand success.  While measurement did cause behavioral changes, such changes were superficial and simply altered the reported numbers.  If the program had been properly planned with goals, questions, and consistent metrics, it would have had a chance of success using defect density (defects per unit of size, such as function points).  Improvements to the processes in place between teams could have made a positive impact on the work!
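To make the defect-density point concrete, here is a small sketch (with hypothetical numbers) of why splitting one project into mini-releases lowers the raw defect count per release while leaving defects per function point unchanged – exactly the gaming described above:

```python
# One hypothetical 600-FP project with 180 defects, versus the same
# work split into three 200-FP mini-releases with 60 defects each.
big_project = {"function_points": 600, "defects": 180}
mini_releases = [{"function_points": 200, "defects": 60} for _ in range(3)]

def density(p):
    """Defects per function point (a size-normalized quality metric)."""
    return p["defects"] / p["function_points"]

print(density(big_project))                 # 0.3
print([density(r) for r in mini_releases])  # [0.3, 0.3, 0.3]

# Raw counts per release drop (180 -> 60), but density is identical:
# the "improvement" the executive saw was an artifact of project size,
# not of better quality practices.
```

Tracking the size-normalized number instead of the raw count would have made the workaround visible immediately.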

Given solid comparable metrics information, the executive could have done true root cause analysis and established corrective actions together with his team.

Instead, the program evaporated with the executive declaring success and the workers shaking their heads at the waste of time.

This was a prime case of “metrics” driving (dysfunctional) behavior, and dollars spent poorly.

Keep in mind that Apples and Oranges belong together in Fruit Salad

not software measurement programs.

Call me or comment if you’d like further information about doing metrics RIGHT, or to have me stop by your company to talk to your executives BEFORE you start down the wrong measurement roadway!

Have a (truly) productive week!

Carol
