Category Archives: estimating models

Fundamentals of Software Metrics in Two Minutes or Less


 

To read more, click on the link:
http://www.qsm.com/blog/2013/fundamentals-software-metrics-two-minutes-or-less


Measurement and IT – Friends or Frenemies ?


I confess, I am a software metrics ‘geek’… but I am not a zealot!  I agree that we desperately need measures to make sense of what we are doing in software development and to find pockets of excellence (and opportunities for improvement), but it has to be done properly!

Most (process) improvement models – whether they pertain to software and systems, manufacturing, children, or people – attest to the power of measurement, including the CMMI(R) (Capability Maturity Model Integration) and SPICE (Software Process Improvement and Capability dEtermination) models.

But we often approach what seems to be a simple concept – “Measure the Work Output and divide it by the Inputs” – back asswards (pardon my French!).
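
Expressed as a formula, the concept really is that simple. Here is a minimal sketch in Python, assuming function points as the output measure and effort hours as the input (both choices are illustrative, not the only ones possible):

    def productivity(output_fp: float, effort_hours: float) -> float:
        """Work output divided by input: delivered FP per hour of effort."""
        if effort_hours <= 0:
            raise ValueError("effort hours must be positive")
        return output_fp / effort_hours

    # e.g., 400 FP delivered with 3,200 hours of effort:
    # productivity(400, 3200) -> 0.125 FP per hour

The hard part, as the rest of this post argues, is everything around that division: choosing the right output measure for the work, and collecting the inputs honestly.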

Anyone who has been involved with software metrics or function points or CMMI/SPICE gone bad can point to the residual damage of overzealous management (and the supporting consultants) leaving a path of destruction in their wake.  I think that Measurement and IT are sometimes the perfect illustration of the term “Virtual Frenemies” (I’ll lay claim to it!) when it comes to poorly designed software metrics programs.  (The concepts can be compatible – but you need proper planning and open-minded participants! Read on…)

Wikipedia (yes, I know it is not the best source!) defines “frenemy” (alternately spelled “frienemy”) as:

is a portmanteau of “friend” and “enemy” that can refer to either an enemy disguised as a friend or someone who’s both a friend and a rival.[1] The term is used to describe personal, geopolitical, and commercial relationships both among individuals and groups or institutions. The word has appeared in print as early as 1953.

Measurement as a concept can be good. Measure what you want to improve (and measure it objectively, consistently, and then ensure causality can be shown) and improve it.

IT as a concept can be good. Software runs our world and makes life easier. IT’s all good.

The problem comes in when someone (or some team) looks at these two “good” concepts, says “let’s put them together,” makes the introduction, and then walks away.  “Be sure to show us good results and where we can do even better!” is the edict.

Left to their own devices, measurement can wreak havoc and run roughshod over IT – the wrong things are measured (“just measure it all with source lines of code or FP and see what comes out”), effort is spent measuring those wrong things (“just get the numbers together and we’ll figure out the rest later”), the data doesn’t correlate properly (“now how can we make sense of what we collected?”), and misinformation abounds (“just plot what we have, it’s gotta tell us something we can use”).

In the process, the people working diligently (most of the time!) in IT get slammed by data they didn’t participate in collecting, and which often illustrates their “performance” in a detrimental way.  Involvement in the metrics program design, on the part of the teams who will be measured, is often sparse (or an afterthought), yet the teams are expected to embrace measurement and commit to changing whatever practices the resultant metrics analysis says they need to improve.

This happens often when a single measure or metric is used across the board to measure disparate types of work (using function points to measure work that has nothing to do with software development is like using construction square feet to measure landscaping projects!)

Is it any wonder that the software and systems industries are loath to embrace and take part in the latest “enterprise-wide” measurement initiative? Fool me once, shame on you… fool me twice, shame on me.

What is the solution to resolving this “Frenemies” situation between Measurement and IT?  Planning, communication, multiple metrics and a solid approach (don’t bring in the metrics consultants yet!) are the way.

Just because something is not simple to measure does not mean it is not worth measuring – and measuring properly.

For example, I know of a major initiative where a customer wants to measure the productivity of SAP-related projects to gain an understanding of how the cost per FP tracks on their projects compared to other (dissimilar) software projects and across the industry.

Their suppliers cite that function points (a measure of software functionality) do not work well for configurations (this is true) or integration work (this is true), and that it can take a lot of effort to collect FP counts for large SAP implementations (this can be true).  However, that does not mean that productivity cannot be measured at all!  (If all you have is a hammer, everything might look like a nail.)

It will require planning and design effort to arrive at an appropriate measurement approach that equitably and consistently tracks productivity across these “unique” types of projects. While this is non-trivial, the insight and benefits to the business will far exceed the effort.  Resistance on the part of suppliers to being measured (especially in anticipation of an unfair assessment based on a single metric!) is understandable, but a good measurement approach (one that fairly allocates the types of effort into different buckets using different measures) is definitely attainable (and desired by the business).
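
To make the “different buckets, different measures” idea concrete, here is a hypothetical sketch; the work categories, sizing units, and figures are all invented for illustration:

    # Each type of work gets a measure that suits it, and productivity
    # is tracked per bucket rather than forced through a single metric.
    work_buckets = {
        "custom development": {"hours": 5000, "size": 400, "unit": "function point"},
        "SAP configuration":  {"hours": 2000, "size": 150, "unit": "configured object"},
        "integration":        {"hours": 1200, "size": 30,  "unit": "interface"},
    }

    for kind, bucket in work_buckets.items():
        rate = bucket["hours"] / bucket["size"]
        print(f"{kind}: {rate:.1f} hours per {bucket['unit']}")

Each bucket can then be compared against its own history or its own industry benchmark, instead of pretending that an hour of configuration and an hour of custom development produce the same kind of output.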

The results of knowing where and how the money is “invested” in these projects will lead to higher levels of understanding on both sides, and open up discussions about how to better deliver!  The business might even realize where they can improve to make such projects more productive!

Watch my next few posts for further details about how to set up a fair and balanced software measurement program.

What do you think?  Are measurement and IT doomed to be frenemies forever? What is your experience?

Have a good week!
Carol

Trust and Verify are the (IT) Elephants in the room


As a party involved in some aspect of software development, why do you think projects are so hard?  Millions of dollars have gone into research to answer this question, resulting in new models, agile approaches, and standards, all intended to streamline software development.

What do you think is the core reason for project success or failure?  Is it the people, process, requirements, budgets, schedule, personalities, the creative process or some combination?

Sure, IT (information technology) is a relatively new industry, plagued by technology that advances at the speed of light, customers and users who don’t know what they want, preset budgets, imposed schedules, elusive scope, and, ultimately, computer scientists and customers who still speak their own languages.  Some people argue that it boils down to communication (especially poor communication).  After all, isn’t communication the root cause of all wars, disputes, divorces, broken negotiations, and failed software projects?

I disagree.

I believe that TRUST and VERIFY are THE TWO most important factors in software development.

These two factors are the (IT) elephants in the room (so to speak!).  I could be wrong, but it seems that the commonly cited factors (including communication) are simply symptoms of the elephants in the room – and no one is willing to talk about them.  Instead, we bring in new methodologies, new tools intended to bring customers and suppliers together, new approaches, and new standards – and all of these skirt the major issues: TRUST and VERIFY.

Why are these so critical?

Trust is the difference between negotiation and partnership – trust implies confidence,  a willingness to believe in (an)other, the assurance that your position and interests are protected, and the rationale that when life changes, the other party will understand and work with you. A partnership means that there is an agreement to trust in a second party and to give trust in return.  Trust is essential in software development.

BUT… many a contract and agreement has gone wrong with blind trust, and that is why VERIFY is as important as trust. Verify means using due diligence to make sure that the trust is grounded in fact, with knowledge, history, and past performance as the basis.  Verify grounds trust, yet allows it to grow.

President Ronald Reagan popularized the phrase “Trust, but Verify” – but I believe it is better stated as “Trust and Verify” because the two reinforce each other.  It also calls to mind the saying:  “Fool me Once, Shame on You… Fool me Twice, Shame on Me.”

Proof that Trust and Verify are the Elephants in the Room

Software development has a history of dysfunctional behavior built on ignoring that Trust and Verify are key issues. It is easier for both the business (customers) and the engineers (suppliers) to pretend that they trust each other than address the issues once and for all.  To admit to a lack of trust is tantamount to declaring war and accusing your “partners” of espionage.  It simply is not done in the polite company of corporate boardrooms.  And so we do the following:

  • Fixed price budgets are set before requirements are even known because the business wants to lower their risk (and mistrust);
  • Software development companies “pad” their estimates with generous margins to decrease their risk that the business doesn’t know what they want (classic mistrust);
  • Deadlines are imposed by the business based on gut-feel or contrived “drop dead” dates to keep the suppliers on track;
  • Project scope is mistakenly expressed in terms of dollars or effort (lagging metrics) instead of objective sizing (leading metrics);
  • Statements like “IT would be so much easier if we didn’t have to deal with users” are common;
  • Games like doubling the project estimate because the business will chop it in half become standard;
  • Unrealistic project budgets and schedules are agreed to in order to keep the business;
  • Neither side is happy about all the project meetings (lies, more promises, and disappointment).

Is IT doomed?

Trust is a critical component of any successful relationship involving humans (one might argue that it is also critical when pets are involved) – but so too is being confident in that trust (verify).  Several promising approaches address trust issues head on, and provide project metrics along the way to ensure that the trust remains.

One such approach is Kanban (the subject of this week’s Lean Software and Systems Development conference, LSSC12, in Boston, MA).

Kanban for software and systems development was formalized by David Anderson and has been distilled into a collaborative set of practices that allow the business and software developers to be transparent about software development work – every step of the way.  Project work is prioritized and pulled in to be worked on only as the volume and pace (velocity) of the pipeline can accommodate.  Rather than having the business demand that more work be done faster, cheaper and better than is humanly possible (classic mistrust that the suppliers are not working efficiently), in Kanban, the business works collaboratively with the developers to manage (and gauge) what is possible to do and the pipeline delivers more than anticipated.  Trust and verify in action.
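
Kanban’s pull principle is easy to see in a toy sketch. This is illustrative only (not David Anderson’s formal method): work enters “in progress” only when capacity frees up, so the pipeline – not management pressure – sets the pace:

    from collections import deque

    WIP_LIMIT = 3                               # assumed team capacity
    backlog = deque(["A", "B", "C", "D", "E"])  # business-prioritized items
    in_progress, done = [], []

    def pull():
        """Pull the next priority item only if the WIP limit allows it."""
        while backlog and len(in_progress) < WIP_LIMIT:
            in_progress.append(backlog.popleft())

    def finish(item):
        in_progress.remove(item)
        done.append(item)
        pull()             # finishing work is what pulls new work in

    pull()                 # in_progress -> ['A', 'B', 'C']
    finish("A")            # 'A' is done and 'D' is pulled in automatically

Because the business can see the backlog, the WIP limit, and the completion pace at all times, both sides can verify the trust they have placed in each other.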

Another promising approach is Scope Management (supported by a body of knowledge and a European-based certification) – a collaborative approach whereby software development work is paid for on a “unit pricing” basis.  Rather than entertaining firm, fixed-price, lose-lose (!!!) contracts – where the business wants minimum price/maximum value and the supplier needs to curtail changes to deliver within the fixed price (and not lose their shirt) – unit pricing splits a project into known components that are priced much the way home construction is priced by the square foot and landscaping by the number of trees.

In Scope Management (see www.qualityplustech.com and www.fisma.fi for more details or send me an email and I’ll send you articles), the business retains the right to make changes and keep the reins on the budget and project progress and the supplier gets paid for the work that the business directs to be done.  Project metrics and progress metrics are a key component in the delivery process.  Again TRUST and VERIFY are key components to this approach.
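
A minimal sketch of the unit-pricing arithmetic follows; the price per function point is invented for illustration:

    PRICE_PER_FP = 800.0   # assumed agreed unit price (dollars per FP)

    def invoice(delivered_fp: float) -> float:
        """The supplier bills for the functionality actually delivered."""
        return delivered_fp * PRICE_PER_FP

    # invoice(250) -> 200,000. If the business directs a 30 FP change,
    # the bill becomes invoice(280) -> 224,000 -- no contract
    # renegotiation, just a re-sized scope.

The design point is that scope changes stop being contract disputes: the unit price stays fixed, the measured size changes, and both parties can verify the math.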

What do you think? 

Please comment and share your opinion – are TRUST and VERIFY the IT elephants in the rooms at your company?

P.S. Don’t forget to sign up for the SPICE Users Group 2012 conference in two weeks in Palma de Mallorca, Spain. See www.spiceconference.com for details!  I’m doing a half-day SCOPE MANAGEMENT tutorial on Tuesday, May 29, 2012.

Jeopardy for IT projects… the answer is July 15


In the popular television game show Jeopardy, contestants are given short-phrase answers and are challenged to pose the associated question.  The half-hour show has been a mainstay of prime-time television for many years and boasts millions of viewers.

While Jeopardy provides great no-risk entertainment for contestants and viewers, a real-life “game” of Jeopardy plays out daily in corporations throughout the world, only it’s called something else: Date-driven-estimating!

Here’s how it typically plays out:
Business executive:  “July 15.”  (The Jeopardy style answer)
Contestant / CIO (pondering): “Is that the date you announced that the software would be ready?”
Business executive:  “Correct!  Now make it happen.”

Poor project estimating is one of the pervasive issues in software development – and the situation doesn’t seem to be getting any better despite process improvement models such as the Capability Maturity Model Integration (CMMI(R)) and formal project management.

It doesn’t seem to matter how advanced or mature the IT organization is – good project estimation remains elusive.  When a project does come in on schedule, it is often a matter of coercion – that is, the project manager has cajoled, manipulated, or somehow morphed the project schedule to match an often arbitrary end date.

Author Steve McConnell coined the phrase “date-driven estimating” and stated that it is the most prevalent estimating method in use today.  In date-driven estimating, management announces the project delivery date and then the project team works backwards to the current date.  The worst-case (and not unlikely) result is a project that can make the date ONLY if it had started several months ago.  The schedule is then retrofitted by piling on more resources along the timeline until everything is squeezed in between the start and end dates.  Somehow, such artificial manipulation transforms the impossible schedule into one that can work…
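
Working backwards from an announced date is simple enough to put in a few lines of code, which is exactly what makes it so seductive. A hypothetical sketch (the dates and durations are invented):

    from datetime import date, timedelta

    def required_start(announced_end: date, estimated_weeks: int) -> date:
        """Work backwards from the announced date to the needed start."""
        return announced_end - timedelta(weeks=estimated_weeks)

    start = required_start(date(2010, 7, 15), estimated_weeks=40)
    if start < date.today():
        print(f"To make July 15, work had to begin {start} -- in the past!")

Note what the sketch cannot do: no amount of arithmetic on the calendar changes the 40 weeks of work that the content of the project actually requires.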

In engineering projects where physical and scientific constraints are fixed (such as the days required for concrete to cure), it is easier to see that such practices are prone to create outright project failures before the project even gets started.  For example, a road construction project cannot be sped up by adding too many resources – physical limitations of engineering construction prevail and building codes govern project progress.  Not so in software development.  In fact, just this evening I met someone from a major financial services company who told me of a software project where the coding commenced within two weeks of the project start – even before they had documented the requirements.  (No, it was not an agile project!)

What can be done when there is a pre-set budget or firm schedule set before any requirements are documented?  From an investment point of view, customers want to curtail costs during the development and still get the software they need to improve their business.

From a supplier point of view, the goal is to get paid for the hours expended delivering the correct solution while being flexible to the inevitable changes that will happen along the way.

The two approaches seem at odds with one another – customers want to be fiscally responsible and curtail costs, and suppliers want to be responsive to their customers yet get paid for the work they are directed to do.  Firm fixed-price contracts (even before requirements are known) have been the solution preferred by customers to minimize the risk of budget overruns, but all the risk is then assumed by the supplier – especially before requirements are known.  And in any partnership there is no such thing as a win/lose outcome – if the customer wins and the supplier loses, both sides ultimately lose.

So, what can be done to avoid such a dilemma and achieve a win/win scenario?  One solution is to deliver bite-sized, incremental two-week software releases using agile methods (Scrum).  A two-week timeframe is easier to manage and to estimate accurately than a long-term development project.

Another approach is unit pricing…. Unit pricing is based on a cost per function point (a software sizing measure) and is a promising and realistic route to success.  This approach, plus other evolutionary steps, is contained in the northernSCOPE model – already successful in Finland, Australia, and other countries.

Watch this space for more about the northernSCOPE approach and upcoming US training workshops.

Meanwhile, here are a couple of earlier posts on the topic:

Isn’t it time we changed the channel and stopped playing IT Jeopardy?

Carol


Estimating and the Psychology of On time, Early and Late


Is there a link between estimating and psychology?  I certainly believe there is – especially when it comes to software development.  As humans, we react to what others tell us. Sometimes the results are surprisingly positive; just as often, they are negative.  How we react can give us insight into how others might react – and why.

When estimating the duration or effort of any activity, remember what the word estimate means (from thefreedictionary.com):

es·ti·mate  tr.v. es·ti·mat·ed, es·ti·mat·ing, es·ti·mates

1. To calculate approximately (the amount, extent, magnitude, position, or value of something).
2. To form an opinion about; evaluate: “While an author is yet living we estimate his powers by his worst performance” (Samuel Johnson).
n.

1. The act of evaluating or appraising.
2. A tentative evaluation or rough calculation, as of worth, quantity, or size.
3. A statement of the approximate cost of work to be done, such as a building project or car repairs.
4. A judgment based on one’s impressions; an opinion.

Note that it does not mean prediction or forecast — but rather a rough calculation.

But here is what happens when someone gives us an estimate: our minds access our historical database of what happened with similar events in the past and gauge whether the “estimate” is likely to be on time, early, or late.  It does not matter whether the person(s) involved in the current estimate had anything to do with the past, because our minds will find similarity and tie the two events together. If we have no memory of a similar event, then we try to tie the estimate to our experience with the estimator or team.  In the case of no similar event or persons, we simply go on blind faith that the estimate will be correct.  If the estimate is accompanied by a stated range of accuracy (e.g., plus or minus a percentage or timeframe), it “feels” even more reliable – when in fact the estimate is still only a best (albeit educated) guess of when an activity will start or finish – and we often expressly accept it.
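
That accuracy range, incidentally, is trivial to state explicitly – which is why there is little excuse for single-number estimates. A small sketch (the plus or minus 25% figure is illustrative):

    def estimate_range(point_estimate_hours: float, accuracy_pct: float):
        """Turn a point estimate into a (low, high) band."""
        delta = point_estimate_hours * accuracy_pct / 100.0
        return (point_estimate_hours - delta, point_estimate_hours + delta)

    # estimate_range(1000, 25) -> (750.0, 1250.0) hours

As this post goes on to argue, the band does not make the guess more correct – but it does keep everyone’s expectations honest.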

Then if the activity finishes early (in the case of software estimating we assume customers will be pleased), we are generally surprised, pleased, or irritated depending on our own plans (i.e., if we made plans based on the estimate, we might be irritated by an early delivery; on the other hand, if we hoped for an early finish, we might be surprised and pleased). The same holds true for others. If a program or function is delivered early and we rely on our customers to test it before we can move forward, our customer may not be ready in time because the delivery beat the estimate – or they may be elated because their need for the product was urgent and we over-delivered.

When the delivery is “on-time”, it simply means that life went according to plan per the estimate:  durations were right and no unforeseen life events came into play that prevented the on-time delivery. When this happens, our trust in the estimator and the team increases, even when it may not be warranted. (The estimate may not have included contingency for life events and the estimator was simply lucky that things went according to plan.)

When the delivery is “late”, it simply means that life intervened more than anticipated due to increased effort or duration (“we didn’t anticipate that this would be so difficult”).  Our reactions to this lateness of others can range from the mundane (“typical – I knew they were too optimistic”) to anger (“now it’s going to cost me more”) to relief (“thank goodness they are late, now I can also be late”).  Additional psychology happens with late deliveries and we judge (and store in our memory) future estimates on the spot.  Our customers do the same thing – they judge our ability to deliver when we are late – even if they do not understand how their own role or lack of participation contributed to the lateness.

In software estimating, an interesting disconnect occurs in corporate memory. When a supplier provides an estimate of the work it takes to do a phase or activity, the estimate is often perceived as too high (too costly), and management will routinely cut it in half or less and still expect an on-time delivery.  More often than not, the opposite was true: the original estimate was already overly optimistic about the delivery dates.  As a result, the delivery usually ends up being late for one of several reasons:

  1. The new estimate (lower than the first, based on a management gut feel that the original was too high) was unrealistic;
  2. The first estimate was accepted but was overly optimistic;
  3. The estimate was based on an artificial delivery date set before the project was even launched (before the scope was known), and the date became immovable once it was announced; or
  4. Life intervened, and what the project encountered was not what was planned.

If no contingency for anticipated and unanticipated factors was included in the estimate, the delivery will always be late (unless less is delivered).

Does it not seem obvious that an unrealistically low estimate will lead to a late delivery?  Yet the situation repeats itself in IT, reminding us of the wisdom attributed to Einstein:  insanity is doing the same thing repeatedly and expecting different results. To fix this IT dysfunction of slashed estimates and late deliveries, customer sponsors need to overcome their urge to adjust the estimates (no matter how high they may seem), and suppliers need to include accuracy ranges around their estimates.  Then an on-time delivery becomes a realistic possibility.

When our minds react to dates – even unrealistic ones – we set up a basis for future reactions.  Surprise is not a comfortable feeling – even if it is part of a positive result, and so our minds will entrench a historical reminder to prevent such reaction in the future:  “the next project will also be early” or “they will never get it right and will always be late” or “they were lucky to get finished on time” are potential responses.

An example of an early, uncomfortable delivery happened to me this morning.  I had to catch a 7 a.m. flight from Dallas to Washington, DC, and had called at 10 p.m. the night before to arrange an early morning shuttle pickup at my hotel. The service told me that I would be picked up between 4:35 and 4:50 a.m., so I set my alarm accordingly.  I judged that the service was going to be very accurate because: 1. the estimate was made within five hours of my pickup time; 2. they provided a 15-minute window (even though my history with such services was that they would be late); and 3. I talked to a real person. Imagine my surprise (and displeasure) when I received an automated phone call at 4:00 a.m. alerting me that my shuttle was within 10 minutes of arriving. Less than five minutes later, another phone call informed me that the shuttle was now waiting at the door and that any delay on my part would keep others waiting.  This was a mismatch with my experience (shuttles would run up to an hour late), and it interfered with my plans to be on time per their original estimate. Had the service told me that they’d be there at 4:30 a.m. plus or minus a half-hour, I would have been better prepared. Without an explanation, I can only assume the early arrival was for their convenience in arranging the fewest trips (two others were in the van already and probably had earlier flights to catch).

Using our own life experiences prepares us to face the realities of estimating and the psychology of how others react to on-time, early, and late deliveries.  It helps us understand why our anticipation of user delight over an early delivery may not be met, and why our perception of estimates may not be shared by those with downstream involvement in our projects.

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.
=======Copyright 2010, Carol Dekkers ALL RIGHTS RESERVED =======

What’s the (function) point of Measurement?


It’s been more than 30 years since “function point analysis” emerged in IT, and yet most of the industry either: a) has never heard of it; b) has a misguided idea of what function points are; or c) was the victim of a botched software measurement program based on function points.

Today I’d simply like to clear up some common misconceptions about what function points are and what they are NOT. Future postings will get into the nuts and bolts of function points and how to use them; this is simply a starting point.

What’s a function point?

A “function point” (FP) is a unit of measure that can be used to gauge the functional size of a piece of software.  (I published a primer on function points, titled Managing (the Size of) Your Projects – A Project Management Look at Function Points, in the February 1999 issue of CrossTalk – the Journal of Defense Software Engineering, from which I have excerpted here):

“FPs measure the size of a software project’s work output or work product rather than measure technology-laden features such as lines of code (LOC). FPs evaluate the functional user requirements that are supported or delivered by the software. In simplest terms, FPs measure what the software must do from an external, user perspective, irrespective of how the software is constructed. Similar to the way that a building’s square measurement reflects the floor plan size, FPs reflect the size of the  software’s functional user requirements…

However, to know only the square foot size of a building is insufficient to manage a construction project. Obviously, the construction of a 20,000 square-foot airplane hangar will be different from a 20,000 square-foot office building. In the same manner, to know only the FP size of a system is insufficient to manage a system development project: A 2,000 FP client-server financial project will be quite different from a 2,000 FP aircraft avionics project.”

In short, function points are an ISO-standardized measure that provides an objective number reflecting the size of what the software will do from an external “user” perspective (a user is defined as any person, thing, other application software, hardware, department, etc. – anything that sends or receives data to or from the software).  Function points offer a common denominator for comparing different types of software construction, whereby cost per FP and effort hours per FP can be determined.  This is similar to cost per square foot or effort per square foot in construction.  However, it is critical to know that function points are only part of what is needed to do proper performance measurement or project estimating.
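
The two “common denominator” metrics mentioned above are simple ratios. A minimal sketch, with figures invented for illustration:

    def cost_per_fp(total_cost: float, size_fp: float) -> float:
        return total_cost / size_fp

    def hours_per_fp(total_hours: float, size_fp: float) -> float:
        return total_hours / size_fp

    # A 2,000 FP project costing $1.6M with 24,000 effort hours:
    # cost_per_fp(1_600_000, 2000) -> 800.0 dollars per FP
    # hours_per_fp(24_000, 2000)   -> 12.0 hours per FP

As with cost per square foot in construction, the ratios only become meaningful when compared across projects of a similar type – the client-server financial system and the avionics system from the excerpt above should never share a benchmark.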

To read the full article, click on the title Managing (the Size of) Your Projects – A Project Management Look at Function Points.

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

Carol Dekkers provides realistic, honest, and transparent approaches to software measurement, software estimating, process improvement and scope management.  Call her office (727 393 6048) or email her (dekkers@qualityplustech.com) for a free initial consultation on how to get started to solve your IT project management and development issues.

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.

Contact Carol to keynote your upcoming event – her style translates technical matters into digestible soundbites, with straightforward and honest advice that works in the real world!
=======Copyright 2010, Carol Dekkers ALL RIGHTS RESERVED =======

The “Dog Chasing its Tail” Syndrome in Project Estimating


Software estimating is plagued by dysfunction, not the least of which is estimation based on under-reported historical hours from previously completed projects.  See posting IT Performance Measurement… Time Bandits for a discussion about this problem.

BUT other problems are prevalent when launching a project estimating initiative – problems I call the “Dog Chasing Its Tail” syndrome.  The syndrome typifies dysfunctional project behavior that becomes established and continues to be reinforced to the detriment of the organization. As a result, the pattern repeats and process improvement is seldom realized.

What is the Dog Chasing Its Tail syndrome? It’s a noble goal to increase the predictability and reliability of project estimates – when estimating is based on sound principles.  However, “estimating” is often a misnomer for what should be called “guesstimating,” because the data on which the estimates are based is sketchy at best.

Here’s the process epitomized in the “Dog Chasing its Tail”:

1. Incomplete (or preliminary) requirements and sketchy quality/performance requirements. While requirements are still preliminary (no formal requirements or use cases are known), it is customary for management (customer or supplier or both) to demand a project estimate for budget or planning purposes. Labeled initially as a “ballpark estimate” (a rough order of magnitude (ROM) guess of whether the effort is going to be bigger than a breadbox or smaller than a football field), the sketchy requirements are used as the basis for the ROM.

2. The (guess)timate becomes the project budget and plan. While management initially understands that an estimate is impossible without knowledge of what is to be done, estimators contribute to the reliance on guesses by providing them with a feigned level of accuracy (e.g., if the requirements span a total of two sentences, the resultant estimate may still quote hours or dollar figures down to the ones digit).  As a result, too often the (guess)timate becomes the approved upper-limit budget or effort allowance.  Of course these figures will be proven wrong once solid requirements are documented and known, but we are now stuck with this project estimate.

3. Changes challenge the status quo budget and schedule. When a change or clarification to requirements emerges (as they always do when human beings are involved), there is often a period of blame: the supplier alleges that the item in question is a change (an addition) to the original requirements on which the estimate was based, while the customer alleges that it simply clarifies existing requirements.  Of course, neither side can be proven correct, because the requirements on which the estimate was based were sketchy, incomplete, and poorly documented. Once the dust settles and it becomes clear that the item will impact the project budget and schedule, the change/clarification is deferred to the next phase (“thrown over the fence” as an enhancement to be done in the next release), where it will again be poorly documented but estimated anyway – and so the cycle continues.

If you’ve ever had a dog, you know that this is just like a dog chasing its tail: the behavior goes on until the dog either gets tired or gets distracted by other things (such as food being served).  As smart software engineers, we ought to be smarter than dogs!  And, given a scope management approach, we can be!  Break the cycle of dysfunctional estimation and investigate scope management – you and your customers will be glad to move forward rather than facing the insanity of repeating the same process over and over and expecting different results (along the lines of the Einstein quote!).  See http://www.qualityplustech.com for information on scope management training and resources available to break the “Dog Chasing its Tail” syndrome on your projects.

Watch for the upcoming post on the hidden dangers in project hours…

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

Carol Dekkers provides realistic, honest, and transparent approaches to software measurement, software estimating, process improvement and scope management.  Call her office (727 393 6048) or email her (dekkers@qualityplustech.com) for a free initial consultation on how to get started to solve your IT project management and development issues.

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.

Contact Carol to keynote your upcoming event – her style translates technical matters into digestible soundbites, with straightforward and honest solutions that work in the real world!
=======Copyright 2010, Carol Dekkers ALL RIGHTS RESERVED =======

IT Performance Measurement – Don’t let time bandits prevail…


Here in FL, Sunday is one of my favorite days: I sit outside on the deck overlooking the mangroves (yes, spring break weather is FINALLY here!) and read the bulging Sunday newspaper.  I am always intrigued by the many topics in Sunday’s edition, and there are always generalist stories that remind me of something that happens with software.  This week was no exception.

I’d never seen the syndicated column “The Ethicist” by Randy Cohen (syndicated from the NY Times Magazine), but today’s headline – “Underreporting hours worked puts unrealistic goals on others” – caught my eye. Just the title reminded me of one of the biggest problems with IT measurement:  project hours in IT are typically under-reported because of unrealistic project budgets, overtime rules, accounting practices, etc.  (In my experience with measurement and productivity work, this is part of what I call the “Dog Chasing Its Tail” syndrome – but more about that in another post.)

Back to the matter at hand… In this week’s column (originally published in NY on March 11, 2010, and in my St. Petersburg Times on March 14, 2010), Randy Cohen is presented with an ethical question:
“As a manager at a large company, I work far more than the required 40 hours a week, but report only 40. If I reported my actual hours I could be penalized financially for letting my projects run over budget.  Company policy states… <snip> … is there a problem with simply declining to mention some hours when I did work?”

The crux of Randy’s answer (as it relates specifically to IT performance measurement for me) was: “By underreporting your hours, you create a false sense of your efficiency.  The company now believes that you are able to complete your weekly tasks in 40 hours. But you are not. Your deception thwarts the company’s reasonable interest in assessing your performance.”  To use a Canadian expression: Bang on with your response, Randy! (To read the entire article by Randy Cohen, click here.)

This is one of the most pervasive problems in tackling software project estimation – using reported project hours as the basis for new project estimates when the actual hours are much higher. Most people would see the folly of estimating the cost of a new house based on the mortgage payments of a house down the street (you don’t know how big the down payment was!), but they overlook the hazards of doing the same thing when estimating a software project from under-reported hours.  Both rest on the wrong foundation for estimating.
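
A hypothetical sketch makes the damage visible; all figures are invented:

    # A completed 400 FP project where 1,200 hours went unreported.
    reported_hours, actual_hours, size_fp = 4000, 5200, 400

    baseline_rate = reported_hours / size_fp   # 10.0 hours/FP -- looks great
    true_rate     = actual_hours / size_fp     # 13.0 hours/FP -- reality

    # The next 600 FP project is then estimated from the false baseline:
    optimistic_estimate = baseline_rate * 600  # 6,000 hours
    realistic_estimate  = true_rate * 600      # 7,800 hours
    # The 1,800-hour gap is unpaid overtime waiting to happen -- and it
    # will be under-reported too, reinforcing the false baseline.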

Before you embark on a process improvement initiative that targets predictability and IT project estimating, make sure that you know all the facts, so you don’t waste valuable time or energy compounding the existing problems and ignoring the real status quo.  Become knowledgeable about the issues (call or email me if you need a starting point or resources).

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

Carol Dekkers provides realistic, honest, and transparent approaches to software measurement, software estimating, process improvement and scope management.  Call her office (727 393 6048) or email her (dekkers@qualityplustech.com) for a free initial consultation on how to get started to solve your IT project management and development issues.

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.

Contact Carol to keynote your upcoming event – her style translates technical matters into digestible soundbites, with straightforward and honest expertise that works in the real world!
=======Copyright 2010, Carol Dekkers ALL RIGHTS RESERVED =======

Crosstalk article on Scope Management


CrossTalk - The Journal of Defense Software Engineering

CrossTalk Jan/Feb 2010 Scope Management: 12 Steps for ICT Program Recovery

by Carol Dekkers, Quality Plus Technologies, Inc.
Pekka Forselius, 4SUM Partners, Inc.

ABSTRACT:
The information and communications technology (ICT) world is “addicted” to dysfunctional behavior, and the problem is spreading globally. The sad truth is that the parties in the ICT relationship (the customer and the supplier) are largely co-dependent on a pattern of dysfunction characterized by ineffective communication, fixed-price contracts with changing requirements, and eroding trust. This article focuses specifically on the northernSCOPE(TM) 12-step process for ICT program recovery.

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/
Contact Carol to keynote your upcoming event – her style translates technical matters into digestible soundbites, with straightforward and honest expertise that works in the real world!
=======Copyright 2010, Carol Dekkers ALL RIGHTS RESERVED =======

Overcome CHAOS with scope management…


“Don’t waste even one heartbeat on things that don’t matter.”  Anonymous

This quotation should be the mantra of software development – if only we could figure out what “in the customer’s eyes” doesn’t matter!

The Standish Group’s 2009 CHAOS report certainly didn’t bring any celebratory words:  after steady increases from a dismal 17% project success rate in 1996 to 34% (a doubling!) in 2006, this year’s study reported a decrease, with a mere 32% of projects declared a success.

What’s going on? With 60-99% of defects attributable to poor requirements, and 45% of development effort spent on rework, it is clear that the customer side of the house and the development side are somehow out of whack.  Worldwide, we could declare software development a problem of epidemic proportions – especially as software pervades every aspect of our daily lives.  If we can casually launch a team of scientists to fix an ailing space station on a moment’s notice, and announce medical breakthroughs regularly, surely the brains in software development should be able to make strides in this area.  Take rework, for example, hovering at an astounding 40% of software development effort: this means that while we do great work Monday through Wednesday every week, the remaining two working days are spent fixing and redoing the very work we just completed. The old adage – “if you don’t have the time to do it right, when will you find the time to do it over?” – doesn’t hold much water, does it, when almost half of every week is spent on costly rework?

Here’s the crux of the situation – it’s not a matter of incompetence!  It’s a matter of customers not being able to fully articulate what they actually need – and of suppliers not being able to deliver to requirements that are as sketchy as clouds.  BUT there’s a solution that’s been successful in Finland and Australia, called Scope Management – I’ve mentioned it many times in the past… stay tuned for more information in the coming posts. And Scope Management training is coming to Tampa, FL in just a few weeks.  More to come!

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Florida, visit www.qualityplustech.com.

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/
Contact Carol to keynote your upcoming event – her style translates technical matters into digestible soundbites, with straightforward and honest expertise that works in the real world!
=======Copyright 2010, Carol Dekkers ALL RIGHTS RESERVED =======