Category Archives: estimating software development cost and effort

Function Points (Software Size) come of Age: Mature, Stable, and Relevant


It is with pride and honor that I share with you news about the upcoming Sept 13-15, 2017 celebratory (and educational) conference: ISMA14 (International Software Measurement and Analysis) – and it’s happening in just 4 weeks in Cleveland, OH, USA!

It’s the 30th anniversary of the International Function Point Users Group (IFPUG) – a not-for-profit user group I’ve been a part of for over 25 years.

We’re also celebrating 2017 as the International Year of Software Measurement (#IYSM).  It’s a great year for YOU to get involved (or more involved) and gain the benefits of measurement for software and systems projects!

As the Director of Communications and Marketing for IFPUG, I am excited that IFPUG is now mature (age 30!) and at the same time venturing in new directions with non-functional sizing (SNAP). We have much to celebrate, AND we also have more work to do (to publicize how Function Points and SNAP points provide objective measures of software size)!

The time is now!

No longer does your organization need to “fumble around in the dark” to find standard, reliable and objective software sizing measures.  Certainly there is an abundance of available units of measure (story points, use case points, source lines of code, hybrid measures, etc.) — BUT only Function Points are supported by ISO/IEC world standards and provide consistent, objective and technologically independent assessments of software size based on “user” requirements.  (Soon, Software Non-functional Assessment Process (SNAP) points for non-functional size will also become an international standard.)

Isn’t it time that your company adopts function points as a universal standard for software size?  YOUR timing is perfect because in less than 5 weeks, International Software Measurement and Analysis (#ISMA14) will be in Cleveland and you will have the opportunity to learn from industry experts in an intimate (less than 200 people) setting. (p.s., I’m one of the main conference speakers so you’ll know at least 1 person there!)

FUNCTION POINT proof is “in the pudding” (so to speak)…

We have an English proverb: “the proof of the pudding is in the eating.”

The modern version, “The proof is in the pudding,” implies that there is a lot of evidence that I will not go through at this moment and you should take my word for it, or you could go through all of the evidence yourself. Source:  http://tinyurl.com/5uc7eq3 

I can extol the benefits of function points, as can IFPUG insiders and supporters such as the world-respected author/guru Capers Jones (whose 17 published books use Function Points as a universal software sizing measure). But when the mainstream media features articles on Function Points, it’s a call to action for senior executives and IT professionals to take note! Here’s a recent example:

Need help selling your boss on the benefits?

I’ve written up the top 10 reasons to attend ISMA14 with us – won’t you join me (and a ton of other measurement professionals) in Cleveland on Sep 13?

Carol Dekkers, CFPS (Fellow), AEC, PMP, P.Eng.
President, Quality Plus Technologies, Inc.
IFPUG Director of Communications and Marketing

 


To Succeed with Measurement, Choose Stable Measures


The pace of technology advancement can be staggering – new tools, methods, acronyms, programming languages, platforms and solutions – come at us at warp speed, morphing our IT landscape into patchwork quilts of old and new technologies.  

At times, it can be challenging to gauge the results (of change): what were the specific processes/tools/methods/technologies/architectures/solutions that contributed to or delivered positive results?  How can we tell what made things worse?

Defining positive “results” is the first step and measurement can contribute – as long as our measures don’t shift with the technology!

I and countless others have written about Victor Basili’s GQM (Goal Question Metric) approach to measurement (in short, choose measures that answer the questions you need to answer so you can achieve the goal of measurement…), but there’s a problem that is even more fundamental and goes beyond choosing the right measures:

The key to (IT) measurement lies in stability and consistency:  choosing stable measures (industry standardized definitions that don’t change) and measuring consistently (measuring the same things in the same way.)
– Carol Dekkers, 2016

This may seem like common sense, but after 20 years of seeing how IT applies measurement, I realize common sense isn’t all that common.  There are some in the IT world who would rather invent new measures (thus decreasing stability and consistency) than embrace proven ones.  While I’ve seen the academic tendency to “tear down what already exists to make room for my new ideas,” I believe that this is counter-productive when it comes to IT metrics.  But I’m getting ahead of myself.  First, let’s consider how measurement is done in other industries:

  • Example 1: Building construction.  Standard units of measure (imperial or metric) are square feet and square meters.  The definition of a square foot has not changed despite advances in modular design.
  • Example 2: Manufacturing.  Units of measure for tolerances, product sizes, weights, etc. (inches, mm, pounds, kg, etc.) are the same through the years.
  • Example 3: Automobiles.  Standard ratios such as miles per gallon (mpg) and acceleration (0-60 in x seconds) remain industry standards.

In each example, the measure is stable and measurement success is a result of consistent and stable (unchanging) units of measure applied across changing environments.  Comparisons of mpg or costs per square foot would be useless if the definition of the units of measure was not stable.  Comparability across products or processes depends on the consistency and stability of both the measurement process and the measures themselves.
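Tying this back to the GQM approach mentioned above: the questions may change as goals change, but the metrics that answer them should stay anchored to stable, standardized units. Here is a minimal sketch in Python; the goal, questions and metric names are my own hypothetical examples, not taken from any standard.

```python
# A minimal Goal-Question-Metric (GQM) sketch using stable, standardized units.
# The goal, questions, and metric names are hypothetical examples.

gqm = {
    "goal": "Understand delivery productivity of completed projects",
    "questions": {
        "How much functionality did we deliver?": ["functional size (IFPUG FP)"],
        "What did it cost to deliver it?": ["effort (person-hours)", "cost (USD)"],
        "How productive were we?": ["hours per FP", "cost per FP"],
    },
}

for question, metrics in gqm["questions"].items():
    print(f"{question}\n  measures: {', '.join(metrics)}")
```

Notice that every metric in the sketch is expressed in a unit whose definition does not shift with the technology – which is the whole point of the quote above.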

Steve Daum wrote in “Stability and linearity: Keys to an effective measurement system”:

“Knowing that a measurement system is stable is a comfort to the individuals involved in managing the measurement system. If the measuring process is changing over time, the ability to use the data gathered in making decisions is diminished. If there is no method used to assess stability, it will be difficult to determine the sensitivity of the measurement system to change and the frequency of the change…Stability is the key to predictability.”

One of the most stable and consistent measures of software (functional size) is the IFPUG Function Point, and the International Function Point Users Group (IFPUG) is poised to celebrate its 30th year in 2017.  The IFPUG Function Point measure is stable (with hundreds of thousands of projects having been FP counted) and consistent (it has been an ISO/IEC standard for almost 20 years!) – and perhaps 2017 is the year that YOUR company should look at FP-based measurement.

FPA (Function Point Analysis) provides a measure of the size of software under development and can be used equally well on agile, waterfall, and hybrid software development projects.  Yet, despite its benefits, much of the world still doesn’t know about the measure.

See my first post of 2016 here:  Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics for more details.  FP is certainly a good place to start when you’re looking for software measurement success… why not today?

Wishing you a happy and safe holiday season wherever you live!

 

Fundamentals of Software Metrics in Two Minutes or Less


 

Fundamentals of SW Metrics in two minutes or less. To read more, click on the link:
http://www.qsm.com/blog/2013/fundamentals-software-metrics-two-minutes-or-less

Fundamentals of Software Estimating: First See the Elephant in the Room: Part 2


This is the second in a sequence of four posts (and more to come) on BASIC software estimating concepts that address “WHAT IS IT” that we plan to estimate.

Introduction

Whenever I teach project estimating or Project Management 101 (basics) to software developers, we address the fundamental questions about what we plan to do – how big is it, how long will it take, and how much will it cost (once called “The Triple Constraint” in project management circles).

What amazes me is that while these are great questions that need to be answered when we do an estimate, we start out by ignoring the “Elephant in the Room” – that is, what do we mean by the “it” in the questions above?  Sure, for some people it may be obvious that “it” is a project, but bear with me for a minute… misunderstandings about terminology and definitions are often the source of major rework when software construction is involved.

What is “it”?

  1. “It” could be the project (and all that the term entails);
  2. “It” could be the resultant product (software and/or hardware);
  3. “It” could be a phase or exploratory R&D effort; or
  4. “It” might be something altogether different...

The question of “what is it” that we are estimating is fundamental to why, IMHO (In My Humble Opinion), software projects end up being over budget, late, and out of scope.  If you don’t know definitively WHAT you are estimating, how can any possible estimate be realistic?

This post is Part 2:  “It” is the resultant Product

When the “what” we want to estimate is the time, effort, cost or scope to build a software intensive product, there are different considerations than if we are estimating a project, such as:

  • What is the product – a working, full-scale “system” (multiple interlocking pieces of software plus associated hardware, and glue code for linking it all together, and documentation) or a portion thereof?
  • Are training modules and online guides to be included as part of the product delivery?
  • Does the product include hardware and peripherals (like printers or scanners)?
  • Will the entire product (or service) be delivered in “one fell swoop” or delivered in separate pieces?  (This could involve multiple projects that may deliver parts of the software, with later projects having to redo/enhance what was delivered in an earlier release.)
  • Do we know what the product will actually be?
  • Are there multiple stakeholders (direct users or indirect ones) whose priorities for product functionality may vary or even conflict?
  • Has a product like this been done before (i.e., do we know what all is needed such as additional software, hardware, etc.)?
  • What else is included as part of the product? (And what is excluded?)

Software intensive systems are often negotiated and estimated as concrete commodities; however, the challenge of deciding what actually constitutes the software intensive system is a key question that needs to be settled before doing any estimating.

Sounds pretty basic, doesn’t it: know what “it” is you are estimating before you begin estimating…

Unfortunately, as Peter Drucker once said

“It is important to state the obvious otherwise it may be overlooked.”

Know what you are estimating BEFORE doing an estimate… pretty fundamental don’t you agree?

Comments, brickbats, responses welcome…

 

Measurement and IT – Friends or Frenemies ?


I confess, I am a software metrics ‘geek’… but I am not a zealot!  I agree that we desperately need measures to make sense of what we are doing in software development and to find pockets of excellence (and opportunities for improvement), but it has to be done properly!

Most (process) improvement models – whether they pertain to software and systems, manufacturing, children, or people – attest to the power of measurement, including the CMMI® (Capability Maturity Model Integration) and SPICE (Software Process Improvement and Capability dEtermination) models.

But we often approach what seems to be a simple concept – “Measure the Work Output and divide it by the Inputs” – backasswards (pardon my French!)
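To make that division concrete, here is a minimal sketch; the size, hours and cost figures are entirely hypothetical, chosen only to show the arithmetic.

```python
# Productivity = work output / work input, with hypothetical numbers.
functional_size_fp = 400      # work output: functional size in IFPUG Function Points
effort_hours = 3200           # work input: total project effort in person-hours
cost_usd = 320_000            # work input: total project cost

delivery_rate = effort_hours / functional_size_fp    # hours per FP (lower is better)
cost_per_fp = cost_usd / functional_size_fp           # unit cost

print(f"Delivery rate: {delivery_rate:.1f} hours/FP")
print(f"Unit cost:     ${cost_per_fp:.2f}/FP")
```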

Anyone who has been involved with software metrics or function points or CMMI/SPICE gone bad can point to the residual damage of overzealous management (and the supporting consultants) leaving a path of destruction in their wake.  I think that Measurement and IT are sometimes the perfect illustration of the term “Virtual Frenemies” (I’ll lay claim to it!) when it comes to poorly designed software metrics programs.  (The concepts can be compatible – but you need proper planning and open-minded participants! Read on…)

Wikipedia (yes, I know it is not the best source!) defines “Frenemy” (alternately spelled “frienemy”) as:

is a portmanteau of “friend” and “enemy” that can refer to either an enemy disguised as a friend or someone who’s both a friend and a rival.[1] The term is used to describe personal, geopolitical, and commercial relationships both among individuals and groups or institutions. The word has appeared in print as early as 1953.

Measurement as a concept can be good. Measure what you want to improve (and measure it objectively, consistently, and then ensure causality can be shown) and improve it.

IT as a concept can be good. Software runs our world and makes life easier. IT’s all good.

The problem comes in when someone (or some team) looks at these two “good” concepts and says, let’s put them together, makes the introduction, and then walks away.  “Be sure to show us good results and where we can do even better!” is the edict.

Left to its own devices, measurement can wreak havoc and run roughshod over IT – the wrong things are measured (“just measure it all with source lines of code or FP and see what comes out”), effort is spent measuring those wrong things (“just get the numbers together and we’ll figure out the rest later”), the data doesn’t correlate properly (“now how can we make sense of what we collected”), and misinformation abounds (“just plot what we have, it’s gotta tell us something we can use”).

In the process, the people working diligently (most of the time!) in IT get slammed by data they didn’t participate in collecting, and which often illustrates their “performance” in a detrimental way.  Involvement in the metrics program design, on the part of the teams who will be measured, is often sparse (or an afterthought), yet the teams are expected to embrace measurement and commit to changing whatever practices the resultant metrics analysis says they need to improve.

This happens often when a single measure or metric is used across the board to measure disparate types of work (using function points to measure work that has nothing to do with software development is like using construction square feet to measure landscaping projects!)

Is it any wonder that the software and systems industries are loath to embrace and take part in the latest “enterprise-wide” measurement initiative? Fool me once, shame on you… fool me twice, shame on me.

What is the solution to resolving this “Frenemies” situation between Measurement and IT?  Planning, communication, multiple metrics and a solid approach (don’t bring in the metrics consultants yet!) are the way.

Just because something is not simple to measure does not make it not worth measuring – and measuring properly.

For example, I know of a major initiative where a customer wants to measure the productivity of SAP-related projects to gain an understanding of how the cost per FP tracks on their projects compared to other (dissimilar) software projects and across the industry.

Their suppliers cite that Function Points (a measure of software functionality) do not work well for configurations (this is true) or integration work (this is true), and that it can take a lot of effort to collect FP for large SAP implementations (can be true).  However, that does not mean that the productivity cannot be measured at all!  (If all you have is a hammer, everything might look like a nail.)

It will require planning and design effort to arrive at an appropriate measurement approach to equitably and consistently track productivity across these “unique” types of projects. While this is non-trivial, the insight and benefits to the business will far exceed the effort.  Resistance on the part of suppliers to being measured (especially in anticipation of an unfair assessment based on a single metric!) is justified, but a good measurement approach (one that fairly sorts the types of work into different buckets, using different measures) is definitely attainable (and desired by the business).
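As an illustration of the “different buckets, different measures” idea, here is a minimal sketch; the work types, sizes, units and hours are hypothetical and are not taken from the initiative described above.

```python
# A sketch of "different buckets, different measures": hypothetical work types,
# sizes, units, and effort. The point is that one metric (FP) need not cover everything.

buckets = [
    # (work type,              size, unit,                 effort hours)
    ("custom development",      250, "IFPUG FP",            2000),
    ("SAP configuration",       120, "configured objects",   900),
    ("interfaces/integration",   18, "interfaces",           540),
]

for work_type, size, unit, hours in buckets:
    print(f"{work_type:25s} {hours / size:6.1f} hours per {unit}")
```

Each bucket gets its own delivery rate, so no one is forced to pretend that configuration work is the same animal as new functional development.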

The results of knowing where and how the money is “invested” in these projects will lead to higher levels of understanding on both sides, and open up discussions about how to better deliver!  The business might even realize where they can improve to make such projects more productive!

Watch my next few posts for further details about how to set up a fair and balanced software measurement program.

What do you think?  Are measurement and IT doomed to be frenemies forever? What is your experience?

Have a good week!
Carol

Trust and Verify are the (IT) Elephants in the room


As a party involved in some aspect of software development, why do you think projects are so hard?  Millions of dollars have gone into research to answer this question, with the result being new models, agile approaches and standards, all intended to streamline software development.

What do you think is the core reason for project success or failure?  Is it the people, process, requirements, budgets, schedule, personalities, the creative process or some combination?

Sure, IT (information technology) is a relatively new industry, plagued by technology advances at the speed of light, industries of customers and users who don’t know what they want, preset budgets, imposed schedules, elusive scope, and, ultimately, computer scientists and customers who still speak their own languages.  Some people argue that it boils down to communication (especially poor communication).  After all, isn’t communication the root cause of all wars, disputes, divorces, broken negotiations, and failed software projects?

I disagree.

I believe that TRUST and VERIFY are THE TWO most important factors in software development.

These two elements are the IT elephants in the room (so to speak!). I could be wrong, but it seems like the commonly cited factors (including communication) are simply symptoms of the elephants in the room – and no one is willing to talk about them.  Instead, we bring in new methodologies, new tools intended to bring customers and suppliers together, new approaches, and new standards – and all of these skirt the major issues: TRUST and VERIFY.

Why are these so critical?

Trust is the difference between negotiation and partnership – trust implies confidence,  a willingness to believe in (an)other, the assurance that your position and interests are protected, and the rationale that when life changes, the other party will understand and work with you. A partnership means that there is an agreement to trust in a second party and to give trust in return.  Trust is essential in software development.

BUT… many a contract and agreement have gone wrong with blind trust, and that is why VERIFY is as important as trust. Verify means to use due diligence to make sure that the trust is grounded in fact by using knowledge, history, and past performance as the basis.  Verify grounds trust, yet allows it to grow.

President Ronald Reagan popularized the phrase “Trust, but Verify” – but I believe it is better stated as “Trust and Verify” because the two reinforce each other.  This also suggests the saying:  “Fool me Once, Shame on You… Fool me Twice, Shame on Me.”

Proof that Trust and Verify are the Elephants in the Room

Software development has a history of dysfunctional behavior built on ignoring that Trust and Verify are key issues. It is easier for both the business (customers) and the engineers (suppliers) to pretend that they trust each other than to address the issues once and for all.  To admit to a lack of trust is tantamount to declaring war and accusing your “partners” of espionage.  It simply is not done in the polite company of corporate boardrooms.  And so we do the following:

  • Fixed price budgets are set before requirements are even known because the business wants to lower their risk (and mistrust);
  • Software development companies “pad” their estimates with generous margins to decrease their risk that the business doesn’t know what they want (classic mistrust);
  • Deadlines are imposed by the business based on gut-feel or contrived “drop dead” dates to keep the suppliers on track;
  • Project scope is mistakenly expressed in terms of dollars or effort (lagging metrics) instead of objective sizing (leading metrics);
  • Statements like “IT would be so much easier if we didn’t have to deal with users” are common;
  • Games like doubling the project estimate because the business will chop it in half become standard;
  • Unrealistic project budgets and schedules are agreed to in order to keep the business;
  • Neither side is happy about all the project meetings (lies, more promises, and disappointment).

Is IT doomed?

Trust is a critical component of any successful relationship involving humans (one might argue that it is also critical when pets are involved) – but so too is being confident in that trust (verify).  Several promising approaches address trust issues head on, and provide project metrics along the way to ensure that the trust remains.

One such approach is Kanban (the subject of this week’s Lean Software and Systems Development conference LSSC12 in Boston, MA).

Kanban for software and systems development was formalized by David Anderson and has been distilled into a collaborative set of practices that allow the business and software developers to be transparent about software development work – every step of the way.  Project work is prioritized and pulled in to be worked on only as the volume and pace (velocity) of the pipeline can accommodate.  Rather than having the business demand that more work be done faster, cheaper and better than is humanly possible (classic mistrust that the suppliers are not working efficiently), in Kanban, the business works collaboratively with the developers to manage (and gauge) what is possible to do and the pipeline delivers more than anticipated.  Trust and verify in action.
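For readers who like to see the mechanics, here is a minimal sketch of the pull principle – work is started only while there is capacity under an agreed work-in-progress (WIP) limit. It is my own illustration (the item names and the limit are made up), not an excerpt from David Anderson’s material.

```python
# A minimal sketch of Kanban-style pull: work is started only while the
# work-in-progress (WIP) count stays under an agreed limit. Names are illustrative.

from collections import deque

backlog = deque(["story A", "story B", "story C", "story D", "story E"])
wip_limit = 2
in_progress = []

def pull_work():
    """Pull items from the backlog only while there is capacity."""
    while backlog and len(in_progress) < wip_limit:
        in_progress.append(backlog.popleft())

pull_work()
print("In progress:", in_progress)   # ['story A', 'story B'] (the rest waits)

in_progress.remove("story A")        # finishing an item frees capacity...
pull_work()                          # ...so the next item is pulled
print("In progress:", in_progress)   # ['story B', 'story C']
```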

Another promising approach is Scope Management (supported by a body of knowledge and a European-based certification) – a collaborative approach whereby software development is done based on “unit pricing”.  Rather than entertaining firm, fixed-price, lose-lose (!!!) contracts where the business wants minimum price/maximum value and the supplier needs to curtail changes to deliver within the fixed price (and not lose their shirts), unit pricing actually splits a project into known components that are priced similarly to how home construction can be priced by square foot and landscaping priced by the number of trees.

In Scope Management (see www.qualityplustech.com and www.fisma.fi for more details or send me an email and I’ll send you articles), the business retains the right to make changes and keep the reins on the budget and project progress and the supplier gets paid for the work that the business directs to be done.  Project metrics and progress metrics are a key component in the delivery process.  Again TRUST and VERIFY are key components to this approach.
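Here is a minimal sketch of how unit pricing can play out; the price per Function Point and the delivery sizes are hypothetical, purely for illustration.

```python
# Unit-pricing sketch: the supplier is paid per delivered unit of size
# (here, euros per Function Point), so scope changes reprice automatically.
# The rate and sizes are hypothetical.

unit_price_eur_per_fp = 750

deliveries = {
    "release 1 (baseline scope)":      320,   # FP delivered
    "release 2 (added reports)":        45,
    "release 3 (changed interfaces)":   30,
}

total = 0
for item, size_fp in deliveries.items():
    price = size_fp * unit_price_eur_per_fp
    total += price
    print(f"{item:32s} {size_fp:4d} FP  -> EUR {price:,}")

print(f"{'Total':32s} {sum(deliveries.values()):4d} FP  -> EUR {total:,}")
```

Because the price follows the delivered size, the business keeps the right to change scope and the supplier gets paid for the work actually directed – the trust-and-verify idea expressed in a contract.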

What do you think? 

Please comment and share your opinion – are TRUST and VERIFY the IT elephants in the rooms at your company?

P.S. Don’t forget to sign up for the SPICE Users Group 2012 conference in 2 weeks in Palma de Mallorca, Spain. See www.spiceconference.com for details!  I’m doing a 1/2-day SCOPE MANAGEMENT tutorial on Tuesday, May 29, 2012.

“Scoping” out IT project failure…


According to a 2008 Gartner report, 15% of all IT projects failed that year because of high cost variance, while 18% were unsuccessful because they were substantially late. This means that in 2008, 1 in 3 technology projects failed.

Hmmm… IT project failure based on the fact that the estimates for budget and schedule proved to be incorrect.  Since many IT projects produce unique software products (i.e., never been done in exactly the same way before), is it any wonder that the estimates for scope AND budget AND schedule would be wrong?

Consider what this would mean in other industries if failure were based on having a high cost variance or being substantially late (whatever “high” and “substantial” mean):

  • Medicine:  when a doctor “estimates” that a full term baby is due on Oct 15 and the baby is born on October 31 (16 days late) – is that baby a failure?
  • Medicine: when an oncologist “estimates” that a patient has 18 months to live and the patient is still alive 5 years later – is that patient a failure?
  • Survival: when the mining accident happened in Chile and scientists predicted that there would be casualties, yet all the miners were rescued alive – was that a failure because the time frame exceeded expectations, even though every miner was saved?
  • Hurricanes:  when the scientist in Colorado predicts the number of hurricanes that will form in the Atlantic and how many will make landfall, and the number of storms falls short of his predictions – is that a failed hurricane season?
  • Everyday life: when you go grocery shopping with a list and a preset amount of cash, and you have to make several trips to purchase the items and they cost you more than you anticipated – is your Saturday a failure?
  • Everyday life: when your son makes the basketball team at school and your planned school budget is exceeded after shoes and uniforms – is the school year a failure?

All of these and many more examples in other industries illustrate how an “estimate” is simply a best guess based on history and science.  But life doesn’t follow science (truth can be stranger than fiction, but that is another post for another time).  Just as a psychic cannot reliably predict the future (they can only make an educated guess based on intuition and observation), software estimators cannot reliably forecast the life of a project before it begins.  In fact, software estimators often work with even less predictable scopes than those outlined in the examples above.

So, in the context of a software project, what does on-time and on-budget really mean?  Given a set of approximate inputs for the major cost drivers (based on the information at hand) together with historical data for similar projects, an estimate is derived.  Will this estimate be correct?  Never!  It is always only a best guess given “typical” environment and situational characteristics – and an optimistic view of what could happen during the project.

If the estimator is gifted with a solid and complete set of requirements for a piece of software similar to a historical one, s/he might come within a range of accuracy.  Software, however, remains an amorphous product for which good requirements are often ill-defined or are discovered late in the development life-cycle.
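To illustrate what “a range of accuracy” can look like in practice, here is a minimal sketch that derives an effort range from the delivery rates of similar completed projects; every figure is hypothetical.

```python
# A sketch of estimating from history: take hours-per-FP from similar completed
# projects, and express the new estimate as a range rather than a single number.
# All figures are hypothetical.

import statistics

historical_hours_per_fp = [7.2, 8.5, 9.1, 7.8, 10.4]   # similar past projects
new_project_size_fp = 500                               # estimated functional size

median_rate = statistics.median(historical_hours_per_fp)
low_rate, high_rate = min(historical_hours_per_fp), max(historical_hours_per_fp)

print(f"Likely effort: {new_project_size_fp * median_rate:,.0f} hours")
print(f"Range:         {new_project_size_fp * low_rate:,.0f} "
      f"to {new_project_size_fp * high_rate:,.0f} hours")
```

Even then, the range is only as good as the similarity between the new project and the history behind it – which is exactly the point of this post.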

IT project failure (in my humble opinion) should be defined based on scope – if the project misses the product requirements or gets them wrong, then the project should be deemed a failure.  Not whether it cost more than the estimator thought (which is a reflection of how well the estimator could predict life) or whether the product was delivered late (also a function of how well the estimate mirrored life).  It makes so much more sense to track project delivery based on scope!  But wouldn’t that mean we’d have to concentrate on getting the requirements right and then delivering the product right?  That’s a novel thought.

What do you think?

Carol


Estimating and the Psychology of On Time, Early and Late


Is there a link between estimating and psychology?  I certainly believe that there is – especially when it comes to software development.  As humans, we react to what others tell us. Sometimes the results are surprisingly positive; just as often they may be negative.  How we react can give us insight into how others might react – and why.

When estimating the duration or effort of any activity, the word estimate means the following (from freedictionary.com):

es·ti·mate  tr.v. es·ti·mat·ed, es·ti·mat·ing, es·ti·mates

1. To calculate approximately (the amount, extent, magnitude, position, or value of something).
2. To form an opinion about; evaluate: “While an author is yet living we estimate his powers by his worst performance” (Samuel Johnson).
n.

1. The act of evaluating or appraising.
2. A tentative evaluation or rough calculation, as of worth, quantity, or size.
3. A statement of the approximate cost of work to be done, such as a building project or car repairs.
4. A judgment based on one’s impressions; an opinion.

Note that it does not mean prediction or forecast — but rather a rough calculation.

But, here is what happens when someone gives us an estimate: our minds access our historical database of what happened with a similar event in the past and gauges whether the “estimate” has a likelihood of being on time, early or late.  It does not matter if the person(s) involved in the current estimate had anything to do with the past because our minds will find similarity and tie the two events together. If we have no memory of a similar event, then we try to tie the estimate to our experience with the estimator or team.  In the case of no similar event or persons, we simply go on blind faith that the estimate will be correct.  If the estimate is accompanied by an estimated range of accuracy (e.g., plus or minus a % or timeframe) it “feels” even more reliable, when in fact the estimate is still only a best guess (albeit educated) of when an activity will start or finish, and we often expressly accept it.

Then if the activity finishes early (in the case of software estimating, we assume customers will be pleased) we are generally surprised, pleased or irritated depending on our own plans (i.e., if we made plans based on an estimate, we might be irritated with an early delivery. On the other hand, if we hoped for an early finish, we might be surprised and pleased). The same holds true for others. If a program or function is delivered early and we rely on our customers to test it before we can move forward, our customer may not be ready in time because the delivery beat the estimate, or they may be elated because their need for the product was urgent and you over-delivered.

When the delivery is “on-time”, it simply means that life went according to plan per the estimate:  durations were right and no unforeseen life events came into play that prevented the on-time delivery. When this happens, our trust in the estimator and the team increases, even when it may not be warranted. (The estimate may not have included contingency for life events and the estimator was simply lucky that things went according to plan.)

When the delivery is “late”, it simply means that life intervened more than anticipated due to increased effort or duration (“we didn’t anticipate that this would be so difficult”).  Our reactions to this lateness of others can range from the mundane (“typical – I knew they were too optimistic”) to anger (“now it’s going to cost me more”) to relief (“thank goodness they are late, now I can also be late”).  Additional psychology happens with late deliveries and we judge (and store in our memory) future estimates on the spot.  Our customers do the same thing – they judge our ability to deliver when we are late – even if they do not understand how their own role or lack of participation contributed to the lateness.

In software estimating, an interesting disconnect occurs on the part of corporate memory. When a supplier provides an estimate of the work it takes to do a phase or activity, the estimate is often perceived as too high (too costly), and management will routinely cut the estimate in half or less and still expect an on-time delivery.  More often than not, the original estimate was exactly the opposite: overly optimistic about the delivery dates.  As a result, the delivery usually ends up being late for one or more of several reasons: 1. the new estimate (less than the first estimate due to a management gut-feel that the original estimate was high) was unrealistic; 2. the first estimate was accepted but was overly optimistic; 3. the estimate was based on an artificial delivery date set before the project was even launched (before the scope was known), and the date became immovable once it was announced; or 4. life intervened and what happened on the project was not what was planned. If no contingency for anticipated and unanticipated factors was included in the estimate, the delivery will always be late (unless less is delivered).

Does it not seem obvious that an unrealistically low estimate will predicate a late delivery?  Yet the situation repeats itself in IT, reminding us of the wisdom attributed to Einstein: insanity is doing the same thing repeatedly and expecting different results. To fix the dysfunction in IT of slashed estimates and late delivery, customer sponsors need to overcome their urge to adjust the estimates (no matter how high they may seem to be), and suppliers need to include ranges around estimate accuracy.  Then an on-time delivery becomes a realistic possibility.

When our minds react to dates – even unrealistic ones – we set up a basis for future reactions.  Surprise is not a comfortable feeling – even if it is part of a positive result, and so our minds will entrench a historical reminder to prevent such reaction in the future:  “the next project will also be early” or “they will never get it right and will always be late” or “they were lucky to get finished on time” are potential responses.

An example of early, uncomfortable delivery happened to me this morning.  I had to catch a 7am flight from Dallas to Washington, DC and had called last night at 10 pm to arrange an early morning shuttle pickup at my hotel. The service told me that I would be picked up between 4:35 and 4:50 am so I set my alarm accordingly.  I judged that the service was going to be very accurate because: 1. It was within 5 hours of my pickup time; 2. they provided a 15 minute window (and my history with the service was that they would be late); and 3. I talked to a real person.  Imagine my surprise (and displeasure) when I received an automated phone call at 4:00 am alerting me that my shuttle was within 10 minutes of arriving. Less than five minutes later, another phone call informed me that the shuttle was now waiting at the door and any delay on my part would keep others waiting.  This was a mismatch with my experience (shuttles would run up to an hour late) and it interfered with my plans to be on time with their original estimate. Had the service told me that they’d be there at 4:30 am plus or minus a half-hour, I would have been better prepared, and without an explanation, I can only assume the early delivery was for their convenience in arranging the least number of trips (two others were in the van already and probably had earlier flights to catch).

Using our own life experiences prepares us to face the realities of estimating and the psychology of how others react to on-time, early and late deliveries.  This helps us understand why our anticipation of user delight over early delivery may not be met, and how our perception of estimates may not be shared by those with downstream involvement on our projects.

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.

What’s the (function) point of Measurement?


It’s been more than 30 years since “function point analysis”  emerged in IT and yet most of the industry either: a) has never heard of it; b) has a misguided idea of what function points are; or c) was the victim of a botched software measurement program based on function points.

Today I’d simply like to clear up some common misconceptions about what function points are and what they are NOT. Future postings will get into the nuts and bolts of function points and how to use them; this is simply a first starting point.

What’s a function point?

A “function point” (FP) is a unit of measure that can be used to gauge the functional size of a piece of software.  (I published a primer on function points titled Managing (the Size of) Your Projects – A Project Management Look at Function Points in the Feb 1999 issue of CrossTalk – the Journal of Defense Software Engineering, from which I have excerpted here):

“FPs measure the size of a software project’s work output or work product rather than measure technology-laden features such as lines of code (LOC). FPs evaluate the functional user requirements that are supported or delivered by the software. In simplest terms, FPs measure what the software must do from an external, user perspective, irrespective of how the software is constructed. Similar to the way that a building’s square measurement reflects the floor plan size, FPs reflect the size of the  software’s functional user requirements…

However, to know only the square foot size of a building is insufficient to manage a construction project. Obviously, the construction of a 20,000 square-foot airplane hangar will be different from a 20,000 square-foot office building. In the same manner, to know only the FP size of a system is insufficient to manage a system development project: A 2,000 FP client-server financial project will be quite different from a 2,000 FP aircraft avionics project.”

In short, function points are an ISO-standardized measure that provides an objective number reflecting the size of what the software will do from an external “user” perspective (a user is defined as any person, thing, other application software, hardware, department, etc. – anything that sends or receives data or uses data from the software).  Function points offer a common denominator for comparing different types of software construction, whereby cost per FP and effort hours per FP can be determined.  This is similar to cost per square foot or effort per square foot in construction.  However, it is critical to know that function points are only part of what is needed to do proper performance measurement or project estimating.
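To echo the excerpt above with a little arithmetic: two projects can have identical functional size and still have very different delivery rates, which is exactly why size alone is not enough. Here is a minimal sketch; the sizes and hours are hypothetical.

```python
# A sketch of why functional size alone is not enough: two projects of the same
# FP size can have very different delivery rates. Figures are hypothetical.

projects = {
    "client-server financial system": {"size_fp": 2000, "effort_hours": 12000},
    "aircraft avionics system":       {"size_fp": 2000, "effort_hours": 38000},
}

for name, p in projects.items():
    rate = p["effort_hours"] / p["size_fp"]
    print(f"{name:32s} {p['size_fp']} FP, {rate:.1f} hours/FP")
```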

To read the full article, click on the title Managing (the Size of) Your Projects – A Project Management Look at Function Points.

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

Carol Dekkers provides realistic, honest, and transparent approaches to software measurement, software estimating, process improvement and scope management.  Call her office (727 393 6048) or email her (dekkers@qualityplustech.com) for a free initial consultation on how to get started to solve your IT project management and development issues.

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.

Contact Carol to keynote your upcoming event – her style translates technical matters into digestible soundbites, with straightforward and honest advice that works in the real world!

The “Dog Chasing its Tail” Syndrome in Project Estimating


Software estimating is plagued by dysfunction, not the least of which is estimation based on under-reported historical hours from previously completed projects.  See posting IT Performance Measurement… Time Bandits for a discussion about this problem.

BUT other problems are prevalent when launching a project estimating initiative, which I call the “Dog Chasing Its Tail” syndrome.  It describes dysfunctional project behavior that becomes established and continues to be reinforced to the detriment of the organization. As a result, the pattern repeats and process improvement is seldom realized.

What is the Dog Chasing its Tail Syndrome? It’s a noble goal to increase the predictability and reliability of project estimates – when estimating is based on sound principles.  However, “estimating” is often a misnomer for what should be called “guesstimating,” because the data on which estimates are based is sketchy at best.

Here’s the process epitomized in the “Dog Chasing its Tail”:

1. Incomplete (or preliminary) requirements and sketchy quality/performance requirements. While requirements are still preliminary (no formal requirements or use cases are known), it is customary for management (customer or supplier or both) to demand a project estimate for budget or planning purposes. Labeled initially as a “ball park estimate” (a rough order of magnitude (ROM) guess of whether the effort is going to be bigger than a breadbox or smaller than a football field), the sketchy requirements are used as the basis to get the ROM.

2. The (Guess)timate becomes the project budget and plan. While management initially understands that an estimate is impossible without knowledge of what is to be done, estimators contribute to the reliance on the guesses by providing them with a feigned level of accuracy (e.g., if requirements span a total of two sentences, the resultant estimate may still include hours or dollar figures with the ones digit filled in – see the sketch after this list).  As a result, too often the (guess)timate becomes the approved upper-limit budget or effort allowance.  Of course these figures will be proven wrong once the solid requirements are documented and known, but we are now stuck with this project estimate.

3. Changes challenge the status quo budget and schedule. When a change or clarification to requirements emerges (as they always do when human beings are involved), there is often a period of blame where suppliers allege that the item in question is a change (addition) to the original requirements on which the estimate was based, while the customer alleges that it simply clarifies existing requirements.  Of course, neither one can be proven correct, because the requirements on which the estimate was based were sketchy, incomplete and poorly documented. Once the dust settles and it becomes clear that the item will impact the project budget and schedule, the change/clarification is deferred to the next phase (“thrown over the fence” as an enhancement to be done in the next release), where it will be poorly documented but we will estimate it anyway, and so the cycle continues.
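Here is the sketch promised in step 2: a contrast between an honest rough-order-of-magnitude range and a single figure that only looks precise. The numbers (and the factor-of-two spread) are hypothetical illustrations, not calibrated values.

```python
# A sketch contrasting a rough-order-of-magnitude (ROM) range with the false
# precision of a single-figure "estimate". Numbers are hypothetical.

nominal_effort_hours = 4200             # best guess from two sentences of requirements

rom_low  = nominal_effort_hours * 0.5   # early estimates can easily be off by
rom_high = nominal_effort_hours * 2.0   # a large factor in either direction

print(f"Honest ROM:      {rom_low:,.0f} to {rom_high:,.0f} hours")
print(f"False precision: {nominal_effort_hours:,} hours (looks exact, isn't)")
```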

If you’ve ever had a dog, you know that this is similar to a dog chasing its tail, whereby the behavior goes on until either the dog gets tired or gets distracted by other things going on (such as food being served).  As smart software engineers, we ought to be smarter than dogs!  And, given a scope management approach, we can be!  Break the cycle of dysfunctional estimation and investigate scope management – you and your customers will be glad to move forward rather than facing the insanity of repeating the same process over and over and expecting different results (along the lines of the Einstein quote!).  See http://www.qualityplustech.com for information on scope management training and resources available to break the “Dog Chasing its Tail” syndrome on your projects.

Watch for the upcoming post on the hidden dangers in project hours…

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

Carol Dekkers provides realistic, honest, and transparent approaches to software measurement, software estimating, process improvement and scope management.  Call her office (727 393 6048) or email her (dekkers@qualityplustech.com) for a free initial consultation on how to get started to solve your IT project management and development issues.

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.

Contact Carol to keynote your upcoming event – her style translates technical matters into digestible soundbites, with straightforward and honest solutions that work in the real world!