Category Archives: certified scope manager (CSM)

To Succeed with Measurement, Choose Stable Measures


The pace of technology advancement can be staggering – new tools, methods, acronyms, programming languages, platforms, and solutions come at us at warp speed, morphing our IT landscape into a patchwork quilt of old and new technologies.

At times, it can be challenging to gauge the results of change: which specific processes, tools, methods, technologies, architectures, or solutions contributed to or delivered positive results?  How can we tell what made things worse?

Defining positive “results” is the first step, and measurement can contribute – as long as our measures don’t shift with the technology!

I and countless others have written about Victor Basili’s GQM (Goal Question Metric) approach to measurement (in short, choose measures that answer the questions you need answered so you can achieve the goal of measurement).  But there is a problem even more fundamental, one that goes beyond choosing the right measures:

The key to (IT) measurement lies in stability and consistency:  choosing stable measures (industry standardized definitions that don’t change) and measuring consistently (measuring the same things in the same way.)
– Carol Dekkers, 2016

This may seem like common sense, but after 20 years of seeing how IT applies measurement, I realize common sense isn’t all that common.  There are some in the IT world who would rather invent new measures (thus decreasing stability and consistency) than embrace proven ones.  While I’ve seen the academic tendency to “tear down what already exists to make room for my new ideas,” I believe this is counter-productive when it comes to IT metrics.  But I’m getting ahead of myself.  First, let’s consider how measurement is done in other industries:

  • Example 1: Building construction.  Standard units of measure (imperial or metric) are square feet and square meters.  The definition of a square foot has not changed despite advances in modular design.
  • Example 2: Manufacturing.  Units of measure for tolerances, product sizes, weights, etc. (inches, mm, pounds, kg, etc.) are the same through the years.
  • Example 3: Automobiles.  Standard ratios such as miles per gallon (mpg) and acceleration (0-60 in x seconds) remain industry standards.

In each example, the measure is stable, and measurement success is a result of consistent and stable (unchanging) units of measure applied across changing environments.  Comparisons of mpg or costs per square foot would be useless if the definition of the units of measure were not stable.  Comparability across products or processes depends on the consistency and stability of both the measurement process and the measures themselves.

Steve Daum wrote in “Stability and linearity: Keys to an effective measurement system”:

“Knowing that a measurement system is stable is a comfort to the individuals involved in managing the measurement system. If the measuring process is changing over time, the ability to use the data gathered in making decisions is diminished. If there is no method used to assess stability, it will be difficult to determine the sensitivity of the measurement system to change and the frequency of the change…Stability is the key to predictability.”

One of the most stable and consistent measures of software (functional size) is the IFPUG Function Point, and the International Function Point Users Group (IFPUG) is poised to celebrate its 30th year in 2017.  The IFPUG Function Point measure is stable (hundreds of thousands of projects have been counted in FP) and consistent (it has been an ISO/IEC standard for almost 20 years!) – perhaps 2017 is the year that YOUR company should look at FP-based measurement.

FPA (Function Point Analysis) provides a measure of the size of software under development and can be used equally well on agile, waterfall, and hybrid software development projects.  Yet, despite its benefits, much of the world still doesn’t know about the measure.

See my first post of 2016 here:  Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics, for more details.  FP is certainly a good place to start when you’re looking for software measurement success… why not start today?

Wishing you a happy and safe holiday season wherever you live!

 

Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics


Years ago, when Karl Wiegers (Software Requirements) introduced his “No More Models” presentation, the IT landscape was rife with new concepts ranging from Extreme Programming to the Agile Manifesto to CMMI’s multiple models to Project/Program/Portfolio Management.

Since then, the rapidity of change in software and systems development has slowed, leaving the IT landscape checkered with agile, hybrid, spiral, and waterfall projects.  Change is the new black, and busy professionals and project estimators are stressed to find consistent metrics applicable to such a diverse project portfolio.  Velocity, burn rates, story points, and other modern metrics apply to agile projects, while defect density, use cases, productivity, and duration delivery rates are common on waterfall projects.

What can a prudent estimator or process improvement specialist do to level the playing field when faced with disparate data and the challenge of finding the productivity or quality “sweet spot”?  You may be surprised to learn that Function Point Analysis (FPA) is part of the answer, and that Function Points are as relevant today as when they were first invented in the late 1970s.

What are function points (FP) and how can they be used?

Function points are a unit of measure of software functional size – the size of a piece of software based on its “functional user requirements” – in other words, a quantification that answers the question, “What are the self-contained functions done by the software?”

Function points are analogous to the square feet of a construction floor plan and are independent of how the software must perform (the non-functional “building code” for the software) and how the software will be built (the technical requirements).

As such, functional size (expressed in FP) is independent of programming language and methodology approach:  a 1000 FP piece of software will be the same size whether it is developed in Java, C++, or any other programming language.

Continuing with the construction analogy, the FP size of a project does not change whether the work is done using a waterfall, agile, or hybrid approach.  Because it is a consistent and objective measure dependent only on the functional requirements, FP can be used to size the software delivered in a release (a consistent delivery concept) on agile and waterfall projects alike.

Why are FP a consistent and stable measure?

The standard methodology for counting function points is an ISO standard (ISO/IEC 20926) supported by the International Function Point Users Group (IFPUG).  Established in 1984, IFPUG maintains the method and publishes case studies to demonstrate how to apply the measurement method regardless of variations in how functional requirements are documented.  FP counting rules are both consistent and easy to apply; the rules have not changed for the past decade.
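To make the counting arithmetic concrete, here is a minimal sketch in Python.  The complexity weights follow the published IFPUG table (five function types, each rated low/average/high); the inventory of functions is an invented order-entry example, not a real count:

```python
# Minimal sketch of an unadjusted IFPUG function point (UFP) count.
# Weights follow the published IFPUG / ISO 20926 complexity table;
# the inventory of functions below is hypothetical.

WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # External Inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # External Outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7, "high": 10},   # External Interface Files
}

def unadjusted_fp(inventory):
    """inventory: list of (function_type, complexity) pairs."""
    return sum(WEIGHTS[ftype][cplx] for ftype, cplx in inventory)

# Hypothetical order-entry application:
inventory = [
    ("EI", "average"),   # add order
    ("EI", "low"),       # cancel order
    ("EO", "high"),      # monthly sales report
    ("EQ", "low"),       # order status inquiry
    ("ILF", "average"),  # order file maintained by the application
    ("EIF", "low"),      # customer master maintained elsewhere
]

print(unadjusted_fp(inventory))  # 4 + 3 + 7 + 3 + 10 + 5 = 32 FP
```

The same inventory yields the same 32 FP no matter which language or methodology eventually builds those functions – that is the stability argument in miniature.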

Relevance of FP in today’s IT environment

No matter what method is used to prepare and document a building floor plan, the square foot size is the same.  Similarly, no matter what development methodology or programming language is used, the function point size is the same.  This means that functional size remains a relevant and important measure across an IT landscape of ever-changing technologies, methods, tools, and programming languages.  FP works as a consistent common denominator for calculating productivity and quality ratios (hours / FP and defects / FP respectively), and facilitates the comparisons of projects developed using different methods (agile, waterfall, hybrid, etc.) and technical architectures.
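As a rough illustration of the common-denominator idea, consider two projects measured the same way.  The project data below is invented for illustration; the ratios are the standard hours/FP and defects/FP mentioned above:

```python
# Sketch: FP as a common denominator across methodologies.
# The project data below is invented for illustration only.

projects = [
    {"name": "billing rewrite", "method": "waterfall",
     "fp": 850, "hours": 9350, "defects": 34},
    {"name": "mobile portal", "method": "agile",
     "fp": 420, "hours": 3780, "defects": 21},
]

for p in projects:
    hours_per_fp = p["hours"] / p["fp"]      # productivity (lower is better)
    defects_per_fp = p["defects"] / p["fp"]  # quality (lower is better)
    print(f"{p['name']} ({p['method']}): "
          f"{hours_per_fp:.1f} h/FP, {defects_per_fp:.3f} defects/FP")
```

Because both projects are sized in the same unit, the agile and waterfall results can be compared directly, which is impossible with story points on one side and use cases on the other.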

Consistency reigns supreme

THE single most important characteristic of any software measure is consistency of measurement!

This applies to EVERY measure in our estimating or benchmarking efforts, whether we’re talking about effort (project hours), size (functional size), quality (defects), duration (calendar time) or customer satisfaction (using the same questionnaire.)  Consistency is seldom a given and can be easily overlooked – especially in one’s haste to collect data.

It takes planning to ensure that every number that goes into a metric or ratio is measured the same way using the same rules.  As such, definitions for defects, effort (especially who is included, when a project starts/stops, and what is collected), and size (FP) must be documented and used.
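One lightweight way to enforce such definitions is to encode them alongside the data, so every record carries the same counting rules.  Here is a sketch; the field choices and the counting rules in the comments are illustrative assumptions, not a standard:

```python
# Sketch: documenting measurement definitions next to the data itself.
# The counting rules in the comments are illustrative assumptions.

from dataclasses import dataclass
from datetime import date

@dataclass
class ProjectMeasure:
    """One project's data, collected under the same documented rules."""
    name: str
    fp: int                # IFPUG FP, counted at requirements sign-off
    effort_hours: float    # all project roles, charter approval to release
    defects: int           # acceptance-test defects only, severity 1-3
    start: date            # "start" rule: date the charter was approved
    end: date              # "stop" rule: date of the production release

    def hours_per_fp(self) -> float:
        return self.effort_hours / self.fp

p = ProjectMeasure("claims intake", 510, 5610.0, 18,
                   date(2015, 2, 2), date(2015, 11, 30))
print(f"{p.hours_per_fp():.1f} hours/FP")  # 11.0
```

If every project record is captured this way, two teams cannot silently count effort or defects differently without it showing up in the definitions.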

For more information about Function Point Analysis (FPA) and how it can be applied to different software environments or if you have any questions or comments, please send me an email (dekkers@qualityplustech.com) or post a comment below.

To a productive 2016!

Carol

QSM (Quantitative Software Management) 2014 Research Almanac Published this week!


Over the years I’ve been privileged to have articles included in compendiums with industry thought leaders whose work I’ve admired.  This week, I had another proud moment as my article was featured in the QSM Software Almanac: 2014 Research Edition along with a veritable who’s who of QSM.

This is the first almanac produced by QSM, Inc. since 2006, and it features 200+ pages of relevant research-based articles garnered from the SLIM(R) database of over 10,000 completed and validated software projects.

Download your FREE copy of the 2014 Almanac by clicking this link.


What Software Project Benchmarking & Estimating Can Learn from Dr. Seuss


Sometimes truth is stranger than fiction – or children’s stories at least – and I’m hoping you’ll relate to the latest blog post I published on the QSM blog last week.  I grew up on Dr. Seuss stories – and I think my four siblings and I shared the entire series (probably one of the first loyalty programs – buy the first 10 books, get one free…)

I’d love to hear your comments and whether you agree with the analogy: that we seek to create precise software sample sets for benchmarking and, in so doing, lose the benefits of the trends we could leverage with larger sample sets.  Read on and let me know!  (Click here.)

Happy November!

Carol


Fundamentals of Software Estimating: First See the Elephant in the Room: Part 2


This is the second in a sequence of four posts (and more to come) on BASIC software estimating concepts that address “WHAT IS IT?” – the question of what we plan to estimate.

Introduction

Whenever I teach project estimating or Project Management 101 (basics) to software developers, we address the fundamental questions about what we plan to do (once called “The Triple Constraint” in project management circles):

  • How long will “it” take (schedule)?
  • How much will “it” cost (budget)?
  • What is included in “it” (scope)?

What amazes me is that while these are great questions that need to be answered when we do an estimate, we start out by ignoring the “Elephant in the Room” – that is, what we mean by the “it” in the questions above.  Sure, for some people it may be obvious that “it” is a project, but bear with me for a minute… misunderstandings about terminology and definitions are often the source of major rework when software construction is involved.

What is “it”?

  1. “It” could be the project (and all that the term entails);
  2. “It” could be the resultant product (software and/or hardware);
  3. “It” could be a phase or exploratory R&D effort; or
  4. “It” might be something altogether different...

The question of what “it” is that we are estimating is fundamental to why, IMHO (In My Humble Opinion), software projects end up over budget, late, and out of scope.  If you don’t know definitively WHAT you are estimating, how can any estimate possibly be realistic?

This post is Part 2:  “It” is the resultant Product

When the “what” we want to estimate is the time, effort, cost or scope to build a software intensive product, there are different considerations than if we are estimating a project, such as:

  • What is the product – a working, full-scale “system” (multiple interlocking pieces of software plus associated hardware, and glue code for linking it all together, and documentation) or a portion thereof?
  • Are training modules and online guides to be included as part of the product delivery?
  • Does the product include hardware and peripherals (like printers or scanners)?
  • Will the entire product (or service) be delivered in “one fell swoop” or delivered in separate pieces?  (This could involve multiple projects that may deliver parts of the software, with later projects having to redo/enhance what was delivered in an earlier release.)
  • Do we know what the product will actually be?
  • Are there multiple stakeholders (direct users or indirect ones) whose priorities for product functionality may vary or even conflict?
  • Has a product like this been done before (i.e., do we know what all is needed such as additional software, hardware, etc.)?
  • What else is included as part of the product? (And what is excluded?)

Software intensive systems are often negotiated and estimated as concrete commodities; however, deciding “what actually constitutes the software intensive system” is a key question that needs to be answered before doing any estimating.

Sounds pretty basic, doesn’t it?  Know what “it” is you are estimating before you begin estimating…

Unfortunately, as Peter Drucker once said:

“It is important to state the obvious otherwise it may be overlooked.”

Know what you are estimating BEFORE doing an estimate… pretty fundamental don’t you agree?

Comments, brickbats, responses welcome…

 

Trust and Verify are the (IT) Elephants in the room


As a party involved in some aspect of software development, why do you think projects are so hard?  Millions of dollars have gone into research to answer this question, resulting in new models, agile approaches, and standards, all intended to streamline software development.

What do you think is the core reason for project success or failure?  Is it the people, process, requirements, budgets, schedule, personalities, the creative process or some combination?

Sure, IT (information technology) is a relatively new industry, plagued by technology advances at the speed of light, customers and users who don’t know what they want, preset budgets, imposed schedules, elusive scope – and, ultimately, computer scientists and customers who still speak their own languages.  Some people argue that it boils down to communication (especially poor communication).  After all, isn’t communication the root cause of all wars, disputes, divorces, broken negotiations, and failed software projects?

I disagree.

I believe that TRUST and VERIFY are THE TWO most important factors in software development

These two elements are the IT elephants in the room (so to speak!)  I could be wrong, but it seems that the commonly cited factors (including communication) are simply symptoms of the elephants in the room – and no one is willing to talk about them.  Instead, we bring in new methodologies, new tools intended to bring customers and suppliers together, new approaches, and new standards – and all of these skirt the major issues: TRUST and VERIFY.

Why are these so critical?

Trust is the difference between negotiation and partnership – trust implies confidence, a willingness to believe in (an)other, the assurance that your position and interests are protected, and the rationale that when life changes, the other party will understand and work with you. A partnership means that there is an agreement to trust in a second party and to give trust in return.  Trust is essential in software development.

BUT… many a contract and agreement has gone wrong with blind trust, and that is why VERIFY is as important as trust. Verify means using due diligence to make sure that the trust is grounded in fact, using knowledge, history, and past performance as the basis.  Verify grounds trust, yet allows it to grow.

President Ronald Reagan made famous the phrase “Trust, but Verify” (borrowed from a Russian proverb) – but I believe it is better stated as “Trust and Verify” because the two reinforce each other.  It also echoes the saying:  “Fool me once, shame on you… fool me twice, shame on me.”

Proof that Trust and Verify are the Elephants in the Room

Software development has a history of dysfunctional behavior built on ignoring that Trust and Verify are key issues. It is easier for both the business (customers) and the engineers (suppliers) to pretend that they trust each other than to address the issues once and for all.  To admit to a lack of trust is tantamount to declaring war and accusing your “partners” of espionage.  It simply is not done in the polite company of corporate boardrooms.  And so we do the following:

  • Fixed price budgets are set before requirements are even known because the business wants to lower their risk (and mistrust);
  • Software development companies “pad” their estimates with generous margins to decrease their risk that the business doesn’t know what they want (classic mistrust);
  • Deadlines are imposed by the business based on gut-feel or contrived “drop dead” dates to keep the suppliers on track;
  • Project scope is mistakenly expressed in terms of dollars or effort (lagging metrics) instead of objective sizing (leading metrics);
  • Statements like “IT would be so much easier if we didn’t have to deal with users” are common;
  • Games like doubling the project estimate because the business will chop it in half become standard;
  • Unrealistic project budgets and schedules are agreed to in order to keep the business;
  • Neither side is happy about all the project meetings (lies, more promises, and disappointment).

Is IT doomed?

Trust is a critical component of any successful relationship involving humans (one might argue that it is also critical when pets are involved) – but so too is being confident in that trust (verify).  Several promising approaches address trust issues head on, and provide project metrics along the way to ensure that the trust remains.

One such approach is Kanban (the subject of this week’s Lean Software and Systems Development conference LSSC12 in Boston, MA).

Kanban for software and systems development was formalized by David Anderson and has been distilled into a collaborative set of practices that allow the business and software developers to be transparent about software development work – every step of the way.  Project work is prioritized and pulled in to be worked on only as the volume and pace (velocity) of the pipeline can accommodate.  Rather than the business demanding that more work be done faster, cheaper, and better than is humanly possible (classic mistrust that the suppliers are not working efficiently), in Kanban the business works collaboratively with the developers to manage (and gauge) what is possible to do – and the pipeline delivers more than anticipated.  Trust and verify in action.
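For readers unfamiliar with the mechanics, the heart of Kanban’s transparency is the WIP-limited pull rule.  A toy simulation (stage names, limits, and stories are all invented) shows how the pipeline self-regulates instead of being pushed past its capacity:

```python
# Toy Kanban pipeline: work enters a stage only when that stage is
# under its WIP limit. Stage names, limits, and stories are invented.

from collections import deque

wip_limits = {"analysis": 2, "build": 3, "test": 2}
stages = {name: deque() for name in wip_limits}
backlog = deque(f"story-{i}" for i in range(1, 10))

def pull(stage, source):
    """Pull one work item into a stage only if its WIP limit allows it."""
    if source and len(stages[stage]) < wip_limits[stage]:
        stages[stage].append(source.popleft())
        return True
    return False  # stage full (or nothing upstream): the pipeline waits

# A few pull cycles, downstream first so freed capacity ripples upstream:
for _ in range(8):
    pull("test", stages["build"])
    pull("build", stages["analysis"])
    pull("analysis", backlog)

print({name: list(items) for name, items in stages.items()})
print("waiting:", list(backlog))
# No stage ever exceeds its limit; excess demand queues visibly in the
# backlog instead of being hidden inside an overloaded team.
```

Both sides can see exactly how much work is in flight and where it queues – which is precisely the “verify” that keeps the trust grounded.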

Another promising approach is Scope Management (supported by a body of knowledge and a European-based certification) – a collaborative approach whereby software development is done based on “unit pricing”.  Rather than entertaining firm, fixed-price, lose-lose (!!!) contracts – where the business wants minimum price/maximum value and the supplier needs to curtail changes to deliver within the fixed price (and not lose their shirt) – unit pricing splits a project into known components that are priced much as home construction is priced by the square foot and landscaping by the number of trees.

In Scope Management (see www.qualityplustech.com and www.fisma.fi for more details, or send me an email and I’ll send you articles), the business retains the right to make changes and keeps the reins on the budget and project progress, and the supplier gets paid for the work that the business directs to be done.  Project metrics and progress metrics are a key component of the delivery process.  Again, TRUST and VERIFY are key components of this approach.
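To see why unit pricing defuses the fixed-price standoff, here is a back-of-the-envelope sketch; the unit price and the change log are invented numbers, not northernSCOPE guidance:

```python
# Back-of-the-envelope unit pricing: scope is counted in FP and billed
# at an agreed price per FP, so scope changes reprice the work instead
# of breaking the contract. All numbers below are invented.

UNIT_PRICE = 650.0           # agreed $/FP (hypothetical)

baseline_fp = 400            # FP counted from the initial requirements
changes = [+35, -12, +20]    # FP added/dropped as the business steers scope

final_fp = baseline_fp + sum(changes)
print(f"baseline: {baseline_fp} FP -> ${baseline_fp * UNIT_PRICE:,.0f}")
print(f"final   : {final_fp} FP -> ${final_fp * UNIT_PRICE:,.0f}")
```

The business pays for exactly the scope it directed (443 FP here, not the original 400), and the supplier is paid for every change it was asked to build – neither side has to pad or gamble.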

What do you think? 

Please comment and share your opinion – are TRUST and VERIFY the IT elephants in the rooms at your company?

P.S. Don’t forget to sign up for the SPICE Users Group 2012 conference in two weeks in Palma de Mallorca, Spain. See www.spiceconference.com for details!  I’m doing a half-day SCOPE MANAGEMENT tutorial on Tuesday, May 29, 2012.

Spice Conference



Please join us for the Software Process Improvement and Capability dEtermination (SPICE) conference!

May 29-31, 2012 Palma de Mallorca SPAIN.

Everyone welcome!

Spice Conference


THE process improvement conference YOU need to attend

Jeopardy for IT projects… the answer is July 15


In the popular television game show Jeopardy, contestants are given short-phrase answers and are challenged to pose the associated question.  The half-hour show has been a mainstay of prime-time television for many years and boasts millions of viewers.

While Jeopardy provides great no-risk entertainment for contestants and viewers, a real-life “game” of Jeopardy plays out daily in corporations throughout the world – only it’s called something else: date-driven estimating!

Here’s how it typically plays out:
Business executive:  “July 15.”  (The Jeopardy style answer)
Contestant / CIO (pondering): “Is that the date you announced that the software would be ready?”
Business executive:  “Correct!  Now make it happen.”

Poor project estimating is one of the pervasive issues in software development – and the situation doesn’t seem to be getting any better despite process improvement models such as the Capability Maturity Model Integration (CMMI(R)) and formal project management.

It doesn’t seem to matter how advanced or mature the IT organization is – good project estimation remains elusive.  When a project does come in on schedule, it is often a matter of coercion – that is, the project manager has cajoled, manipulated, or somehow morphed the project schedule to match an often arbitrary end date.

Author Steve McConnell once coined the phrase “date-driven estimating” and stated that it is the most prevalent estimating method in use today.  In date-driven estimating, management announces the project delivery date and the project team works backwards to the current date.  The worst-case (and not unlikely) result is a project that could make the date ONLY if it had started several months ago.  The schedule is then retrofitted by piling more resources onto the timeline until everything fits between the start and end dates.  Somehow, such artificial manipulation is supposed to transform an impossible schedule into one that can work…
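The arithmetic of the trap is easy to demonstrate.  In the sketch below the effort is held constant and only the announced date moves; the numbers are invented, and real estimation models (such as QSM’s SLIM) show effort actually grows non-linearly as schedules compress, so the true picture is even worse than this linear back-calculation suggests:

```python
# Date-driven estimating, run backwards: the date is fixed first and
# the staffing is back-calculated. All numbers are invented.

effort_hours = 24_000            # what the work actually requires
hours_per_staff_month = 130      # effective working hours per staff-month

for months_available in (12, 9, 6):  # the announced date keeps moving closer
    staff = effort_hours / hours_per_staff_month / months_available
    print(f"{months_available} months -> {staff:.1f} full-time staff")
```

Halving the schedule “simply” doubles the headcount on paper (15.4 to 30.8 staff here), which is exactly the resource-piling the paragraph above describes.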

In engineering projects where physical and scientific constraints are fixed (such as the days required for concrete to cure), it is easier to see that such practices are prone to create outright project failures before the project even gets started.  For example, a road construction project cannot be sped up by adding too many resources – physical limitations of engineering construction prevail and building codes govern project progress.  Not so in software development.  In fact, just this evening I met someone from a major financial services company who told me of a software project where the coding commenced within two weeks of the project start – even before they had documented the requirements.  (No, it was not an agile project!)

What can be done when there is a pre-set budget or firm schedule set before any requirements are documented?  From an investment point of view, customers want to curtail costs during the development and still get the software they need to improve their business.

From a supplier point of view, the goal is to get paid for the hours expended delivering the correct solution while being flexible to the inevitable changes that will happen along the way.

The two approaches seem at odds with one another – customers want to be fiscally responsible and curtail costs, while suppliers want to be responsive to their customers yet get paid for the work they are directed to do.  Firm fixed-price projects (priced even before requirements are known) have been the solution preferred by customers to minimize the risk of budget overruns, but all the risk is then assumed by the supplier – especially before requirements are known.  And in any partnership there is no such thing as a win/lose outcome: if the customer wins and the supplier loses, both sides ultimately lose.

So, what can be done to avoid this dilemma and achieve a win/win scenario?  One solution is to deliver bite-sized, incremental two-week software releases using agile methods (e.g., Scrum).  A two-week timeframe is easier to manage and to estimate accurately than a long-term development project.

Another approach is unit pricing.  Unit pricing is based on an agreed cost per function point (a software sizing measure) and is a promising, realistic route to success.  This approach, plus other evolutionary steps, is part of the northernSCOPE model – already successful in Finland, Australia, and other countries.

Watch this space for more about the northernSCOPE approach and upcoming US training workshops.

Meanwhile, here are a couple of earlier posts on the topic:

Isn’t it time we changed the channel and stopped playing IT Jeopardy?

Carol


EU nations receptive to Scope Management


Greetings from Copenhagen

Bo Balstrop, Carol Dekkers, Morten Korsaa

I have to confess that Europeans are much more progressive and receptive than the U.S. to evolutionary ideas such as northernSCOPE and formalized unit pricing ($/FP) for public tendering of IT projects.  Just this week, I met with four different groups and presented the highlights and gains to be made from adopting formalized scope management, and I am hopeful that we (Delta Axiom of Denmark, my company Quality Plus Technologies, and 4SUM Partners of Finland) can begin to offer Certified Scope Manager (CSM) training courses in Denmark in the not-too-distant future.  The approach is straightforward and minimizes the lose/lose risk that accompanies firm fixed pricing of software projects when the requirements are not well known.

Also interesting was the fact that those most interested in learning more about the method were not from customer or developer groups that outsource (tender) their software development, but rather from companies that do development in-house.  It wasn’t a hard sell at all – the audiences in all four presentations were receptive, open to discussing how northernSCOPE might work in their organizations, and optimistic about how it can succeed in Denmark.  After all, if we can create demand for Certified Scope Manager (CSM) services in the public or private sector, then training is the natural next step to create CSMs to satisfy that demand.

Representatives from one of the nation’s largest government departments, the one that governs taxation (the equivalent of the IRS in the US), were on hand and expressed interest in knowing more.  I’m not sure why, aside from the “not-invented-here” syndrome (which plagues more than just the US), scope management has drawn nothing more than a passing glance in the US.  With all the major expenditures on software intensive systems projects underway in our public administration and Department of Defense, together with rework figures hovering around 40%, you’d think that formalized scope management (aka northernSCOPE) would be of interest.  While the economy definitely plays a role in cutting interest in training and consulting overall, it remains a question mark in this author’s mind why there is so little interest despite the ample presentations and workshops I’ve done at technology conferences throughout the US over the past few years.

No worries though… as long as some part of the world sees value in the method today, it matters not where the demand arises.  It will be interesting to watch, as more CSMs are certified in Europe, whether it piques any interest on my home continent.

Best wishes for healthy, productive projects.

P.S. Want to know more about northernSCOPE and the Certified Scope Manager (CSM) designation?  Email me for a copy of our northernSCOPE brochure and our CSM training information.

Carol
