To estimate or not to estimate? Not the right question…


Over the past few years I’ve seen an increase in articles and posts about whether or not to do estimation (of cost, schedule and effort) for software development projects. This is especially true when agile/iterative methods are used to develop software for which requirements are not readily known in advance. There are actual “movements” set up to prove that estimating in and of itself is bad for software development. At the same time, I’ve done more and more work for clients related to software benchmarking (to find best-in-class methods, tools, and combinations to develop software) and estimation (including price-to-win estimating.) I’m now convinced that “To estimate or not to estimate?” is simply the wrong question – or at least a premature question for many companies.

Estimation is often viewed as being as fundamental to software development (and to any other development project or program) as ingredients are to cooking or oxygen is to life. We might wish to discard or discredit the practice of estimation as an inconvenience, or even as the reason for software “failures.” (Sidenote: the Standish Group’s annual CHAOS reports cite the lack of “on-time” and “on-budget” software delivery as rationale for declaring project failure; both factors would disappear if estimating were eliminated.) The truth, however, is that C-level executives need a level of confidence (based on estimates) to bound their investment in new initiatives, no matter how much faith or confidence those executives have in the development teams’ ability to deliver. In my humble opinion, project managers MUST develop the skills to do solid, reliable project estimates if they are to survive (and thrive.) But this is where things often fall apart – estimation is not seen as a discipline based on solid data, in part because some organizations do estimating haphazardly, based on bad data, poor models, flawed assumptions, and premature input values taken as fact, among other factors.

This does not include those organizations where the mere notion of a project (a temporary endeavor intended to deliver an identified product, outcome, or service, such as a piece of software) is like a foreign language. When I teach courses based on the Project Management Institute’s Project Management Body of Knowledge (PMBOK(R)), it’s not uncommon to find IT pros who profess that project management is not needed because their work is bounded solely by calendar months and the number of full-time staffers. The idea that work should be managed towards a specified outcome (with goals, objectives, timelines, milestones, deliverables and a formal end) just doesn’t fit their paradigm, even for those involved in developing advanced technology solutions. I’m excluding these companies because projects (and estimates of cost and schedule) are actually beyond their comprehension, as are productivity, project comparisons, and process improvement.

Given the premise that “to estimate or not to estimate” is the wrong (or at least a premature) question – then what are the right ones?  Here’s a short list:

  • If we do an estimate, do we know the correct input variables (and values) to use? (i.e., some idea of scope, non-functional requirements, constraints, goals, project environment, etc.) Garbage in equals garbage out.
  • When estimating, do we have access to correct and appropriate historical data on which to rely? (i.e., does the completed-project information accurately depict what actually happened? Often up to 40% of true project work effort is not recorded – or it is recorded inconsistently.) Incomplete or incorrect historical data make for poor comparisons, and even worse estimates. (A short sketch after this list illustrates the effect.)
  • Are the estimating models we propose appropriate for the industry and application? (i.e., in construction it would be folly to use a home-building model for a hospital or a bridge project; so too with software.) Every model, no matter how advanced, needs to be calibrated for the organization using it.
  • Do we know enough about the object of estimation? (i.e., if it is simply an idea about an outcome, without any notion of component programs or projects, a “guess”timate or rough-order-of-magnitude estimate may be the only possibility until more data are known.)
  • Are the estimating exercises/practices paid mere “lip service” by management? (i.e., does management summarily cut every estimate in half, or dictate due dates that override those of professional estimators?)
  • Does the organization take (software) measurement seriously? (i.e., how are project measures and metrics collected? If collection is ad hoc and inconsistent, without formal processes or procedures to validate the quality of project data, then estimating will likely be equally inconsistent.)
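
To make the historical-data point concrete, here is a minimal sketch (in Python, with hypothetical project numbers and field names) of how under-recorded effort distorts a simple productivity-based estimate:

```python
# A minimal sketch (hypothetical numbers and field names) showing how
# under-recorded historical effort skews a productivity-based estimate.

history = [
    {"size_fp": 400, "recorded_hours": 4800},  # completed projects: functional
    {"size_fp": 250, "recorded_hours": 2700},  # size and the effort that was
    {"size_fp": 600, "recorded_hours": 7800},  # actually *recorded*
]

UNRECORDED_FRACTION = 0.40  # assumption: ~40% of true effort never recorded

def avg_hours_per_fp(projects, correct_for_gaps=False):
    """Average delivery rate (hours per function point) across history."""
    rates = []
    for p in projects:
        hours = p["recorded_hours"]
        if correct_for_gaps:
            hours /= (1 - UNRECORDED_FRACTION)  # back out the missing effort
        rates.append(hours / p["size_fp"])
    return sum(rates) / len(rates)

new_project_fp = 500  # estimated functional size of the new work

naive = new_project_fp * avg_hours_per_fp(history)
adjusted = new_project_fp * avg_hours_per_fp(history, correct_for_gaps=True)

print(f"Naive estimate:    {naive:,.0f} hours")     # ~5,967 hours
print(f"Adjusted estimate: {adjusted:,.0f} hours")  # ~9,944 hours
```

If 40% of true effort never reaches the time-tracking system, an estimate built naively on that history comes out roughly 40% low – here the corrected figure is about two-thirds higher than the naive one.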

These are just a few of the important questions that need to be addressed before we attempt to estimate and rely on the results. When estimating is done without proper planning, discipline and consistency, the results will be unreliable and, even worse, downright wrong.

In IT as in life, if you’re going to invest in an endeavor (such as estimating), take the time to do it right the first time, or don’t bother doing it at all. And that really answers the question of “to estimate or not to estimate.”

What other questions are critical to ask?  What do YOU think?


Function Points (Software Size) come of Age: Mature, Stable, and Relevant


It is with pride and honor that I share news about the upcoming Sept 13-15, 2017 celebratory (and educational) conference, ISMA14 (International Software Measurement and Analysis) – and it’s happening in just 4 weeks in Cleveland, OH, USA!

It’s the 30th anniversary of the International Function Point Users Group (IFPUG) – a not-for-profit user group I’ve been a part of for over 25 years.

We’re also celebrating 2017 as the International Year of Software Measurement (#IYSM).  It’s a great year for YOU to get involved (or more involved) and gain the benefits of measurement for software and systems projects!

As the Director of Communications and Marketing for IFPUG, I am excited that IFPUG is now mature (age 30!) and at the same time venturing in new directions with non-functional sizing (SNAP.)  We have much to celebrate, AND we also have more work to do (to publicize how Function Points and SNAP points provide objective measures of software size!)

The time is now!

No longer does your organization need to “fumble around in the dark” to find standard, reliable and objective software sizing measures. Certainly there is an abundance of available units of measure (story points, use case points, source lines of code, hybrid measures, etc.) – BUT only Function Points are supported by ISO/IEC world standards and provide consistent, objective and technologically independent assessments of software size based on “user” requirements. (Soon, SNAP points – from the Software Non-functional Assessment Process, for non-functional size – will also become an international standard.)

Isn’t it time your company adopted function points as a universal standard for software size? YOUR timing is perfect: in less than 5 weeks, International Software Measurement and Analysis (#ISMA14) will be in Cleveland, and you will have the opportunity to learn from industry experts in an intimate (fewer than 200 people) setting. (p.s., I’m one of the main conference speakers, so you’ll know at least 1 person there!)

Function Point proof is “in the pudding” (so to speak)…

We have an English proverb, “the proof of the pudding is in the eating.”

Its modern version, “The proof is in the pudding,” implies that there is a lot of evidence that I will not go through at this moment and that you should take my word for it – or you could go through all of the evidence yourself. (Source: http://tinyurl.com/5uc7eq3)

I can espouse the benefits of function points, as can IFPUG insiders and supporters such as the world-respected author/guru Capers Jones (whose 17 published books use Function Points as a universal software sizing measure). But when the mainstream media features articles on Function Points, it’s a call to action for senior executives and IT professionals to take note! Here’s a recent example (click on the image to read the full story!):

Need help selling your boss on the benefits?

I’ve written up the top 10 reasons to attend ISMA14 with us – won’t you join me (and a ton of other measurement professionals) in Cleveland on Sep 13?

Carol Dekkers, CFPS (Fellow), AEC, PMP, P.Eng.
President, Quality Plus Technologies, Inc.
IFPUG Director of Communications and Marketing


To Succeed with Measurement, Choose Stable Measures


The pace of technology advancement can be staggering: new tools, methods, acronyms, programming languages, platforms and solutions come at us at warp speed, morphing our IT landscape into a patchwork quilt of old and new technologies.

At times, it can be challenging to gauge the results of change: which specific processes/tools/methods/technologies/architectures/solutions contributed to or delivered positive results? How can we tell what made things worse?

Defining positive “results” is the first step and measurement can contribute – as long as our measures don’t shift with the technology!

I and countless others have written about Victor Basili’s GQM (Goal Question Metric) approach to measurement (in short, choose measures that answer the questions you need answered so you can achieve the goal of measurement), but there’s a problem even more fundamental, one that goes beyond choosing the right measures:

The key to (IT) measurement lies in stability and consistency:  choosing stable measures (industry standardized definitions that don’t change) and measuring consistently (measuring the same things in the same way.)
– Carol Dekkers, 2016

This may seem like common sense, but after 20 years of seeing how IT applies measurement, I realize common sense isn’t all that common. There are some in the IT world who would rather invent new measures (thus decreasing stability and consistency) than embrace proven ones. While I’ve seen the academic tendency to “tear down what already exists to make room for my new ideas,” I believe this is counter-productive when it comes to IT metrics. But I’m getting ahead of myself. First, let’s consider how measurement is done in other industries:

  • Example 1: Building construction.  Standard units of measure (imperial or metric) are square feet and square meters.  The definition of a square foot has not changed despite advances in modular design.
  • Example 2: Manufacturing.  Units of measure for tolerances, product sizes, weights, etc. (inches, mm, pounds, kg, etc.) are the same through the years.
  • Example 3: Automobiles.  Standard ratios such as miles per gallon (mpg) and acceleration (0-60 in x seconds) remain industry standards.

In each example, measurement success is a result of consistent and stable (unchanging) units of measure applied across changing environments. Comparisons of mpg or cost per square foot would be useless if the definitions of the units were not stable. Comparability across products or processes depends on the consistency and stability of both the measurement process and the measures themselves.

Steve Daum wrote in “Stability and linearity: Keys to an effective measurement system”:

“Knowing that a measurement system is stable is a comfort to the individuals involved in managing the measurement system. If the measuring process is changing over time, the ability to use the data gathered in making decisions is diminished. If there is no method used to assess stability, it will be difficult to determine the sensitivity of the measurement system to change and the frequency of the change…Stability is the key to predictability.”

One of the most stable and consistent measures of software (functional size) is the IFPUG Function Point, and the International Function Point Users Group (IFPUG) is poised to celebrate its 30th year in 2017. The IFPUG Function Point measure is stable (hundreds of thousands of projects have been FP counted) and consistent (it’s been an ISO/IEC standard for almost 20 years!) – perhaps 2017 is the year that YOUR company should look at FP-based measurement.

FPA (Function Point Analysis) provides a measure of the size of software under development and can be used equally well on agile, waterfall, and hybrid software development projects. Yet, despite its benefits, much of the world still doesn’t know about the measure.

See my first post of 2016, Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics, for more details. FP is certainly a good place to start when you’re looking for software measurement success… why not start today?

Wishing you a happy and safe holiday season wherever you live!


Estimation Poker – Bluffing (and Winning) with Metrics


In May 2016, I presented a webinar for ITMPI on the topic of Estimation Poker, based on the broad topic of software project estimation – regardless of the development approach. The webinar was well attended despite technical difficulties (I recorded it while in Italy and, suffice it to say, internet connections from my site happened to be… less than optimal.) I re-recorded the webinar on my return (with far superior results) and the recording can be accessed at this link: ITMPI Estimation Poker Webinar Re-Recording.

A 10-minute teaser segment is on YouTube – Dekkers Estimation Poker teaser.

I’ve also uploaded the full slide deck to ResearchGate – click Research Gate – Dekkers Slides to download.

Let me know what you think. Note that this is different from Agile Estimation Poker (which, I had forgotten, was already established when I designed my webinar.)

Have a great weekend!

Carol


Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics


Years ago, when Karl Wiegers (Software Requirements) introduced his “No More Models” presentation, the IT landscape was rife with new concepts ranging from Extreme Programming to the Agile Manifesto to CMMI’s multiple models to Project/Program/Portfolio Management.

Since then, the rapidity of change in software and systems development has slowed, leaving the IT landscape checkered with agile, hybrid, spiral and waterfall projects.  Change is the new black, leaving busy professionals and project estimators stressed to find consistent metrics applicable to the diverse project portfolio.  Velocity, burn rates, story points and other modern metrics apply to agile projects, while defect density, use cases, productivity and duration delivery rates are common on waterfall projects.

What can a prudent estimator or process improvement specialist do to level the playing field when faced with disparate data and the challenge of finding the productivity or quality “sweet spot”? You may be surprised to find that Function Point Analysis (FPA) is part of the answer, and that Function Points are as relevant today as when they were first invented in the late 1970s.

What are function points (FP) and how can they be used?

Function points are a unit of measure of software functional size – the size of a piece of software based on its “functional user requirements,” in other words a quantification that answers the question “what are the self-contained functions done by the software?”

Function points are analogous to the square feet of a construction floor plan and are independent of how the software must perform (the non-functional “building code” for the software,) and how the software will be built (the technical requirements.)

As such, functional size (expressed in FP) is independent of the programming language and methodology approach: a 1000 FP piece of software will be the same size no matter whether it is developed using Java, C++, or another programming language.

Continuing with the construction analogy, the FP size does not change on a project whether it is done using waterfall or agile or hybrid approaches.  Because it is a consistent and objective measure dependent only on the functional requirements, FP can be used to size the software delivered in a release (a consistent delivery  concept) on agile and waterfall projects alike.
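
To give a feel for the arithmetic, here is a deliberately simplified sketch of an unadjusted FP count using the published IFPUG average-complexity weights (in a real count each function is individually rated low, average, or high per the ISO/IEC 20926 rules; the application counts below are hypothetical):

```python
# Simplified unadjusted function point count using the IFPUG *average* weights.
# Real counts rate each function low/average/high per the counting manual.

AVERAGE_WEIGHTS = {
    "EI": 4,    # External Inputs
    "EO": 5,    # External Outputs
    "EQ": 4,    # External Inquiries
    "ILF": 10,  # Internal Logical Files
    "EIF": 7,   # External Interface Files
}

# Hypothetical function inventory for a small order-entry application.
counts = {"EI": 12, "EO": 8, "EQ": 6, "ILF": 5, "EIF": 2}

unadjusted_fp = sum(AVERAGE_WEIGHTS[t] * n for t, n in counts.items())
print(f"Unadjusted size: {unadjusted_fp} FP")  # 48 + 40 + 24 + 50 + 14 = 176 FP
```

That 176 FP figure stays the same whether the application is built in Java or C++, by an agile team or a waterfall one – only the functional requirements drive it.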

Why are FP a consistent and stable measure?

The standard methodology for counting function points is an ISO standard (ISO/IEC 20926) supported by the International Function Point Users Group (IFPUG.) Established in 1984, IFPUG maintains the method and publishes case studies demonstrating how to apply the measurement method regardless of variations in how functional requirements are documented. FP counting rules are both consistent and easy to apply; the rules have not changed for the past decade.

Relevance of FP in today’s IT environment

No matter what method is used to prepare and document a building floor plan, the square foot size is the same.  Similarly, no matter what development methodology or programming language is used, the function point size is the same.  This means that functional size remains a relevant and important measure across an IT landscape of ever-changing technologies, methods, tools, and programming languages.  FP works as a consistent common denominator for calculating productivity and quality ratios (hours / FP and defects / FP respectively), and facilitates the comparisons of projects developed using different methods (agile, waterfall, hybrid, etc.) and technical architectures.
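
As a sketch of what that common denominator looks like in practice (all project data below are hypothetical), the same two ratios can be computed for projects regardless of development method:

```python
# FP as the common denominator for productivity (hours/FP) and
# quality (defects/FP) across projects built with different methods.

projects = [
    {"name": "Billing rewrite", "method": "agile",     "fp": 320, "hours": 3520, "defects": 16},
    {"name": "Claims portal",   "method": "waterfall", "fp": 540, "hours": 7020, "defects": 43},
    {"name": "Rate engine",     "method": "hybrid",    "fp": 210, "hours": 2100, "defects": 8},
]

for p in projects:
    delivery_rate = p["hours"] / p["fp"]     # productivity: hours per FP
    defect_density = p["defects"] / p["fp"]  # quality: defects per FP
    print(f"{p['name']:<16} {p['method']:<10}"
          f"{delivery_rate:6.1f} h/FP {defect_density:8.3f} defects/FP")
```

Because the denominator is the same for every project, the ratios are directly comparable even though the methods, tools and technologies differ.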

Consistency reigns supreme

The single most important characteristic of any software measure is consistency of measurement!

This applies to EVERY measure in our estimating or benchmarking efforts, whether we’re talking about effort (project hours), size (functional size), quality (defects), duration (calendar time) or customer satisfaction (using the same questionnaire.)  Consistency is seldom a given and can be easily overlooked – especially in one’s haste to collect data.

It takes planning to ensure that every number that goes into a metric or ratio is measured the same way using the same rules.  As such, definitions for defects, effort (especially who is included, when a project starts/stops, and what is collected), and size (FP) must be documented and used.

For more information about Function Point Analysis (FPA) and how it can be applied to different software environments or if you have any questions or comments, please send me an email (dekkers@qualityplustech.com) or post a comment below.

To a productive 2016!

Carol


In a few words: why IT is so intimidating


As a project manager and software metrics expert, I’ve learned that simplicity and clarity are the keys to effective communication.  Consider that when we meet someone from another country, we use simple words, phrases and paraphrasing to communicate our meaning. Most of us would consider it rude and intimidating to talk to a foreigner using complex English and idioms.

Yet, that’s exactly what happens when we, software professionals, talk to … well almost anyone but ourselves.  We are technical professionals with access to reams of data, and you might think the idea of simplicity and clarity would be common sense.  Sadly, it’s quite the opposite.  Like medicine, engineering, and other technical professions, we seem to take pride in creating acronyms and continually redefining the English language to suit our purpose.  Then, we scoff at anyone who doesn’t understand, and expect them to bone up on their vocabulary.

It really only takes a few obscure words to intimidate someone; in IT we can do it with one or two (such as “artifact” or “construct” or “provisioning.”)

I’ve seen it for decades – instead of using common English words (with known definitions) or inventing brand new terms, the software industry tends to complicate things by using words that are already known, and changing the definitions.

I noticed this trend in my first post-college job, when someone in my department (pipeline engineering) set me up to use the mainframe computer. As luck would have it, my system crashed on the first day and I had to call computer services. When asked for my “terminal address,” I said “the fourth floor” – and the group howled, because obviously they were referring to the 16-digit serial number on the right side of my computer monitor. When I took a job working in that same technical group months later, I had to learn a whole new vocabulary. Instead of talking about documents or papers or manuals, my co-workers talked about “deliverables,” which also included hardware and software among other things.

I learned that DASD and TCP/IP were words in themselves, used to mean specific things, though few could remember the words that made up the acronyms. As confused as I was – a graduate engineer with programming experience – I wondered how much more confused our customers must be.

Then along came new SDLC’s (software development “life cycles”), new methodologies (approaches and guidelines for developing software), and new concepts such as object-oriented programming. Each new wave washed ashore with a mixture of new, re-defined and sometimes arcane terms with very specific meanings. Sometimes the “common English usage” definition prevailed, other times the term had an entirely new definition.

Take the word “artifact,” for example. The first definition below is the way it is defined in common English usage (Google.com); the latter is specific to IT.

[Image: dictionary definition of “artifact” in common English usage]

[Image: definition of “artifact” as used in IT]

So now, instead of saying document or manual or deliverable in general conversation and in meetings, “artifact” was used. Ugh… customers shrugged; IT didn’t notice the misunderstanding. Business chugged on with an ever-widening communication gap, and projects missed their targets.

Today things are beyond mere terminology changes. We’ve even started banning certain words we don’t think fit our purpose – even when the term is well understood. For example, I recently read a post that proposed banning the word “project” from the vocabulary and replacing it with “initiative,” to redirect professionals to focus on product delivery instead of start and end dates. It’s a great idea to focus on product delivery and to get all the teams on board to focus on output, but terminology is already a fundamentally divisive issue. Ugh.

All in all, I believe that one of the biggest chasms in software development today lies in the communication between technical professionals and the business. We’re really two different cultures (more about that in another post), and the use of simple, common English terms (with standard definitions) could bridge some of the gap.

As the title says:  In just a few words… IT is intimidating.

What do you think?

Have a great week!

Carol

Tech Folks Don’t Grok People Things


Wow – “grok” was first used in 1961, and this was the first time I’d heard the word. Great post – hopefully a few people in IT will grok the meaning of this post.

Think Different

Tech Folks Don’t Grok People Things


Nor do they often grok the connection between attending to their own and others’ needs, and the grokking of people things.

Tech Folks Focus On Tech

Let’s face it, most folks in IT (a.k.a. software development) made it their career choice because they like tech. Personally, I started programming way back when because I liked making little coloured lights flash on and off at my command.

And although liking tech doesn’t necessarily preclude grokking people things, in practice it generally does.

People Things Trump Tech

Yet it’s the people things that make all the difference when it comes to non-trivial, collaborative knowledge work. Such as teams building software systems and solutions. Questions like “What accounts for the way folks behave?”, “How can we work together?” and “Why is everything so borked round here?”.

Some tech folks wake up to the primacy of people things sooner or later…

View original post 49 more words

No free lunch in Software Estimation and Benchmarking


I’d love to have comments on my latest QSM blog post of the same name… read more


Latest installment of Ask Carol: With Software Sizing, If You Don’t Know the What, You Can’t Estimate the How


One of the biggest (and not so obvious) reasons that software estimation goes awry is that amateur estimators don’t always realize how important it is to figure out the “object of estimation” – that is, what it is that we want to estimate. 

I’ve addressed this issue on several occasions – through a set of 4 blog posts called “First see the elephant in the room (the what you are estimating…)”

This week, I did a blog post for QSM, Inc. on the same topic.  Let me know what you think.

http://www.qsm.com/blog/2014/ask-carol-software-sizing-if-you-dont-know-what-you-cant-estimate-how

(Mis)Perceptions about Software Estimation – Opportunities or Crisis?


Dr. Dobb’s online published my article on this topic this week… and comments quickly started pouring in. Some asked why I would publish an article with observations but no solutions, while others implied that this is really a customer problem or a human communications problem (I agree with the latter). What do YOU think?

Read it, and PLEASE give me your feedback.  Do you agree, disagree, don’t care?  Inquiring minds want to know!

[Image: link to the Dr. Dobb’s article]