Category Archives: Software measurement

To Succeed with Measurement, Choose Stable Measures


The pace of technology advancement can be staggering – new tools, methods, acronyms, programming languages, platforms and solutions come at us at warp speed, morphing our IT landscape into a patchwork quilt of old and new technologies.

At times, it can be challenging to gauge the results of change: which specific processes, tools, methods, technologies, architectures, or solutions contributed to or delivered positive results?  How can we tell what made things worse?

Defining positive “results” is the first step and measurement can contribute – as long as our measures don’t shift with the technology!

I and countless others have written about Victor Basili’s GQM (Goal Question Metric) approach to measurement (in short, choose measures that answer the questions you need answered so you can achieve the goal of the measurement effort – for example, goal: improve delivered quality; question: how many defects escape to production?; metric: defects per function point).  But there’s a problem even more fundamental, one that goes beyond choosing the right measures:

The key to (IT) measurement lies in stability and consistency:  choosing stable measures (industry standardized definitions that don’t change) and measuring consistently (measuring the same things in the same way.)
– Carol Dekkers, 2016

This may seem like common sense, but after 20 years of seeing how IT applies measurement, I realize common sense isn’t all that common.  There are some in the IT world who would rather invent new measures (thus decreasing stability and consistency) than embrace proven ones.  While I’ve seen the academic tendency to “tear down what already exists to make room for my new ideas,” I believe this is counterproductive when it comes to IT metrics.  But I’m getting ahead of myself.  First, let’s consider how measurement is done in other industries:

  • Example 1: Building construction.  Standard units of measure (imperial or metric) are square feet and square meters.  The definition of a square foot has not changed despite advances in modular design.
  • Example 2: Manufacturing.  Units of measure for tolerances, product sizes, weights, etc. (inches, mm, pounds, kg, etc.) are the same through the years.
  • Example 3: Automobiles.  Standard ratios such as miles per gallon (mpg) and acceleration (0-60 in x seconds) remain industry standards.

In each example, the measure is stable, and measurement success is a result of consistent and stable (unchanging) units of measure applied across changing environments.  Comparisons of mpg or costs per square foot would be useless if the definitions of the units of measure were not stable.  Comparability across products or processes depends on the consistency and stability of both the measurement process and the measures themselves.
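To make the point concrete, here is a minimal Python sketch (the buildings and dollar figures are invented for illustration) of why comparison works only when the unit is defined identically everywhere:

```python
# Illustrative sketch (invented numbers): cost-per-unit comparisons only
# work because the unit of measure means the same thing in every project.
projects = [
    {"name": "Office building A", "sq_ft": 20_000, "cost": 4_000_000},
    {"name": "Office building B", "sq_ft": 35_000, "cost": 6_300_000},
]

for p in projects:
    # Comparable only because "square foot" is defined identically for both.
    print(f"{p['name']}: ${p['cost'] / p['sq_ft']:.2f} per sq ft")
```

If “square foot” meant something different on each project, the two ratios could not be meaningfully compared – and the same is true of any software measure.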

Steve Daum wrote in “Stability and linearity: Keys to an effective measurement system”:

“Knowing that a measurement system is stable is a comfort to the individuals involved in managing the measurement system. If the measuring process is changing over time, the ability to use the data gathered in making decisions is diminished. If there is no method used to assess stability, it will be difficult to determine the sensitivity of the measurement system to change and the frequency of the change…Stability is the key to predictability.”

One of the most stable and consistent measures of software functional size is the IFPUG Function Point, and the International Function Point Users Group (IFPUG) is poised to celebrate its 30th year in 2017.  The IFPUG Function Point measure is stable (hundreds of thousands of projects have been counted in FP) and consistent (it has been an ISO/IEC standard for more than a decade) – perhaps 2017 is the year that YOUR company should look at FP-based measurement.

FPA (Function Point Analysis) provides a measure of the size of software under development and can be used equally well on agile, waterfall, and hybrid software development projects.  Yet, despite its benefits, much of the world still doesn’t know about the measure.

See my first post of 2016, Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics, for more details.  FP is certainly a good place to start when you’re looking for software measurement success… why not start today?

Wishing you a happy and safe holiday season wherever you live!

 

Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics


Years ago, when Karl Wiegers (Software Requirements) introduced his “No More Models” presentation, the IT landscape was rife with new concepts ranging from Extreme Programming to the Agile Manifesto to CMMI’s multiple models to Project/Program/Portfolio Management.

Since then, the rapidity of change in software and systems development has slowed, leaving the IT landscape checkered with agile, hybrid, spiral, and waterfall projects.  Change is the new black, leaving busy professionals and project estimators hard-pressed to find consistent metrics applicable to a diverse project portfolio.  Velocity, burn rates, story points, and other modern metrics apply to agile projects, while defect density, use cases, productivity, and duration delivery rates are common on waterfall projects.

What can a prudent estimator or process improvement specialist do to level the playing field when faced with disparate data and the challenge of finding the productivity or quality “sweet spot”?  You may be surprised to find that Function Point Analysis (FPA) is part of the answer, and that function points are as relevant today as when they were first invented in the late 1970s.

What are function points (FP) and how can they be used?

Function points are a unit of measure of software functional size – the size of a piece of software based on its “functional user requirements”; in other words, a quantification that answers the question “what are the self-contained functions done by the software?”

Function points are analogous to the square feet of a construction floor plan and are independent of how the software must perform (the non-functional “building code” for the software) and how the software will be built (the technical requirements).

As such, functional size, expressed in FP, is independent of programming language and methodology: a 1,000 FP piece of software is the same size whether it is developed in Java, C++, or any other programming language.

Continuing with the construction analogy, the FP size does not change on a project whether it is done using waterfall or agile or hybrid approaches.  Because it is a consistent and objective measure dependent only on the functional requirements, FP can be used to size the software delivered in a release (a consistent delivery  concept) on agile and waterfall projects alike.

Why are FP a consistent and stable measure?

The standard methodology for counting function points is an ISO standard (ISO/IEC 20926) supported by the International Function Point Users Group (IFPUG).  Established in 1986, IFPUG maintains the method and publishes case studies that demonstrate how to apply it regardless of variations in how functional requirements are documented.  FP counting rules are both consistent and easy to apply; the rules have not changed for the past decade.

Relevance of FP in today’s IT environment

No matter what method is used to prepare and document a building floor plan, the square foot size is the same.  Similarly, no matter what development methodology or programming language is used, the function point size is the same.  This means that functional size remains a relevant and important measure across an IT landscape of ever-changing technologies, methods, tools, and programming languages.  FP works as a consistent common denominator for calculating productivity and quality ratios (hours / FP and defects / FP respectively), and facilitates the comparisons of projects developed using different methods (agile, waterfall, hybrid, etc.) and technical architectures.
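As a rough sketch of the common-denominator idea, the following Python fragment computes both ratios for a handful of hypothetical projects (the names, sizes, hours, and defect counts are invented, not benchmark data):

```python
# Hypothetical project records (invented numbers). Different methods and
# languages, but the same unit of size (FP), so the ratios compare directly.
projects = [
    {"name": "Billing rewrite", "method": "agile",     "fp": 1200, "hours": 9600, "defects": 36},
    {"name": "Claims portal",   "method": "waterfall", "fp": 800,  "hours": 8800, "defects": 40},
    {"name": "Mobile app",      "method": "hybrid",    "fp": 450,  "hours": 2700, "defects": 9},
]

for p in projects:
    productivity = p["hours"] / p["fp"]      # effort ratio: hours per FP
    defect_density = p["defects"] / p["fp"]  # quality ratio: defects per FP
    print(f"{p['name']} ({p['method']}): "
          f"{productivity:.1f} h/FP, {defect_density:.3f} defects/FP")
```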

Consistency reigns supreme

The single most important characteristic of any software measure is consistency of measurement!

This applies to EVERY measure in our estimating or benchmarking efforts, whether we’re talking about effort (project hours), size (functional size), quality (defects), duration (calendar time), or customer satisfaction (using the same questionnaire).  Consistency is seldom a given and can be easily overlooked – especially in one’s haste to collect data.

It takes planning to ensure that every number that goes into a metric or ratio is measured the same way using the same rules.  As such, definitions for defects, effort (especially who is included, when a project starts/stops, and what is collected), and size (FP) must be documented and used.
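One lightweight way to do this is to record the definitions right alongside the data, so every collected number can be traced to the same rules. Below is a minimal, hypothetical sketch; the field names and boundary choices are illustrative, not a standard:

```python
# A minimal sketch (field names are illustrative, not a standard) of writing
# measurement definitions down once so every data point is collected the
# same way, using the same rules.
MEASUREMENT_DEFINITIONS = {
    "effort_hours": {
        "includes": ["developers", "testers", "project manager"],
        "excludes": ["steering committee", "end-user training"],
        "project_starts": "approved requirements kickoff",
        "project_ends": "production release sign-off",
    },
    "defects": {
        "counted": "unique confirmed failures found in system test or later",
        "excluded": "duplicates and enhancement requests",
    },
    "size": {
        "unit": "IFPUG function points (ISO/IEC 20926)",
    },
}

def record_is_consistent(record: dict) -> bool:
    """Accept only data points collected under the documented definitions."""
    return record.get("definitions_version") == "v1-2016"

print(record_is_consistent({"definitions_version": "v1-2016", "fp": 450}))  # True
```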

For more information about Function Point Analysis (FPA) and how it can be applied to different software environments or if you have any questions or comments, please send me an email (dekkers@qualityplustech.com) or post a comment below.

To a productive 2016!

Carol


In a few words: why IT is so intimidating


As a project manager and software metrics expert, I’ve learned that simplicity and clarity are the keys to effective communication.  Consider that when we meet someone from another country, we use simple words, phrases and paraphrasing to communicate our meaning. Most of us would consider it rude and intimidating to talk to a foreigner using complex English and idioms.

Yet, that’s exactly what happens when we, software professionals, talk to… well, almost anyone but ourselves.  We are technical professionals with access to reams of data, and you might think the idea of simplicity and clarity would be common sense.  Sadly, it’s quite the opposite.  Like medicine, engineering, and other technical professions, we seem to take pride in creating acronyms and continually redefining the English language to suit our purpose.  Then we scoff at anyone who doesn’t understand, and expect them to bone up on their vocabulary.

It really only takes a few obscure words to intimidate someone; in IT we can do it with one or two (such as “artifact,” “construct,” or “provisioning”).

I’ve seen it for decades – instead of using common English words (with known definitions) or inventing brand-new terms, the software industry tends to complicate things by taking words that are already known and changing their definitions.

I noticed this trend in my first post-college job, when someone in my department (pipeline engineering) set me up to use the mainframe computer.  As luck would have it, my system crashed on the first day and I had to call computer services.  When asked for my “terminal address,” the group howled when I said “the fourth floor” – obviously they were referring to the 16-digit serial number on the right side of my computer monitor.  When I took a job in that same technical group months later, I had to learn a whole new vocabulary.  Instead of talking about documents or papers or manuals, my co-workers talked about “deliverables,” which also included hardware and software, among other things.

I learned that DASD and TCP/IP were words in themselves used to mean specific things, though few could remember the words that made up the acronyms.  As confused as I was – a graduate engineer with programming experience – I wondered how much more confused our customers must be.

Then along came new SDLC’s (software development “life cycles”), new methodologies (approaches and guidelines for developing software), and new concepts such as object-oriented programming. Each new wave washed ashore with a mixture of new, re-defined and sometimes arcane terms with very specific meanings. Sometimes the “common English usage” definition prevailed, other times the term had an entirely new definition.

Take the word “artifact,” for example.  The first definition below is the way it is defined in common English usage (per Google.com); the second is specific to IT.

[Images: dictionary definitions of “artifact” – the common English usage and the IT-specific usage]

So now, instead of saying document or manual or deliverable in general conversation and in meetings, “artifact” was used.  Ugh… customers shrugged, IT didn’t notice the misunderstanding, business chugged on with an ever-widening communication gap, and projects missed their targets.

Today things go beyond mere terminology changes.  We’ve even started banning certain words we don’t think fit our purpose – despite the fact that the term is well understood.  For example, I recently read a post that proposed banning the word “project” from the vocabulary and replacing it with “initiative” to redirect professionals to focus on product delivery instead of start and end dates.  It’s a great idea to focus on product delivery and to get all the teams on board to focus on output, but terminology is already a fundamentally divisive issue.  Ugh.

All in all, I believe that one of the biggest chasms in software development today lies in communication between technical professionals and the business.  We’re really two different cultures (more about that in another post) and the use of simple, common English terms (with standard definitions) could bridge some of the gap.

As the title says:  In just a few words… IT is intimidating.

What do you think?

Have a great week!

Carol

10 Steps to Better Metrics… for Everyone


The more things change, the more they stay the same – especially when it comes to initiatives that involve cultural change. Measurement is a perfect example – and I’m not talking purely about “software metrics,” rather measurement in any industry.

When you take a business that has traditionally “flown by the seat of its pants” (in other words, it is a monopoly of sorts or it has made money in spite of itself) and start to keep track of what’s going on, people have issues.  The first step is often to simply measure anything that moves – data that are easy to capture – and then try to draw some sort of conclusion or action plan.  In IT (Information Technology), the landscape is littered with discarded data from failed measurement initiatives.  Data in and of themselves are not bad, IF the data are used appropriately and in the right context.

I recently wrote the following article for Projects at Work based on concepts I first observed nearly 20 years ago, and they are as valid today as ever before.

As a consultant, I LOVE to work with companies who want to succeed with measurement. If you are tasked with starting metrics for your company, give me a call – maybe I can give you some ideas to save you time and money – and succeed with metrics!

Send me an email or leave a comment – measurement is too important to leave to chance.  (Let me know if you’d like a full copy of this article!)

Cheers,
Carol

ProjectsAtWork - 10 Steps to Better Metrics July 2015

QSM (Quantitative Software Management) 2014 Research Almanac Published this week!


Over the years I’ve been privileged to have articles included in compendiums with industry thought leaders whose work I’ve admired.  This week, I had another proud moment as my article was featured in the QSM Software Almanac: 2014 Research Edition along with a veritable who’s who of QSM.

This is the first almanac produced by QSM, Inc. since 2006, and it features 200+ pages of relevant research-based articles garnered from the SLIM® database of over 10,000 completed and validated software projects.

Download your FREE copy of the 2014 Almanac by clicking this link or the image below.

[Image: QSM Software Almanac: 2014 Research Edition cover]

What Software Project Benchmarking & Estimating Can Learn from Dr. Seuss


Sometimes truth is stranger than fiction – or children’s stories at least, and I’m hoping you’ll relate to the latest blog post I published on the QSM blog last week.  I grew up on Dr. Seuss stories – and I think my four siblings and I shared the entire series (probably one of the first loyalty programs – buy the first 10 books, get one free…)

I’d love to hear your comments and whether you agree with the analogy: that we seek to create precise software sample sets for benchmarking and, in so doing, lose the benefits of the trends we could leverage with larger sample sets.  Read on and let me know!  (Click on the image below or here.)

Happy November!

Carol

[Image: Dr. Seuss illustration linking to the QSM blog post]

Combining Soft Skills and Hard Tools for Better Software


One of the more interesting topics in software development (at least from my perspective) is the culture of the industry.  Seldom does one find an industry burgeoning with linguistics majors, philosophers, artists, engineers (all types – classically trained to self-named), scientists, politicians, and sales people – all working on the same team in the same IT department.

This creates incredible diversity and richness – and leads to sometimes astounding leaps and bounds in innovation and technological advancement – but it can also create challenges in basic workplace behavior.  This post looks at the often overlooked soft skills (empathy, leadership, respect, communication, and other non-technical skills) together with technical competencies as an “opportunity” (aka a challenge or obstacle to overcome).

It was published first on the Project Management Institute (PMI) Knowledge Shelf – recently open to the general non-PMI public.

[Image: link to the soft skills article on the PMI Knowledge Shelf]

Added bonus here:  in my post I referenced Australian comedian/actor Tim Minchin’s 2013 commencement address at the University of Western Australia (on YouTube), in which he shares his nine recommendations to graduates – my favorite, and the one I quoted, is #7: Define yourself by something you love!  I believe it’s worth the watch/listen if you need to take a break and just sit back and think about soft skills during your technical day.  (Warning to the meek of heart – it’s irreverent, offensive, and IMHO bang on in its core sentiments.  If you’re offended, I apologize in advance!)

If you’d like a pdf copy of the post above, please leave me a comment with your email address!  (And even if you don’t, I’d love your opinion!)

Have a great week!

Carol

No Communication Sends a Message… (and it’s usually not good)


In these past few weeks of blogging about communication for PMs and techies, I’ve realized that there are situations where NO communication sends an even LOUDER message.  You probably already know what no communication means (we’ve all been victims of the “silent treatment” at some point in our lives!) – but it can also mean a negative view of what the outcome of communication will be.

Here’s what I mean by the first interpretation of “No Communication” and the messages it sends:

  1. Avoiding communication:
    After a negative interaction with someone (criticism, conflict, discomfort, intimidation, or another non-positive interaction), it can be difficult to talk to that person the next time.  As time passes, a continuing lack of communication can amplify the original discomfort – the initial encounter didn’t feel good and we don’t want to experience it again.  If the original situation was verbal or in person, subsequent communication often continues in a more distant medium such as email.  Often the offending party doesn’t even know that they caused the situation in the first place and is unaware of the ongoing angst.
  2. Eliminating communication:
    We do this when we block incoming phone calls or divert unwanted emails to trash.  Sometimes this is a good stop-gap measure to prevent unwanted communication until it eventually stops altogether.  While it is a good tactic for preventing communication, it sometimes backfires, escalating into more direct forms of contact before the sender gets the message that you do not want to communicate.
  3. Ignoring communication:
    Instead of avoiding or blocking communication, we also sometimes ignore incoming communication by screening calls, letting calls go to voice mail, leaving emails unopened, and simply not responding.  While this may be an appropriate coping mechanism in personal situations, it does not work well in a corporate environment where you are expected to communicate effectively.

In all of the above situations where NO communication is sent, a perceived “clear” message is sent regardless of the lack of words.  The person on the receiving end of the avoiding, blocking, or ignoring will make their own judgment (based on their own perceptions) about what they think is happening, and typically come to the wrong conclusion.  “Perception is reality in the absence of fact” is an adage that certainly applies when no communication is exchanged.  One such flawed conclusion could be that the original message (the one that caused the problem) was never received.  If this is the perception, the sender may resend the message or escalate their attempts to communicate, sending increasingly urgent (and sometimes even abusive) messages to entice a reply.  We might say “they’ll eventually get the message,” but unfortunately this does not always happen – when we want to communicate with someone who does not want to communicate with us, we can become quite dense.  The best communication is always active communication rather than passive non-communication.

There is a second interpretation of what no communication means: the pre-conception of a negative outcome (i.e., “no” as in negative), or envisioning a negative result.  For example, if I am going into a meeting where I anticipate a negative outcome and express such sentiments to co-workers beforehand, it is likely that the outcome WILL be negative.  The saying “if you think you can or you think you can’t – you’re right” ties into this.  Envisioning and verbalizing negative communication outcomes is a self-fulfilling prophecy.  Why not envision potential positive outcomes and then make them happen?  It won’t necessarily change what happens in every situation, but aren’t a few positive outcomes a good reason to change your outlook?

It really can work – envisioning a positive outcome to a tense communication can give direction and a positive boost to upcoming meetings and interactions.  Why not work towards the positive instead of the other way around?

Remember, no communication delivers a message all the same – and it’s usually not a good one, nor does it lead to a positive outcome.  Plan to communicate by communicating effectively.

To your positive interactions and communication!

Carol


What comes first – estimates or requirements?


We’ve seen many advancements of late – process improvement, better software estimating models, flexibility and agility in software development, formal project management, limiting work-in-progress, etc. – all of which incrementally advance projects and promise to reduce costly rework.  However, none of these methods addresses one of the most fundamental questions in software development: what comes first, estimates or requirements?

If estimates come first… It seems like putting the cart before the horse for estimates to precede requirements, but that is precisely how a good number of software projects go.  To begin with, someone has an idea or a business problem that software can solve, but before even one dollar can be spent, the work has to go into next year’s budget and get funded.  This means figuring out some sort of estimate for work that has yet to be scoped out.  “Is it going to be bigger than a bread box and smaller than a football field?” is one way of saying: we need a rough ballpark (guesstimate) that we can use in the budgeting process.  Unfortunately, this guesstimate process is clearly flawed because it is based on invisible, ether-like requirements.  As such, the guesstimate is prone to a variance of plus or minus 500% or more once the real requirements are known.

This is like saying: how much will it cost to build a house for my family?  Just give me a rough estimate so I can go to the bank and arrange a mortgage.  This would be absurd behavior – especially because one usually doesn’t get a mortgage in advance, and because the cost will vary depending on where, how big, how custom, and how the house will be built.  If one secures a $500K mortgage, it sets an upper limit but doesn’t guarantee that a suitable house can actually be built for that amount.  Yet we engage in this behavior in IT all the time – we guess(timate) for the budget cycle, the amount gets slashed during meetings, and ultimately the fixed price (based on little information) becomes the project budget!
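Some quick back-of-the-envelope arithmetic shows how wide the plausible range really is; the 0.25x–4x uncertainty band below is an assumption for illustration, not a research finding:

```python
# Illustrative arithmetic (factors are assumptions, not research results):
# a $500K guesstimate made before requirements exist spans a range far too
# wide to serve as a fixed project budget.
guesstimate = 500_000                  # early ballpark, requirements unknown
low_factor, high_factor = 0.25, 4.0    # assumed order-of-magnitude band

low = guesstimate * low_factor
high = guesstimate * high_factor
print(f"Plausible actual cost: ${low:,.0f} to ${high:,.0f}")
# Fixing the budget at $500K picks one point from this range and treats it
# as a commitment before a single requirement has been scoped.
```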

If requirements come first… then in many companies nothing will ever get built and problems will remain the same forever.  Analysis paralysis is common (especially with shifting new business requirements), which gave rise to agile and extreme programming approaches to requirements.  Many companies shifted away from the arduous, front-end-heavy “waterfall” methods of software development in favor of “progressive requirements elaboration,” whereby requirements are discovered along the way.  As such, requirements are always evolving, with new user stories emerging only after the earlier ones are delivered in software.  So what happens when requirements are needed to build a better estimate (and thereby make sure the project has enough budget), yet an estimate is required before one can begin to scope out the requirements that will fit the budget?  It is a circular situation akin to the chicken-and-egg conundrum that has plagued humankind for years.

Pathway to cooperative results… One method that has proven to work well with this dilemma is scope management, whereby a business “project” (more likely a program of work) is divided into sub-projects, each scoped out at the highest level, quality requirements are thought through, and traceable estimates ensue.  More to come on this topic in the next post…

To your successful projects!

Carol

Carol Dekkers
email: dekkers@qualityplustech.com
http://www.qualityplustech.com/

Carol Dekkers provides realistic, honest, and transparent approaches to software measurement, software estimating, process improvement and scope management.  Call her office (727 393 6048) or email her (dekkers@qualityplustech.com) for a free initial consultation on how to get started to solve your IT project management and development issues.

For more information on northernSCOPE(TM) visit www.fisma.fi (in English pages) and for upcoming training in Tampa, Florida  — April 26-30, 2010, visit www.qualityplustech.com.


What’s the (function) point of Measurement?


It’s been more than 30 years since “function point analysis” emerged in IT, and yet most of the industry either: a) has never heard of it; b) has a misguided idea of what function points are; or c) was the victim of a botched software measurement program based on function points.

Today I’d simply like to clear up some common misconceptions about what function points are and what they are NOT.  Future postings will get into the nuts and bolts of function points and how to use them; this is simply a starting point.

What’s a function point?

A “function point” (FP) is a unit of measure that can be used to gauge the functional size of a piece of software.  (I published a primer on function points titled Managing (the Size of) Your Projects – A Project Management Look at Function Points in the February 1999 issue of CrossTalk – the Journal of Defense Software Engineering, from which I have excerpted here):

“FPs measure the size of a software project’s work output or work product rather than measure technology-laden features such as lines of code (LOC). FPs evaluate the functional user requirements that are supported or delivered by the software. In simplest terms, FPs measure what the software must do from an external, user perspective, irrespective of how the software is constructed. Similar to the way that a building’s square measurement reflects the floor plan size, FPs reflect the size of the  software’s functional user requirements…

However, to know only the square foot size of a building is insufficient to manage a construction project. Obviously, the construction of a 20,000 square-foot airplane hangar will be different from a 20,000 square-foot office building. In the same manner, to know only the FP size of a system is insufficient to manage a system development project: A 2,000 FP client-server financial project will be quite different from a 2,000 FP aircraft avionics project.”

In short, function points are an ISO-standardized measure that provides an objective number reflecting the size of what the software will do from an external “user” perspective (a user is defined as any person, thing, other application software, hardware, department, etc. – anything that sends or receives data to or from the software).  Function points offer a common denominator for comparing different types of software construction, whereby cost per FP and effort hours per FP can be determined.  This is similar to cost per square foot or effort per square foot in construction.  However, it is critical to know that function points are only part of what is needed to do proper performance measurement or project estimating.
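To illustrate that last point, here is a small Python sketch; the delivery rates of 8 and 30 hours per FP are invented solely to make the contrast visible:

```python
# Sketch with invented delivery rates: identical FP size, very different
# effort, which is why size alone cannot drive an estimate.
projects = [
    {"name": "Client-server financial system", "fp": 2000, "hours_per_fp": 8},
    {"name": "Aircraft avionics system",       "fp": 2000, "hours_per_fp": 30},
]

for p in projects:
    effort = p["fp"] * p["hours_per_fp"]
    print(f"{p['name']}: {p['fp']} FP x {p['hours_per_fp']} h/FP = {effort:,} hours")
```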

To read the full article, click on the title Managing (the Size of) Your Projects – A Project Management Look at Function Points.

To your successful projects!

Carol


Contact Carol to keynote your upcoming event – her style translates technical matters into digestible soundbites, with straightforward and honest advice that works in the real world!