
Opinion: Soft Skills Leverage Value on Agile Projects

“Potential is all of the resources you have in front of you. Efficiency is putting those resources to use effectively.” — Garrett Gunderson

I hear all the time that we live in interesting times (meaning challenging, regressive/progressive – depending on how you view the world), but I believe every time is an interesting time.  In software development, the quote above seems especially apt: the pace of technology (Moore’s law) continues to accelerate, AI is beckoning at our door with self-driving cars and robotics, and the race to be first in anything and everything (a US stronghold) appears to dominate our workplaces.

In our pursuit of good/better/best solutions, I believe technical teams often overlook the gains to be had from clear communication and cross-cultural team collaboration. In other words, the value of soft skills (acceptance, emotional intelligence, honesty, embracing differences) is often overlooked in favor of technological advances.

One of the biggest elephants in the room (in my humble opinion) is that technical professionals (programmers, systems analysts, scrum masters) are far better versed in the hard, technical skills than in the soft, people skills.  The barrier, or crevasse, between different types of people (for example, marketing and IT professionals) remains one of the biggest unspoken gaps in software development.

I already anticipate the responses – yes, Agile HAS increased overall collaboration and cross-team/discipline understanding – but basic human diversity remains an issue.

To check your soft skills or collaboration level with your team (all members of your co-located or geographically dispersed team), take a look at this checklist.  If you cannot answer at least two of these questions for each team member, then your communication and true collaboration could use some work:

  • Do you know what order your team members would prioritize the following values: Honesty, directness, kindness, punctuality, teamwork?
  • What is one activity that each team member looks forward to doing on the weekend?
  • What kind of pets do your team members have?
  • What is the favorite food of your team members?
  • Where did your team members go for vacation in the past 18 months?

You might say that these things are invasive, personal details that don’t belong in workplace conversation. And, depending on your company, the culture might allow or prevent such disclosures.

I present these as a few questions to consider – and food for thought for advancing collaboration.  What do you think?

Are soft skills overlooked on your team?


The Heart of Agile – Simplifying the State of the Art

I’ve been an absentee blogger here for the past several months (yikes, has it been almost a year – time flies at my age!!!)

BUT… I’ve not been sunning myself on the slopes of Austria or lying at the beach (while I do live close to one!) – I’ve been busy with project management training, software measurement consulting, leadership development (for new courseware) and, most exciting to me, becoming involved with Dr. Alistair Cockburn’s HEART OF AGILE community.

I’ve joined an illustrious and creative team of international knowledge experts. Our common goal is to share and gain insights from real-world agile successes and simplify the sometimes complex world of agile development (beyond software development) into the four major concepts of the Heart of Agile (HOA or HoA):

  • Collaboration
  • Delivery
  • Reflection
  • Improvement

The concept of the Heart of Agile is innovative, refreshing, human, and heart-centered – and it works wonders.  More to come in a plethora of posts (making up for lost time!)

Meanwhile, take a look at the website and let me know what you think!

To future MusingsAboutSoftwareDevelopment!

p.s. It’s good to be back in the writing mode. I’ve got so much to share with you!


Unit Pricing and Scope Management — New concepts for Agile projects?

The Agile Manifesto spawned a revolution in software development almost 20 years ago and continues to change a generation of product developers.  From the humble beginnings of 17 software development disrupters penning a manifesto at Snowbird, UT, to today’s scaled agile frameworks and a myriad of prescriptive methods, agile has taken center stage from yesterday’s “waterfall” development approaches.

As with any cultural change (I’d argue that true agile is a culture), there have been clashes of old-school thinking and new-school progressiveness…

Old school “staples” of software development included estimating, budgeting, on-time and on-budget project tracking, EVM, scope management, and measurement.  Not only do some of these concepts strike panic in agile developers, some have even sparked new “anti-” movements (e.g., #NoEstimates).  More than once I’ve heard developers refer to “old school bean counters” and “out of touch C-suite execs” when the words estimates or budgets are mentioned.

Yet as much as some agile creatives like to assert that estimates are futile (how can one estimate creativity?) and early failures are critical to innovation (it can take 3000 ideas to deliver the right one) – corporate America still runs on dollars and days (budgets and schedules.)

One way forward that has proven successful – satisfying both executives (who want to know progress and early ROI) and project teams (who are challenged to measure delivered business value) – is called “unit pricing.”

Here’s the approach:

Step 1:  Revisit completed projects.  Evaluate and measure past completed product releases: delivered software product size, team effort (person hours) to deliver the release, estimated rework (%), number of sprints, team size, and cost.

Having such a historical basis for unit pricing is ideal; however, publicly available databases such as the Development and Enhancement project repository of the International Software Benchmarking Standards Group (ISBSG) can suffice if you don’t have your own historical data.

Step 2:  Generate a high-level “budget” for your new product delivery (release or project) by first guesstimating a “relative” (ballpark) size for how big the software could be. Some companies use a rudimentary t-shirt sizing scheme (XS, S, M, L, XL, XXL, etc.) as a comparator.

This estimated size will be wrong from the get-go, but selecting a “ballpark size” gives you an upper limit of software size to guide the project.  The overall budget is derived by multiplying your best guess of the size (in function points) by the applicable unit price in $/FP (as derived from the actual costs of similar past projects).

Management will then approve the project (or release) based on this “guesstimated” size and budget, and the project begins.  (Budget $ / size in function points provides an initial agreed upon rate of $/FP. This can be used as a payment mechanism and/or a gauge of project progress.)

Step 3:  Given the “bucket” of project money and an assumed, not to exceed (maximum) product size, you’ve got the unit pricing that you’ll use when bits of software are delivered.

Step 4:  The scope of the project is the product backlog as defined by the product owner /customer.  (Note that the size of the product backlog is unknown at this point, however, as user stories are completed, they will be sized and recorded along with the sprint they were part of.  At the end of the project, the size of the completed user stories will be known.)

Step 5:  As bits of software are delivered, they are sized (in FP). The cost is calculated as the budgeted unit price × the delivered FP, and both the size and cost are recorded for the delivery.  Costs are tallied and subtracted from the original budget.  If a product backlog item is returned to the backlog at the end of a sprint (partial completion), its size and cost are not measured.

Step 6:  During the project, the team gets to focus on product development and delivery rather than estimates.  However, at any point in time, team members and executives can get a glimpse of where the project stands by looking at the delivery sizes and costs to date, and gauge how much progress is being made.  Creating product burndown charts (costs expended to date out of the original budget) and depicting the growing product size can be done quickly and easily.
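The bookkeeping behind Steps 2 through 6 is simple arithmetic, and can be sketched in a few lines of Python. Everything below is hypothetical and purely for illustration – the budget, the ballpark size, and the per-sprint FP counts are made-up numbers, and the function names are my own, not part of any formal method:

```python
# Sketch of the unit-pricing bookkeeping in Steps 2-6.
# All numbers are hypothetical; FP = function points.

def unit_price(budget_dollars, max_size_fp):
    """Derive the agreed $/FP rate from the approved budget
    and the not-to-exceed ballpark size (Steps 2 and 3)."""
    return budget_dollars / max_size_fp

def record_delivery(ledger, sprint, size_fp, rate):
    """Step 5: size each completed delivery in FP, price it at
    the agreed rate, and record it against the budget."""
    cost = size_fp * rate
    ledger.append({"sprint": sprint, "size_fp": size_fp, "cost": cost})
    return cost

# Example: a $300,000 budget against a ballpark maximum of 1,000 FP.
budget = 300_000
rate = unit_price(budget, 1_000)

# Three sprints deliver 60, 85, and 70 FP of completed user stories.
ledger = []
for sprint, delivered_fp in [(1, 60), (2, 85), (3, 70)]:
    record_delivery(ledger, sprint, delivered_fp, rate)

# Step 6: a glimpse of progress at any point in time.
spent = sum(item["cost"] for item in ledger)
delivered = sum(item["size_fp"] for item in ledger)
print(f"Delivered {delivered} FP, spent ${spent:,.0f}, "
      f"remaining budget ${budget - spent:,.0f}")
```

Running the sketch prints the delivered size, cost to date, and remaining budget – the same numbers an executive would want on a product burndown chart.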

The agile teams stay focused on being creative and developing product.  At the same time, executives have their progress reporting (delivered size, budget spent to date, remaining budget). This is a modified approach that follows concepts outlined in the Finnish Software Measurement Association (FiSMA) northernSCOPE method.

What do you think? Would unit pricing and progress tracking be a way forward for your agile projects?

Comments, brickbats (an old school word for criticism), ideas welcome.


GDPR, Forget Me and Human Connection

Last week, the first of many sweeping data privacy laws went into effect in Europe:  the General Data Protection Regulation, or GDPR.  If you’ve ever traveled abroad or given your email address to a company that does business in Europe, you’re likely peripherally familiar with this legislation, which has far-reaching implications globally and regulates what types of identifying data companies can keep about you.  Without going into detail (good overviews of the subject are widely available), people must explicitly renew and opt in to continue to receive emails from corporations and services. Companies have spent hundreds of thousands of hours pulling together new policies about how they store and keep data about customers/subscribers.

Individuals under GDPR can also ask to “Be Forgotten,” meaning corporations must remove ALL DATA EVER STORED ABOUT AN INDIVIDUAL (including previous names, email addresses, and other data).  GDPR will change how the world at large views and saves personal data, and that’s a good thing.

This comes on the heels of the latest issues with Facebook and data privacy breaches, in which Mark Zuckerberg faced the US Congress and Facebook later revised its privacy practices (which, hopefully, will cause other social media sites to change as well).

With the rise of identity theft and other wrongdoing based on negligent (and sometimes fraudulent) handling of personal data, these new requirements are good provisions. Corporations and their officers should be safeguarding the data they collect at our expense or face financial penalties for non-compliance. It’s going to be interesting to watch in the coming months as lawsuits start to amass… just speculating.

Sidenote: On a related topic, the regulations don’t help to allay the fear about what ‘Big Brother’ (government and corporations) do with the video and audio they collect on me (and everyone else) as we go about our everyday lives.

Am I the only one who finds it a little too intrusive when Google asks me “Have you recently been to Starbucks on xxx St ?” and then asks me to rate the experience to help out other visitors? (Note to self: turn off the Location setting on my cell phone.)

Is keeping in touch out of vogue – can we reconnect?

Personally, I think it is disconcerting to see how disconnected we truly are today — despite the seemingly increased digital connectivity of social (and other) media.

Over the past 30 years as a consultant and speaker, I’ve met tens of thousands of people whose email addresses are scattered across various pieces of hardware (some long obsolete!)  I have so many fond memories of people I’ve met and places I’ve traveled; the stories and snippets of life we’ve shared (part of what makes conferences and consulting so worthwhile!)

I wonder about the people I’ve met and lost touch with (maybe even you!) and regardless of whether we shared a moment or a month, there was a connection.  I recall warm handshakes at the end of a presentation, smiles and shared conversations over coffee and dinners, solving problems with strangers (with corporate challenges,) and, of course, the goodbyes at the end of a class or a contract.  (Yes, I love my work!)

Every email address in my databases equals at least one human being with whom I’ve crossed paths, and most likely lost touch. I wonder about your news and your kids and your experiences since we last met (or corresponded or chatted) – and, if you’re interested, I’d love to reconnect.

I’ll go first (to give an update):  I’m still actively presenting new ideas about measurement, agile, and leadership at conferences, still consulting and teaching workshops on function points (the square-foot measure of software size) in new environments (especially agile!), as well as developing new courses to enrich corporate health through leadership, project management, and metrics.  I’m passionate about cultural diversity (Hofstede and Trompenaars), the Heart of Agile (thank you, Alistair Cockburn!), EI (emotional intelligence), and transforming our workplaces/workforces to be inclusive of people, technology, and fresh ideas.

I’m still the same energetic, optimistic, curious female engineer and consultant you met somewhere on some occasion, and as with every consultant, I’m always open to new / renewed client engagements where I can help you to streamline your operations (with great leadership and Project Management initiatives) and add measurement to demonstrate your department’s value! (I hope you’re okay with this shameless plug, I am taking on new clients at this time. Call or email me…)

Forget me….

On the opposite side, the Forget Me concept is interesting… considering the high percentage of flawed and incorrect data stored about all of us.  (Case in point – have you ever done a vanity search of your name in Google and found your name associated with the current address of exes and family members at locations where you’ve never lived?  Or done a public records search where identity results show records of invalid and incorrect data?  Data are gathered from disparate and diverse public and private databases – with little data validation.)  I wonder how corporations will actually be able to guarantee data removal when so much of the stored information is flawed.

Compounding the situation are purposely errant data (mis-entered by applicants who mistype their email address or falsify identifying information to avoid later spam, when registering on a site) – I’m curious how companies will be able to make sure that all data are removed on request. “Forget me” – what an interesting concept in a world that wants to be appropriately connected.

It’s probably time for a full email database refresh

I read a perspective piece in this past Sunday’s Tampa Bay Times, “One man is updating his own privacy policy” by Konstantin Kakaes – an interesting article that opened with:

“Dear everybody who has sent me an email since  April 23, 2004, the day I got a Gmail account…”

The three-column, half-page piece addressed every type of email communication (from family to friends to generic spam to subscriptions to group lists), 13 different groups in all, and outlined how he plans (tongue in cheek) to use the various pieces of data he’s retained across his various laptop incarnations and storage devices.  An interesting approach to cleaning out (or at least contacting) everyone in his email database.

P.s., watch this space for news about an exciting Powerful Presentations and Corporate Engagement workshop (I’m developing it now) – set to launch in the autumn of 2018.  Interested in knowing more (we’re targeting Sept in Napa Valley, CA!) – drop me a quick note!

Blogging is the ultimate “broken telephone,” so if you’ve read this far, do me a favor and shoot me a quick email or drop a comment and let me know that you’re out there…

p.s. Is anyone there?  Did this post resonate with you? Was it too long/just right/boring/fun?  LMK (Let me know…)

Quality Assurance Assn of MD – Register Now: “How to Identify and Mitigate Software Security Risks and Vulnerabilities” on May 15, 2018 in Columbia, MD – Earn 3 PDUs for attending, for only $35.

Topic: How to Identify and Mitigate Software Security Risks and Vulnerabilities.

 Presenter:        Joe Jarzombek, PMP, CSSLP

Date:                     May 15, 2018


As the cyber threat landscape evolves and external dependencies grow more complex, managing risk in the Internet of Things (IoT) supply chain must focus on the entire lifecycle.  IoT is contributing to a massive proliferation of a variety of types of software-reliant, connected devices throughout critical infrastructure sectors.  With IoT increasingly dependent upon third-party software of unknown provenance and pedigree, software composition analysis and other forms of testing are needed to determine ‘fitness for use’ and trustworthiness. Application vulnerability management should leverage automated means for detecting weaknesses, vulnerabilities, and exploits.

Addressing supply chain dependencies enables enterprises to harden their attack surface by: comprehensively identifying exploit targets; understanding how assets are attacked and providing more responsive mitigation.  Security automation tools and services, and testing and certification programs now provide means upon which organizations can use to reduce risk exposures attributable to exploitable software in IoT devices.

Attendees will Learn:

  • How external dependencies create risks throughout the IoT/software supply chain;
  • How software composition, static code analysis, fuzzing, and other forms of testing can be used to determine weaknesses and vulnerabilities that represent vectors for attack and exploitation;
  • How testing can support procurement and enterprise risk management to reduce risk exposures attributable to exploitable software in IoT devices.

Please Note:  Attendees receive PDUs or CPEs to use for recertification credits at PMI, QAI, ASQ, ISC2, ISACA, IEEE, ISTQB, etc.

 Speaker Biography:  Joe Jarzombek is Director for Government, Aerospace & Defense Programs in Synopsys, Inc., the Silicon to Software™ partner for innovative organizations developing electronic products and software applications.  He guides efforts to focus Synopsys’ global leadership in electronic design automation (EDA), semiconductor IP, and software security and quality solutions in addressing technology challenges of the public sector and aerospace and defense communities.   He participates in relevant consortia, public-private collaboration groups, standards groups, and academic R&D projects to assist in accelerating technology adoption.

Previously, Joe served as Global Manager for Software Supply Chain Solutions in the Software Integrity Group at Synopsys.  In that role he led efforts to enhance capabilities to mitigate software supply chain risks via testing technologies and services that integrate within acquisition and development processes; enabling detection, reporting, and remediation of defects and security vulnerabilities to gain assurance and visibility within the software supply chain.  Jarzombek has more than 30 years focused on software security, safety and quality in embedded and networked systems.  He has participated in industry consortia such as ITI, SAFECode and CISQ; test and certification organizations such as Underwriters Labs’ Cybersecurity Assurance Program, standards bodies, and government agencies to address software assurance and supply chain challenges.

Prior to joining Synopsys, Jarzombek served in the government public sector; collaborating with industry, federal agencies, and international allies in addressing cybersecurity challenges.  He served in the US Department of Homeland Security Office of Cybersecurity and Communications as the Director for Software & Supply Chain Assurance, and he served in the US Department of Defense as the Deputy Director for Information Assurance (responsible for Software Assurance) in the Office of the CIO and the Director for Software Intensive Systems in the Office of Acquisition, Technology and Logistics.

Joe Jarzombek is a retired Lt Colonel in the US Air Force, a Certified Secure Software Lifecycle Professional (CSSLP), and a Project Management Professional (PMP), as well as a frequent key presenter at industry conferences. He received an MS in Computer Information Systems from the Air Force Institute of Technology, and a BA in Computer Science and a BBA in Data Processing and Analysis from the University of Texas at Austin.


Registration and Networking:    8:00 a.m. to 9:00 a.m.

Opening Remarks:                      9:00 a.m. to 9:05 a.m.

Speaker Presentation:                9:05 a.m. to 12 noon

Lunch (Provided):                       Noon to 1:00 p.m.

Q&A:                                          1:00 p.m. to 2:00 p.m.

 Meeting Registration: If you would like to attend the May 15th, 2018 Quality Assurance Association of Maryland (QAAM) meeting, please register at

Once on the QAAM website:

  • Click on QAAM meetings
  • Then Click on Meeting Registration drop down next to QAAM Meetings
  • Complete the Meeting Registration form to attend
  • Click <send>.

If you have questions about the QAAM meeting or would like to register after May 11, 2018, please call the QAAM President at 571-232-0683. Fee for the attendance is $35.00 and includes lunch.  Parking is free.  QAAM does not accept credit cards.  There is no fee if your company has a Corporate Membership in QAAM.  If your company would like information about joining, or have any questions, please send us an email at and we will respond promptly.

Directions:  Homewood Suites by Hilton – Note the New Location

8320 Benson Drive
Columbia, MD 21045

To estimate or not to estimate? Not the right question…

Over the past few years I’ve seen an increase in articles and posts about whether or not to do estimation (of cost, schedule, and effort) for software development projects. This is especially true when agile/iterative methods are used to develop software for which requirements are not readily known in advance.  There are actual “movements” set up to prove that estimating in and of itself is bad for software development.  At the same time, I’ve done more and more work for clients related to software benchmarking (to find best-in-class methods, tools, and combinations to develop software) and estimation (including price-to-win estimating).  I’m now convinced that “To estimate or not to estimate?” is simply the wrong question – or at least a premature question for many companies.

Estimation is often viewed as being as fundamental to software development (and any other development project or program) as ingredients are to cooking or oxygen is to life.  We might wish to discard or discredit the practice of estimation as an inconvenience, and even as the reason for software “failures.”  (Sidenote:  The Standish Group’s annual CHAOS reports cite lack of “on-time” and “on-budget” software delivery as rationale for declaring project failure, both of which would disappear as factors if estimating were eliminated.)  But the truth is that C-level executives need a level of confidence (based on estimates) to bound their investment in new initiatives, no matter how much faith or confidence the executives have in the development teams’ ability to deliver.  In my humble opinion, project managers MUST develop skills to do solid, reliable project estimates if they are to survive (and thrive).  But this is where things often fall apart: estimation is not treated as a discipline based on solid data, in part because some organizations estimate haphazardly – with bad data, poor models, flawed assumptions, and premature input values taken as fact, among other factors.

This does not include those organizations where the mere notion of projects (being a temporary endeavor intended to deliver an identified product, outcome, or service such as a piece of software) is like a foreign language.  When I teach courses according to the Project Management Institute’s Project Management Body of Knowledge (PMBOK(R)), it’s not uncommon to find IT pros who profess that project management is not needed because their work is bounded solely by calendar months and the number of full-time-staffers.  The idea that work should be managed towards a specified outcome (with goals, objectives, timelines, milestones, deliverables and a formal end) just doesn’t fit into their paradigm, even for those involved in developing advanced technology solutions.  I’m excluding these companies because projects (and estimating cost and schedule) are actually beyond their comprehension, as is productivity, project comparisons or process improvement.

Given the premise that “to estimate or not to estimate” is the wrong (or at least a premature) question – then what are the right ones?  Here’s a short list:

  • If we do an estimate, do we know what are the correct input variables (and values) we should use?  (i.e., Some idea of scope, non-functional requirements, constraints, goals, project environment, etc.)  Garbage in equals garbage out.
  • When estimating, do we have access to correct and appropriate historical data on which to rely? (i.e., does the historical completed project information accurately depict what actually happened on the project? Often up to 40% of true project work effort is not recorded – or it is recorded inconsistently.)  Incomplete or incorrect historical data make for poor comparisons, and even worse estimates.
  • Are the estimating models we propose, appropriate for the industry and application?  (i.e., in construction, it would be folly to use a home building model for a hospital construction or bridge construction project, so too with software.)  Every model, no matter how advanced, needs to be calibrated for the organization using it.
  • Do we know enough about the object of estimation? (i.e., if it is simply an idea about an outcome without any idea of component programs or projects, a “guess”timate or rough-order-of-magnitude may be the only possibility until more data are known.)
  • Are the estimating exercise/practices paid “lip service” by management? (i.e., does management summarily cut every estimate in half, or dictate due dates that override those of professional estimators?)
  • Does the organization take (software) measurement seriously?  (i.e., how are project measures and metrics collected? If ad hoc, inconsistent, and without formal processes or procedures to validate the quality of project data, then estimating will likely be equally inconsistent.)

These are just a few of the important questions that need to be addressed – before we attempt to estimate and rely on the results of the practice.   When estimating is done without proper planning, discipline and consistency, the results will be unreliable and even worse, downright wrong.
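To make the historical-data questions concrete, here’s a minimal Python sketch of deriving a cost-per-function-point rate from a handful of completed projects and sanity-checking its spread before trusting any estimate built on it. The project sizes and costs are invented for illustration:

```python
# Hypothetical sketch: derive a cost-per-FP rate from historical
# projects before estimating -- garbage in, garbage out.

historical = [
    # (delivered size in FP, total cost in $) -- illustrative values
    (400, 130_000),
    (650, 210_000),
    (900, 280_000),
]

# Cost per FP for each completed project, and the average rate.
rates = [cost / size for size, cost in historical]
avg_rate = sum(rates) / len(rates)

# Spread between the best and worst rates, relative to the average.
spread = (max(rates) - min(rates)) / avg_rate

# A rough-order-of-magnitude estimate for a ~500 FP idea.
estimate = 500 * avg_rate
print(f"Average rate ${avg_rate:.0f}/FP, spread {spread:.0%}, "
      f"ROM estimate ${estimate:,.0f}")

# If the spread is large, the historical data are inconsistent and
# the estimate should be quoted as a range, not a single number.
if spread > 0.25:
    low, high = 500 * min(rates), 500 * max(rates)
    print(f"Wide spread -- quote a range: ${low:,.0f} to ${high:,.0f}")
```

If the rates from past projects disagree wildly, no amount of modeling sophistication will rescue the estimate – which is exactly the garbage-in, garbage-out point from the checklist above.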

In IT as in life, if you’re going to invest in an endeavor (such as estimating), take the time to do it right the first time, or don’t bother doing it at all.  And that really answers the question of “to estimate or not to estimate.”

What other questions are critical to ask?  What do YOU think?

Function Points (Software Size) come of Age: Mature, Stable, and Relevant

It is with pride and honor that I share with you news about the upcoming Sept 13-15, 2017 celebratory (and educational) conference: ISMA14 (International Software Measurement and Analysis) – and it’s happening in just 4 weeks in Cleveland, OH, USA!

It’s the 30th anniversary of the International Function Point Users Group (IFPUG) – a not-for-profit user group I’ve been a part of for over 25 years.

We’re also celebrating 2017 as the International Year of Software Measurement (#IYSM).  It’s a great year for YOU to get involved (or more involved) and gain the benefits of measurement for software and systems projects!

As the Director of Communications and Marketing for IFPUG, I am excited that IFPUG is now mature (age 30!) and at the same time venturing in new directions with non-functional sizing (SNAP.)  We have much to celebrate, AND we also have more work to do (to publicize how Function Points and SNAP points provide objective measures of software size!)

The time is now!

No longer does your organization need to “fumble around in the dark” to find standard, reliable and objective software sizing measures.  Certainly there is an abundance of available units of measure (story points, use case points, source lines of code, hybrid measures, etc.) — BUT, only Function Points are supported by  ISO/IEC world standards and provide consistent, objective and technologically independent assessments of software size based on “user” requirements.  (Soon, the Software Non-functional Assessment Process – SNAP points for non-functional size will also become an international standard.)

Isn’t it time that your company adopts function points as a universal standard for software size?  YOUR timing is perfect because in less than 5 weeks, International Software Measurement and Analysis (#ISMA14) will be in Cleveland and you will have the opportunity to learn from industry experts in an intimate (less than 200 people) setting. (p.s., I’m one of the main conference speakers so you’ll know at least 1 person there!)

Function Point proof is “in the pudding” (so to speak)…

We have an English proverb: “the proof of the pudding is in the eating.”

The modern version – “the proof is in the pudding” – implies that there is ample evidence that I will not go through at this moment; you can take my word for it, or you can go through all of the evidence yourself.

I can espouse the benefits of function points, as can IFPUG insiders and supporters such as the world-respected author/guru Capers Jones (whose 17 published books use Function Points as a universal software sizing measure). But when the mainstream media features articles on Function Points, it’s a call to action for senior executives and IT professionals to take note!

Need help selling your boss on the benefits?

I’ve written up the top 10 reasons to attend ISMA14 with us- won’t you join me (and a ton of other measurement professionals) in Cleveland on Sep 13?

Carol Dekkers, CFPS (Fellow), AEC, PMP, P.Eng.
President, Quality Plus Technologies, Inc.
IFPUG Director of Communications and Marketing


To Succeed with Measurement, Choose Stable Measures

The pace of technology advancement can be staggering – new tools, methods, acronyms, programming languages, platforms and solutions – come at us at warp speed, morphing our IT landscape into patchwork quilts of old and new technologies.  

At times, it can be challenging to gauge the results (of change): what were the specific processes /tools /methods /technologies /architectures /solutions that contributed to or delivered positive results?  How can we tell what made things worse?

Defining positive “results” is the first step and measurement can contribute – as long as our measures don’t shift with the technology!

I and countless others have written about Victor Basili’s GQM (Goal Question Metric) approach to measurement (in short: choose measures that answer the questions you need answered so you can achieve the goal of the measurement), but there’s a problem even more fundamental, one that goes beyond choosing the right measures:

The key to (IT) measurement lies in stability and consistency:  choosing stable measures (industry standardized definitions that don’t change) and measuring consistently (measuring the same things in the same way.)
– Carol Dekkers, 2016

This may seem like common sense, but after 20 years of seeing how IT applies measurement, I realize common sense isn’t all that common.  There are some in the IT world who would rather invent new measures (thus decreasing stability and consistency) than embrace proven ones.  While I’ve seen the academic tendency to “tear down what already exists to make room for my new ideas,” I believe this is counter-productive when it comes to IT metrics.  But I’m getting ahead of myself.  First, let’s consider how measurement is done in other industries:

  • Example 1: Building construction.  Standard units of measure (imperial or metric) are square feet and square meters.  The definition of a square foot has not changed despite advances in modular design.
  • Example 2: Manufacturing.  Units of measure for tolerances, product sizes, weights, etc. (inches, mm, pounds, kg, etc.) are the same through the years.
  • Example 3: Automobiles.  Standard ratios such as miles per gallon (mpg) and acceleration (0-60 in x seconds) remain industry standards.

In each example, the measure is stable and measurement success is a result of consistent and stable (unchanging) units of measure applied across changing environments.  Comparisons of mpg or costs per square foot would be useless if the definition of the units of measure was not stable.  Comparability across products or processes depends on the consistency and stability of both the measurement process and the measures themselves.

Steve Daum wrote in “Stability and linearity: Keys to an effective measurement system” :

“Knowing that a measurement system is stable is a comfort to the individuals involved in managing the measurement system. If the measuring process is changing over time, the ability to use the data gathered in making decisions is diminished. If there is no method used to assess stability, it will be difficult to determine the sensitivity of the measurement system to change and the frequency of the change…Stability is the key to predictability.”

One of the most stable and consistent measures of software functional size is the IFPUG Function Point, and the International Function Point Users Group (IFPUG) is poised to celebrate its 30th year in 2017.  The IFPUG Function Point measure is stable (hundreds of thousands of projects have been counted in FP) and consistent (it has been an ISO/IEC standard for over a decade!) – perhaps 2017 is the year that YOUR company should look at FP-based measurement.

FPA (Function Point Analysis) provides a measure of the size of software under development and can be used equally well on agile, waterfall, and hybrid software development projects.  Yet, despite its benefits, much of the world still doesn’t know about the measure.

See my first post of 2016 here:  Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics for more details.  FP is certainly a good place to start when you’re looking for software measurement success… why not today?

Wishing you a happy and safe holiday season wherever you live!


Estimation Poker – Bluffing (and Winning) with Metrics

In May 2016, I presented a webinar for ITMPI on the topic of Estimation Poker, based on the broad topic of software project estimation regardless of the development approach.  The webinar was well attended despite technical difficulties (I recorded it while in Italy and, suffice it to say, the internet connection at my site happened to be… less than optimal).  I re-recorded the webinar on my return (with far superior results) and the recording can be accessed at this link:  ITMPI Estimation Poker Webinar Re-Recording:

A teaser 10 minute segment is on YouTube – Dekkers Estimation Poker teaser

I’ve also uploaded the full slide deck to Research Gate – click Research Gate – Dekkers Slides here to download.

Let me know what you think.  Note that this is different from Agile Estimation Poker (which, I had forgotten, was already established when I designed my webinar).

Have a great weekend!



Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics

Years ago, when Karl Wiegers (Software Requirements) introduced his “No More Models” presentation, the IT landscape was rife with new concepts ranging from Extreme Programming to the Agile Manifesto to CMMI (multiple models) to Project/Program/Portfolio Management.

Since then, the rapidity of change in software and systems development has hardly slowed, leaving the IT landscape checkered with agile, hybrid, spiral and waterfall projects.  Change is the new black, leaving busy professionals and project estimators stressed to find consistent metrics applicable to the diverse project portfolio.  Velocity, burn rates, story points and other modern metrics apply to agile projects, while defect density, use cases, and productivity and duration delivery rates are common on waterfall projects.

What can a prudent estimator or process improvement specialist do to level the playing field when faced with disparate data and the challenge of finding the productivity or quality “sweet spot”?  You may be surprised to find that Function Point Analysis (FPA) is part of the answer and that Function Points are as relevant today as when they were first invented in the late 1970s.

What are function points (FP) and how can they be used?

Function points are a unit of measure of software functional size – the size of a piece of software based on its “functional user requirements.”  In other words, functional size is a quantification that answers the question “what are the self-contained functions performed by the software?”

Function points are analogous to the square feet of a construction floor plan and are independent of how the software must perform (the non-functional “building code” for the software) and how the software will be built (the technical requirements).

As such, functional size (expressed in FP) is independent of programming language and methodology: a 1000 FP piece of software is the same size whether it is developed using Java, C++, or any other programming language.

Continuing with the construction analogy, the FP size of a project does not change whether it is done using a waterfall, agile or hybrid approach.  Because it is a consistent and objective measure dependent only on the functional requirements, FP can be used to size the software delivered in a release (a consistent delivery concept) on agile and waterfall projects alike.

Why are FP a consistent and stable measure?

The standard methodology to count function points is an ISO standard (ISO/IEC 20926) supported by the International Function Point Users Group (IFPUG).  Established in 1986, IFPUG maintains the method and publishes case studies to demonstrate how to apply the measurement method regardless of variations in how functional requirements are documented.  FP counting rules are both consistent and easy to apply; the rules have not changed for the past decade.

Relevance of FP in today’s IT environment

No matter what method is used to prepare and document a building floor plan, the square foot size is the same.  Similarly, no matter what development methodology or programming language is used, the function point size is the same.  This means that functional size remains a relevant and important measure across an IT landscape of ever-changing technologies, methods, tools, and programming languages.  FP works as a consistent common denominator for calculating productivity and quality ratios (hours/FP and defects/FP, respectively), and facilitates comparisons of projects developed using different methods (agile, waterfall, hybrid, etc.) and technical architectures.
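To make the common-denominator idea concrete, here is a minimal Python sketch.  The project names, FP counts, hours, and defect figures are entirely hypothetical, invented for illustration:

```python
# Hypothetical project data: functional size (FP) is the common denominator,
# so projects built with different methods can be compared on equal footing.
projects = [
    {"name": "Billing rewrite", "method": "agile",     "fp": 400, "hours": 3200, "defects": 12},
    {"name": "Claims portal",   "method": "waterfall", "fp": 650, "hours": 6500, "defects": 26},
    {"name": "Mobile app",      "method": "hybrid",    "fp": 250, "hours": 1750, "defects": 5},
]

for p in projects:
    productivity = p["hours"] / p["fp"]   # effort per unit of functional size
    quality = p["defects"] / p["fp"]      # defect density per function point
    print(f"{p['name']} ({p['method']}): "
          f"{productivity:.1f} hours/FP, {quality:.3f} defects/FP")
```

Because FP sits in the denominator of both ratios, the agile and waterfall rows can be compared directly – which is exactly the point of a stable unit of measure.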

Consistency reigns supreme

THE single most important characteristic of any software measure is consistency of measurement!

This applies to EVERY measure in our estimating or benchmarking efforts, whether we’re talking about effort (project hours), size (functional size), quality (defects), duration (calendar time) or customer satisfaction (using the same questionnaire).  Consistency is seldom a given and can be easily overlooked – especially in one’s haste to collect data.

It takes planning to ensure that every number that goes into a metric or ratio is measured the same way using the same rules.  As such, definitions for defects, effort (especially who is included, when a project starts/stops, and what is collected), and size (FP) must be documented and used.
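One lightweight way to keep such definitions documented and in use is to capture them as data alongside the measurement repository.  The sketch below is purely illustrative – the field names and the two example records are my own assumptions, not IFPUG artifacts:

```python
from dataclasses import dataclass

# A minimal sketch of a documented measurement definition, so every reported
# number is produced the same way using the same rules.  Frozen, so a
# definition cannot be quietly altered mid-stream.
@dataclass(frozen=True)
class MeasureDefinition:
    name: str        # e.g. "effort"
    unit: str        # e.g. "person-hours"
    scope: str       # who/what is included in the count
    start_stop: str  # when counting begins and ends
    source: str      # governing standard or local procedure

EFFORT = MeasureDefinition(
    name="effort",
    unit="person-hours",
    scope="developers, testers, and project management; excludes end-user time",
    start_stop="from approved requirements baseline to production release",
    source="internal measurement guide v1.0",
)

SIZE = MeasureDefinition(
    name="functional size",
    unit="IFPUG FP",
    scope="functional user requirements only; non-functional excluded",
    start_stop="counted at requirements baseline and again at delivery",
    source="ISO/IEC 20926",
)
```

Whatever form the documentation takes – a dataclass, a wiki page, a spreadsheet tab – the point is that the definitions exist in writing and are applied every time.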

For more information about Function Point Analysis (FPA) and how it can be applied to different software environments, or if you have any questions or comments, please send me an email or post a comment below.

To a productive 2016!