Estimation Poker – Bluffing (and Winning) with Metrics


In May 2016, I presented a webinar for ITMPI on Estimation Poker, based on the broad topic of software project estimation – regardless of the development approach.  The webinar was well attended despite technical difficulties (I recorded it while in Italy and, suffice it to say, the internet connection from my site was… less than optimal.)  I re-recorded the webinar on my return (with far superior results) and the recording can be accessed at this link:  ITMPI Estimation Poker Webinar Re-Recording.

A 10-minute teaser segment is on YouTube – Dekkers Estimation Poker teaser.

I’ve also uploaded the full slide deck to ResearchGate – click ResearchGate – Dekkers Slides to download.

Let me know what you think.  Note that this is different from Agile Estimation Poker (which, I had forgotten, was already established when I designed my webinar.)

Have a great weekend!

Carol

 

Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics


Years ago, when Karl Wiegers (Software Requirements) introduced his “No More Models” presentation, the IT landscape was rife with new concepts ranging from Extreme Programming to the Agile Manifesto to CMMI (multiple models) to Project/Program/Portfolio Management.

Since then, the rapidity of change in software and systems development has slowed, leaving the IT landscape checkered with agile, hybrid, spiral, and waterfall projects.  Change is the new black, leaving busy professionals and project estimators struggling to find consistent metrics applicable to a diverse project portfolio.  Velocity, burn rates, story points, and other modern metrics apply to agile projects, while defect density, use cases, productivity, and duration delivery rates are common on waterfall projects.

What can a prudent estimator or process improvement specialist do to level the playing field when faced with disparate data and the challenge to find the productivity or quality “sweet spot”?  You may be surprised to find out that Function Point Analysis (FPA) is part of the answer and that Function Points are as relevant today as when first invented in the late 1970s.

What are function points (FP) and how can they be used?

Function points are a unit of measure of software functional size – the size of a piece of software based on its “functional user requirements.”  In other words, it is a quantification that answers the question, “What are the self-contained functions performed by the software?”

Function points are analogous to the square feet of a construction floor plan and are independent of how the software must perform (the non-functional “building code” for the software) and how the software will be built (the technical requirements.)

As such, functional size (expressed in FP) is independent of programming language and methodology:  a 1000 FP piece of software is the same size whether it is developed in Java, C++, or any other programming language.

Continuing with the construction analogy, the FP size of a project does not change whether the work is done using a waterfall, agile, or hybrid approach.  Because it is a consistent and objective measure dependent only on the functional requirements, FP can be used to size the software delivered in a release (a consistent delivery concept) on agile and waterfall projects alike.

Why are FP a consistent and stable measure?

The standard methodology for counting function points is an ISO standard (ISO/IEC 20926) supported by the International Function Point Users Group (IFPUG.)  Established in 1984, IFPUG maintains the method and publishes case studies demonstrating how to apply the measurement method regardless of variations in how functional requirements are documented.  FP counting rules are both consistent and easy to apply; the rules have not changed for the past decade.

Relevance of FP in today’s IT environment

No matter what method is used to prepare and document a building floor plan, the square foot size is the same.  Similarly, no matter what development methodology or programming language is used, the function point size is the same.  This means that functional size remains a relevant and important measure across an IT landscape of ever-changing technologies, methods, tools, and programming languages.  FP works as a consistent common denominator for calculating productivity and quality ratios (hours / FP and defects / FP respectively), and facilitates the comparisons of projects developed using different methods (agile, waterfall, hybrid, etc.) and technical architectures.
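The common-denominator idea above can be sketched in a few lines of Python – note that the project names and numbers below are entirely made up for illustration:

```python
# Hypothetical portfolio data: one agile and one waterfall project.
# Because both are sized in FP, their ratios are directly comparable.
projects = [
    {"name": "Billing rewrite", "method": "agile",     "fp": 850,  "hours": 9350,  "defects": 34},
    {"name": "Claims portal",   "method": "waterfall", "fp": 1200, "hours": 15600, "defects": 72},
]

for p in projects:
    productivity = p["hours"] / p["fp"]      # hours per FP (lower is better)
    defect_density = p["defects"] / p["fp"]  # defects per FP (lower is better)
    print(f'{p["name"]} ({p["method"]}): '
          f'{productivity:.1f} h/FP, {defect_density:.3f} defects/FP')
```

With these illustrative figures, the agile project delivers at 11.0 h/FP with 0.040 defects/FP versus 13.0 h/FP and 0.060 defects/FP for the waterfall project – a comparison that would be meaningless if each project were sized in its own methodology-specific units.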

Consistency reigns supreme

The single most important characteristic of any software measure is consistency of measurement!

This applies to EVERY measure in our estimating or benchmarking efforts, whether we’re talking about effort (project hours), size (functional size), quality (defects), duration (calendar time) or customer satisfaction (using the same questionnaire.)  Consistency is seldom a given and can be easily overlooked – especially in one’s haste to collect data.

It takes planning to ensure that every number that goes into a metric or ratio is measured the same way using the same rules.  As such, definitions for defects, effort (especially who is included, when a project starts/stops, and what is collected), and size (FP) must be documented and used.
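One way to make such documented definitions operational – a minimal sketch, with all field names and event labels invented for illustration – is to record the measurement rules alongside the data they govern, so a number measured under a different definition never slips into a ratio:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EffortDefinition:
    """Documents *how* effort hours are measured for a project."""
    roles_included: tuple   # who is counted (e.g., developers, testers, PM)
    start_event: str        # when the project clock starts
    stop_event: str         # when the project clock stops
    includes_overtime: bool

# The organization's one documented standard (illustrative values).
STANDARD_EFFORT = EffortDefinition(
    roles_included=("developer", "tester", "project manager"),
    start_event="requirements approved",
    stop_event="production release",
    includes_overtime=True,
)

def record_effort(hours: float, definition: EffortDefinition) -> dict:
    """Accept only effort numbers measured under the documented standard."""
    if definition != STANDARD_EFFORT:
        raise ValueError("Effort measured under a different definition; not comparable.")
    return {"hours": hours, "definition": definition}
```

The point is not the code itself but the discipline: every effort figure carries its definition with it, and mismatched definitions are rejected before they can contaminate an hours/FP comparison.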

For more information about Function Point Analysis (FPA) and how it can be applied to different software environments or if you have any questions or comments, please send me an email (dekkers@qualityplustech.com) or post a comment below.

To a productive 2016!

Carol


In a few words: why IT is so intimidating


As a project manager and software metrics expert, I’ve learned that simplicity and clarity are the keys to effective communication.  Consider that when we meet someone from another country, we use simple words, phrases and paraphrasing to communicate our meaning. Most of us would consider it rude and intimidating to talk to a foreigner using complex English and idioms.

Yet that’s exactly what happens when we software professionals talk to… well, almost anyone but ourselves.  We are technical professionals with access to reams of data, and you might think the idea of simplicity and clarity would be common sense.  Sadly, it’s quite the opposite.  Like medicine, engineering, and other technical professions, we seem to take pride in creating acronyms and continually redefining the English language to suit our purposes.  Then we scoff at anyone who doesn’t understand and expect them to bone up on their vocabulary.

It really only takes a few obscure words to intimidate someone; in IT we can do it with one or two (such as “artifact,” “construct,” or “provisioning.”)

I’ve seen it for decades – instead of using common English words (with known definitions) or inventing brand-new terms, the software industry tends to complicate things by taking words that are already known and changing their definitions.

I noticed this trend in my first post-college job, when someone in my department (pipeline engineering) set me up to use the mainframe computer.  As luck would have it, my system crashed on the first day and I had to call computer services.  When asked for my “terminal address,” the group howled when I said “the fourth floor” – they were obviously referring to the 16-digit serial number on the right side of my computer monitor.  When I took a job in that same technical group months later, I had to learn a whole new vocabulary.  Instead of talking about documents or papers or manuals, my co-workers talked about “deliverables,” which also included hardware and software, among other things.

I learned that DASD and TCP/IP were words in themselves, used to mean specific things, though few could remember the words that made up each acronym.  As confused as I was as a graduate engineer with programming experience, I wondered how much more confused our customers must be.

Then along came new SDLCs (software development “life cycles”), new methodologies (approaches and guidelines for developing software), and new concepts such as object-oriented programming.  Each new wave washed ashore with a mixture of new, redefined, and sometimes arcane terms with very specific meanings.  Sometimes the “common English usage” definition prevailed; other times the term took on an entirely new definition.

Take the word “artifact,” for example.  The first definition below is the way it is defined in common English usage (Google.com); the latter is specific to IT.

[Image: dictionary definitions of “artifact” – common English usage vs. IT usage]

So now, instead of saying document or manual or deliverable in general conversation and in meetings, “artifact” was used.  Ugh… Customers shrugged, and IT didn’t notice the misunderstanding.  Business chugged on with an ever-widening communication gap, and projects missed their targets.

Today things go beyond mere terminology changes.  We’ve even started banning certain words we don’t think fit our purpose – despite a term being well understood.  For example, I recently read a post that proposed banning the word “project” from the vocabulary and replacing it with “initiative,” to redirect professionals to focus on product delivery instead of start and end dates.  It’s a great idea to focus on product delivery and to get all the teams on board to focus on output, but terminology is already a fundamentally divisive issue.  Ugh.

All in all, I believe that one of the biggest chasms in software development today lies in communication between technical professionals and the business.  We’re really two different cultures (more about that in another post) and the use of simple, common English terms (with standard definitions) could bridge some of the gap.

As the title says:  in a few words… IT is intimidating.

What do you think?

Have a great week!

Carol

10 Steps to Better Metrics… for Everyone


The more things change, the more they stay the same – especially when it comes to initiatives that involve cultural change. Measurement is a perfect example – and I’m not talking purely about “software metrics,” rather measurement in any industry.

When you take a business that has traditionally “flown by the seat of its pants” (in other words, it is a monopoly of sorts or it has made money in spite of itself) and start to keep track of what’s going on, people have issues.  The first step is often to simply measure anything that moves – data that are easy to capture – and then try to draw some sort of conclusions or action plans.  In IT (Information Technology), the landscape is littered with discarded data from failed measurement initiatives.  Data in and of themselves are not bad, IF the data are used appropriately and in the right context.

I recently wrote the following article for Projects at Work based on concepts I first observed nearly 20 years ago, and they are as valid today as ever before.

As a consultant, I LOVE to work with companies that want to succeed with measurement.  If you are tasked with starting metrics for your company, give me a call – maybe I can give you some ideas to save you time and money – and succeed with metrics!

Send me an email or leave a comment – measurement is too important to leave to chance.  (Let me know if you’d like a full copy of this article!)

Cheers,
Carol

ProjectsAtWork - 10 Steps to Better Metrics July 2015

Tech Folks Don’t Grok People Things


Wow – “grok” was first used in 1961, and this is the first time I’ve heard the word.  Great post – hopefully a few people in IT will grok the meaning of this post.

Think Different

Tech Folks Don’t Grok People Things

Geek-inside

Nor do they often grok the connection between attending to their own and others’ needs, and the grokking of people things.

Tech Folks Focus On Tech

Let’s face it, most folks in IT (a.k.a. software development) made it their career choice because they like tech. Personally, I started programming way back when because I liked making little coloured lights flash on and off at my command.

And although liking tech doesn’t necessarily preclude grokking people things, in practice it generally does.

People Things Trump Tech

Yet it’s the people things that make all the difference when it comes to non-trivial, collaborative knowledge work. Such as teams building software systems and solutions. Questions like “What accounts for the way folks behave?”, “How can we work together?” and “Why is everything so borked round here?”.

Some tech folks wake up to the primacy of people things sooner or later…

View original post 49 more words

QSM (Quantitative Software Management) 2014 Research Almanac Published this week!


Over the years I’ve been privileged to have articles included in compendiums with industry thought leaders whose work I’ve admired.  This week, I had another proud moment as my article was featured in the QSM Software Almanac: 2014 Research Edition along with a veritable who’s who of QSM.

This is the first almanac produced by QSM, Inc. since 2006, and it features 200+ pages of relevant research-based articles garnered from the SLIM® database of over 10,000 completed and validated software projects.

Download your FREE copy of the 2014 Almanac by clicking this link or the image below.

almanac

What Software Project Benchmarking & Estimating Can Learn from Dr. Seuss


Sometimes truth is stranger than fiction – or children’s stories at least, and I’m hoping you’ll relate to the latest blog post I published on the QSM blog last week.  I grew up on Dr. Seuss stories – and I think my four siblings and I shared the entire series (probably one of the first loyalty programs – buy the first 10 books, get one free…)

I’d love to hear your comments and whether you agree with the analogy: that we seek to create precise software sample sets for benchmarking and, in so doing, lose the benefits of the trends we could leverage with larger sample sets.  Read on and let me know!  (Click on the image below or here.)

Happy November!

Carol

dr seuss

Combining Soft Skills and Hard Tools for Better Software


One of the more interesting topics in software development (at least from my perspective) is the culture of the industry.  Seldom does one find an industry burgeoning with linguistics majors, philosophers, artists, engineers (of all types – classically trained to self-named), scientists, politicians, and salespeople – all working on the same team in the same IT department.

This creates incredible diversity and richness – leading to sometimes astounding leaps and bounds in innovation and technological advancement – but it can also create challenges in basic workplace behavior.  This post takes a look at the often overlooked soft skills (empathy, leadership, respect, communication, and other non-technical skills) together with technical competencies as an “opportunity” (aka a challenge or obstacle to overcome.)

It was published first on the Project Management Institute (PMI) Knowledge Shelf – recently open to the general non-PMI public.

soft skills

Added bonus here:  in my post I referenced the 2013 University of Western Australia commencement address (on YouTube) by Australian comedian/actor Tim Minchin (he shares his nine recommendations to graduates; my favorite – and the one I quoted – is #7: Define yourself by something you love!)  I believe it’s worth the watch/listen if you need to take a break and just sit back and think about soft skills during your technical day.  (Warning to the faint of heart – it’s irreverent, offensive, and, IMHO, bang on in its core sentiments.  If you’re offended, I apologize in advance!)

If you’d like a pdf copy of the post above, please leave me a comment with your email address!  (And even if you don’t, I’d love your opinion!)

Have a great week!

Carol

IFPUG (News) Beyond MetricViews – FP for Agile / Iterative S/W Dev


With the support of QSM, Inc., I wrote and published this article on a new area of the International Function Point Users Group (IFPUG) website called “Beyond MetricViews.”

While IFPUG had already published guidelines in this area, the key points of this article include:

  • If you want to measure productivity (or anything else) consistently across two or more software development projects – where each was developed using a different approach (i.e., waterfall vs. agile) – you must be consistent in the definition and application of the measures (and metrics);
  • Function points are defined in terms of elementary processes and agile methodologies deliver such functions iteratively (not complete in one iteration) – posing challenges to the uninitiated;
  • Regardless of whether you measure productivity, defect density (quality), costs, or other aspects of software delivery – it is critical to make an “apples to apples” comparison.

Here’s the article (click on the image) for your interest.  (You can also visit the blog at www.qsm.com for details.)

ifpug

Comments and feedback are appreciated!

No free lunch in Software Estimation and Benchmarking


I’d love to have comments on my latest QSM blog post of the same name… read more

22 no free lunch