Category Archives: software development

Bias: It’s like Kryptonite to Collaboration


Collaborate: the concept is so ingrained in agile development that it’s one of the four components of Alistair Cockburn’s simplified Heart of Agile approach (along with Deliver, Reflect, and Improve).  Yet collaboration has an Achilles’ heel, a Kryptonite of sorts, that no one really talks about: Bias.

Before we get into the roots of Bias and its effects, let’s explore what’s at the core of Collaboration.

Collaboration

To collaborate (according to Google Dictionary) is to

cooperate, join (up), join forces, team up, get together, come together, band together, work together, work jointly, participate, unite, combine, merge, link, ally, associate, amalgamate, integrate, form an alliance, pool resources, club together, fraternize, conspire, collude, consort, sympathize…

Collaboration relies on a combination of team skills: listening, communicating, connecting, cooperating, observing, and setting aside personal agendas and opinions.

Bias is like Kryptonite to Collaboration

Bias creeps in innocuously even while we’re truly practicing our good communication skills; it can coat our words and cloud our thought process without our even being aware of it.  At best, bias skews our creativity; at worst, it kills trust and collaboration.

While we may not perceive bias in ourselves or others (“after all, we’re open and honest professionals doing our best”) – it’s there.

The good news is that Bias is like the iceberg below the surface that we didn’t know was there – once we’re aware it’s there, we can dismantle its effects.

Through conscious practice and exercises, we can increase awareness of our own biases and override their effects.

Types of Bias

Bias comes in a variety of flavors and styles – and we all have it in one way or another.  The first step to overcoming bias is to recognize and acknowledge it in ourselves.

What kind of bias(es) do you hold?  I’ve condensed the list of bias types (from data analytics and psychology sites) and added examples for agile projects:

  1. Confirmation Bias – We skew our observation of an event, concept, or person in a way that confirms a belief we already hold.  Think of the pattern of highlighting news that agrees with the agenda of the left or the right, and ignoring the other side.
    Example:  Changes are needed to a delivered report because legislation changed (fact), and we think “Typical of the users always changing their minds” (confirms our belief.)
  2. Attribution Bias – We attribute an action to someone based on their personality (flaws) rather than the situation.
    Example:  The project is delayed because testing isn’t finished.  We think: “the tester doesn’t know what they are doing” (attributing it to a personality flaw) rather than looking at the situation (the tester’s work overload).
  3. Commitment Bias – Continuing to invest in a derailed project because of the sunk costs already committed (time/effort/money.)
    Example: We think: How can we bail on this project after we’ve already spent a year and $500K on it?
  4. Labeling Bias – We (un)consciously describe or label a person/group in a way that influences how we think about them.
    Example:  We think marketing (people) have no clue about technology (so why ask them for input)?
  5. Spin Bias – We accept only one interpretation (ours) of an event or policy, to the exclusion of any other.
    Example:  We spin the news (especially when there is a gap):  China banned cryptocurrencies, so it must be because of… (nefarious financial dealings…)
  6. Spurious Correlation Bias – We assume causality (without checking) between two potentially unrelated events or circumstances.
    Example:  Gas prices always go up before a long weekend.
  7. Omission Bias – Leaving out important information or people (certain facts or details) on a project because we don’t see their relevance.
    Example:  We didn’t think of including the IT folks in renovation project meetings (we omitted the importance of moving the network…)
  8. Not like Me Bias – Dismissing information or ideas because of the source or group generating it.
    Example:  We think that new hires have no idea about how our company works (when their input may be fresh and show us new ways to work.)

When we allow these unconscious biases to influence how we send or receive information, we stymie true collaboration.

What biases do you see in yourself and your workplace?  When I personally realize I have a particular bias, I make a conscious effort to pause before responding.  When I’m prone to say something that would block collaboration (words like “we can’t do that…” or even a seemingly innocent thought in my head like “what on earth?”), I try to stop and ask myself why I would think that.  Pausing to reflect and see the part that my unconscious bias plays really helps me to collaborate better.  Might it work for you?

Would you Play Along?

Over the coming weeks, please join me in a new blog series called “The 5% Social Experiment” where I’ll post some easy social experiments to identify and overcome workplace bias.

Can I count on you to play along?

Carol


3C’s of Measurement Success – Create, Confirm, Convince


I recently recorded the remote meet-up presentation I did for Ryerson University (Toronto, Canada) called 3C’s of FSM (Functional Size Measurement) Success – Create, Confirm, Convince.  The presentation focused on how to position software measurement for success with management – especially in the context of agile software development.

What is this Thing – the Heart of Agile – all about?


If you’re an agilist (my word for those who embrace agile concepts to do great work) – you’ll be interested to hear about the Heart of Agile directly from the mouth of its founder Dr Alistair Cockburn.

Join me TOMORROW (Thursday Oct 4, 2018) at 10:00am Eastern Daylight Time as I interview Dr Cockburn in St Petersburg, FL about the new Heart of Agile and where it fits into the landscape of Agile Software Development.

Here’s the link (recording will be posted later if you cannot access Facebook!)


https://www.facebook.com/events/364530300951242/

p.s. Here’s the link to the website under development:  https://heartofagile.com/

Join me!

Carol

To estimate or not to estimate? Not the right question…


Over the past few years I’ve seen an increase in articles and posts about whether or not to do estimation (of cost, schedule, and effort) for software development projects.  This is especially true when agile/iterative methods are used to develop software for which requirements are not readily known in advance.  There are actual “movements” set up to prove that estimating in and of itself is bad for software development.  At the same time, I’ve done more and more work for clients related to software benchmarking (to find best-in-class methods, tools, and combinations to develop software) and estimation (including price-to-win estimating).  I’m now convinced that “To estimate or not to estimate?” is simply the wrong question – or at least a premature question for many companies.

Estimation is often viewed as being as fundamental to software development (and to any other development project or program) as ingredients are to cooking or oxygen is to life.  We might wish to discard or discredit the practice of estimation as an inconvenience, or even as the reason for software “failures.”  (Sidenote: the Standish Group’s annual CHAOS reports cite the lack of “on-time” and “on-budget” software delivery as rationale for declaring project failure; both factors would disappear if estimating were eliminated.)  But the truth is that C-level executives need a level of confidence (based on estimates) to bound their investment in new initiatives, no matter how much faith or confidence the executives have in the development teams’ ability to deliver.  In my humble opinion, project managers MUST develop skills to do solid, reliable project estimates if they are to survive (and thrive).  But this is where things often fall apart: estimation is not seen as a discipline based on solid data, in part because some organizations do estimating haphazardly, based on bad data, poor models, flawed assumptions, and premature input values taken as fact, among other factors.

This does not include those organizations where the mere notion of projects (a project being a temporary endeavor intended to deliver an identified product, outcome, or service, such as a piece of software) is like a foreign language.  When I teach courses based on the Project Management Institute’s Project Management Body of Knowledge (PMBOK(R)), it’s not uncommon to find IT pros who profess that project management is not needed because their work is bounded solely by calendar months and the number of full-time staffers.  The idea that work should be managed towards a specified outcome (with goals, objectives, timelines, milestones, deliverables, and a formal end) just doesn’t fit their paradigm, even for those involved in developing advanced technology solutions.  I’m excluding these companies because projects (and estimates of cost and schedule) are beyond their comprehension, as are productivity, project comparisons, and process improvement.

Given the premise that “to estimate or not to estimate” is the wrong (or at least a premature) question, what are the right ones?  Here’s a short list:

  • If we do an estimate, do we know the correct input variables (and values) to use?  (i.e., some idea of scope, non-functional requirements, constraints, goals, project environment, etc.)  Garbage in equals garbage out.
  • When estimating, do we have access to correct and appropriate historical data on which to rely?  (i.e., does the completed-project information accurately depict what actually happened on the project?  Often up to 40% of true project work effort is not recorded – or it is recorded inconsistently.)  Incomplete or incorrect historical data make for poor comparisons, and even worse estimates.
  • Are the estimating models we propose appropriate for the industry and application?  (i.e., in construction it would be folly to use a home-building model for a hospital or bridge construction project; so too with software.)  Every model, no matter how advanced, needs to be calibrated for the organization using it (see the sketch after this list).
  • Do we know enough about the object of estimation?  (i.e., if it is simply an idea about an outcome, without any notion of component programs or projects, a “guess”timate or rough order of magnitude may be the only possibility until more data are known.)
  • Are estimating practices merely paid “lip service” by management?  (i.e., does management summarily cut every estimate in half, or dictate due dates that override those of professional estimators?)
  • Does the organization take (software) measurement seriously?  (i.e., how are project measures and metrics collected?  If they are collected ad hoc, inconsistently, and without formal processes or procedures to validate the quality of project data, then estimating will likely be equally inconsistent.)
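To make the historical-data and calibration questions concrete, here’s a minimal sketch (in Python, with hypothetical project numbers) of how an organization might calibrate a simple effort estimate against its own completed-project history.  It illustrates the principle only – it is not a real estimating model:

```python
from statistics import mean, stdev

# Hypothetical history: (functional size in function points, actual effort hours).
# If up to 40% of true effort went unrecorded, every rate below is garbage in.
history = [(250, 2000), (400, 3400), (120, 1100), (600, 5200)]

rates = [hours / size for size, hours in history]  # hours per function point

def estimate_effort(size_fp: float) -> tuple[float, float]:
    """Return a (low, high) effort range calibrated from our own history."""
    avg, spread = mean(rates), stdev(rates)
    return (size_fp * (avg - spread), size_fp * (avg + spread))

low, high = estimate_effort(300)  # a new project sized at roughly 300 FP
print(f"Effort estimate: {low:.0f} to {high:.0f} hours")
```

Even this toy version surfaces the questions above: the range is only as good as the recorded history, and a model calibrated on four small projects should not be trusted for a program ten times their size.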

These are just a few of the important questions that need to be addressed before we attempt to estimate and rely on the results.  When estimating is done without proper planning, discipline, and consistency, the results will be unreliable or, even worse, downright wrong.

In IT as in life, if you’re going to invest in an endeavor (such as estimating), take the time to do it right the first time, or don’t bother doing it at all.  And that really answers the question of “to estimate or not to estimate.”

What other questions are critical to ask?  What do YOU think?

Function Points (Software Size) come of Age: Mature, Stable, and Relevant


It is with pride and honor that I share with you news about the upcoming Sept 13-15, 2017 celebratory (and educational) conference: ISMA14 (International Software Measurement and Analysis) – and it’s happening in just 4 weeks in Cleveland, OH, USA!

It’s the 30th anniversary of the International Function Point Users Group (IFPUG) – a not-for-profit user group I’ve been a part of for over 25 years.

We’re also celebrating 2017 as the International Year of Software Measurement (#IYSM).  It’s a great year for YOU to get involved (or more involved) and gain the benefits of measurement for software and systems projects!

As the Director of Communications and Marketing for IFPUG, I am excited that IFPUG is now mature (age 30!) and at the same time venturing in new directions with non-functional sizing (SNAP.)  We have much to celebrate, AND we also have more work to do (to publicize how Function Points and SNAP points provide objective measures of software size!)

The time is now!

No longer does your organization need to “fumble around in the dark” to find standard, reliable and objective software sizing measures.  Certainly there is an abundance of available units of measure (story points, use case points, source lines of code, hybrid measures, etc.) — BUT, only Function Points are supported by  ISO/IEC world standards and provide consistent, objective and technologically independent assessments of software size based on “user” requirements.  (Soon, the Software Non-functional Assessment Process – SNAP points for non-functional size will also become an international standard.)
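For readers who have never seen one, here’s a minimal sketch of how an unadjusted IFPUG function point count rolls up from the five function types, using the standard IFPUG complexity weights.  (A real count requires rating each function’s complexity against the IFPUG counting rules; the input counts below are hypothetical.)

```python
# Standard IFPUG weights by function type and rated complexity.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4,  "high": 6},   # External Inputs
    "EO":  {"low": 4, "avg": 5,  "high": 7},   # External Outputs
    "EQ":  {"low": 3, "avg": 4,  "high": 6},   # External Inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "avg": 7,  "high": 10},  # External Interface Files
}

def unadjusted_fp(counts: dict[str, dict[str, int]]) -> int:
    """Sum weighted function counts, e.g. {"EI": {"low": 4, "avg": 2}, ...}."""
    return sum(WEIGHTS[ftype][cplx] * n
               for ftype, by_complexity in counts.items()
               for cplx, n in by_complexity.items())

# A hypothetical small application:
print(unadjusted_fp({"EI": {"low": 4, "avg": 2},
                     "EO": {"avg": 3},
                     "ILF": {"low": 2}}))  # 4*3 + 2*4 + 3*5 + 2*7 = 49
```

Because the weights and counting rules are standardized, two trained counters sizing the same requirements should land on the same number – which is exactly the consistency that story points can’t offer.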

Isn’t it time that your company adopts function points as a universal standard for software size?  YOUR timing is perfect because in less than 5 weeks, International Software Measurement and Analysis (#ISMA14) will be in Cleveland and you will have the opportunity to learn from industry experts in an intimate (less than 200 people) setting. (p.s., I’m one of the main conference speakers so you’ll know at least 1 person there!)

FUNCTION POINT proof is “in the pudding” (so to speak)…

We have an English proverb: “the proof is in the pudding.”

The modern version, “The proof is in the pudding,” implies that there is a lot of evidence that I will not go through at this moment; you should take my word for it, or you could go through all of the evidence yourself.  Source: http://tinyurl.com/5uc7eq3

I can espouse the benefits of function points, as can IFPUG insiders and supporters such as the world-respected author/guru Capers Jones (whose 17 published books use Function Points as a universal software sizing measure).  But when the mainstream media features articles on Function Points, it’s a call to action for senior executives and IT professionals to take note!  Here’s a recent example: (click on the image to read the full story!)

Need help selling your boss on the benefits?

I’ve written up the top 10 reasons to attend ISMA14 with us – won’t you join me (and a ton of other measurement professionals) in Cleveland on Sep 13?

Carol Dekkers, CFPS (Fellow), AEC, PMP, P.Eng.
President, Quality Plus Technologies, Inc.
IFPUG Director of Communications and Marketing

 

To Succeed with Measurement, Choose Stable Measures


The pace of technology advancement can be staggering – new tools, methods, acronyms, programming languages, platforms and solutions – come at us at warp speed, morphing our IT landscape into patchwork quilts of old and new technologies.  

At times, it can be challenging to gauge the results (of change): what were the specific processes/tools/methods/technologies/architectures/solutions that contributed to or delivered positive results?  How can we tell what made things worse?

Defining positive “results” is the first step and measurement can contribute – as long as our measures don’t shift with the technology!

I and countless others have written about Victor Basili’s GQM (Goal Question Metric) approach to measurement (in short: choose measures that answer the questions you need answered so you can achieve the goal of the measurement; a minimal sketch follows the quote below).  But there’s a problem even more fundamental, one that goes beyond choosing the right measures:

The key to (IT) measurement lies in stability and consistency:  choosing stable measures (industry standardized definitions that don’t change) and measuring consistently (measuring the same things in the same way.)
– Carol Dekkers, 2016
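To make the GQM idea concrete, here’s a minimal sketch of one goal broken into questions and supporting metrics.  The goal, questions, and metric names are purely illustrative (a hypothetical team), not taken from Basili’s work:

```python
# Illustrative GQM breakdown: goal -> questions -> metrics.
gqm = {
    "goal": "Improve delivery predictability of the payments team",
    "questions": {
        "How accurate are our effort estimates?": [
            "estimated vs. actual effort (hours)",
            "estimate error percentage per release",
        ],
        "Is delivered functional size stable per sprint?": [
            "function points delivered per sprint",
        ],
    },
}

# Walk the tree: every metric exists only because it answers a question.
print("GOAL:", gqm["goal"])
for question, metrics in gqm["questions"].items():
    print("  Q:", question)
    for metric in metrics:
        print("    metric:", metric)
```

The discipline is the direction of travel: metrics are chosen to answer questions, never collected first and rationalized later.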

The quote above may seem like common sense, but after 20 years of seeing how IT applies measurement, I realize common sense isn’t all that common.  There are some in the IT world who would rather invent new measures (thus decreasing stability and consistency) than embrace proven ones.  While I’ve seen the academic tendency to “tear down what already exists to make room for my new ideas,” I believe this is counter-productive when it comes to IT metrics.  But I’m getting ahead of myself.  First, let’s consider how measurement is done in other industries:

  • Example 1: Building construction.  Standard units of measure (imperial or metric) are square feet and square meters.  The definition of a square foot has not changed despite advances in modular design.
  • Example 2: Manufacturing.  Units of measure for tolerances, product sizes, weights, etc. (inches, mm, pounds, kg, etc.) are the same through the years.
  • Example 3: Automobiles.  Standard ratios such as miles per gallon (mpg) and acceleration (0-60 in x seconds) remain industry standards.

In each example, the measure is stable, and measurement success is a result of consistent and stable (unchanging) units of measure applied across changing environments.  Comparisons of mpg or cost per square foot would be useless if the definitions of the units of measure were not stable.  Comparability across products or processes depends on the consistency and stability of both the measurement process and the measures themselves.

Steve Daum wrote in “Stability and linearity: Keys to an effective measurement system”:

“Knowing that a measurement system is stable is a comfort to the individuals involved in managing the measurement system. If the measuring process is changing over time, the ability to use the data gathered in making decisions is diminished. If there is no method used to assess stability, it will be difficult to determine the sensitivity of the measurement system to change and the frequency of the change…Stability is the key to predictability.”

One of the most stable and consistent measures of software (functional size) is the IFPUG Function Point, and the International Function Point Users Group (IFPUG) is poised to celebrate its 30th year in 2017.  The IFPUG Function Point measure is stable (hundreds of thousands of projects have been FP counted) and consistent (it’s been an ISO/IEC standard for almost 20 years!) – perhaps 2017 is the year that YOUR company should look at FP-based measurement.

FPA (Function Point Analysis) provides a measure of the size of software under development and can be used equally well on agile, waterfall, and hybrid software development projects.  Yet, despite its benefits, much of the world still doesn’t know about the measure.

See my first post of 2016 here:  Function Point Analysis (FPA) – Creating Stability in a Sea of Shifting Metrics for more details.  FP is certainly a good place to start when you’re looking for software measurement success… why not today?

Wishing you a happy and safe holiday season wherever you live!

 

10 Steps to Better Metrics… for Everyone


The more things change, the more they stay the same – especially when it comes to initiatives that involve cultural change. Measurement is a perfect example – and I’m not talking purely about “software metrics,” rather measurement in any industry.

When you take a business that has traditionally “flown by the seat of its pants” (in other words, it is a monopoly of sorts or it has made money in spite of itself) and start to keep track of what’s going on, people have issues.  The first step often is to simply measure anything that moves – data that are easy to capture – and then try to figure out some sort of conclusions or action plans.   In IT (Information Technology) the landscape is littered with discarded data from failed measurement initiatives.  Data in and of themselves are not bad, IF the data are used appropriately and in the right context.

I recently wrote the following article for Projects at Work based on concepts I first observed nearly 20 years ago, and they are as valid today as ever before.

As a consultant, I LOVE to work with companies who want to succeed with measurement. If you are tasked with starting metrics for your company, give me a call – maybe I can give you some ideas to save you time and money – and succeed with metrics!

Send me an email or leave a comment – measurement is too important to leave to chance.  (Let me know if you’d like a full copy of this article!)

Cheers,
Carol


QSM (Quantitative Software Management) 2014 Research Almanac Published this week!


Over the years I’ve been privileged to have articles included in compendiums with industry thought leaders whose work I’ve admired.  This week, I had another proud moment as my article was featured in the QSM Software Almanac: 2014 Research Edition along with a veritable who’s who of QSM.

This is the first almanac produced by QSM, Inc. since 2006, and it features 200+ pages of relevant research-based articles garnered from the SLIM(R) database of over 10,000 completed and validated software projects.

Download your FREE copy of the 2014 Almanac by clicking this link or the image below.


Combining Soft Skills and Hard Tools for Better Software


One of the more interesting topics in software development (at least from my perspective) is the culture of the industry.  Seldom does one find an industry burgeoning with linguistics majors, philosophers, artists, engineers (all types – classically trained to self-named), scientists, politicians, and sales people – all working on the same team in the same IT department.

This creates an incredible diversity and richness – and leads to sometimes astounding leaps and bounds in innovation and technological advancement, but it can also create challenges in basic workplace behavior.  This post takes a look at the often overlooked soft skills (empathy, leadership, respect, communication, and other non-technical skills) together with technical competencies as an “opportunity” (aka challenge or obstacle to overcome.)

It was published first on the Project Management Institute (PMI) Knowledge Shelf – recently open to the general non-PMI public.


Added bonus here:  In my post I referenced the 2013 University of Western Australia commencement address (on YouTube) by Australian comedian/actor Tim Minchin, in which he shares his 9 recommendations to graduates – my favorite, and the one I quoted, is #7: Define yourself by something you love!  I believe it’s worth the watch/listen if you need to take a break and just sit back and think about soft skills during your technical day.  (Warning to the meek of heart – it’s irreverent, offensive, and IMHO bang on in his core sentiments.  If you’re offended, I apologize in advance!)

If you’d like a pdf copy of the post above, please leave me a comment with your email address!  (And even if you don’t, I’d love your opinion!)

Have a great week!

Carol

Spice Conference



Please join us for the Software Process Improvement and Capability dEtermination (SPICE) conference!

May 29-31, 2012, Palma de Mallorca, Spain.

Everyone welcome!