Tag Archives: software development

Bias: It’s like Kryptonite to Collaboration


Collaborate: the concept is so ingrained in agile development that it’s one of the four components of Alistair Cockburn’s simplified Heart of Agile approach (along with Deliver, Reflect, and Improve).  Yet collaboration has an Achilles’ heel, a Kryptonite of sorts, that no one really talks about:  Bias.

Before we get into the roots of Bias and its effects, let’s explore what’s at the core of Collaboration.

Collaboration

To collaborate (according to Google Dictionary) is to

cooperate, join (up), join forces, team up, get together, come together, band together, work together, work jointly, participate, unite, combine, merge, link, ally, associate, amalgamate, integrate, form an alliance, pool resources, club together, fraternize, conspire, collude, consort, sympathize…

Collaboration relies on a combination of team skills: listening, communicating, connecting, cooperating, observing, and setting aside personal agendas and opinions.

Bias is like Kryptonite to Collaboration

Bias creeps in innocuously even while we’re practicing our good communication skills; it can coat our words and color our thought processes without us even being aware of it.  At best, bias skews our creativity; at worst, it kills trust and collaboration.

While we may not perceive bias in ourselves or others (“after all, we’re open and honest professionals doing our best”) – it’s there.

The good news is that bias is like the part of the iceberg below the surface that we didn’t know was there: once we’re aware it’s there, we can dismantle its effects.

Through conscious practice and exercises, our awareness of our own biases can increase, and we can override their effects.

Types of Bias

Bias comes in a variety of flavors and styles – and we all have it in one way or another. The first step to overcoming bias is to recognize and acknowledge it in ourselves.

What kind of bias(es) do you hold?  I’ve condensed the list of bias types (from data analytics and psychology sites) and added examples for agile projects:

  1. Confirmation Bias – We skew our observation of an event, concept, or person in a way that confirms a belief we already hold.  Think of the pattern of highlighting news that agrees with the agenda of the left or the right, and ignoring the other side.
    Example:  Changes are needed to a delivered report because legislation changed (fact), and we think “Typical of the users, always changing their minds” (confirming our belief).
  2. Attribution Bias – We attribute an action to someone based on their personality (flaws) rather than the situation.
    Example:  The project is delayed because testing isn’t finished.  We think “the tester doesn’t know what they are doing” (attributing it to a personality flaw) rather than looking at the situation, such as tester work overload.
  3. Commitment Bias – Continuing to invest in a derailed project because of the sunk costs already committed (time/effort/money).
    Example: We think: How can we bail on this project after we’ve already spent a year and $500K on it?
  4. Labeling Bias – We (un)consciously describe or label a person/group in a way that influences how we think about them.
    Example:  We think marketing (people) have no clue about technology (so why ask them for input)?
  5. Spin Bias – We accept only one interpretation (ours) of an event or policy to the exclusion of all others.
    Example:  We spin the news (especially when there is a gap):  China banned cryptocurrencies, so it must be because of… (nefarious financial dealings…)
  6. Spurious Correlation Bias – We assume causality (without checking) between two potentially unrelated events or circumstances.
    Example:  Gas prices always go up before a long weekend.
  7. Omission Bias – Leaving out important information or people (certain facts or details) on a project because we don’t see their relevance.
    Example:  We didn’t think of including IT folks in renovation project meetings (we overlooked the importance of moving the network…).
  8. Not like Me Bias – Dismissing information or ideas because of the source or group generating it.
    Example:  We think that new hires have no idea about how our company works (when their input may be fresh and show us new ways to work).

When we allow these unconscious biases to influence how we send or receive information, we stymie true collaboration.

What biases do you see in yourself and your workplace?  When I personally realize I have a particular bias, I make a conscious effort to pause before responding.  When I’m prone to say something that would block collaboration (words like “we can’t do that…” or even a seemingly innocent thought in my head like “what on earth?”), I try to stop and ask myself why I would think that.  Pausing to reflect and see the part that my unconscious bias plays really helps me to collaborate better.  Might it work for you?

Would you Play Along?

Over the coming weeks, please join me in a new blog series called “The 5% Social Experiment” where I’ll post some easy social experiments to identify and overcome workplace bias.

Can I count on you to play along?

Carol


What is this Thing – the Heart of Agile – all about?


If you’re an agilist (my word for those who embrace agile concepts to do great work), you’ll be interested to hear about the Heart of Agile directly from the mouth of its founder, Dr. Alistair Cockburn.

Join me TOMORROW (Thursday Oct 4, 2018) at 10:00am Eastern Daylight Time as I interview Dr Cockburn in St Petersburg, FL about the new Heart of Agile and where it fits into the landscape of Agile Software Development.

Here’s the link (recording will be posted later if you cannot access Facebook!)


https://www.facebook.com/events/364530300951242/

p.s. Here’s the link to the website under development:  https://heartofagile.com/

Join me!

Carol

QSM (Quantitative Software Management) 2014 Research Almanac Published this week!


Over the years I’ve been privileged to have articles included in compendiums with industry thought leaders whose work I’ve admired.  This week, I had another proud moment as my article was featured in the QSM Software Almanac: 2014 Research Edition along with a veritable who’s who of QSM.

This is the first almanac produced by QSM, Inc. since 2006, and it features 200+ pages of relevant, research-based articles garnered from the SLIM® database of over 10,000 completed and validated software projects.

Download your FREE copy of the 2014 Almanac by clicking this link or the image below.

[Image: QSM Software Almanac: 2014 Research Edition]

What’s the Point of Estimating?


Here’s the latest installment of my QSM Ask Carol blog:

[Image: What’s the Point of (Early) Estimating?]

“What’s the Point of (Early) Estimating?”  … click on the image or the title here to read the full blog post.

-Carol

If IT’s important – get a second (or third) opinion!


I’d like to share with you my latest post on the QSM (Quantitative Software Management) blog – let me know what you think!

-Carol

1st in the series

1st in my new series – here’s the URL:

http://www.qsm.com/blog/2013/ask-carol-if-its-important-get-second-opinion

What Can Goldilocks Teach about Software Estimating?


[Image: Goldilocks]

http://www.qsm.com/blog/2013/what-can-goldilocks-teach-about-software-estimating

Comments?

Fundamentals of Software Estimating: First See the Elephant in the Room – Part 1


This is the first in a sequence of four posts (and more to come) on BASIC software estimating concepts that address the question “WHAT IS IT?” that we plan to estimate.

Introduction

Whenever I teach project estimating or Project Management 101 (basics) to software developers, we address the fundamental questions about what we plan to do (once called “The Triple Constraint” in project management circles):

  • How big is it? (Scope management)
  • How much will it cost to build? (Cost estimating and budgets)
  • How long will it take to build? (Scheduling)
  • How good does it need to be? (Quality)

What amazes me is that while these are great questions that need to be answered when we do an estimate, we start out by ignoring the “Elephant in the Room” – that is, what we mean by the “it” in the questions above.  Sure, for some people it may be obvious that “it” is a project, but bear with me for a minute… misunderstandings about terminology and definitions are often the source of major rework when software construction is involved.

What is “it”?

  1. “It” could be the project (and all that the term entails);
  2. “It” could be the resultant product (software and/or hardware);
  3. “It” could be a phase or exploratory R&D effort; or
  4. “It” might be something altogether different...

The question of what “it” is that we are estimating is fundamental to why, IMHO (In My Humble Opinion), software projects end up over budget, late, and out of scope.  If you don’t know definitively WHAT you are estimating, how can any possible estimate be realistic?

This post is Part 1:  “It” is a project

The concept of a “project” is the basis of Project Management 101 (per the Project Management Institute’s formal PM Body of Knowledge: PMBOK®).

When I taught project management for a major government agency overseeing billions of dollars of software development, it amazed me to find out that many unofficial, self-appointed “project managers” accepted the notion that a project could be an amorphous concept with a jello-like scope, undefined start/end points (let alone acceptance criteria that would define when the project is “done”), and sketchy objectives, and that it could go on for years without delivering results.  For many, the end of the project was often a moving target because new functions were routinely added during construction or “substituted” for things that were no longer needed.  Such projects run out of control, with schedules that go on forever, a scope that remains ambiguous and ever growing, and budgets that are essentially buckets of money.

Some projects started with an idea (no one could remember why), and as long as a project “stayed in the green box” (meaning that it didn’t raise the ire of management or fall outside acceptable metric limits) it was a “good project.”  The prevailing idea of a project budget was “five people working full time on this stuff until we’re done,” and the schedule was “we thought we’d be done by now, but management keeps adding more stuff to the pile of things they want in the system.”

As an engineer, I often compare software development to building construction – with the difference being that software development often starts without “sealed engineering drawings” or the equivalent (formal plans) and either gets canceled along the way or goes on until someone says “stop.”

The Project Management Institute (PMI) defines a project in its PMBOK® 4th Edition as:

A temporary endeavor undertaken to create a unique product, service, or result.

So, if we are estimating a PROJECT, we must minimally be able to define:

  • What is the scope of the project?  Will it include a full lifecycle (i.e., from concept definition and requirements through to a working installation of software for the first user)?  What is the final result (a product, a concept, a service), and what are the functions to be included?
  • Who (the stakeholders), what (as above), when (how will we know it is done?), where, and why (what’s the objective?)  Note that HOW the project is to be done is outside the definition of the project.
  • What is included (and, even more important sometimes, what is NOT included), and what are the start and stop points?  What (explicitly) is outside the boundaries of the project (such as training, corporate rollout, multiple languages, etc.)?

Once you’ve determined that “it” is a project that you want to estimate, and have defined what constitutes said project as outlined above, make sure to write those definitions down.  Once you’ve got these minimal things documented, you are ready to at least begin to think about how to estimate the items in the original list I presented at the top of this post (how big, how long, how much, how good).
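To make that concrete, here is a minimal sketch (in Python, with made-up field names and a made-up example project, not any real client data) of the kind of “what is it?” record I’d want written down before anyone starts estimating:

    from dataclasses import dataclass

    @dataclass
    class ProjectDefinition:
        """A minimal 'what is it?' record to complete before estimating."""
        objective: str             # why: the result we are after
        scope_includes: list[str]  # what is in (functions, lifecycle phases)
        scope_excludes: list[str]  # what is explicitly out (training, rollout, ...)
        stakeholders: list[str]    # who
        start_point: str           # where/when the project begins
        done_criteria: str         # acceptance criteria: how we know it is done

        def ready_to_estimate(self) -> bool:
            # Only ask "how big/long/much/good" once the basics are written down.
            return all([self.objective, self.scope_includes,
                        self.stakeholders, self.done_criteria])

    definition = ProjectDefinition(
        objective="Replace the legacy claims-intake screens",
        scope_includes=["requirements through first-user installation",
                        "claims entry", "claims inquiry"],
        scope_excludes=["training", "corporate rollout", "multiple languages"],
        stakeholders=["claims department", "IT delivery team", "compliance"],
        start_point="approved concept definition",
        done_criteria="first user processes a live claim in production",
    )
    print("Ready to estimate?", definition.ready_to_estimate())

However you capture it (a document, a wiki page, or a record like the one above), the point is the same: the definition exists in writing before the numbers do.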

For more information and definitions about “projects,” refer to www.pmi.org (the official site of the Project Management Institute).

Stay tuned for part two: If “it” is a product.

Measurement and IT – Friends or Frenemies?


I confess, I am a software metrics ‘geek’… but I am not a zealot!  I agree that we desperately need measures to make sense of what we are doing in software development and to find pockets of excellence (and opportunities for improvement), but it has to be done properly!

Most (process) improvement models, whether they pertain to software and systems, manufacturing, children, or people, attest to the power of measurement, including the CMMI® (Capability Maturity Model Integration) and SPICE (Software Process Improvement and Capability dEtermination) models.

But we often approach what seems to be a simple concept – “Measure the work output and divide it by the inputs” – back asswards (pardon my French!).
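If you like to see the arithmetic, the “simple concept” itself really is simple; here’s a minimal sketch with illustrative numbers only (not from any real project):

    def productivity(output_units: float, input_effort_hours: float) -> float:
        """Productivity = work output divided by the inputs used to produce it."""
        if input_effort_hours <= 0:
            raise ValueError("effort must be positive")
        return output_units / input_effort_hours

    # Illustrative only: 400 function points delivered with 3,200 person-hours of effort
    print(f"{productivity(400, 3_200):.3f} FP per person-hour")  # 0.125

The hard part, as the rest of this post argues, is choosing the right output measure and collecting the inputs consistently before anyone divides anything.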

Anyone who has been involved with software metrics, or function points, or CMMI/SPICE gone bad can point to the residual damage of overzealous management (and the supporting consultants) leaving a path of destruction in their wake.  I think that Measurement and IT are sometimes the perfect illustration of the term “Virtual Frenemies” (I’ll lay claim to it!) when it comes to poorly designed software metrics programs.  (The concepts can be compatible – but you need proper planning and open-minded participants! Read on…)

Wikipedia (yes, I know it is not the best source!) defines “frenemy” (alternately spelled “frienemy”) as:

is a portmanteau of “friend” and “enemy” that can refer to either an enemy disguised as a friend or someone who’s both a friend and a rival.[1] The term is used to describe personal, geopolitical, and commercial relationships both among individuals and groups or institutions. The word has appeared in print as early as 1953.

Measurement as a concept can be good. Measure what you want to improve (and measure it objectively, consistently, and then ensure causality can be shown) and improve it.

IT as a concept can be good. Software runs our world and makes life easier. IT’s all good.

The problem comes in when someone (or some team) looks at these two “good” concepts and says, let’s put them together, makes the introduction, and then walks away.  “Be sure to show us good results and where we can do even better!” is the edict.

Left to its own devices, measurement can wreak havoc and run roughshod over IT – the wrong things are measured (“just measure it all with source lines of code or FP and see what comes out”), effort is spent measuring those wrong things (“just get the numbers together and we’ll figure out the rest later”), the data doesn’t correlate properly (“now how can we make sense of what we collected?”), and misinformation abounds (“just plot what we have, it’s gotta tell us something we can use”).

In the process, the people working diligently (most of the time!) in IT get slammed by data they didn’t participate in collecting, and which often illustrates their “performance” in a detrimental way.  Involvement in the metrics program design, on the part of the teams who will be measured, is often sparse (or an afterthought), yet the teams are expected to embrace measurement and commit to changing whatever practices the resultant metrics analysis says they need to improve.

This often happens when a single measure or metric is used across the board to measure disparate types of work (using function points to measure work that has nothing to do with software development is like using construction square feet to measure landscaping projects!).

Is it any wonder that the software and systems industries are loath to embrace and take part in the latest “enterprise-wide” measurement initiative? Fool me once, shame on you… fool me twice, shame on me.

What is the solution to this “Frenemies” situation between Measurement and IT?  Planning, communication, multiple metrics, and a solid approach (don’t bring in the metrics consultants yet!) are the way.

Just because something is not simple to measure does not make it not worth measuring – and measuring properly.

For example, I know of a major initiative where a customer wants to measure the productivity of SAP-related projects to gain an understanding of how the cost per FP tracks on their projects compared to other (dissimilar) software projects and across the industry.

Their suppliers cite that Function Points (a measure of software functionality) do not work well for configurations (this is true) or integration work (this is true), and that it can take a lot of effort to collect FP for large SAP implementations (this can be true).  However, that does not mean that productivity cannot be measured at all!  (If all you have is a hammer, everything might look like a nail.)

It will require planning and design effort to arrive at an appropriate measurement approach that equitably and consistently tracks productivity across these “unique” types of projects. While this is non-trivial, the insight and benefits to the business will far exceed the effort.  Resistance on the part of suppliers to being measured (especially in anticipation of an unfair assessment based on a single metric!) is justified, but a good measurement approach (one that fairly sorts the types of effort into different buckets using different measures) is definitely attainable (and desired by the business).
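As a sketch of what “different buckets using different measures” might look like (the bucket names and numbers below are hypothetical, not client data):

    # Sort each type of work into its own bucket, give each bucket its own
    # size measure, and track unit cost within that bucket, rather than
    # forcing one metric (e.g., FP) onto configuration or integration work.
    buckets = [
        {"name": "Custom development", "measure": "function point",    "size": 350, "cost": 420_000},
        {"name": "SAP configuration",  "measure": "configured object", "size": 120, "cost": 180_000},
        {"name": "Integration work",   "measure": "interface built",   "size": 24,  "cost": 96_000},
    ]
    for b in buckets:
        print(f'{b["name"]:20s}: ${b["cost"] / b["size"]:,.0f} per {b["measure"]}')

Comparisons then happen within a bucket (and against industry data for that same measure), which is exactly what makes the assessment fair to the suppliers.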

The results of knowing where and how the money is “invested” in these projects will lead to higher levels of understanding on both sides, and open up discussions about how to better deliver!  The business might even realize where they can improve to make such projects more productive!

Watch my next few posts for further details about how to set up a fair and balanced software measurement program.

What do you think?  Are measurement and IT doomed to be frenemies forever? What is your experience?

Have a good week!
Carol

Estimating Before Requirements with Function Points and Other Metrics… Webinar Replay


On June 7, 2012, I conducted an hour-long webinar on “Estimating Before Requirements with Function Points and Other Metrics” before a worldwide audience spanning a myriad of software development specialties and industries across many countries.

In the event that you missed the live webinar or would like to listen/view the replay, it is available (at no charge) at http://www.qsm.com/Webinars/Estimating-before-Requirements

At the end of the webinar, I offered to send attendees several papers (also downloadable from this link) as well as a Scope Management primer.  If you are interested, please send an email to me at: dekkers (at) qualityplustech (dot) com.

Let me know what you think of the concepts and the webinar!

Carol

The Death of Brainstorming? Say it isn’t so…


I love the articles in The New Yorker, and the following one caught my eye because of the brainstorming topic. In leadership courses, I’ve espoused the value of brainstorming when done right (without judgment and analysis) – and I’ve seen positive results.  Could it be that the creative process might actually work better when criticism is allowed to fly?

Say it isn’t so

When I teach brainstorming techniques, I always find it interesting that the creative (right-brain-dominant) thinkers in the group really love and contribute more during the “brainstorming” free flow of ideas (before analysis sets in), while the linear, engineering-style (left-brain-dominant) thinkers in the group can’t wait for the second phase, where the ideas are analyzed and critiqued.  Divergent thinking followed by convergence of ideas.  It made perfect sense to me, and the students demonstrated how safe each group felt, depending on which side of the brain dominated their idea flow.

So now it appears that the “Steve Jobs” style of working – criticism before acceptance, domineering, boss-like, judgment first – has merit.  Or does it?  Read the article, then read on and comment (please!).

Here’s the link to “Groupthink” if you cannot reach it above.

I’m conflicted…

about this latest “research.”  Given my international experience presenting in over 30 countries to technical audiences, I have to say that Information Technology and software development are as much about people and psychology (trust and communication) as they are about technology and engineering problems.

I’ve seen success with collaborative approaches like Kanban, agile, and Rational (Use Cases) – which I believe succeed because we bring the disparate viewpoints of customers and suppliers together and address various learning styles (visual, auditory, and kinesthetic) to gain the highest levels of understanding.  Brainstorming is one such technique, whereby the most dominant participant (i.e., typically the one most critical of all ideas except his/her own) no longer gets to direct the problem solving.

What’s BEEN your experience?

I look forward to your comments – do you agree with these findings? – and to further research… and to hopefully announcing that Brainstorming is NOT dead; in fact, it just needed a wake-up call to re-energize its benefits for a new iPad generation!