Tag Archives: Business


As part of ongoing writings on software sizing and function point analysis, I recently posted the following article on the QSM blog.  Enjoy!



Trust and Verify are the (IT) Elephants in the room


As a party involved in some aspect of software development, why do you think projects are so hard?  Millions of dollars have gone into research to answer this question, with the result being new models, agile approaches, and standards, all intended to streamline software development.

What do you think is the core reason for project success or failure?  Is it the people, process, requirements, budgets, schedule, personalities, the creative process or some combination?

Sure, IT (information technology) is a relatively new industry, plagued by technology that advances at the speed of light, customers and users who don't know what they want, budgets that are preset, schedules that are imposed, scope that is elusive, and, ultimately, computer scientists and customers who still speak their own languages.  Some people argue that it all boils down to communication (especially poor communication).  After all, isn't communication the root cause of all wars, disputes, divorces, broken negotiations, and failed software projects?

I disagree.

I believe that TRUST and VERIFY are THE TWO most important factors in software development.

These two elements are the IT elephants in the room (so to speak!).  I could be wrong, but it seems like the commonly cited factors (including communication) are simply symptoms of these elephants in the room – and no one is willing to talk about them.  Instead, we bring in new methodologies, new tools intended to bring customers and suppliers together, new approaches, and new standards – and all of these skirt the major issues: TRUST and VERIFY.

Why are these so critical?

Trust is the difference between negotiation and partnership – trust implies confidence, a willingness to believe in another, the assurance that your position and interests are protected, and the expectation that when life changes, the other party will understand and work with you.  A partnership means that there is an agreement to trust in a second party and to give trust in return.  Trust is essential in software development.

BUT… many a contract and agreement has gone wrong with blind trust, and that is why VERIFY is as important as trust.  Verify means to use due diligence to make sure that the trust is grounded in fact, using knowledge, history, and past performance as the basis.  Verify grounds trust, yet allows it to grow.

President Ronald Reagan popularized the phrase "Trust, but Verify" – but I believe it is better stated as "Trust and Verify" because the two reinforce each other.  It also brings to mind the saying:  "Fool me Once, Shame on You… Fool me Twice, Shame on Me."

Proof that Trust and Verify are the Elephants in the Room

Software development has a history of dysfunctional behavior built on ignoring that Trust and Verify are key issues.  It is easier for both the business (customers) and the engineers (suppliers) to pretend that they trust each other than to address the issues once and for all.  To admit to a lack of trust is tantamount to declaring war and accusing your "partners" of espionage.  It simply is not done in the polite company of corporate boardrooms.  And so we do the following:

  • Fixed price budgets are set before requirements are even known because the business wants to lower their risk (and mistrust);
  • Software development companies “pad” their estimates with generous margins to decrease their risk that the business doesn’t know what they want (classic mistrust);
  • Deadlines are imposed by the business based on gut-feel or contrived “drop dead” dates to keep the suppliers on track;
  • Project scope is mistakenly expressed in terms of dollars or effort (lagging metrics) instead of objective sizing (leading metrics);
  • Statements like “IT would be so much easier if we didn’t have to deal with users” are common;
  • Games like doubling the project estimate because the business will chop it in half become standard;
  • Unrealistic project budgets and schedules are agreed to in order to keep the business;
  • Neither side is happy about all the project meetings (lies, more promises, and disappointment).

Is IT doomed?

Trust is a critical component of any successful relationship involving humans (one might argue that it is also critical when pets are involved) – but so too is being confident in that trust (verify).  Several promising approaches address trust issues head on, and provide project metrics along the way to ensure that the trust remains.

One such approach is Kanban (the subject of this week’s Lean Software and Systems Development conference LSSC12 in Boston, MA).

Kanban for software and systems development was formalized by David Anderson and has been distilled into a collaborative set of practices that allow the business and software developers to be transparent about software development work – every step of the way.  Project work is prioritized and pulled in to be worked on only as the volume and pace (velocity) of the pipeline can accommodate.  Rather than the business demanding that more work be done faster, cheaper, and better than is humanly possible (classic mistrust that the suppliers are not working efficiently), in Kanban the business works collaboratively with the developers to manage (and gauge) what is possible to do, and the pipeline delivers more than anticipated.  Trust and verify in action.
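
To make the "pull" idea concrete, here is a minimal sketch in Python of a board that only accepts new work when a column has spare capacity.  The column names and WIP limits are ones I invented for illustration – they are not taken from David Anderson's book or any particular Kanban tool:

```python
# Minimal, illustrative sketch of a Kanban-style pull system.
# Column names and WIP limits are hypothetical.
from collections import deque

class KanbanBoard:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                      # e.g. {"analysis": 2, ...}
        self.columns = {name: [] for name in wip_limits}  # work currently in progress
        self.backlog = deque()                            # prioritized, not-yet-started work

    def add_to_backlog(self, item):
        self.backlog.append(item)

    def pull(self, column):
        """Pull the next backlog item into a column only if it has spare capacity."""
        if self.backlog and len(self.columns[column]) < self.wip_limits[column]:
            self.columns[column].append(self.backlog.popleft())
            return True
        return False  # at capacity: the work waits, visibly, instead of overloading the team

board = KanbanBoard({"analysis": 2, "development": 3, "test": 2})
for feature in ["login", "reports", "billing", "search"]:
    board.add_to_backlog(feature)

print(board.pull("analysis"))  # True  -- capacity available, "login" is pulled
print(board.pull("analysis"))  # True  -- "reports" is pulled
print(board.pull("analysis"))  # False -- WIP limit of 2 reached; no one can demand a third item
```

The point of the sketch is the last line: the business can see exactly why more work cannot be crammed in, which is the transparency that builds trust.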

Another promising approach is Scope Management (supported by a body of knowledge and a European-based certification) – a collaborative approach whereby software development work is priced by the unit.  Rather than entertaining firm, fixed-price, lose-lose (!!!) contracts – where the business wants minimum price/maximum value and the supplier needs to curtail changes to deliver within the fixed price (and not lose their shirts) – unit pricing splits a project into known components that are priced much the way home construction is priced by the square foot and landscaping by the number of trees.

In Scope Management (see www.qualityplustech.com and www.fisma.fi for more details or send me an email and I'll send you articles), the business retains the right to make changes and keep the reins on the budget and project progress, and the supplier gets paid for the work that the business directs to be done.  Project metrics and progress metrics are a key component of the delivery process.  Again, TRUST and VERIFY are key components of this approach.
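
As a rough illustration of how unit pricing differs from a firm fixed price, here is a small sketch with made-up numbers (the unit rate and function point counts are hypothetical, not from any real contract):

```python
# Illustrative only: hypothetical unit rate and function point counts.
UNIT_PRICE_PER_FP = 800  # dollars per function point, agreed up front

def project_price(function_points):
    return function_points * UNIT_PRICE_PER_FP

original_scope_fp = 250   # initial sizing of the known components
added_scope_fp = 40       # the business asks for changes mid-project
removed_scope_fp = 15     # ...and drops a low-value feature

final_fp = original_scope_fp + added_scope_fp - removed_scope_fp  # 275 FP
print(project_price(original_scope_fp))  # 200000 -- the opening price
print(project_price(final_fp))           # 220000 -- supplier is paid for the work actually directed
```

Because the price per unit is fixed but the number of units is not, the business keeps control of scope and budget while the supplier is not punished for accommodating change.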

What do you think? 

Please comment and share your opinion – are TRUST and VERIFY the IT elephants in the rooms at your company?

P.s., Don’t forget to sign up for the SPICE Users Group 2012 conference in 2 weeks in Palma de Mallorca, Spain. See www.spiceconference.com for details!  I’m doing a 1/2 day SCOPE MANAGEMENT tutorial on Tuesday May 29, 2012.

What's behind Project Success: Process or People?


Depending on who or what you read, most software and systems projects (over 50%) end up as unsuccessful or as outright failures: over budget, late, and/or failing to meet user needs.  As a worldwide phenomenon, studies continue to expound on why projects fail (poor requirements, underfunding, overoptimistic estimates, unreasonable schedules, lack of management commitment, etc.), but few studies focus on what it takes for projects to succeed.

What do you think makes a project (of any kind) successful?  What is more important to project success:

1. The processes involved (e.g., formal project management, standards, shortened development life cycles, agility…); or

2. The people involved (e.g., the right team makeup, a good mix of skills, a motivated workforce, engaged users); or

3. Trust (e.g., collaboration rather than negotiation between customers and suppliers, reliance, cooperative teamwork, communication); or

4. Something else (e.g., other factors such as CMMI, tool sets, unlimited budgets, Steve Jobs on the team, …); or

5. Some “magical” combination of the above; or

6. None of these?

Across industries and across the world, is there a difference in what makes a project successful?  Are there certain factors that predispose a project to success (or failure)?

What do YOU think?  Inquiring minds are interested in hearing from you… (please post a comment or send me a private email to dekkers (at) qualityplustech (dot) com).

Thank you!
Carol

Childproof your Metrics Program…


As a new parent years ago, I remember childproofing household dangers like electrical outlets, and raising adult objects above my children’s reach.  Over the years, new hazards appeared and it took planning to stay a step ahead to prevent injury or damage to innocent children.

I remembered this when I thought about what could be done to avoid software metrics failures – perhaps a form of “childproofing” could avoid a few of the dangers involved.

Software measurement is NOT rocket science (despite the claims of a few eager consultants), but neither is it child’s play.  Measurement must be properly planned or it can actually cause more damage than help!

You likely recall Tom DeMarco's famous "you can't control what you can't measure" statement, but may not be aware of his later observation that poorly planned metrics can damage an organization.

How to “Childproof” your Metrics Program:

1. Plan a clean metrics program by following the Goal-Question-Metric (GQM) approach, so that metrics are collected only when they serve a specific purpose (see the sketch after this list).

2. Make sure that the measurement team understands the plan.  This will make sure that specific metrics are collected appropriately and that extraneous data are not lying around (or misinterpreted).

3. Pilot the measurement and metrics in a controlled environment before rolling it out. Train the right people to collect and analyze the metrics to make sure the intended results can be achieved.  It is far easier to see dysfunctional behavior (often unintended consequences of measurement) in a controlled environment and minimize potential damage.

4. Communicate, communicate, communicate. Be honest and specific about the project plan: resources, schedule, intended results.  Prepare management not to “shoot the messenger” when early results do not equal their expectations.

5. Limit data access to those who are skilled in data analysis (do not allow management access to raw data).  Proper data analysis and correlation is a critical success factor for any metrics program.

6. Be realistic with management about their expectations.  A program designed to meet champagne tastes (for measurement results) on a beer budget seldom succeeds.  Moreover, sometimes historical data can be collected if it is available; other times data are impossible to collect after the fact.

7. Recognize that wishful thinking about metrics will not disappear overnight.  Management and staff may not understand that measurement should be implemented as a project:  there will be a need for training (in metrics concepts), planning, design (of measurement processes), implementation (initial data collection), training (for the program), communication, etc.  As a result, people may give lip service to the overall initiative without understanding that project management is necessary.  Communication and level setting will be an ongoing process.

8. Do not allow data to get into the hands of untrained users.  Often sponsors want to “play with the data” once some measurement data is collected. Avoid the temptation to release the data to anyone who simply asks for access.

9. Do a dry run with any metrics reports before distribution.  After data analysis is done, gather a small focus group together to pilot the results.  It is far easier to address misinterpretations of a data chart with a small group than it is after a large group sees it. For example, if a productivity bar graph of projects leads viewers to the wrong conclusion, it is easy to correct the charts before damage is done.  It only takes one wrong action based on data misinterpretation (i.e., firing a team from an unproductive project when it was actually a problem of tools) to derail a metrics program.
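
For readers who want to see what a GQM plan (step 1 above) might look like on paper, here is a minimal sketch; the goal, questions, and metrics shown are hypothetical examples of the structure, not a prescribed set:

```python
# Hypothetical GQM (Goal-Question-Metric) plan -- illustrative only.
gqm_plan = {
    "goal": "Improve delivered quality of releases from the web team",
    "questions": [
        {
            "question": "How many defects escape to production per release?",
            "metrics": [
                "defects found in production within 30 days of release",
                "release size in function points",
            ],
        },
        {
            "question": "Is quality improving release over release?",
            "metrics": ["defect density (defects per 100 function points), trended"],
        },
    ],
}

# Every metric traces to a question, and every question to the goal --
# data that answers no question is simply not collected.
for q in gqm_plan["questions"]:
    print(q["question"])
    for metric in q["metrics"]:
        print("  collect:", metric)
```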

To be effective and successful, software measurement requires planning, including consideration of the consequences (such as dysfunctional behaviors that measurement may cause).  Childproofing using the above steps will help to ensure your measurement program achieves success.

Have a great week!

Carol

http://www.qualityplustech.com

Common-sense Leadership: Respond not react…


A big benefit to teaching leadership and communication workshops to adult professionals is continuous learning: every time I teach a class, new revelations come into focus.

One such "aha" moment (where one realizes something that may not have been obvious before) is that Leadership is really about learning to Respond to a situation or stimulus instead of automatically Reacting.  Why is this important?  Responding is the thought-intensive process of actively listening, pausing, and then gathering one's thoughts before speaking.  Gathering one's thoughts engages the neocortex (the outer, "thinking" layer of the brain), whereby we override the reptilian (instinctual) brain and the limbic (emotion-driven) brain, and hopefully craft a response less prone to immediate, automatic reactions (based on instinct or emotion).

Considering how eastern cultures (such as Japan) habitually pause before asking questions at a conference or before coming to an agreement gave me "pause" to reflect on how this practice conveys power and respect – and it is one often used by practiced politicians at press conferences.  The result is less "eating of one's premature words" and less damage control than when one speaks too hastily or without due thought.

This is a common-sense tip on how to practice better leadership in your own workplace no matter your position:  remember and practice active listening (if you are thinking of what you are going to say – you are not listening!), pausing, gathering your thoughts (and perhaps even saying “please give me 15 seconds to gather my thoughts”) and then thoughtfully responding.

Food for thought – what do you think?  Could this be helpful in your workplace?

Carol

The Most Critical Skills in the 21st Century – are they Hard or Soft?


The past few months I’ve found myself instructing a series of Leadership and Communication workshops for adult professionals across the United States, together with delivering a series of Keynote Conference addresses in Europe on similar topics.

At one particular address in Dublin, Ireland, I emphasized that the beauty of modern software development approaches (such as Kanban) is that the development team can lay bare their work pipeline and ultimately collaborate (through effective leadership and communication skills) with the business. After a series of illustrative exercises (yes, at a keynote address!), attendees by and large embraced the principles of collaboration along with the concept that we need to refrain from treating each other as “machines” at work (formulated along the lines of Margaret Wheatley‘s ideas.) By treating each other as human beings from the kickoff meeting (at least), projects can achieve resounding levels of success.

One particular conference on Quality Assurance and Testing featured not only my keynote (A Soft Skills Toolkit for Testers!) on Leadership and Communication,  but at least three others of similar slant: presentations that emphasized teamwork, respect, and collaboration.  I believe that these are essential components to the success of any project!

One key point I bring home in all of my training and keynotes is that, as engineers and computer scientists, we tend to dismiss soft skills such as leadership and communication (I have an entire 16-piece toolkit for them) as "fluff" in favor of what we often see as superior technical "hard skills".  As an engineer myself, I see the pitfalls of a technically competent workforce that cannot talk outside of its own niche – and many others agreed.

But the point was fully illustrated the evening after one keynote.  A group of us had gathered at a local pub to sample the local beverages when the wife of a conference chair (a science-based PhD herself) approached me to comment on what she had heard about my morning keynote:

“Carol, I heard that you gave an entertaining keynote presentation today, “

she started,

"…but it was entirely without substance."

What she was in fact saying was that my keynote, in her and her husband's opinion, had some redeeming entertainment value, but the lack of research-data-based charts and advanced equations rendered it "entirely without substance."

I did suppress my inclination to applaud and say “thank you for illustrating my point so eloquently” when she said this because I realized it might be a futile discussion.  Instead, I simply smiled, thanked her for her comment, and turned back to the business and beverage at hand.

Now that I am contemplating a series of workshops for future conferences (technical software engineering and quality conferences) to continue the discussions on Leadership and Communication, it occurs to me that calling these skills "soft" may actually diminish their importance – even though Leadership, Communication, and Collaboration are some of the most important and hardest skills to teach our industry leaders in the 21st century.

What do YOU think?  Are Leadership skills (such as managing relationships, emotional intelligence, cultural intelligence, diversity, and working with teams and people) considered more as Soft Skills, as Hard Skills (akin to programming in .NET or Java), or a mix of both?

As a technical professional – how important do you think Leadership and Communication skills are to the success of your projects?

I will be awaiting your comments!

Happy holidays!

Carol

p.s., Send me an email if you’d like to see more about the Soft Skills Toolkit for Testers presentation I did in October. I would love feedback and recommendations.

Apples and Oranges work in Fruit Salad, not S/W Measurement!


A colleague once observed at a professional conference that “Common sense is not very common” – and when it comes to the typical approach to software measurement, I have to agree.

Case in point – there are proven approaches to software measurement (such as the Goal/Question/Metric approach promoted by the Software Engineering Institute, and Practical Software & Systems Measurement out of the Department of Defense) – yet corporations often approach metrics haphazardly, as if they were making a fruit salad.  While a variety of ingredients works well in the kitchen, data that seem similar (but really are not) can wreak havoc in corporations.  Common sense should tell us that success with software metrics depends on having comparable data.

If only data were like fruit – it would be easy to pinpoint the mangoes, apples, oranges, and bananas in company databases and save millions of corporate dollars.

Most Metrics Programs don’t Intend to Lie with Statistics, but many do…

I do not believe that executives and PMOs (project management offices) have malicious intent when they start IT measurement and benchmarking initiatives.  (Sure, there are those who use measurement to advance their own agenda, but that is the topic of a future post.)

Instead, I believe that many people trivialize the business of measurement, thinking that it is easy to do once someone directs people to do it.

The truth is that software measurement takes planning and consideration to get it right.  While Tom DeMarco‘s quote

“You can’t control what you cannot measure”

is often used to justify measurement start-ups, his later observations countered it.

In the 1995 essay, Mad about Measurement, DeMarco states:

“Metrics cost a ton of money.  It costs a lot to collect them badly and a lot more to collect them well…Sure, measurement costs money, but it does have the potential to help us work more effectively.  At its best, the use of software metrics can inform and guide developers, and help organizations to improve.  At its worst, it can do actual harm.  And there is an entire range between the two extremes, varying all the way from function to dysfunction.”

It is easy to Get Started in the Wrong Direction with Metrics…

Years ago, I was working with a team to start a function point-based measurement program (function points are like "square feet" for software) at a large Canadian utility company, when an executive approached me.  "We don't need function points in my group," he remarked, "because we have our quality system under control just by tracking defects."  As he described what his team was doing, I realized that he was swimming upstream in the wrong direction, without a clue that he was doing so.

The executive and his group were tracking defects per project (not a bad thing) and then interviewing the top and bottom performing teams about the defect levels.  Once the teams realized that those who reported high defect levels were scrutinized, the team leads discovered two "workarounds" that would keep them out of the spotlight (without having to change anything they did):

1. Team leads discovered that there was no consistency in what constituted a “defect” across teams (an apples to oranges comparison).  Several “redefined” the term defect to match what they thought others were reporting so that their team’s defect numbers would go down. Without a common definition of a defect, every team reported defects differently.

2. Team leads realized that the easiest way to reduce the number of defects was to subdivide the project into mini-releases.  Smaller projects naturally resulted in a lower number of raw defects. With project size being a contributing factor (larger projects = larger number of defects) it was easy to reduce defect numbers by reducing project size.

In the months that ensued, the executive observed that the overall number of defects reported per month went down, and he declared the program a grand success.  While measurement did cause behavioral changes, such changes were superficial and simply altered the reported numbers.  If the program had been properly planned with goals, questions, and consistent metrics, it would have had a chance of success using defect density (defects per unit of size, such as function points).  Improvements to the processes in place between teams could have made a positive impact on the work!
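
A tiny worked example (with made-up project names and numbers) shows why defect density tells a different story than raw defect counts:

```python
# Made-up numbers, purely to contrast raw defect counts with defect density.
projects = [
    {"name": "Big release",  "defects": 60, "size_fp": 600},
    {"name": "Mini release", "defects": 15, "size_fp": 100},
]

for p in projects:
    density = p["defects"] / p["size_fp"] * 100  # defects per 100 function points
    print(f'{p["name"]}: {p["defects"]} raw defects, {density:.0f} per 100 FP')

# Big release:  60 raw defects, 10 per 100 FP
# Mini release: 15 raw defects, 15 per 100 FP
# The "mini" release looks better on raw counts but is actually worse per unit of size --
# exactly the loophole the team leads exploited by splitting projects into mini-releases.
```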

Given solid comparable metrics information, the executive could have done true root cause analysis and established corrective actions together with his team.

Instead, the program evaporated with the executive declaring success and the workers shaking their heads at the waste of time.

This was a prime case of “metrics” driving (dysfunctional) behavior, and dollars spent poorly.

Keep in mind that Apples and Oranges belong together in Fruit Salad

not software measurement programs.

Call me or comment if you’d like further information about doing metrics RIGHT, or to have me stop by your company to talk to your executives BEFORE you start down the wrong measurement roadway!

Have a (truly) productive week!

Carol


Whose job is IT anyways?


The title is a purposeful play on the acronym "IT" (information technology), because there is often no one person who takes responsibility for failed IT projects – and there is certainly no shortage of failed projects to go around.

While it is one of my least favorite (but most often quoted) research studies, the Chaos Report finds that only about one-third of IT projects are successful.  What gets in the way of project success?  Lots of circumstances and lots of people!

When a software intensive project fails, there is no lack of finger-pointing and blame sharing – yet seldom do teams stand up and confess that the failure (over budget and behind schedule and failing to meet user needs) was due to a combination of over and under factors, along with fears:

  • overzealous and premature announcements (giving a date and budget before the project is defined);
  • over optimistic estimates of how quickly the software could be built;
  • under estimation of the subject complexity;
  • assumptions that the requirements were as solid as the customer professed;
  • under estimation of the overall scope;
  • under estimation of how much testing will be needed;
  • under estimation of how much time it takes to do a good job;
  • under estimation of the learning curve;
  • under estimation of the complexity of the solution;
  • under estimation of the impact of interruptions and delays;
  • over anticipation of user participation;
  • over optimism about the availability of needed resources;
  • over optimism about hardware and software working together;
  • over optimism about how many things one can do at once;
  • risk ignorance (“let’s not talk about risk on this project, it could kill it”);
  • over optimism about teamwork (doubling the team size doesn't halve the duration);
  • fear of speaking up;
  • fear of canceling a project (even if it is the right thing to do);
  • fear of pointing out errors;
  • fear of being seen as making mistakes;
  • fear of not being a “team player”;
  • fear of not knowing (what you think you should);
  • fear of not delivering fast enough;
  • fear of being labeled unproductive;
  • fear of being caught for being over or under optimistic.

Therefore, I ask you: on a software-intensive IT project, whose job is it to point out when there are requirements errors, when something is taking longer than it should, or when it is costing more than anticipated?  In traditional waterfall development, because so much work goes into planning and documenting requirements, pointing out errors is either no one's job or the team's (collective) job – which really amounts to no one's job.

Often it is easier (and results in less conflict) not to say anything when the scope or schedule or budget goes awry on a software project.  Yet it is this very behavior that results in so much rework and so many failed projects.

Agile and Kanban projects are different

One of the advantages of using Kanban, Lean, and Agile approaches to software and systems development is that these methods address the very items outlined above.  Building better software iteratively becomes every developer's job rather than no one's:

  • Fear of pointing out errors is removed because the time that goes into a scrum is days and weeks not months (so participants don’t get defensive about change);
  • Over and under optimism remains but is concentrated on smaller and less costly components of work (i.e. days instead of months or years);
  • Risk is not avoided or ignored because we are no longer dealing with elongated and protracted development cycles (spanning seasons);
  • Assumptions come out with better and more frequent discussions;
  • Over optimism about how many things one can do at once is removed because Kanban limits the amount of work-in-progress;
  • Under estimation of the impact of interruptions and delays is minimized because such factors are addressed in Kanban;
  • Over anticipation of user participation is managed through direct user involvement.

What do you think?  Join us at the Lean Software and Systems Consortium conference LSSC11 from May 3-6, 2011 as participants and speakers address the best ways of advancing software and systems methods including Lean, Kanban, Agile and other exciting new ways to deliver high quality software more efficiently and effectively.

These newer approaches make it easier for everyone in IT to make it their job to build better software.

Wishing you a productive week!

Carol
@caroldekkers (twitter)


The importance of Being There (at work)!


Did you know?

Only 26 percent of IT employees in North America are fully engaged at work, while 22 percent are actually disengaged, according to a global study by consulting firm BlessingWhite.

Being there…

At a time when unemployment is at an all-time high, only about one-quarter of IT workers are fully engaged or "Wowed" by their work, while the remaining three-quarters just go through the motions or don't care at all.  When you consider specific industries fraught with the frustrations of rework (exceeding 40% in some areas) and impossible deadlines, such as in waterfall development, I would bet the excitement factor of going to work is even lower.

#Kanban, #Lean, and #Agile communities are exceptions

The Agile Manifesto celebrated its 10th anniversary last month, and Kanban, Six Sigma, Lean, and Agile methods now share space with waterfall as leading methods in the software and systems development space.  Agile (in my humble opinion) was one of the first to restore a sense of sanity to software development.  In earlier times, business customers with rapid-fire changing requirements would press software developers (tired of the constant change and "jello"-like demands) for amorphous software products.  The result, too often, was failure.

It makes sense, in this type of environment, to do iterative development.  It was illogical to do the opposite: long development cycles to produce products already obsolete before they hit desktop computers.

Approaches like Kanban, Lean, Agile, Personal Kanban and others continue to transform our industry and inspire software developers to become “fully engaged” in the work.

Less head banging… but you have to engage

Certainly there is less head banging and more job satisfaction in this new world (if "tweet volume" is any indication, the Kanban/Lean/Agile communities are a happier lot!), but it takes commitment to show up and be part of the action.

I believe that the Kanban and Lean and Agile communities know the importance of really being present and engaging at work.  We also know it is critical to create a community of like-minded people who meet in-person – at conferences, local meetings, at social events.

LSSC11 is coming soon!

The landmark Lean Software and Systems conference is only 10 weeks away, in Long Beach, CA on May 3-6, 2011.  Make LSSC11 your conference of choice for 2011 (especially if you can only attend one!).  See my related post Top 10 Reasons to attend LSSC11.

Join the movement of people who know the Importance of Being There in software and systems development: The Lean and Kanban and Agile communities.  I hope I will meet you at LSSC11!

Have a Wow! and engaging week at work,

Carol


Pre-flight email checklist: THINK before you click…


I AM OVERWHELMED BY EMAIL!

There I said it, I am overwhelmed with email and I can’t stand it!

I thought I was the only one until I read Tim Tyrell-Smith’s post today: How to reduce the Quantity of Incoming Email and realized that there should be a pre-flight email checklist to save our sanity… and to encourage Thinking before Clicking!

Since joining the world of social media, I realize my "connectivity" has grown exponentially – but not all in a good way.  Even with my SPAM filters set to high, I get so much email that it is overwhelming!

I feel like I must have ADD (attention deficit disorder) because my day is interruption after interruption (sorry TweetDeck!) and I need help (and I know I am not the only one!)

Pre-flight email checklist (THINK before you click):

  1. If it takes longer to write an email (to one person) than it does to walk across the hall / call the person, don’t write an email. Pick up the phone or get up from your desk.
  2. If multiple people are involved and you need responses, consider whether a one-hour meeting would work better than filling up inboxes with back-and-forth threads for the next 2 weeks.  If so, schedule a tight meeting and solve the issue in one fell swoop.  (Just because it doesn't take paper doesn't mean email is green — it can litter cyberspace!)
  3. If 1 and 2 are not possible, consider other options: Twitter or a blog post or an update at a staff meeting might be better than email.
  4. You've thought through 1, 2, and 3 and decided your message needs an email.  Never negligently click "Reply all!" unless you've gone through these same steps.  Make sure you set aside dedicated time (10 minutes minimum) to THINK before you click:
  • Consider your recipient: Walk for a moment in their shoes and think: what would be your response to this email? Make sure to emphasize the key points (i.e., make the reason for the email crystal clear). Do not “assume” that everyone shares your knowledge so give necessary background.  In the words of Peter Drucker:  It is important to state the obvious otherwise it may be overlooked.
  • If you expect/need a response, be clear about it. Tell recipients what you need from them (each), by when, and how (call, email, comment, decide…).
  • If it is an information only email, say so. No one has time to read your mind.
  • Consider using the subject line as a filing cabinet: Use tags to identify topics and intent. E.g., ABC Department meeting notice, Feb 17, prep material attached; or Dekkers: Blog Marketing draft – comments needed by Feb 20, 2011.  In this way, recipients can quickly find YOUR email from a pile in their in basket.
  • Consolidate information! If the email is about a meeting: include dial-in information (top and center for easy access!), meeting date and time, and  attach all preparatory material all together in a single email. There is nothing worse than having to pull up 3 emails to get ready for a single meeting!
  • Preview before sending: Spellcheck, attach files, check all recipients are included.
  • If there is emotion involved, save the draft email and wait a full day (or at least an hour) before doing the double-check-and-send step below.
  • If it’s a regular email (non-emotional), take a one minute break – stand up, look out the window, anything to clear your head. Then go back and re-read your email, double-check attachments, recipients, bcc’s etc.
  • When you are sure it looks right consciously hit “send”. NEVER hit send when you are multi-tasking (i.e., on the phone). Once an email has been sent it is in cyberspace FOREVER (regardless of rescinds!)

I plan to follow this checklist starting today! What do YOU think? Do you have any additions?

p.s., DON’T forget to sign up for my Feb 17, 2011 (11am – 12:30 pm EST)  FREE Webinar:  Navigating the Minefield – Estimating before Requirements.

Register here: http://tinyurl.com/6flgjwr

To your increased productivity!
Carol
