Wednesday, 28 April 2010

Assumption: Mother of all Flip-Ups!

 #qa #softwaretesting #testing

 One of my current interests is to do with communication between testers and non-testers.

I recently heard some gems from non-testers, but there are always two sides to the story...

Assumption #1: "It's a known problem!" 
Admittedly, I think I've probably said this in the past. However, my "rose-tinted memory" remembers me putting this into some context - it was in this situation, or it's known by X in your team.

But that's the problem with assumptions - they're such an easy trap, an easy place to fall over. After a while an assumption almost becomes a known starting point - "oh, we've seen that so many times by now everybody must know about that."

But this is a great illusion. Just because one or two people have seen this (maybe they're in some fault report triage group) doesn't mean that everyone else has seen it.

When I dug into this particular "known problem", it was hidden in an email and referenced in an old report - neither was discoverable. So I make the distinction between a problem that is "known to some group of people" and a problem that "is discoverable by everyone".

If I now use the term "known problem" then I'm careful to attach which group of people I think it's known to. If I don't attach such a group (or context) then I assume it's discoverable...

Assumption #2: Troubleshooting - Best when the designer and tester are working side-by-side
The implication here was that they should be located near each other - that troubleshooting wouldn't work without talking face-to-face.

I think this is a bias problem - either someone has experienced just one way in which this can work, or they've seen problems when not communicating face-to-face and assumed that was the reason.

It takes maybe one or two additional routines when working remotely from a partner, but there's no reason why it can't happen.

I remember one time being located on a customer site 9 time zones from the designer who was helping me troubleshoot. We sorted out our routines and structure - we optimized the short time we were able to talk each day and the pieces fell into place. So I have no reason to think that it can't work - sure, it's more difficult, but certainly not impossible.

Assumption #3: Troubleshooting - Best when the designer can drive the activity (ie do the test) themselves
This assumption "seems" to take the view that the tester is a robot, that the feedback is exactly what the designer requests.

Maybe some testers work like this, but the majority that are worth their salt or want to learn are constantly adding to the equation - making the sum of the two (designer + tester) greater than the individual parts!

This assumption may be another bias problem - maybe a bad example that stuck in someone's head.

Overcoming this type of bias is a bit trickier - especially when acting as a third party. It's usually up to the tester to build their brand, but I'm still working on how to approach this problem as a third party!

If you've any assumptions that you've heard recently then drop a comment...

Remember: Assumption is the mother of all fudge-ups! (or something like that!)

What's on my current reading list?

I take inspiration and ideas from a wide range of literature, from other testers, and sometimes from something out of leftfield.

I thought I'd jot down my current reading list (I last did this for my summer09 reading, here) for a couple of reasons:
  • It's good to walk-through what you're currently doing (reading) and why - sometimes you might be reading something obscure, but for a particular reason.
  • If you're like me, I get ideas from what other people are reading - so this is part "here's some tips" for other people, but also a hope that readers will send in tips to complement my reading - so that's your challenge at the end!

The Scientific Corner
After attending the RST course in March I had the urge to rediscover the scientific method. So as a step in that direction I started reading the following.
  • Philosophy of Science: A Very Short Introduction. Well, I'm into science and philosophy, so why not start with "What is Science?" Great stuff! 
  • The Oxford Book of Modern Science Writing. This is a collection of excerpts from some outstanding scientific papers and books of the 20th century. I've devoured a few already - a combination of good writing and interesting science.
  • Can a Robot be Human? As a tester and wannabe-layman-philosopher this is right up my street. There are interesting slants and many questions - the sort that make you think! Great for a critical thinker, lateral thinker, divergent thinker and a tester!

The Systems Thinking Corner
Looking at complex problems and trying to understand their complexity is interesting to me - it also helps in my daily work.
  • What The Dog Saw. This collection of Gladwell articles (I think most can be found on the New Yorker site) is an eclectic mix with his distinctive take on them. Very interesting and insightful.
  • What Every Body is Saying. Observation is a necessity (as a tester) and so why not try it on people around me? I'm certainly no expert but it gives me a few insights and maybe it will help in the odd discussion in the future.
  • The Black Swan. Enjoying this slightly-scholarly book and I'm taking my time with it. Lots of good things for testers to think about. These types of events are all around us - just think volcanos!
  • The User Illusion. I can't seem to finish this book - I've been reading it for years - it has tricky parts that I read and re-read. It's an exploration of consciousness and how the unconscious mind processes so much information. It introduced me to entropy, information theory and to Gödel - satisfying the mathematician in me and giving me another insight on testing problems.

The Testing Corner
I have plenty of software testing books - some of which are constant reference material and I've written about before. However, I don't think I've mentioned these two before:
  • Beautiful Testing. I started dipping into this last autumn and it then fell by the wayside - will re-visit before long. Some interesting chapters - I'll wait until I've finished it before giving a verdict.
  • The Art of Software Testing. This is pure reference material for me. I picked up a second-hand first edition last autumn - did the triangle self-test and enjoyed some of the "psychological" aspects of software testing.

The Recently Finished Corner
I think I have more than one, but this is the one that stands out.
  • Blink. Well, I'm into how the mind processes information, why that sometimes works and sometimes doesn't. As a tester I can relate to why people can be hampered by too much information - so I liked the war-games description!

The Not Started-Yet Corner
Next up...

The Weinberg Corner
I'm a latecomer to Jerry's work. After finding his Perfect Software book last summer I'm gradually going through a swathe of his work. The current ones are:
  • Quality Software Management vol II. Great insights into observations and interpretations and the pitfalls that go with them. I'm enjoying working through this book.
  • The Gift of Time. Whilst this is not a Weinberg tome, it's related to his body of work, with reflections from people that have worked with him or been inspired by his work. Easy essays to dip into. I've nearly read through them all and I like the description of the fieldstone method - with possible applications in my day job!
  • Exploring Requirements: Quality Before Design. The test for attentiveness was illuminating. Ongoing! 

My reading is typically driven by my emergent learning and divergent thinking style. I've written about this before, here. Some of these I bought second-hand from Alibris - a handy site for getting hard-to-find books.

And now - if you're still with me - a couple of challenges for you dear reader, yes you!

Quick Test!
Without looking - How many corners did I mention? Were you paying attention?

Have you got any great recommendations? Let me know!