Monday, 25 January 2010

Mind the Information Gap (Black Swan-style)

Over Christmas I picked up The Black Swan (by Nassim Taleb). It was tipped in the Guardian's top non-fiction podcasts/books list for the noughties. But I think my awareness of the book came some time ago via Michael Bolton's writing (I don't remember whether it was blogs or tweets where I first saw it...) - so a special tip of the hat to him!

I'm only about a third of the way through it, but so far it's an interesting look at unexpected events and why people are so susceptible to the unexpected.

There's a whole range of interesting ideas, but one that struck me immediately was that of silent evidence - the information that is missed or not included in the analysis. The first example that Taleb uses is from Cicero:-
Diagoras, a nonbeliever in the gods, was shown painted tablets bearing the portraits of some worshippers who prayed, then survived a subsequent shipwreck. The implication was that praying protects you from drowning.
Diagoras asked, “Where are the pictures of those who prayed, then drowned?”
This idea could also be seen as a form of the availability heuristic or the fallacy of exclusion.

When I was reading this section I started thinking about Bill Bryson's A Short History of Nearly Everything, where some of the problems with palaeontology are described - one being that fossils represent only a small subset of the dinosaurs and other species that could have been fossilised (fossilisation requires favourable conditions over many millions of years) - so we only ever see a small fraction of the possible fossils.

Actually, Taleb includes the problem with the fossil record as an example of silent evidence, but I think Bryson's description was better. (Interestingly, my copy of Bryson's book was published by Black Swan publishers - fate or coincidence?)


Silent Evidence in Testing
I identify with the idea of silent evidence in my day-to-day testing work. Whether it's a test plan, analysis, report or other finding, one of the first things I consider is the areas that have been excluded, not considered or not covered.

This part of the picture is very important, as there is a different motivation and reasoning behind each.

  • Excluded: The area is deemed to be outside the scope of the test effort - maybe it's covered by another team or agreed as not relevant to the test purpose.
  • Not considered: In a report this might reflect an area that was missed, overlooked or discovered late, without any judgement yet about whether it needs investigation.
  • Not covered: It's very common for a time, tool or configuration constraint to fall into this category, i.e. the area is considered relevant to test/investigate but it wasn't covered for reasons x, y & z.


Note, I'm not trying to be exhaustive with these categories!

Whether you categorise in this way or a different one, it's very important to understand what has not been covered, tested or analysed. This helps when making a judgement/recommendation about any risk associated with this "non-coverage".

I commonly use this type of information to weigh up future test direction - maybe something needs following up soon or putting on the backlog.

It's also this type of information that makes the findings more complete - building up the picture of the testing of the product. It can be reported with or without judgement and recommendation (preferably with some commentary), as it shows what has been considered, analysed or thought about during the activity.
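As a rough, hypothetical sketch of what I mean (not from Taleb, and not any particular tool - the names GapCategory, TestGap and report_gaps are all made up for illustration), here's one way in Python that the non-coverage picture might be recorded alongside the results:

# A minimal, hypothetical sketch of recording "silent evidence" next to test results.
# All names and example data here are illustrative, not from any real tool or project.

from dataclasses import dataclass
from enum import Enum
from typing import List


class GapCategory(Enum):
    EXCLUDED = "excluded"              # out of scope - e.g. owned by another team
    NOT_CONSIDERED = "not considered"  # missed/overlooked - no judgement made yet
    NOT_COVERED = "not covered"        # relevant, but blocked by time/tool/config constraints


@dataclass
class TestGap:
    area: str
    category: GapCategory
    reason: str
    follow_up: str = ""  # e.g. "next sprint", "backlog", "accept risk"


def report_gaps(gaps: List[TestGap]) -> None:
    """Print the non-coverage picture so it can sit next to the test findings."""
    for gap in gaps:
        line = f"[{gap.category.value}] {gap.area}: {gap.reason}"
        if gap.follow_up:
            line += f" -> {gap.follow_up}"
        print(line)


if __name__ == "__main__":
    report_gaps([
        TestGap("IPv6 routing", GapCategory.EXCLUDED, "covered by the platform team"),
        TestGap("upgrade from two versions back", GapCategory.NOT_CONSIDERED,
                "discovered late in planning", "follow up next sprint"),
        TestGap("load above 10k sessions", GapCategory.NOT_COVERED,
                "lab hardware constraint", "backlog"),
    ])

The point isn't the code itself - just that the "silent" areas get written down with a reason and, where appropriate, a follow-up, rather than quietly disappearing from the report.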

Considering the information that we are excluding (or not considering) also helps us to understand some of the assumptions (modelling and model reduction) that are being applied to the testing problem - so this is a natural part of the feedback loop, helping to improve future test activities.

So, keep a look out for the silent evidence and "Mind the Information Gap"!

Do you actively think about the areas you're not looking at?

2 comments:

  1. Hi Simon,

    Nice post. It's virtually impossible to do exhaustive testing but as long as we are aware of the gaps in testing, the gaps in knowledge or the gaps in communication then we can be sure we are handling a large percentage of the risks.

    Excellent post.

    Rob..

  2. Hi Rob,

    Good point about communication!

    There are of course gaps there - so just like asking "what has been communicated?", it's equally valid to ask/wonder, "what has not been communicated or passed on?"

    This is very important too - you might miss a part of the organisation that should/needs to know about some issue/problem or change in activity...
