Sunday 19 December 2010

Risk Compensation and Assumptions

 #softwaretesting #thinking #WorkInProgress

This is very much a thought-in-progress post...

Cognitive bias, assumptions, the workings of the subconscious. They're all linked for me in one way or another. Then the other day, in a Twitter chat with Bob Marshall and Darren McMillan, I was reminded of a potential link to Risk Homeostasis (sometimes called Risk Compensation).

Risk Homeostasis is the tendency for risk in a system to have some form of equilibrium, so that if you devote more attention to reducing risk in one part of the system you increase the risk in another part. It's a controversial theory, but there's an intuitive part that I like: devoting increased attention to certain risks inevitably implies devoting less attention to other risks. To use economic terminology, attention spent on one risk carries an opportunity cost: the potential for increased risk elsewhere.

This also ties in with an intuitive feeling that if you are devoting less attention to one area, the "risk" components there could increase without you noticing. This is almost a tautology...
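To make that intuition concrete, here's a toy sketch in Python (nothing more than a sketch: the risk areas, the numbers and the diminishing-returns formula are all invented for illustration). It spreads a fixed "attention budget" over a few risk areas, then piles most of it onto one area:

    # Toy model of risk compensation: a fixed attention budget spread
    # over risk areas. All figures and the formula are assumptions.
    from math import sqrt

    def residual_risk(base_risk, attention):
        # Assume attention reduces risk with diminishing returns
        # (sqrt is just one concave curve; any would make the point).
        return base_risk * (1 - sqrt(attention))

    base = {"release test": 0.4, "unit test": 0.3, "regression": 0.3}

    plans = {
        "even attention":   {"release test": 1/3, "unit test": 1/3, "regression": 1/3},
        "focus on release": {"release test": 0.8, "unit test": 0.1, "regression": 0.1},
    }

    for name, plan in plans.items():
        risks = {area: residual_risk(base[area], plan[area]) for area in base}
        detail = ", ".join(f"{area}: {r:.2f}" for area, r in risks.items())
        print(f"{name}: {detail} | total: {sum(risks.values()):.2f}")

Under this (assumed) curve, focusing on the release test cuts its risk sharply, but the neglected areas rise enough that the total exposure goes up - the equilibrium-like effect described above.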

I first encountered risk homeostasis in Gladwell's description of the investigation into the Space Shuttle Challenger disaster (in What the Dog Saw), and I see similarities to testing.

Stakeholders and Risk Compensation?
A software testing project example, taking the project/product/line manager's view of testing:

Consider a correction (bug fix): a release is coming up and a group is performing a release test activity. That, in some minds, could be seen as a "safety net". Could the PM be influenced in the level and amount of testing "needed" prior to the release test activity? I think so, and this is risk compensation in action: so-called safety measures subconsciously allow for increased risk-taking elsewhere.

It could also apply to the new unit and function/feature testing needed for a "hot fix", with an "isolated" decision about what's needed for this one fix. PM thinking: OK, let's do some extra unit/component testing as we can't fit in the functional test (which is dependent on a FW update that can't be done at such short notice); we'll do extra code reviews and an extra end-to-end test. Seems OK?

Then come the other deliveries in the "final build". Maybe one or more other fixes have been through this same "isolated" decision-making. Then, for whatever reason, the test analysis of the whole is missed or forgotten (PM thinking: it's short notice and we're doing an end-to-end test). Nobody puts all these different parts into context with a risk assessment of their development and of "the whole caboodle". From the PM's perspective there was a whole bunch of individual testing and re-testing, and there will be other testing on the whole. That's enough, isn't it?
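A back-of-the-envelope sketch of why this can bite (the 90% figure is an invented assumption, purely for illustration): even when each isolated decision looks sound on its own, the odds compound.

    # The "isolated decisions" trap, in round numbers. The 0.9 is an
    # invented assumption: each hot fix, judged alone, is reckoned to
    # be 90% likely to ship clean.
    p_clean_per_fix = 0.9
    n_fixes = 3

    p_all_clean = p_clean_per_fix ** n_fixes
    print(f"P(at least one escaped defect) = {1 - p_all_clean:.0%}")  # ~27%

Three individually "acceptable" decisions add up to roughly a one-in-four chance of an escape, before any interaction between the fixes is considered - exactly the gap that the missing test analysis of the whole would have covered.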

Assumptions
Where do assumptions come into this? Focussing on certain risks (displacing risk) usually results in some form of mitigation. But the sheer act of risk mitigation (in conjunction with risk compensation) implies that focus is reduced elsewhere. The dangerous assumption here is that the mitigation activity is coping with the risk, when in certain cases it's just "not noticing" the risk.

The act of taking action against risk (mitigation) opens up the possibility of trusting (and not questioning) our assumptions. But how do we get fooled by assumptions? Two forms spring to mind:

  • The Black Swan. Something so unusual or unlikely that we don't consider it.
  • A form of the availability heuristic. We've seen this "work" in a previous case (or the last time we made a risk mitigation assessment) and so that becomes our "reference": "that's just how it is", "all is well".

An everyday example of Risk Compensation
Topical in the northern hemisphere just now: driving in the snow with winter tyres and traction control. I see it a lot these days (I even do it myself) - when moving off, don't worry too much about grip, just let the car (traction control) and tyres find the right balance. The driver is trusting the technology. When I used to drive with winter tyres and no traction control, there was a lot more respect for accelerating and "feeling" the road, i.e. using feedback.

Maybe that's where risk compensation plays a part. It reduces the perceived importance of feedback.

Coping with Risk Compensation and Assumptions
The most important aspect of coping with this is awareness. Any process with a potential for cognitive bias is handled better with awareness of that process. Be alert to when feedback is not a part of the process.

To understand those situations where short-cuts are being taken, or where more emphasis is put on one area than another, we should ask ourselves:

  • How does the whole problem look? 
  • Am I missing something? 
  • Am I aware of my assumptions?
  • Am I using feedback from testing or other information about the product in my testing?

Have you noticed any elements of risk compensation in your daily work?

3 comments:

  1. Hi Simon,

    I've been neglecting my Google Reader so sorry for the late reply :-)

    Firstly, what an excellent post! Seriously, I very much enjoyed reading this.

    A good example I can provide from my workplace is a type of testing in itself, "Proactive Testing" (http://www.bettertesting.co.uk/content/?p=123), which, as you've read previously, currently relies upon two main techniques to prevent defects appearing in the code base.

    One of these techniques, "Up front test cases" (and to a lesser extent "Show and tells"), carries a high risk that the developers' original test thoughts will be lost through the execution of these techniques. Now this may be okay for some devs who care less about testing and lean on their test team as a safety net, but for those that generally do excellent testing already, we run a high risk of losing those test ideas by presenting them with our own.

    Luckily for us, so far this has worked fantastically and has had the reverse effect of getting developers more interested in their own test ideas. However, like anything that's placed into a working model, it runs the risk of having negative side effects.

    Thanks for sharing Simon.

    Cheers,

    Darren.

  2. Hi Darren,

    Yes, good angle, but the fact it's worked so far is no guarantee it will carry on working :)

    Your "show and tell" and "up front tests" could be incorporating feedback into both the coder and tester mindsets - and that's important!

    It's when we slow or reduce that feedback loop from/to our test activities that there's a danger of falling into this trap.

    Thanks for the comments!

  3. @Darren,

    I missed the assumption angle in the reply - yes, there is a danger there. Here, I think it's important to keep thinking (or asking questions) about the subject and what's needed - even questioning the "up front tests" and "show and tell" - from both your and the devs' perspectives.
