Friday, 15 April 2011

The Certainty Effect and Insurance

" #softwaretesting #testing "

At #swet2 I gave a lightning talk on an aspect of Framing (more on this in future posts) and thought I'd jot down some of the details.

On the way to the peer conference I'd read Tversky and Kahneman's "The Framing of Decisions and the Psychology of Choice" (Science v211, 1981), and was struck by their description of the certainty effect - or rather, by its potential application in testing.

The Certainty Effect
The effect was labelled by Kahneman and Tversky (in their 1979 Econometrica article on Prospect Theory), building on a paradox observed by the French economist Maurice Allais in 1953. Expected utility theory predicts that a choice is made according to its expected utility - the value of each outcome weighted by its probability. The example from the 1979 article is:

Problem 1:
Choose between
Choice A 
A return of 2500 with a probability of .33
A return of 2400 with a probability of .66
A return of 0 with a probability of .01
Choice B
A certain (guaranteed) return of 2400

In a sample of 72 respondents 18% chose A and 82% chose B.

This is in line with expected utility theory. It's also known as a risk averse approach. Now in the second problem the certainty was removed:

Problem 2:
Choose between
Choice C 
A return of 2500 with a probability of .33
A return of 0 with a probability of .67
Choice D 
A return of 2400 with a probability of .34
A return of 0 with a probability of .66

Now in this case, of the 72 respondents, 83% chose C and 17% chose D. This is not in line with expected utility theory, and it is this inconsistency that is labelled the certainty effect: an outcome is given a disproportionate weighting when it is certain, a weighting that is not reflected when the outcome is merely probable.
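The inconsistency is easy to see by computing the expected monetary values of the four prospects (a quick Python sketch of my own; the encoding of prospects as payoff/probability pairs is mine, not from the paper):

```python
# Each prospect is a list of (payoff, probability) pairs.
def expected_value(prospect):
    return sum(payoff * p for payoff, p in prospect)

A = [(2500, 0.33), (2400, 0.66), (0, 0.01)]  # Problem 1, Choice A
B = [(2400, 1.00)]                           # Problem 1, Choice B (certain)
C = [(2500, 0.33), (0, 0.67)]                # Problem 2, Choice C
D = [(2400, 0.34), (0, 0.66)]                # Problem 2, Choice D

for name, prospect in [("A", A), ("B", B), ("C", C), ("D", D)]:
    print(name, round(expected_value(prospect), 2))
# A 2409.0 - higher than B, yet 82% preferred the certain B
# B 2400.0
# C 825.0  - higher than D, and here 83% did prefer C
# D 816.0
```

On expected value alone A beats B just as C beats D, so a consistent chooser should pick A and C (or B and D); the flip in majority preference between the two problems is the certainty effect.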

This is related to how the question is asked (framed).

A further example of this type of framing affecting choice can be seen in attitudes to risk. When looking at insurance, there is typically a difference in attitude between risk reduction and risk elimination. For example, consider:

A vaccine that is effective in half of cases.
A vaccine that is fully effective in 1 of 2 cases.

There is a disproportionate preference for the second formulation.

Applied to Testing?
By default, stakeholders want certainty. They tend to want risk elimination rather than risk reduction. They link this to cost (just as with different levels of insurance cover) and assume that by adjusting the cost they will get closer to certainty.

This is a problem for testers - or really for how they communicate with stakeholders.

Testers can't work in terms of certainty (without very tight restrictions and lots and lots of explanations and assumptions). Therefore, given the two possibilities of talking about risk elimination and risk reduction, testers should talk in terms of risk reduction.

Additionally, the certainty effect tells us that typical decisions and choices can be skewed (disproportionately) when the risk or probability moves away from certainty (guarantee).

When shaping the message and managing stakeholder expectations, consider:
  • Be consistent - never talk in terms of (or give the impression that you can deliver) certainty.
  • Be aware that when something is not certain then attitudes to risk and decision choices don't always follow expected weighting of probabilities.
Certainty, insurance and talking to stakeholders - it's not always logical.


  1. Interesting Simon!
    I almost get it, but still struggle with the Insurance part (it's early in the morning and the coffee hasn't kicked in yet...)
    Can you give some more examples on Insurance applied to testing?

  2. Here's a link to the paper

    The paper points out that A/B becomes C/D by "eliminating a .66 chance of winning 2400 from both prospects under consideration". Seeing how the experiments were related helped me to understand why the comparison was useful.
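That relationship between the two problems can be checked mechanically (a small Python sketch of my own; the helper name and prospect encoding are made up for illustration):

```python
# Move `prob` of probability mass from `payoff` to a zero payoff, then merge
# duplicate payoffs - this is the "elimination" the paper describes.
def eliminate_chance(prospect, payoff, prob):
    shifted = [(v, p - prob) if v == payoff else (v, p) for v, p in prospect]
    shifted.append((0, prob))
    merged = {}
    for v, p in shifted:
        merged[v] = round(merged.get(v, 0) + p, 10)  # round away float noise
    return {v: p for v, p in merged.items() if p > 0}

A = [(2500, 0.33), (2400, 0.66), (0, 0.01)]
B = [(2400, 1.00)]

print(eliminate_chance(A, 2400, 0.66))  # {2500: 0.33, 0: 0.67} - prospect C
print(eliminate_chance(B, 2400, 0.66))  # {2400: 0.34, 0: 0.66} - prospect D
```

Removing the same .66 chance of winning 2400 from both A and B yields exactly C and D, which is why comparing the two problems isolates the effect of certainty.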

  3. Thanks James for the paper!

    This is intriguing.
    I especially found it interesting with the discovered pattern of risk aversion for positive prospects and risk seeking for negative ones.
    That to me suggests that a stakeholder would choose the most risky option if two negative prospects are presented.
    But that does not mean entirely risk elimination; it would only mean the most risky of two negative options.

    How can we manage this in test planning/reporting? Any strategies?
    Matchmake risks according to this? :-)

  4. James,

    Thanks for the link.

    The paper on Framing of Decisions states that the certainty effect is typically seen in comparisons where the move away from certainty is by a given factor - in the above example a reduction of probability by .66 in both cases. Other examples include reduction of probabilities by one quarter or one tenth.

  5. Henrik,

    Examples with insurance might be anything connected with test results, problems or perceived product risks.

    With results/problems, root cause analysis could be an example (from the stakeholder viewpoint). Root cause analysis - common in "process friendly" organizations - reflects a need to understand, or to help highlight, potential improvements in activities based on customer feedback, but:

    • Sometimes this goes too far - "ok, this type of fault must be caught by us" - which can result in certain types of problems being chased

    ⁃ It can also lead to frame blindness - a focus on the symptom and a willingness to insure (paying some premium with some excess - "självrisk") to prevent the same severity occurring
    ⁃ It can lead to a focus on (or reacting to) severity rather than the problem

    In the case of looking at risks that can affect a product, then depending on how this is done, it can result in a skewed focus - insuring to the extent of eliminating risk:

    • Common for risk management is to come up with a risk list and then a mitigation strategy - although this is implicitly a risk reduction strategy, stakeholders can use these lists as items to check-off, and then they can become risk elimination objects.
    • Then an effect called risk compensation comes into play - one chases risks and loses the big picture.

    Hope this helps.

  6. Henrik,

    Two negatives - interesting thought!

    These papers are partly describing situations where the framing of the problem affects the choice - it's the framing and not the riskiness that affects the preference.

    It's the relative attractiveness of options that varies when they're presented in different ways (framed differently).

    Comparing a positive and negative alternative is done with a known reference point from which to determine that one is positive and one is negative - if both are negative then a different reference point might apply - so I don't know if the same theory would apply.

    In terms of how to manage the planning and reporting - spontaneous thoughts - managing expectations and understanding the frames in which you report the information and the frames in which the stakeholder is receiving and interpreting the information (far from easy...)

    More thought necessary...