Sunday, 19 December 2010

Thoughts on Test Framing

 #testing #softwaretesting

 Some thoughts on the ideas presented in Michael Bolton's post on test framing.

Firstly, I like it. A lot. I commented on the post about a day after it was published, but I knew there was some more thinking I needed to do on the topic. So here we are, over two-and-a-half months later (yes, I think slowly sometimes - sometimes one-dimensionally and sometimes in a very divergent way, meaning I create lots of strands/threads to follow up - but I do continue thinking, usually!)

Powerful Elements
Showing your thinking is crucial to any activity that involves questioning, observation, hypothesis forming, evaluation, reporting, learning and feeding back under constraints of time, cost and equipment availability (this is testing, if you didn't recognise it already).

Showing the provenance of your test ideas is a powerful communication tool - for yourself, your team and any stakeholder. Doing this with many of your ideas will naturally show whether the thoughts/considerations you'd intended to include were indeed included. (What?) Ok, example time.

Pseudo-Example
A simple example here is to construct a test idea around a "happy path" of a function, the most basic and usual case for that function. This will then lead to thoughts around:
  • The use case itself (variants and alternatives);
  • Platform (how many?);
  • Environment (from small, with many simulated elements, right through to a full customer acceptance configuration);
  • Application configuration (based on what? Observed or projected?);
  • User data (based on what? Supposed, observed or projected?);
  • User behaviour (based on what? Observed or projected?).
I work in situations where these are distinct variables - not so much overlap, but in some shops some of these could be less distinct.
Evaluating and selecting which of these elements to use, include and vary is part of the test design of the test idea, in conjunction with the test strategy. So, in this example we can see (hopefully) that there is a range of test ideas that could be distilled - a rough sketch of how that range multiplies out follows below.
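As a quick illustration - my own hypothetical sketch, not something from Michael's post - here is how those dimensions can multiply out into candidate test ideas. The variable names and values are made up purely for the example:

```python
# Hypothetical sketch: enumerating candidate test ideas from the
# dimensions listed above. All names and values are made-up placeholders.
from itertools import product

dimensions = {
    "use_case":       ["happy path", "alternative flow"],
    "platform":       ["platform A", "platform B"],
    "environment":    ["small/simulated", "full acceptance config"],
    "app_config":     ["default", "observed customer config"],
    "user_data":      ["supposed", "observed"],
    "user_behaviour": ["observed", "projected"],
}

# Every combination is only a *candidate* test idea; risk, priority and
# strategy (the framing) decide which few are actually worth executing.
candidates = [dict(zip(dimensions, combo)) for combo in product(*dimensions.values())]
print(f"{len(candidates)} candidate test ideas")  # 2**6 = 64 combinations
```

Even with just two options per variable there are 64 candidates - which is why the selection and exclusion decisions discussed below matter so much.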

What More?
In the example I alluded to elements of choice/decision based on risk, priority, test design technique, strategy and availability. Sometimes this is about scoping the area for test, what to include (for now) and exclude (for now).

When I am telling the story of my testing I like to emphasise the elements that could be missed - areas I've skipped and the reasoning behind that, and areas discovered late that could have an impact on the product and for which it might be useful to recommend additional testing.

So, I want my story to be very explicit about what I didn't do. There is nothing new in this - traditional test reports included this information over 20 years ago. But I want to link it into my test framing in a more visible way: if I just start talking about all the things I didn't do, it's easy for a stakeholder to lose focus on what I did do.

So, as well as the explicit chaining of test framing showing the roots and reasoning behind the test idea, I also want to include (explicitly) the assumptions and decisions I made to exclude ideas (or seeds of new ideas). In the context of the test frame this would represent the a priori information in the frame/chain (all of the framing discussed so far is a priori information).

But there might also be scope to include elements that would affect the test frame (or seeds for new ideas) based on information discovered during the testing. From a test frame perspective it could be very useful to include this a posteriori information.

Examples of a posteriori information would be feature interactions that were discovered during testing (not visible beforehand) but didn't get covered before testing stopped. There might be aspects where test ideas couldn't be 'finished' due to constraints on time, feature performance (buggy?), third-party software issues (including tools) that couldn't be resolved in time, or some other planned activity that didn't get finished (a feature element that needed simulation, for example).

I usually think of these aspects that are not 'included' as the silent evidence of the test story. Making this information available and visible is important when telling the testing story.
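As a minimal sketch - my own naming and structure, purely hypothetical rather than anything from the test framing post - a test frame record could carry this silent evidence (a priori exclusions and assumptions, a posteriori discoveries) alongside the idea and its framing chain:

```python
# Hypothetical sketch of a test-frame record that keeps the "silent evidence"
# (exclusions, assumptions, late discoveries) visible in the test story.
from dataclasses import dataclass, field

@dataclass
class TestFrame:
    test_idea: str
    framing_chain: list                               # reasoning linking the idea to mission/strategy
    a_priori: list = field(default_factory=list)      # up-front assumptions and exclusions
    a_posteriori: list = field(default_factory=list)  # information discovered during testing

frame = TestFrame(
    test_idea="Happy path of the login function, platform A",
    framing_chain=["release mission", "login is high-traffic", "happy path first"],
    a_priori=["excluded platform B for now (low usage, short timescale)",
              "assumed default application configuration"],
)

# Discovered during testing - not visible beforehand, and not covered before testing stopped:
frame.a_posteriori.append("interaction with session-timeout feature found late; not tested")
```

Reporting the a priori and a posteriori lists with the same visibility as the executed ideas is the point - the exclusions are part of the story, not an appendix.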

There are aspects of these assumptions and choices that are implicit in the test framing article, but for me it's important to lift them forward. (Almost like performing a test framing activity on the test framing post.)

Maps
As a side-note: test framing as a form of chaining/linking around test ideas fits well with mind maps, threads and test idea reporting. But that exploration is for another time...

So... Have you been thinking about test framing?

Risk Compensation and Assumptions

 #softwaretesting #thinking #WorkInProgress

This is very much a thought-in-progress post...

Cognitive bias, assumptions, the workings of the subconscious - they're all linked for me in one way or another. Then the other day, in a Twitter chat with Bob Marshall and Darren McMillan, I was reminded of a potential link to Risk Homeostasis (sometimes called Risk Compensation).

Risk Homeostasis is the tendency for risk in a system to settle at some form of equilibrium, so that if you devote more attention to reducing risk in one part of the system you increase the risk in another part. This is a controversial theory, but there is an intuitive part that I like: devoting increased attention to certain risks inevitably implies devoting less attention to other risks. To use economic terminology, there is an opportunity cost - and with it the potential for increased risk elsewhere.

This also ties in with an intuitive feeling that if you are devoting less attention to one area, the "risk" components there could increase without you noticing. This is almost a tautology...

I first encountered risk homeostasis in Gladwell's description of the investigation into the Space Shuttle disaster (in What the Dog Saw) and I see the similarities to testing.

Stakeholders and Risk Compensation?
A software testing project example, taking the project/product/line manager's view of testing:

A correction release is coming up and a group related to release testing is performing a release test activity. That, in some minds, could be seen as a "safety net" - could the PM be influenced in the level and amount of testing "needed" prior to the release test activity? I think so, and this is risk compensation in action - so-called safety measures subconsciously allow for increased risk-taking elsewhere.

It could apply to the new unit and function/feature testing needed for a "hot fix" - an "isolated" decision about what's needed for this hot fix. PM thoughts: ok, let's do some extra unit/component testing as we can't fit in the functional test (which is dependent on a FW update that can't be done at such short notice), and we'll do extra code reviews and an extra end-2-end test. Seems ok?

Then come the other deliveries in the "final build" - maybe one or more other fixes have had this "isolated" decision-making. Then, for whatever reason, the test analysis of the whole is missed or forgotten (PM thoughts: it's short notice and we're doing an end-2-end test) - i.e. nobody puts all these different parts into context with a risk assessment of their development and of "the whole caboodle". From the PM's perspective there was a whole bunch of individual testing and retesting, and there will be other testing on the whole - that's enough, isn't it?

Assumptions
Where do assumptions come into this? Focussing on certain risks (displacing risk) results in some form of mitigation (usually). But the sheer act of risk mitigation (in conjunction with risk compensation) implies that focus is reduced elsewhere. The danger here is assuming that the mitigation activity is coping with risk, when in certain cases it's just 'not noticing' risk.

The act of taking action against risk (mitigation) opens up the possibility of trusting (and not questioning) our assumptions. But how do we get fooled by assumptions? There are two forms that spring to mind:

  • The Black Swan. Something so unusual or unlikely that we don't consider it.
  • A form of the availability heuristic. We've seen this 'work' in a previous case (or the last time we made a risk mitigation assessment) and that becomes our 'reference' - "that's just how it is" - "all is well".

An everyday example of Risk Compensation
Topical in the northern hemisphere just now: driving in the snow with winter tyres and traction control. I see it a lot these days (and even do it myself) - when moving off, don't worry too much about grip, just let the car (traction control) and tyres find the right balance. So the driver is trusting the technology. When I used to drive with winter tyres and no traction control, there was a lot more respect for accelerating and 'feeling' the road, i.e. using feedback.

Maybe that's where risk compensation plays a part. It reduces the perceived importance of feedback.

Coping with Risk Compensation and Assumptions
The most important aspect in coping or dealing with this is awareness. Any process that has a potential for cognitive bias is handled better with awareness of that process. Be aware of when feedback is not a part of the process.

In situations where short-cuts are being taken, or where more emphasis is put on one area, we should ask ourselves the questions:

  • How does the whole problem look? 
  • Am I missing something? 
  • Am I aware of my assumptions?
  • Am I using feedback from testing or other information about the product in my testing?

Have you noticed any elements of risk compensation in your daily work?

Friday, 3 December 2010

Carnival of Testers #16

November was cold, much colder than normal - but there is no such thing as bad weather, just inadequate clothing (there are several test analogies there!), so whilst deciding on the right clothes I read and was entertained by several blog posts...

  • First up this month was Michael Kelly with a reminder that sometimes it's necessary to "just say no!"
  • Ever get the feeling that explaining any test-related thinking to a non-techie is tricky and full of traps? If so, then you'll recognise something in this cartoon from Andy Glover.
So, have you recognised anything so far?
  • On the aspect of recognition, Albert Gareev wrote about a typical trap that testers can occasionally fall into: inattentional blindness. Recognising and understanding it helps your learning.
  • Related to traps, bias and fallacies, black swans have been known to surface. Have a look at this 'humble' story from Pradeep Soundararajan and his sources of learning.
  • Of course, sometimes nothing will disrupt you and your testing. Then, maybe you're in the zone, as Joel Montvelisky encourages us to recognise and learn about the contributing factors.
  • A short note from Peter Haworth-Langford on his first year of blogging. Happy blogging birthday!
  • The guys at the test eye produced a two-page sheet of aspects for consideration when testing a product. It's partly based on earlier work by others, but take a look and see what you recognise and if there's anything new to you. 
  • Communication, communication, communication. Take a look at Pete Walen's post on some communication aspects related to documentation.
  • A nice example of the availability heuristic in Gerald Weinberg's account of The Sauerkraut Syndrome. Recognise it?
  • A view on testing debt and some tips to counteract it came from Martin Jansson.
  • Weekend Testing landed in the Americas during November. Here are some thoughts from one of the organisers, Michael Larsen.
  • Bob Marshall raises some pertinent questions about the state of Agile - thinking way outside the tester's box. Recognise anything?
  • Some more interesting questions raised by Mark Crowther on burndown. You must recognise something here.

I'm sure there was something there that all would recognise, and maybe something new.

Until the next time...