Wednesday, 17 April 2013

Some notes from SWET5 - Small and Intense

Edition #5 of the Swedish Workshop on Exploratory Testing (SWET5)

Place: Högberga Gård, Stockholm
Date: 6-7 April 2013
Theme: Mindmaps and Mindmapping in Testing
Twitter tag: #SWET5

Participants: Henrik Andersson, Mikael Ulander, Simon Morley, Rikard Edgren, Martin Jansson, Michael Albrecht, James Bach

Due to injury, illness and other reasons, SWET5 was depleted to seven participants, but I think each of us had the same response when we saw who was still in the game - "yeah, I'd like to spend a day and a half talking testing with those guys!"

I think it was Martin who remarked that the room had a high density of testing thinkers...

We had one newcomer (Mikael); the others knew each other from at least two previous SWET meetings each. During the check-in there was a common theme - yes, this is small, but we all know each other, so this is going to be a different (and maybe more intense) type of peer conference.

Talk I
First up was Rikard, talking about a feature interaction model he had used to help in his testing of a product. As usual, the real depth came out during open season, where we discussed how the model/map/list was constructed, what made it onto the map and what didn't - this was the real use of the map/list. Rikard admitted he was a "list guy" - which was why he drew a map with limited levels (mostly one). The discussion also touched on how the map might look for different SW builds, how the map was personal, and that it was an aid to communication and not the communication itself. Compare "mapping" to the "map", "planning" to the "plan", "reporting" to the "report".

It made me think of the model in the communication equation, ref [1],
Communication = Artifact (model) + Discussion (dialogue)
Rikard also mentioned some heuristics, "one more test" and ?

Other parts touched on in the discussion - where we didn't discuss to consensus - were problems with drawing out tacit knowledge from a map. I also started thinking about multi-dimensional clustering of feature risks - but I need to explore this further...

Talk II
Next up was Mikael, who described a per-sprint usage of xBTM and the use of mind-maps to illustrate ideas for TBTM and SBTM. In open season there was some discussion about why the graphical method was particularly appealing. We maybe didn't get to the bottom of that, but we sensed it was a crucial discussion point. Why use a graphical method? Why use one form of a mind-map in contrast to a list? There's something there that we didn't conclude...

Another aspect touched on during open season was when the mind-mapping and xBTM were discontinued (in Mikael's experience) - and here the discussion explored the aspects of PR: discussing and talking about what you're doing and why. If your manager/stakeholder/organisation doesn't buy into your method, then it won't really be adopted - and there is a difference.

Talk III
James talked about an experience of three teams (plus himself) producing mind-maps for a given exercise. We then looked at the resulting differences - four different maps were produced for, ostensibly, the same mission. There was some discussion on how four maps could be combined into one that maintained its readability/usability - probably by appointing a scribe. But the question remained: in which cases would this be a problem? Another part of the discussion touched on the personal/subjective aspect, and so whether it mattered how the resulting maps looked. This led to the report vs reporting issue again - that observing the construction of the mind-map might give more information than the final map.

The discussion moved on to mind-maps' place in the communication - from spoken word, written, lists, pictorial, what next? Specific tools or formats? Intermediary step to where? Do mind-maps do it or are they a gap-filler? No conclusions...

Talk IV
Finally, I gave an experience report on using a mind-map as a basis for a strategy discussion - with the positive and negative experiences from that. The open season delved into the areas where the map or the communication didn't work and the reasons why they might not have worked. The problems discussed were generic to communication and not specific to mind-maps.

One interesting observation that I made of my mind-mapping experience was that mind-maps tend to channel the description into inherently "positive" or "visible" ways of thinking. What does that mean? Well, my map (and in some ways many that I see), essentially lay a track of "this is the route we'll follow because these are the parts I want to show, discuss, record..." They don't intrinsically show, "these are the things we will do/record PLUS these are the things we won't do/record because they're not useful or productive". In my experience report this would've translated to "the strategy looks like this, because these are the problems it's trying to tackle PLUS these are the issues it will avoid..."

This may mean that - as with any means of communication - they are more suited to some tasks than others. I think this is an area I need to dig into more.
An aspect we didn't conclude on was the cataloguing of those areas and limitations. This could be a useful follow-up.

Other
During the evening/morning there was some discussion about testing and checking. The consensus we could agree on was the wish to advocate testing as the starting point of good testing practice.

Check-out was positive and there was a feeling that the smaller group had led to more involved discussions. I enjoyed it; well done to Michael for organizing SWET5. Roll on SWET6!

Reference
[1] The Tester's Headache: Challenges with Communicating Models II http://testers-headache.blogspot.com/2012/07/challenges-with-communicating-models-ii.html

Tuesday, 2 April 2013

Checking and Information Value

It's now over three years since I wrote my last post on "testing and checking" - I wrote three in total. I consider them a little 'rough', but I knew they contained some gems, so I revisited them recently - refs [1], [2] & [3].

James and Michael recently wrote a blog post clarifying and redefining what they mean by testing and checking, ref [4].

After reading it - and thinking about it - I felt that some emphasis was missing, for me. This isn't about the definitions themselves but more about how they are applied.

Note, James & Michael seem to apply this refined terminology within the frame of the Rapid Software Testing methodology, ref [11], but I think others will use it outside of that also.

Some History re-cap
After reading the original post, ref [5], I started wondering "why?" - what was the motivation behind making the distinction? I didn't experience the problem being described and couldn't see how the distinction might help me. As part of that, I started wondering about how checks and checking were being framed and the extent to which they were being wrapped in testing.

My original questions:
  • Why make a check when no testing is involved?
  • How or why would a check be made without testing?
- i.e. why make a check and not analyse the result; why make a check without any thinking/analysis (testing) beforehand?
Note, the definition didn't say that a check wouldn't be evaluated, but it could be misinterpreted to mean that a check without testing was useful (although I didn't think that was the intention).

So, at the time, I couldn't detach checking from testing - I couldn't see a situation where a check would be made without testing. That was my perspective and how I would advocate checking.

But I'm also very aware that there will be people/organisations that do some form of checking without any testing... How might that happen?
  • "We always run this batch of scripts. When they pass then the SW is ready."
Note, I believe the definitions are trying to enable more accurate observation and discussion about which elements are testing and which are not.


Potential Trap
A check (or checking) used in isolation, without any test "thinking" (whether as analysis beforehand or afterwards), is a form of the trap mentioned in ref [6].

On to Information Value
So, what was I missing? My early posts were looking for reasons/needs for the distinction - and I accepted that the distinction was useful (although, I admit, not immediately, as I didn't see the utility in it at the beginning - that came later). My remaining question was around emphasizing the importance of testing, and of the selection of a check being rooted in testing.

During the last year I have been thinking more and more about emphasizing the information value of testing. This is partly about trying to establish congruence between customer needs and testing goals, refs [7] & [8], partly about introducing test framing, ref [9], and partly about framing checking from a testing perspective, ref [10].

So, where does this bring me?

Checking in isolation
Talking about checking in isolation has value when identifying the activity - whether as something that is planned or as an observation in a process or activity. For example (a small sketch follows this list):
  • "On the upcoming release we'll reuse some checks from the last release - they will give feedback on basic data configuration - whether the data configured pre-upgrade is still visible after upgrade." (plan)
  • "You're reporting a fault due to an automated script flagging an issue?" (activity) "Ok, has the result and activity been double-checked - that the flagging is correct?" (testing)
  • "There is a bunch of automated scripts that run on every code change - and they never issue any problem indication. Is that ok, or is it time to review their content and purpose?" (process) 
Information Value of Checks and Checking
When considering whether to use a check, or checking, there is an implicit activity of comparing what you'd like to know about a product, what you can achieve (and how), and what each instance of a check might give. At least, I claim (with only anecdotal evidence) that this is the case for many good testers. For example:

[Diagram: Wanted information about product]
Note, this example excludes people who make no evaluation about a check - whether or not to do any checking, or what the check outcome says...

Part of this evaluation (sometimes estimation) is to understand the value of making a check - something cheap and easy might be preferable, but something expensive that returns a lot of information might also be worth the effort. The key thing is that the decision to make the check is taken because it will provide some information.
Example: In the past I've advocated 169hr tests (checks) - the result might be good/no good (pass/fail), but the value of it might be part regulatory PLUS a lot of additional data that can be mined/explored.
Working out why it might be useful to check, or to perform checking, then becomes an attempt to estimate the value of the information it might give - information about the product "under test".

Information value can be estimated beforehand and/or assessed afterwards.
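A minimal sketch of what that before-and-after bookkeeping might look like - the fields and the crude "worth running" rule are my own invention, not a prescribed format:

    # Hypothetical record of a check's expected and actual information value.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CheckRecord:
        name: str
        cost_hours: float                      # effort to build and run the check
        expected_value: str                    # estimated beforehand: what we hope to learn
        observed_value: Optional[str] = None   # assessed afterwards: what we actually learned

        def worth_running(self) -> bool:
            # Crude heuristic: a check is worth running if we can articulate, up front,
            # what information it might give - however cheap or expensive it is.
            return bool(self.expected_value.strip())

    upgrade_check = CheckRecord(
        name="data visible after upgrade",
        cost_hours=0.5,
        expected_value="Whether the pre-upgrade data configuration survives the upgrade.",
    )
    print(upgrade_check.worth_running())   # True: there is an articulated reason to run it

    # Afterwards: capture what the outcome actually told us (the information value).
    upgrade_check.observed_value = "Data survived; the run also surfaced a slow migration step worth exploring."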

Checking as part of Testing
When the information value (or at least its intention) connected to a check or checking is captured, then the check or checking becomes part of testing.

A model of testing, checking and information value could be expressed as:
Testing ≥ Checking + Information value

In set terms it could be written:
Testing ⊋ Checking + Information value
This means every instance of "Checking + Information value" is contained in Testing, but "Testing" is more than just "Checking + Information value".

Checks (with no associated information value) ∉ Testing
Meaning checks without associated information value (either a decision to make the check or an analysis of the check's outcome) are not part of testing.
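To make the distinction concrete, here is a tiny sketch - my own paraphrase of the model above, not a definition from the refined terminology - treating "part of testing" as a predicate over a check and its associated information value:

    # My own paraphrase of the model above, for illustration only.
    from typing import NamedTuple, Optional

    class Check(NamedTuple):
        name: str
        # Information value: a stated reason for the check beforehand and/or an
        # analysis of its outcome afterwards; None means neither exists.
        information_value: Optional[str]

    def part_of_testing(check: Check) -> bool:
        # Checking + information value is contained in testing;
        # a check with no associated information value is not.
        return check.information_value is not None

    print(part_of_testing(Check("upgrade data check", "tells us whether config survives the upgrade")))  # True
    print(part_of_testing(Check("legacy script nobody reads", None)))                                    # False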


Conclusion
Whether I think of checks, checking and testing inside the Rapid Software Testing methodology or not, I think the terminology is useful. It is useful to identify artifacts of checking in the wild - whether in observed processes, activities or intended actions - and to categorize them, as any field scientist would categorize their observations.

In talking to people about processes and their actions, it is very useful to describe and illustrate when a process was being followed, maybe even identifying assumptions that were not being stated and how they might confuse or muddy communication about what was really happening. It might be useful to highlight the extra work needed to turn the activity into good testing.

In my worldview of testing there are no checks or checking in isolation; there is either an intention or idea beforehand about the usefulness of such a check, or an analysis afterwards about the usefulness of the check or some other assessment of the information it yields. Often, in cases of good checking (implying good testing), both the beforehand intention/idea/hypothesis and the resulting assessment will happen. That is my ambition where checks are used.

In the cases where checks are used without pre-determined intention or resultant analysis - and this does happen - my ambition is to raise the bar. In my worldview this is not testing. I want to raise the bar partly by awareness - noticing (and getting others to notice) when this happens - and partly by contrasting it with how it might look if we connect some information value to those checks, and in so doing understand the real value (and maybe cost) of good testing. Note, there is no automatic rise in cost from adding information value to checks to get testing, but rather a reduction in the overheads of "fake testing" (an activity which was maybe thought to be testing but wasn't).

I wish to raise the bar of checking by emphasizing the associated thinking attached to checking to identify information value (positive or negative), and so raise the bar of testing.

Thoughts, comments?

References
[1] The Tester's Headache: To test or not to test, just checking... http://testers-headache.blogspot.com/2009/08/to-test-or-not-to-test-just-checking.html
[2] The Tester's Headache: Sapient checking? http://testers-headache.blogspot.com/2009/09/sapient-checking.html
[3] The Tester's Headache: More notes on testing & checking http://testers-headache.blogspot.com/2009/09/more-notes-on-testing-checking.html
[4] Satisfice: Testing and Checking Refined http://www.satisfice.com/blog/archives/856
[5] Developsense: Testing vs. Checking http://www.developsense.com/2009/08/testing-vs-checking.html
[6] The Tester's Headache: Another Regression Test Trap http://testers-headache.blogspot.com/2013/03/another-regression-test-trap.html
[7] YorkyAbroad Slideshare: Testing Lessons From The Rolling Stones http://www.slideshare.net/YorkyAbroad/testing-lessons-from-the-rolling-stones
[8] The Tester's Headache: Mapping Information Value in Testing http://testers-headache.blogspot.com/2012/12/mapping-information-value-in-testing.html
[9] Developsense: Test Framing http://www.developsense.com/resources/TestFraming.pdf
[10] The Tester's Headache: Framing: Some Decision and Analysis Frames in Testing http://testers-headache.blogspot.com/2011/08/framing-some-decision-and-analysis.html
[11] Satisfice: Comment on Testing and Checking Refined http://www.satisfice.com/blog/archives/856/comment-page-1#comment-268572