Tuesday, 28 May 2013

Examining the Context-Driven Principles

At Let's Test 2013, James Bach gave a keynote about the Context-Driven Testing principles, covering some of their origin, usage and implications. An interesting piece of trivia that James revealed was that the basis of the principles was written down very quickly, with some refinement afterwards.

He discussed a little of the implicit meaning behind the 7 principles, but the slides were skipped through quickly, so I didn't see the details. However, by this point at least one question had occurred to me.
My question in the open season went along the lines of: "The principles were mentioned as a litmus test to gauge whether someone was context-driven. However, adopting a scientific approach (i.e. one of searching for information rather than settling on an assumption), there may well be more than 7. How do we add number 8?"
The underlying questions (for me) were:

  1. If the principles were written so (comparatively) quickly, why haven't they been challenged by testers claiming to follow or adhere to them?
  2. Indeed, isn't this a good exercise for a tester - context-driven or otherwise?

Therefore, I thought I would take a look at them - as a (comparatively) quick exercise. James did mention that Markus Gärtner had previously extended the principles as an exercise. I remembered seeing a discussion on the software-testing Yahoo! group some time ago and checked it out - sure enough, I'd discussed possible extensions there, and I thought it was a useful topic to revisit.

The 7 published principles, ref [1], are:
1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
3. People, working together, are the most important part of any project’s context.
4. Projects unfold over time in ways that are often not predictable.
5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
6. Good software testing is a challenging intellectual process.
7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
My Approach
In assessing the principles I used two approaches: (1) critical/logical consistency - do the propositions support their conclusions or implications?; (2) testability - are the principles falsifiable?

Assessment
#1 [Unchanged] I tried to extend this one, but I think it is rhetorically good - succinct and a good start to the principles. It cannot be tested directly, but is inferred by induction.

#2 [Remove] I considered removing or replacing this one, as it is a derivation of #1. The essential content here is that "practices evolve over time" - I thought of making this my #2, partly to remove the distraction of "best practices" (which would itself be a derivation of these principles), and partly to emphasize the evolution of practices. However, I finally settled on having no #2.

#3 [Rephrase] The principle is an assertion. For this one I want to emphasize that the project, the processes and the people form a system, and that they change over time. The point is to illustrate that the system is not static; it is complex because of people (and their emotions), and as such, working with it is not a trivial activity. Therefore I would re-phrase.

#4 [Remove] This is a derivation/implication of #3 - an application of the principles rather than a principle in its own right.

#5 [Unchanged] Again, succinct and difficult to refute.

#6 [Unchanged] An assertion that where good software testing happens there is a high intellectual demand.

#7 [Remove] This is unnecessarily wordy and sticks out as too different in style from the previous principles. It also drifts into certainty and seems untestable, so I would remove it.

Addition
#8 Here I want to tie together the system mentioned in the rephrased #3 and good software testing.

Result
1. The value of any practice depends on its context.
2. -
3. Projects and people form part of a system that works together, in which products and practices evolve over time.
4. -
5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
6. Good software testing is a challenging intellectual process.
7. -
8. Good software testing encapsulates the system of project, people and practices that work on building the product.
So, I ended up with 5 principles. Applying them would then produce variants and clarifications. For instance, the "no best practices" item is derived from #1, and an understanding of systems of people, emotions and time is derived from these - with many, many more to follow.

I stopped there; if I spent more time I could probably refine them further. But they are good enough for this exercise. And as James suggested - if you started from scratch, you might well not end up with the original 7 principles.

So, how would your interpretation look?


Reference
[1] http://context-driven-testing.com/

Friday, 10 May 2013

Peer Conferences -> Conferring -> Learning

This week I held my first internal (in-house/in-shop) peer conference. This was a new concept to many at my shop, and I'd been promoting it via short promo talks, e-mail and some gentle prodding.

Concept
For those who have never participated in one, a peer conference is a cross between a roundtable discussion, an in-depth interrogation and an experience report. After a short presentation, statement or question, an open session of moderated discussion explores the topic. In my experience, the open session is the real examination and distillation of the ideas and experiences, and it's there that many learnings and insights come forward.

The concept runs in different formats, including non-time-boxed, ref [5], and time-boxed variants, refs [6] & [7].

Concept in academia
I recently discovered some academic support for this type of learning, ref [1], which looks at learning and communication through posing and answering questions. This can involve the talk-aloud protocol (a retrospective description and discussion of the experience) - one way to help the group get at the real cognitive processes behind the experience.

If the questioning is done in real time (whilst someone is making a decision or performing a task), then this may be termed the think-aloud protocol. In an open season, when the questions turn to "how did you choose that?" or "why did you make that decision?", there is a chance that we relive the think-aloud concept. These questions are powerful because they can show up unintended misunderstandings - in either the reporter or the audience - and so enhance learning.

The talk- and think-aloud protocols, ref [3], help everyone discover the implicit ideas behind the experience and build a mental model of its key turning points. This is important both for understanding the experience and for applying it later.

This is a way of making hidden knowledge explicit. Ericsson and Simon, ref [1], refer to the work of Lazarsfeld, ref [2], which discusses (1) how responses differ with the way a question is phrased, and (2) tacit assumptions.

An example of (1) is "why did you buy this book?", where the response depends on where the emphasis falls: on "buy", the response might contrast buying with borrowing the book; on "this", the answer might be about the author; on "book", it might be about opportunity cost, contrasting the purchase with a restaurant or cinema visit.

Lazarsfeld's example of Tacit Assumption is from Chesterton's "The Invisible Man", ref [4]:
"Have you ever noticed this--that people never answer what you say? They answer what you mean--or what they think you mean. Suppose one lady says to another in a country house, 'Is anybody staying with you?' the lady doesn't answer 'Yes; the butler, the three footmen, the parlourmaid, and so on,' though the parlourmaid may be in the room, or the butler behind her chair. She says 'There is nobody staying with us,' meaning nobody of the sort you mean. But suppose a doctor inquiring into an epidemic asks, 'Who is staying in the house?' then the lady will remember the butler, the parlourmaid, and the rest. All language is used like that; you never get a question answered literally, even when you get it answered truly."
Note: Lazarsfeld also remarks that different people consider quality in products differently!

So what?
The lesson from these sources is that many and varied questions get to the root of the experience and the reasoning behind it. Responses will be interpreted differently by different members of the audience, so it is in each member's interest to ask and clarify from their own perspective.

When you look at it this way, the experience report and discussion become much richer and fuller than a written report could hope to be. The learning becomes tangible and personal.

A key experience of peer conferences, for me, is that skills (or key decisions) can be distilled from the experience report. Once isolated, those skills (or decisions) can be investigated and used in practice - once something is recognized, it becomes observable and usable in future experiments (practice).

So how did it go?
We had a small group (7 of us), which was very good as the concept was new to everyone except me, and we ran according to the LEWT format. The theme was testing skills, and we dedicated an afternoon to it.

In the discussion part of my experience report (about using combinatorial analysis for initial test idea reduction) we were able to diverge and find "new" uses for combinatorial analysis. We discussed using it to analyze legacy test suites - a white spot analysis. Although this was something we already did, we hadn't recognized it as such; that recognition means we can (potentially) do it differently, tweaking input parameters to produce different levels of analysis - see the sketch below.
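
To make the white spot idea concrete, here is a minimal Python sketch. It is an illustration under assumptions, not the analysis we actually ran: the parameter model, values and legacy suite are invented, and real combinatorial tools (e.g. PICT or AllPairs) do far more. The sketch computes the value pairs that 2-way (pairwise) coverage requires, checks which pairs an existing suite already exercises, and reports the uncovered ones - the white spots. Changing the strength parameter (the t in t-way) is one example of tweaking inputs to get a different level of analysis.

    # Minimal pairwise white spot analysis - hypothetical data throughout.
    from itertools import combinations, product

    # Hypothetical parameter model: each parameter and its possible values.
    PARAMETERS = {
        "browser": ["Chrome", "Firefox", "Safari"],
        "os": ["Windows", "Linux"],
        "locale": ["en", "sv"],
    }

    # A hypothetical legacy suite: each test picks one value per parameter.
    LEGACY_SUITE = [
        {"browser": "Chrome", "os": "Windows", "locale": "en"},
        {"browser": "Firefox", "os": "Linux", "locale": "sv"},
        {"browser": "Safari", "os": "Windows", "locale": "sv"},
    ]

    def required_tuples(params, strength=2):
        """All value combinations that t-way coverage must exercise."""
        needed = set()
        for names in combinations(sorted(params), strength):
            for values in product(*(params[n] for n in names)):
                needed.add(tuple(zip(names, values)))
        return needed

    def covered_tuples(test_case, strength=2):
        """Value combinations exercised by one test case."""
        return set(combinations(sorted(test_case.items()), strength))

    needed = required_tuples(PARAMETERS)
    covered = set().union(*(covered_tuples(tc) for tc in LEGACY_SUITE))
    white_spots = needed - covered
    print("%d/%d pairs covered" % (len(needed - white_spots), len(needed)))
    for spot in sorted(white_spots):
        print("white spot:", spot)

Run against this invented suite, it reports 9/16 pairs covered and lists the 7 missing combinations - the kind of output that shows which parts of a legacy suite's input space have never been exercised.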

I was very pleased to be able to demonstrate this learning/insight "live", as it reinforced the power of the peer conference concept: we discovered it "live".

We finished the afternoon happy and energized by the format, committing to be ambassadors for it to others. Job done!

Next steps
I started this internal conference as a monthly series of short talks and deeper discussion. In June there will be another session, and I'll also try a variant joining sessions in two countries over an HD video link - basically, I'm spreading it around the world!

Upcoming
In just over a week there will be a peer conference prior to Let's Test, followed by the eagerly-anticipated Let's Test conference itself. Those will be experience- and learning-rich environments. Pretty cool!

Reference
[1] Ericsson, K.A., and H.A. Simon (1993). Protocol Analysis: Verbal Reports as Data, revised edition. Cambridge, MA: MIT Press.
[2] Lazarsfeld, P.F. (1935). The Art of Asking Why in Marketing Research. http://www.jstor.org/discover/10.2307/4291274
[3] Wikipedia: Think-aloud protocol. http://en.wikipedia.org/wiki/Think_aloud_protocol
[4] Chesterton, G.K. The Invisible Man. http://www.readbookonline.net/readOnLine/15496/

Peer Conference References

[5] SWET: Swedish Workshop on Exploratory Testing
http://thetesteye.com/blog/2010/10/swet1-fragments/
http://testers-headache.blogspot.se/2011/04/swet2-serious-testing-talk-by-serious.html
http://thetesteye.com/blog/2011/11/notes-from-swet3/
http://blog.johanjonasson.com/?p=547
http://testers-headache.blogspot.com/2013/04/some-notes-from-swet5-small-and-intense.html
http://www.satisfice.com/blog/archives/527

[6] DEWT: Dutch Exploratory Workshop on Testing
http://dewt.wordpress.com/

[7] LEWT: London Exploratory Workshop on Testing
http://www.workroom-productions.com/LEWT.html