Monday, 21 November 2011

Some SWET3 thoughts

" #softwaretesting #testing #swet3 #peer #conference "

Date: 19-20 November, 2011
Place: Ringsjöstrand, Skåne, Sweden
Weather: Fog

Attendees: Johan Jonasson, Ola Hyltén, Anders Claesson, Oscar Cosmo, Petter Mattsson, Rikard Edgren, Henrik Andersson, Robert Bergqvist, Maria Kedemo, Sigge Birgisson, Simon Morley

I spent another weekend with a group of thinking testers that made up SWET3 (Swedish Workshop on Exploratory Testing 3). An intense and well-organised weekend - many thanks to Henke & Robban for all the work!

SWET runs according to the LAWST model - although we didn't have any 'rat hole' cards, and white was the new red - confusing enough that some wrote 'red' on their white cards. There's a lesson there somewhere.

Rikard's Talk: Teaching Testing Basics with Open Source Applications
First up was Rikard with a talk about some of his experiences with teaching testing to students in Karlstad - an interesting talk and a wide-ranging discussion followed. My notes showed I had the following questions/comments:

  • Which basic techniques do the students learn the easiest?
  • Any reflections on which learning methods work better or not so well?
  • What is the energy in the students like? Is there any relationship to their enthusiasm?
  • When you worked with bug isolation, was there any assessment of the students' capabilities before the exercise started?
  • What patterns do you see in the students and how do the patterns compare with testers out in the 'real world'?
  • "They want to be be testers" - Is that the way it is for all from the beginning or have you seen it change during the course?
  • How do you teach them the heuristic test strategy model?
  • How do they learn quality criteria / how do they learn stakeholder relations?
  • What problems does teaching with open source applications give?
  • How do you handle the balance between listening to the teacher (as a beginner) and encouraging a level of scepticism in the students?
  • How do you handle your own teaching needs/learning?
  • Any issues with terminology in the class, eg "tests vs checks"?

Johan's talk: Teaching Testing Online
Johan discussed the AST online courses - both from the perspective of a student and as an instructor. There was a lot of discussion comparing pros and cons of the course, covering a wide range of aspects from its timing, pace, the involvement of the instructors, content, issues and benefits with peer review, time zones, issues with online presence and more. Some questions, from my notes, were:

  • Have you disagreed with anything in the class material?
  • What parts of the class material would you change even if you didn't disagree with it?
  • Are there any dangers for groupthink?
  • Is there no way to jump straight to the Test Design course if you can demonstrate the right prerequisite knowledge?

My Talk: Mindset Changes: Changing the direction of the oil tanker

My own talk centred on some of the issues with changing the thinking about, PR for and discussion of testing in a large organisation. I may reflect and describe this in more detail in a later post, but some observations were:

  • The tester view/perception and the non-tester view/perception of testing (in large organisations) are linked and influence each other (a circle of dependency).
  • Good communication is key: without it, it doesn't matter what sort of testing you do.
  • "Many bugs" gets translated into "poor testing" too easily - we need to educate people about how complex good testing really is.
  • Testing (in general) needs reframing away from a predominantly confirmatory approach to a more investigative approach (which might include a confirmatory aspect).
  • Maps for coverage, just like dashboards, are fallible and heuristic in nature.
  • Test cases and fault/bug reports are heuristics - they may give some useful information under certain conditions but are fallible.
  • I called the approach to changing views a Machiavellian approach - meaning subtle, without the intent being stated up front. The reasoning was to avoid labels and the associated cognitive disfluency - which is present (in my experience) in large organisations.

In a discussion about mapping coverage as a form of visual reporting/awareness - I reinforced the point that maps can be wrong, but they can be useful. 

In the following feature map, the components of the feature are drawn in black and test progress in some of the components is indicated in red. I could imagine such a map leaving the observer with a satisfied (or even happy) feeling...

My improvised coverage map
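To make the idea concrete, here is a minimal sketch (my own illustration, not something shown at SWET3) of how such a map might be represented and naively summarised - the component names and progress notes are invented:

    # Hypothetical feature map: each component carries an optional note about
    # the testing done so far (None = drawn on the map, but no progress marked).
    feature_map = {
        "login":       {"tested": "happy path only"},   # would be coloured red
        "session":     {"tested": None},
        "audit log":   {"tested": "boundary checks"},    # would be coloured red
        "error paths": {"tested": None},
    }

    def summarise(fmap):
        """A naive progress snapshot - the kind of view that can mislead."""
        done = [name for name, comp in fmap.items() if comp["tested"]]
        print(f"{len(done)}/{len(fmap)} components show some test progress: "
              f"{', '.join(done)}")
        # Note: 'some test progress' is not 'tested' - the map is a heuristic.

    summarise(feature_map)

Seeing "2/4 components show some test progress" can feel reassuring, yet it says nothing about the depth of that testing or about components missing from the map entirely - the map is fallible, but still useful.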

Great discussions, great experiences and lots of food for thought - just the right ingredients for a successful peer conference!


Other SWET Reports:

Notes from SWET3 (Rikard)
SWET3 thoughts through mind maps (Sigge)


Saturday, 12 November 2011

Let's Test!

" #softwaretesting #testing #LetsTest "

Let's Test 2012, 7-9 May, Runö Conference Centre, north of Stockholm


Site: http://lets-test.com/


Five guys got together and formed a committee to organise a test conference. Not just any five guys, and not just any test conference!

Henrik Andersson, Henrik Emilsson, Johan Jonasson, Ola Hyltén and Torbjörn Ryber are the organisers. I know them from SWET1, SWET2 & SWET3 - they don't just know testing, they talk a lot about the things you see in "good testing".

It looks like they've put together a special conference. Three days of tutorials, keynotes, talks, test lab and more. The concept of having the conference and accommodation all in one place - a peninsula in an archipelago just north of Stockholm (of all cool places!) - makes it into an extended, and pretty special, peer conference.

Peer conference: with the emphasis on meeting peers and having contact with them for the whole three days. Conference with the emphasis on conferring! Cool concept, cool location, cool set-up -> cool conference.

Oh, what else is special about this? It's being called the first European context-driven test conference. Some people get scared or worried by the idea of "context-driven testing", but come on: isn't it just a declaration of doing appropriate testing - with your "eyes wide open", brain in gear, eliciting valuable information about the product? Yes, it's difficult to do well - you have to work at it and continue to learn.

So, if you like learning, are partial to good discussion and want to meet like-minded people then I think this is a very interesting conference to attend.

A test conference, by testers for testers - pretty special!

Oh, and there's still time for submissions - the call for papers closes 13 December.


Thursday, 3 November 2011

Best Practices - Smoke and Mirrors or Cognitive Fluency?


" #softwaretesting #testing #cognition "

I saw a post on the STC, here, about best-in-context practices. It started me thinking about 'best in show' (à la Crufts) and best practices in general.

New ideas, terms or phrases usually trigger the same question: 'what problem is this trying to solve?'

BP in General
Best practices. Why tout them? It's a sales gimmick - 'here's my way of doing it and I think everyone should do it. And I can show you why it will solve your problems too'. 

It's a sales pitch - and sometimes the best salesman wins. Or, it's a marketing gimmick - if you create enough exposure to the 'best practice' then it gets assumed to be 'best' because there is so much discussion about it.

Systems Theory
Applying 'systems theory' - creating a best practice is like creating a local optimization - there's a danger it will sub-optimize for the whole system, ref [5]. Best practices are inherently localized.

Think of the Spanish Inquisition (the real one and the Monty Python version) or '1984' and the thought police, ref [7]. These organisations would probably have been thought quite effective by the ones that created them - that's the paradox of local optimizations: there's always at least one angle from which they look very good (or even 'best').

Good for the whole though? Here, of course, a lot depends on your idea of good and the frame in which you think of the problem - and the volume of your voice or marketing.

Unfair comparisons? Problem with framing?
It's about thinking in frames - framing both the problem and the choice. A problem with 'best practices' could be that:
  • The problem has not been thought through to reach a consensus on which practice to adopt - then someone makes a choice
  • The problem and its nature are not understood (ie in the next project the problem is different - that's hard work… let's stick with what we have)
  • A practice is used and gets labelled 'best' afterwards (maybe not by the original practice decision-maker)

In the third case, its use implies it's a 'best practice' even if it's not. Think from a typical/average manager's perspective - why would they use anything that wasn't best? So if it's in use now and isn't causing huge pain (or seems good enough), they can be tempted to call it a best practice.

A Snapshot in Time and Good Enough
Personally, I think of best practices as ephemeral - like a dragonfly that lives for a short time and then is gone. 'Best' is like taking a still picture with a camera - there (at that timestamp) it's best, but it might not apply anymore... and that snapshot might not be 'best' if I started comparing it with other snapshots...

So, why search for 'best'? Why not search for 'good enough'? Or 'good enough for the problem in front of me'? To me, that would imply active thinking… Achieving consensus about which practice to use might be simpler than you think, but it requires working on your assumptions. For example:
  • What problem is the 'best practice' trying to address?
  • Is it a project manager/company that wants to create a 'standard' that they don't want to maintain?
  • Is this a money-saving approach? Again maintenance.
  • Is it telling the people not to think?

Maybe, if people are paid not to think (ie paid less), a practice needs to be adopted that is low-maintenance. This seems a dangerous game…

Perhaps companies tolerate them as long as the black swan event can't be traced to their choice of practice (or the reification of it being 'the best'). Maybe they genuinely don't realise any better. Maybe the difference between practices is judged too small to warrant constant evaluation or judgement. (I suspect this is a default - coupled with cognitive fluency, see below.)

But if this were the default, shouldn't there be less advocating/marketing/discussion of best practices? Well, progress implies (to many) re-invention or the creation of new ideas, so 'discovering' a new best practice is quite marketable.

Smoke and Mirrors
Is the idea of a 'best' practice an illusion? Software development (including testing) is knowledge work - what's the 'best' way to think? 

Is it not a case of applying an industrial metaphor to knowledge-based work? Mmm!!

Is it a problem with language?
  • "Practice makes perfect." 

This implies there is a 'best' - it's also an example of cognitive ease (see below) and will be more easily remembered as a 'good' guide.
  • "Practice does not make perfect. Only perfect practice makes perfect."

Not achieving 'the best' is probably politically incorrect in business circles - so there is pressure to say this or that is 'best'. But, remember, this is knowledge work.

Think of any award or recognition in the sciences - we don't say 'X is the best economist/physicist' - they are usually identified for their contribution. In the same way, we should be particular about any 'practice' that we use - what is it good, and not so good, for?

If you can't think of issues with the practice then you probably haven't thought hard enough.

Is it a cognitive problem?
The sticking point is 'best' - it's a superlative. It's natural that any manager would like to advocate and advertise that they are doing the 'best' possible. It becomes even worse if they have had a 'best practice' in the past - it is then harder to move away from such a practice, as it's hard work convincing their peers why that is necessary.

Behavioural economists and psychologists also have a related theory - it's called cognitive fluency, ref [2], or cognitive ease, ref [3] - the condition in which the ease of thinking about something affects our choice. It has been noted that when something is easier to think about, it is more readily believed to be true (or correct, or remembered - depending on context) - ie an assumption (bias) builds that it is true (correct, or something seen before).

This ease of thought can be obtained via familiarity (repeated messages), bolder and easier-to-read type, and even colour.

So, if anyone (manager/stakeholder) has ever had success (however they gauge that) with a best practice in the past, then they are more likely to think of best practices as a good thing - and of their particular best practice especially.

It is much easier to take an off-the-shelf practice for use in the next project than to think it through and check whether it is good enough or not.

The opposite of cognitive fluency - cognitive disfluency - also applies: it acts as a warning signal. So, forcing someone to re-evaluate what a 'good enough' practice would be is always going to be harder than taking a ready-made practice off the shelf.

Assumptions
'Best practices' don't usually display the full extent of their pre-requisites, conditions, exclusions and terms of use. It's like the various "terms of use" that come with many SW applications - many don't read them.

Why, then, should a project manager read and be aware of all the conditions of use for a 'best practice'? It's hard work. Managers/stakeholders usually assume (or hope) that knowledge workers are 'doing their best' under the circumstances.

And finally...
The next time someone talks about a best practice, go easy on them - they are following an evolutionary pattern ('if it's familiar it hasn't eaten me'). Therefore, the onus is on us to highlight the shortcomings of the particular practice (if there are any to be found) or to judge why it's fit for purpose.

That activity is important - it's like going through a pre-flight checklist - you see the potential warning signs when you can still act upon them!

Also, never present anything as 'best' for knowledge work - cognitive fluency will mean it becomes a best practice next time. Avoid superlatives (best) and indications of certainty or insurance, ref [4].

References

[3] Thinking, Fast and Slow (Kahneman, 2011, FSG)
[5] Thinking in Systems: A Primer (Meadows, 2008, Chelsea Green Publishing)
[6] Flickr: Crufts Best in Show Roll of Honour

Wednesday, 2 November 2011

Carnival of Testers #27

" #softwaretesting #testing "

October, Halloween, pranks, scary stuff, ritual and various forms of deity-worship are nothing new to software testing.

But, hopefully, in the selection below there are no ghouls, just good writing! Judge for yourselves...

Natural?

  • Phil Kirkham did a series of interviews with contributing authors of a new software testing-related book. Here was a good example, with Catherine Powell.
  • Assumptions, pre-conceptions and hidden interpretations are rife in any social interaction. They're amplified by software. Adam Knight gave an example, here, of how simple it is to overlook pre-conceptions.
  • The first GATE workshop took place at the beginning of October and got a good write-up from Markus Gärtner, here. Eusebiu Blindu's thoughts on it, here.
  • If you're in the vicinity, Phil Kirkham announced an STC meetup in Chicago, here.
  • Jeff Nyman had some perspectives on quality assurance and what's obvious, here.
  • A comprehensive report from PNSQC by Nathalie de Vries, here.
  • Duncan Nisbet, here, gave a run-down of his last 12 months - actual against expectations. Interesting reflections.
  • A good post about the beta book program, here, from Michael Larsen.
  • Alan Richardson lists his inspiration for his model of software testing, here.
  • Retrospectives, de-briefs and reflections are important in any team or testing activity. Lisa Crispin describes how fortune cookies were used as a trigger in a team retrospective, here.

Unnatural?


  • Regression testing - done well, done badly, done at all, done with any thought? Anne-Marie Charrett explored some issues with aspects of regression testing, here.
  • "All that testing is getting in the way of quality"? A halloween prank? Eric Jacobson wrote his reflections on the STARWest talk, hereBen Kelly made a good walk-through of the points in Eric's post, here. Good comments in both!


Supernatural?

  • Halloween. Is quality dead or undead? Trish Khoo had some thoughts, here.
  • Men in black or bogey men? Mr TestSheep has some thoughts on them, here. (Well done, NZ in RWC 2011!)
  • And finally, to wrap-up October and all things ghoulish, Claire Moss wrote about the undead attributes of quality, here.

Until the next time....