Monday, 21 November 2011

Some SWET3 thoughts

" #softwaretesting #testing #swet3 #peer #conference "

Date: 19-20 November, 2011
Place: Ringsjöstrand, Skåne, Sweden
Weather: Fog

Attendees: Johan Jonasson, Ola Hyltén, Anders Claesson, Oscar Cosmo, Petter Mattsson, Rikard Edgren, Henrik Andersson, Robert Bergqvist, Maria Kedemo, Sigge Birgisson, Simon Morley

I spent another weekend with a group of thinking testers that made up SWET3 (Swedish Workshop on Exploratory Testing 3). An intense and well-organised weekend - many thanks to Henke & Robban for all the work!

SWET runs according to the LAWST model - although we didn't have any 'rat hole' cards, and white was the new red - confusingly enough, some wrote 'red' on their white cards. There's a lesson there somewhere.

Rikard's Talk: Teaching Testing Basics with Open Source Applications
First up was Rikard with a talk about some of his experiences with teaching testing to students in Karlstad - an interesting talk and a wide-ranging discussion followed. My notes showed I had the following questions/comments:

  • Which basic techniques do the students learn the easiest?
  • Any reflections on which learning methods work better or not so well?
  • What is the energy in the students like? Is there any relationship to their enthusiasm?
  • When you work with bug isolation, is there any assessment of the students' capabilities before the exercise starts?
  • What patterns do you see in the students and how do the patterns compare with testers out in the 'real world'?
  • "They want to be be testers" - Is that the way it is for all from the beginning or have you seen it change during the course?
  • How do you teach them the heuristic test strategy model?
  • How do they learn quality criteria / how do they learn stakeholder relations?
  • What problems does teaching with open source applications give?
  • How do you handle the balance between listening to the teacher (as a beginner) and encouraging a level of scepticism in the students?
  • How do you handle your own teaching needs/learning?
  • Any issues with terminology in the class, eg "tests vs checks"?

Johan's talk: Teaching Testing Online
Johan discussed the AST online courses - both from the perspective of a student and as an instructor. There was a lot of discussion comparing pros and cons of the course, covering a wide range of aspects: its timing, pace, the involvement of the instructors, content, issues and benefits of peer review, time zones, issues with online presence and more. Some questions, from my notes, were:

  • Have you disagreed with anything in the class material?
  • What parts of the class material would you change even if you didn't disagree with it?
  • Are there any dangers for groupthink?
  • Is there any way to jump straight to the Test Design course if you can demonstrate the right prerequisite learning?

My Talk: Mindset Changes: Changing the direction of the oil tanker

My own talk was centred on some of the issues with changing the thinking about, PR around, and discussion of testing in a large organisation. I may reflect and describe this in more detail in a later post, but some observations were:

  • The tester's view/perception and the non-tester's view/perception of testing (in large organisations) are linked and influence each other (a circle of dependency).
  • Good communication is key: without it, it doesn't matter what sort of testing you do.
  • "Many bugs" gets translated into "poor testing" too easily - we need to educate how complex good testing really is.
  • Testing (in general) needs reframing away from a predominantly confirmatory approach to a more investigative approach (which might include a confirmatory aspect).
  • Maps for coverage, just like dashboards, are fallible and heuristic in nature.
  • Test cases and fault/bug reports are heuristics - they may give some useful information under certain conditions but are fallible.
  • I described the approach to changing views as Machiavellian - meaning subtle, hiding the up-front intent. The reasoning was to avoid labels and the associated cognitive disfluency, which is present (in my experience) in large organisations.

In a discussion about mapping coverage as a form of visual reporting/awareness, I reinforced the point that maps can be wrong, but they can still be useful.

In the following feature map, the components of the feature are drawn in black, and test progress in some of the components is indicated in red. I could imagine such a map leaving the observer with a satisfied (or even happy) feeling...

My improvised coverage map

Great discussions, great experiences and lots of food for thought - just the right ingredients for a successful peer conference!

Other SWET Reports:

Notes from SWET3 (Rikard)
SWET3 thoughts through mind maps (Sigge)

1 comment:

  1. Hi Simon

    Nice write-up, and nice image!

    Some of your questions for the first subject weren't addressed, I think.

    We don't do any assessments beforehand; the focus is on learning, not measuring it.

    I think they all learn best through practical exercises, but the absorption time differs per person and/or type of area. It's almost impossible to get the timing right for everyone.

    Easier to learn than I believed: scenario testing.
    More difficult: bug reporting.