Friday, 29 April 2011

SWET2 - Serious Testing Talk by Serious Testers

" #softwaretesting #testing #swet2 "

Place: Hönö Hotell
Date: 9-10 April 2011
Twitter hashtag: #SWET2 (with the odd stray tweet under #SVET2)



Attendees (seating order): Torbjörn Ryber, Martin Jansson, Simon Morley, Henrik Emilsson, Sigge Birgisson, Ola Hyltén, Steve Öberg, Johan Jonasson, Rikard Edgren, Azin Bergman, Christin Wiedemann, Henrik Andersson, Robert Bergqvist, Saam Koororian, Fredrik Scheja

SWET - Swedish Workshop on Exploratory Testing
This was the second installment of a peer conference on exploratory testing held by Swedish testers, organised by Rikard, Henrik E and Martin. Great job!

The main focus this time was the analysis, planning and status-communication aspects of exploratory testing. I thought this was a great idea - an area that rarely gets enough discussion time, and one ripe for it.


Presentations
Johan, Tobbe, Fredrik & Saam did a fine job with their presentations - I won't go into the details as they've been described in the references below.

Johan had the marathon session - about 15-20 minutes of presentation followed by nearly four hours of discussion. It's that discussion that really digs into the experience report, picking up on different aspects and examining them closely.

This session explored how new testers and experienced testers differ in exploratory testing. The topic got a thorough airing and I think many drew or highlighted interesting lessons from it. Some notes I made (a mixture of my own and others' thoughts and observations):
  • Attitude (from project or team lead) towards testers/teams is decisive.
  • Attitude of testers/teams is decisive (in good testing).
  • Importance of the team lead that "protects" testers from the project and lets them test.
  • But how to handle project leaders where that shield doesn't exist?
  • Comfort zone warning signs in testers: Ways-of-working, templates, same-same attitude and identity. 
  • Can be a danger that testers can use test reporting as an advert for testing, i.e. a danger that we're missing the point of testing. (This was a good point from Martin!)
  • Dialogue between tester/team and the project lead - no surprises.
  • Test planning: Trust between tester/team and project leader is important (if not vital).
  • Actively build up this trust.
  • It's the planning that is important and not the plan!
  • Domain expert vs Right attitude?

Tobbe was next up with an experience report on the role of a tester that goes outside the traditional and incorporates aspects of project management. Some observations:
  • Is there a danger in the combined tester and project leader role? A risk of a split-personality complex (amongst others)?
  • How was the conflict between lots of testing issues and reporting to a stakeholder on the project handled?
  • Did the double-role affect the content and detail of the reporting?

Fredrik was third with a report on using some aspects of TMap as a starting point for their work. The story wasn't really about TMap but more how they'd used it as a starting point. Some of my notes/questions:
  • Test analysis & planning has a small timebox - is this optimal after a few iterations, or does it require some other basis or pre-requisite knowledge before application?
  • How do you examine and question your working methods and approach for the purpose of improvement?
  • Having a "high quality expectation" - what does this mean and imply? (what does it leave out?)

Saam gave the fourth and final presentation. Unfortunately, we only managed just over an hour of discussion afterwards. Some of my notes and observations:
  • Working with distributed sites - there are different cultural aspects here. How did they impact the reporting?
  • The idea of "1 requirement per test" - how is that counteracted?
  • "Confidence levels" in reporting is a good starting point - but is there a need to normalize or baseline them for stakeholders? Education for stakeholders?
  • How are faults in the coverage map handled?
  • What was the stakeholder reaction to low coverage on parts of the map?
  • Map: confidence vs security vs risk vs tool for communication (and not communicated).
  • Map: Needs to be dynamic.
  • Map: How are assumptions and limitations displayed? Do they need to be?
  • Map: Does the map change with different assumptions and limitations?
  • Use the map to solve the team's problems and not as a method of communication.

My Lessons
I practised thinking about how I would behave and react in the situations presented in the experience reports. 
The observations and questions above are really some of the questions I might pose in similar situations - so I was learning from the presented experience and testing my ideas out. That's good, valuable practice.

It was fantastic being in a group of intelligent testers - some from SWET1 and some new acquaintances. All had different perspectives on issues that many of us face on a daily basis. Great to compare notes and discuss the topics of the day. Talking of which...


Best Practices
As usual there were plenty of late-night discussions, and the topic of best practices arose. As I remember, the participants were Rikard, Ola, Henrik A and myself (maybe I missed someone?).

The discussion was on the phrase "best practice" - notice it wasn't the topic of "best practices" but rather the phrase.

Rikard said he thought it curious how the use of those words can induce such strong reactions. After all, there is a state in which someone can regard an idea or practice as the best they have found so far - "best" [to them, so far].

Henrik talked about how problematic it is to use those two words together.

My view was that it's incomplete - giving something the label of "best practice" is really a way of avoiding dialogue. The phrase carries no time frame or any other contextual aspect - therefore using the term is lazy and imprecise.


Lightning Talks
This was a great initiative! Five minutes of open floor each - talk for the full five minutes, or less if you want to allow time for questions. All those not presenting got up and gave an energetic five minutes.


Other references:

Passionate is a word that sometimes gets overused when describing testers ("... I have a proven track record ... yada yada ... oh and I'm a passionate tester also...") - but in this group, passion fits. Engaged discussion, relentless searching and, ultimately, improving one's thinking and reasoning. All in their spare time. Now that's passion!

Cool! Roll on SWET3...

Tuesday, 26 April 2011

My Slides from Iqnite Nordic 2010

" #softwaretesting #testing "

Long overdue, but I finally got around to uploading the slides from my presentation at Iqnite Nordic 2010.

As ever, the slides only give part of the story - you miss out on the presentation ramble, posturing, double-takes and gesturing - but maybe that's a good thing. It's a little academic in places, and there are parts I've since modified when giving the talk, but there's still a good message in there.

Anyway, enjoy:


Oh, and of course feedback is welcome!

Friday, 15 April 2011

The Certainty Effect and Insurance

" #softwaretesting #testing "

At #swet2 I gave a lightning talk on an aspect of Framing (more on this in future posts) and thought I'd jot down some of the details.

On the way to the peer conference I'd read Tversky and Kahneman's "The Framing of Decisions and the Psychology of Choice" (Science v211, 1981), and was struck by their description of the certainty effect - or rather, by its potential application in testing.

The Certainty Effect
The term was coined by Tversky and Kahneman (in their 1979 Econometrica article on Prospect Theory) for a paradox observed by the French economist Maurice Allais in 1953. Expected utility theory predicts that a choice is made according to its expected value - each outcome weighted by its probability. The example from the 1979 article is:

Problem 1:
Choose between
Choice A 
A return of 2500 with a probability of .33
A return of 2400 with a probability of .66
A return of 0 with a probability of .01
Choice B
A certain (guaranteed) return of 2400

In a sample of 72 respondents, 18% chose A and 82% chose B.

Choosing the certain option B is risk-averse and, taken on its own, can still be accommodated by expected utility theory. Now in the second problem the certainty was removed:

Problem 2:
Choose between
Choice C 
A return of 2500 with a probability of .33
A return of 0 with a probability of .67
Choice D 
A return of 2400 with a probability of .34
A return of 0 with a probability of .66

In this case, of the 72 respondents, 83% chose C and 17% chose D. Note that Problem 2 is simply Problem 1 with a .66 chance of winning 2400 removed from both options, so a consistent weighting of outcomes should give the same preference in both problems. The reversal is not in line with expected utility theory and is what is labelled the certainty effect: a disproportionate weight is placed on an outcome when it is certain, a weight that is not reflected when the outcome is merely probable.
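
To see why this pattern is paradoxical, it helps to check the raw expected values. Here's a minimal sketch (my own illustration in Python, not from the paper):

    # Quick sanity check of the expected values behind Problems 1 and 2.
    def expected_value(gamble):
        # Expected value: sum of outcome * probability over all outcomes.
        return sum(outcome * p for outcome, p in gamble)

    choices = {
        "A": [(2500, 0.33), (2400, 0.66), (0, 0.01)],
        "B": [(2400, 1.00)],
        "C": [(2500, 0.33), (0, 0.67)],
        "D": [(2400, 0.34), (0, 0.66)],
    }

    for name, gamble in choices.items():
        print(f"Choice {name}: expected value = {expected_value(gamble):.0f}")

    # Choice A: expected value = 2409   <- higher than B, yet 82% chose B
    # Choice B: expected value = 2400
    # Choice C: expected value = 825    <- higher than D, and 83% chose C
    # Choice D: expected value = 816

A strict expected-value maximiser would pick A and C. The observed majority picked B and C - and no consistent weighting of the outcomes 2400 and 2500 can produce that pair of preferences.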

This is related to how the question is asked (framed).

Insurance
A further example of this type of framing affecting choice can be seen in attitudes to risk. When looking at insurance, people display different attitudes to risk reduction than to risk elimination. For example, consider:

A vaccine that is effective in half of cases.
vs
A vaccine that is fully effective in 1 of 2 cases.

There is a disproportionate preference for the second formulation, even though the overall protection offered is the same.

Applied to Testing?
By default, stakeholders want certainty. They tend to want risk elimination rather than risk reduction. They link this to cost (just like in different levels of insurance) and think that by working with cost they will get closer to certainty.

This is a problem for testers - or really how they are communicating with stakeholders.

Testers can't work in terms of certainty (not without very tight restrictions and a great many explanations and assumptions). Therefore, given the choice between talking about risk elimination and risk reduction, testers should talk in terms of risk reduction.

Additionally, the certainty effect tells us that typical decisions and choices can be skewed (disproportionately) when the risk or probability moves away from certainty (guarantee).

When crafting the message and managing expectations towards stakeholders, consider:
  • Be consistent - never talk in terms of (or give the impression that you can deliver) certainty.
  • Be aware that when something is not certain then attitudes to risk and decision choices don't always follow expected weighting of probabilities.
Certainty, insurance and talking to stakeholders - it's not always logical.

Friday, 8 April 2011

Scripts: Scripted Tester or Scripted Executor?

" #softwaretesting #testing "

I had a brief and interesting twitter exchange with Michael Bolton last week:



My Frame
First I'll give the background of where I'm coming from when I think of "scripted testing".

I started thinking about the checking vs testing distinction a while back (here, here and here). If you read those posts (rough and ready as they were) you'll see that my ideas about checks are wrapped into testing - i.e. you can't use checks without testing. Testing is the information-gathering exercise in which you use the information gathered to help determine which other information to gather next - that (to me) is one of the aspects of "good testing".
Note on Earlier Posts on Testing vs Checking
Those posts were a little rough - peer review would definitely tighten them up. It's actually interesting (for me) looking back at them, as I can see areas where I've developed and refined some of my thinking. But in essence: the check is an artifact and the test is the activity.
So it was in that frame that I then triggered the question to Michael above.

Testing?
Am I turning scripts into tests? Maybe.

As Michael pointed out (which was quite a helpful clarification!) - I'm using them as an input. As I described in those posts, I can't use a check without thinking about its use beforehand and its results afterwards - so in that sense it's very much a part of my testing.

Who Cares?
Some people - but definitely not everybody!

I've noticed a tendency to dismiss scripts or to detach them from testing. Well, as Michael also helpfully pointed out, there are aspects of scripting in most testing (whether it's a mission, a goal or a hunch), and most good testing can't be constrained by a script.

And Finally?
For me:
  • Scripted Testing = Having a script wrapped up in testing, an input into the information gathering exercise.
  • Scripted Execution = The script on its own. The activity is constrained by the script. Mmm, what can you do with that on its own? Not much (see the small sketch below).
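
To make the distinction concrete, here's a minimal sketch (my own hypothetical Python example - the add function is invented for illustration, not something from the twitter exchange). The script alone yields a pass/fail bit; it becomes testing only when someone decides why to run it and what the result should change:

    # A minimal "check": a fixed observation and a fixed decision rule.
    def add(a, b):
        return a + b  # the (hypothetical) thing being checked

    def check_addition():
        # Scripted execution: the same observation, every run.
        return add(2, 2) == 4

    if __name__ == "__main__":
        print("PASS" if check_addition() else "FAIL")
        # Scripted execution ends here: one bit of information.
        # Scripted testing starts here: why this check? What does a PASS
        # leave unexamined (floats, overflow, type coercion)? What should
        # I gather next based on this result?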
So, have you noticed when you practised scripted testing in the past?
Have you noticed when you've been constrained by the script? Did you let the script constrain you?

Friday, 1 April 2011

Carnival of Testers #20

" #softwaretesting #testing "

"Beware the ides of March."
(Shakespeare)

Lots of fine writing again this month.

Enjoy...
  • Zeger Van Hese's post about James Randi and software skeptics. How apt!
  • Markus Gärtner's post against thought control, here.
  • Yet another good book review from Michael Larsen, here.
  • Marlena Compton's post on testing, testers, congruence and Lady Gaga (not necessarily in that order!)

Entaggle
I thought about writing a post about Entaggle - how I'd discovered it and my experience with it - but I was beaten to it in different ways....
  • Brian Osman was the first to make a post alerting folks to its existence.
  • Darren McMillan made quite an in-depth (3-part) analysis of the site, covering some business-case and usability aspects.
  • Then I loved Phil Kirkham's gaming of the site. Check it out, here.
  • Some early experience via weekend testing was given, here, by Mohinder Khosla.

Value
  • Interesting post from Brad Swanson on the 7 deadly sins of agile testing, here.
  • The value of shortened feedback loops was highlighted nicely by Lisa Crispin with the aid of her donkeys.
  • A well-worked example of the value (or otherwise) of quantitative reporting was written by Del Dewar, here.
  • Testers add value to projects in many ways - sometimes by the things they find. James Bach and Michael Bolton looked into some of these findings, here and here.
  • Good writing from Aaron Hodder on systems testing.

Thoughts
  • Tony Bruce wrote about his latest meetup organisation efforts - this time a conference! Looking very good! Take a look, here.
  • I enjoyed Eusebio Blindu's post on principles for testers, here.
  • Good thoughts on change management from Luísa Baldaia, here.

STC Nottingham Meetup
Two very good write-ups, both worth a look.
  • Adam Brown wrote a round-up with some videos, here.
  • Mohinder Khosla made another round-up with a different video, here.

"What's good about March? Well for one thing it keeps February and April Apart."
(Walt Kelly)

Until the next time...