Monday, 21 November 2011

Some SWET3 thoughts

" #softwaretesting #testing #swet3 #peer #conference "

Date: 19-20 November, 2011
Place: Ringsjöstrand, Skåne, Sweden
Weather: Fog

Attendees: Johan Jonasson, Ola Hyltén, Anders Claesson, Oscar Cosmo, Petter Mattsson, Rikard Edgren, Henrik Andersson, Robert Bergqvist, Maria Kedemo, Sigge Birgisson, Simon Morley

I spent another weekend with a group of thinking testers that made up SWET3 (Swedish Workshop on Exploratory Testing 3). An intense and well-organised weekend - many thanks to Henke & Robban for all the work!

SWET runs according to the LAWST model - although we didn't have any 'rat hole' cards, and white was the new red - confusing enough that some wrote 'red' on their white cards - there's a lesson there somewhere.

Rikard's Talk: Teaching Testing Basics with Open Source Applications
First up was Rikard with a talk about some of his experiences with teaching testing to students in Karlstad - an interesting talk and a wide-ranging discussion followed. My notes showed I had the following questions/comments:

  • Which basic techniques do the students learn the easiest?
  • Any reflections on the learning methods that work better or not so well?
  • What is the energy in the students like? Is there any relationship to their enthusiasm?
  • When you work with bug isolation was there any assessment of the student's capabilities before the exercise started?
  • What patterns do you see in the students and how do the patterns compare with testers out in the 'real world'?
  • "They want to be be testers" - Is that the way it is for all from the beginning or have you seen it change during the course?
  • How do you teach them the heuristic test strategy model?
  • How do they learn quality criteria / how do they learn stakeholder relations?
  • What problems does teaching with open source applications give?
  • How do you handle the balance between listening to the teacher (as a beginner) and encouraging a level of scepticism in the students?
  • How do you handle your own teaching needs/learning?
  • Any issues with terminology in the class, eg "tests vs checks"?

Johan's talk: Teaching Testing Online
Johan discussed the AST online courses - both from the perspective of a student and as an instructor. There was a lot of discussion comparing pros and cons of the course, covering a wide range of aspects from its timing, pace, the involvement of the instructors, content, issues and benefits with peer review, time zones, issues with online presence and more. Some questions, from my notes, were:

  • Have you disagreed with anything in the class material?
  • What parts of the class material would you change even if you didn't disagree with it?
  • Are there any dangers for groupthink?
  • Is there no way to jump straight to the Test Design course if you can demonstrate the right pre-requisite learnings?

My Talk: Mindset Changes: Changing the direction of the oil tanker

My own talk was centred on some of the issues with changing thinking, PR and discussion of and about testing in a large organisation. I may reflect and describe this in more detail in a later post, but some observations were:

  • The tester view/perception and the non-tester view/perception of testing (in large organisations) are linked and influence each other (a circle of dependency).
  • Good communication is key: without it, it doesn't matter what sort of testing you do.
  • "Many bugs" gets translated into "poor testing" too easily - we need to educate how complex good testing really is.
  • Testing (in general) needs reframing away from a predominantly confirmatory approach to a more investigative approach (which might include a confirmatory aspect).
  • Maps for coverage, just like dashboards, are fallible and heuristic in nature.
  • Test cases and fault/bug reports are heuristics - they may give some useful information under certain conditions but are fallible.
  • I called the approach to changing views a Machiavellian approach - meaning subtle and hiding the up-front intent. The reasoning is to avoid labels and the associated cognitive disfluency - which is present (in my experience) in large organisations.

In a discussion about mapping coverage as a form of visual reporting/awareness - I reinforced the point that maps can be wrong, but they can be useful. 

In the following feature map the components of the feature are drawn out in black, and some test progress in some of the components is indicated in red (a rough sketch of the idea follows below). I could imagine such a map could leave the observer with a satisfied (or even happy) feeling...

My improvised coverage map
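As a rough illustration of the idea - a sketch only, with invented component names and progress notes (not the ones from my actual map) - such a view can be generated from very little data:

    # Hypothetical sketch of an improvised feature/coverage map:
    # components "in black", with rough test progress noted "in red"
    # against some of them. All names and progress values are invented.

    feature_map = {
        "Login":       {"subareas": ["UI", "Session handling", "Error paths"], "progress": "touched"},
        "Export":      {"subareas": ["CSV", "PDF"], "progress": None},
        "Permissions": {"subareas": ["Roles", "Inheritance"], "progress": "looked at in depth"},
    }

    def render(fmap):
        """Print the component tree, marking where some testing has happened."""
        for component, info in fmap.items():
            marker = f"   <-- {info['progress']}" if info["progress"] else ""
            print(f"[{component}]{marker}")
            for sub in info["subareas"]:
                print(f"    - {sub}")

    render(feature_map)

Note that a rendering like this says nothing about how good the testing in each area was - which is exactly why such a map can leave an observer more satisfied than is warranted.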

Great discussions, great experiences and lots of food for thought - just the right ingredients for a successful peer conference!


Other SWET Reports:

Notes from SWET3 (Rikard)
SWET3 thoughts through mind maps (Sigge)


Saturday, 12 November 2011

Let's Test!

" #softwaretesting #testing #LetsTest "

Let's Test 2012, 7-9 May, Runö Conference Centre, north of Stockholm


Site: http://lets-test.com/


Five guys got together and formed a committee to organise a test conference. Not just any five guys, and not just any test conference!

Henrik Andersson, Henrik Emilsson, Johan Jonasson, Ola Hyltén and Torbjörn Ryber are the organisers. I know them from SWET1, SWET2 & SWET3 - they don't just know testing, they talk a lot about the things you see in "good testing".

They got together and it looks like they've put together a special conference. Three days of tutorials, keynotes, talks, test lab and more. The concept of having the conference and accommodation all in one place - a peninsula in an archipelago just north of Stockholm (of all cool places!) - makes it into an extended, and pretty special, peer conference.

Peer conference: with the emphasis on meeting peers and having contact with them for the whole three days. Conference with the emphasis on conferring! Cool concept, cool location, cool set-up -> cool conference.

Oh, what else is special about this? It's being called the first European context-driven test conference. Some people get scared or worried by the idea of "context-driven testing", but come on: isn't that just a declaration of doing appropriate testing - with your "eyes wide open", brain in gear, eliciting valuable information about the product? Yes, it's difficult to do well, you have to work at it and continue to learn.

So, if you like learning, are partial to good discussion and want to meet like-minded people then I think this is a very interesting conference to attend.

A test conference, by testers for testers - pretty special!

Oh, and there's still time for submissions (call for papers closes 13 December).


Thursday, 3 November 2011

Best Practices - Smoke and Mirrors or Cognitive Fluency?


" #softwaretesting #testing #cognition "

I saw a post on the STC, here, about best-in-context practices. I started thinking about 'best in show' (à la Crufts) and best practices in general.

New ideas, terms or phrases usually trigger the same question: 'what problem is this trying to solve?'

BP in General
Best practices. Why tout them? It's a sales gimmick - 'here's my way of doing it and I think everyone should do it. And I can show you why it will solve your problems too'. 

It's a sales pitch - and sometimes the best salesman wins. Or, it's a marketing gimmick - if you create enough exposure to the 'best practice' then it gets assumed to be 'best' because there is so much discussion about it.

Systems Theory
Applying 'systems theory' - creating a best practice is like creating a local optimization - there's a danger it will sub-optimize for the whole system, ref [5]. Best practices are inherently localized.

Think of the Spanish Inquisition (the real one and the Monty Python version) or '1984' and the thought police, ref [7]. These organisations would probably have been thought quite effective by the ones that created them - that's the paradox of local optimizations: there's always at least one angle from which they look very good (or even 'best').

Good for the whole though? Here, of course, a lot depends on your idea of good and the frame in which you think of the problem - and the volume of your voice or marketing.

Unfair comparisons? Problem with framing?
Thinking in frames and framing the problem and choice! A problem with 'best practices' could be that:
  • The problem has not been thought through to achieve a consensus on what practice to adopt - then someone makes a choice
  • The problem and its nature is not understood (ie in the next project the problem is different - this is hard work… - let's stick with what we have)
  • A practice is used and gets labelled 'best' afterwards (maybe not by the original practice decision maker)

In the third case, its use implies it's a 'best practice' even if it's not. Think from a typical/average manager's perspective - why would they use anything that wasn't best? So if it's in use now and isn't causing huge pain (or seems good enough) then they can be tempted to call it a best practice.

A Snapshot in Time and Good Enough
Personally, I think of best practices as ephemeral - a dragonfly that lives for a short time, and then it's gone - ie 'best' is like taking a still picture with a camera: there (at that timestamp) it's best, but it might not apply anymore... That snapshot might not be 'best' if I started comparing it with other snapshots...

So, why search for 'best'? Why not search for 'good enough'? Or 'good enough for the problem in front of me'? To me, that would imply active thinking... Achieving consensus about which practice to use might be simpler than you think - but working on your assumptions is needed. For example:
  • What problem is trying to be addressed with a best practice?
  • Is it a project manager/company that wants to create a 'standard' that they don't want to maintain?
  • Is this a money-saving approach? Again maintenance.
  • Is it telling the people not to think?

Maybe by paying people not to think (lower paid), a practice needs to be adopted that is low-maintenance. This seems to be a dangerous game...

Perhaps companies tolerate them as long as the black swan event can't be traced to their choice of practice (or the reification of it being 'the best'). Maybe they genuinely don't realise any better. Maybe the difference between practices is judged too small to warrant constant evaluation or judgement. (I suspect this is a default - coupled with cognitive fluency, see below.)

But if this was a default option shouldn't there be less advocating/marketing/discussion of best practices? Well, progress implies (to many) re-invention or creation of new ideas, therefore 'discovering' a new best practice is quite marketable.

Smoke and Mirrors
Is the idea of a 'best' practice an illusion? Software development (including testing) is knowledge work - what's the 'best' way to think? 

Is it not a case of applying an industrial metaphor to knowledge-based work? Mmm!!

Is it a problem with language?
  • "Practice makes perfect." - implies there is a 'best'. This is also an example of cognitive ease (see below) and will be more easily remembered as being a 'good' guide.
  • "Practice does not make perfect. Only perfect practice makes perfect."

Not achieving 'the best' is probably politically incorrect in business circles - so there is pressure to say this or that is 'best'. But, remember, this is knowledge work.

Think of any award or recognition in the sciences - we don't say 'X is the best economist/physicist' - they are usually recognised for their contribution. In the same way, we should be particular about any 'practice' that we use - what is it good, and not so good, for?

If you can't think of issues with the practice then you probably haven't thought hard enough.

Is it a cognitive problem?
The sticking point is 'best' - it's a superlative. It's natural that any manager would like to advocate and advertise that they are doing the 'best' possible. It becomes even worse if they have had a 'best practice' in the past - then it is harder to move away from such a practice, as it is hard work convincing their peers why a change is necessary.

Behavioural economists and psychologists also have a related theory - it's called cognitive fluency, ref [2], or cognitive ease, ref [3] - the condition where how easy something is to think about affects our choices. It has been noted that when something is easier to think about it is more readily believed to be true (or correct, or remembered - depending on context) - ie there builds an assumption (bias) that it is true (correct, or something seen before).

This ease of thought can be obtained via familiarity (repeated messages), bolder and easier-to-read type, and even colour.

So, if anyone (manager/stakeholder) has ever had success (however they gauge that) with a best practice in the past, then they are more likely to think of best practices as a good thing - and of their own best practice in particular.

It is much easier to take an off-the-shelf practice for use in the next project rather than think it through and check whether it is good enough or not. 

The opposite of cognitive fluency - cognitive disfluency - also holds true: it acts as a warning signal. So, forcing someone to re-evaluate what a 'good enough' practice is will always be harder than taking a ready-made practice off the shelf.

Assumptions
'Best practices' don't usually display their full set of pre-requisites, conditions, exclusions and terms of use. It's like the various "terms of use" that come with many SW applications - many don't read them.

Why, then, should a project manager read and be aware of all the conditions of use for a 'best practice'? It's hard work. Managers/stakeholders usually assume (or hope) that knowledge workers are 'doing their best' under the circumstances.

And finally...
The next time someone talks about a best practice, go easy on them - they are following an evolutionary pattern ('if it's familiar it hasn't eaten me'). The onus is therefore on us to highlight the shortcomings of the particular practice (if there are any to be found) or to judge why it's fit for purpose.

That activity is important - it's like going through a pre-flight checklist - you see the potential warning signs when you can still act upon them!

Also, never present anything as 'best' for knowledge work - cognitive fluency will mean it becomes a best practice next time. Avoid superlatives (best) and indications of certainty or insurance, ref [4].

References

[3] Thinking, Fast and Slow (Kahneman, 2011, FSG)
[5] Thinking in Systems: A Primer (Meadows, 2008, Chelsea Green Publishing)
[6] Flickr: Crufts Best in Show Roll of Honour

Wednesday, 2 November 2011

Carnival of Testers #27

" #softwaretesting #testing "

October, Halloween, pranks, scary stuff, ritual and various forms of deity-worship are nothing new to software testing.

But, hopefully, in the selection below there are no ghouls, just good writing! Judge for yourselves...

Natural?

  • Phil Kirkham made a series of interviews with contributing authors of a new software testing-related book. Here was a good example, with Catherine Powell.
  • Assumptions, pre-conceptions and hidden interpretations are rife in any social interaction. They're amplified by software. Adam Knight gave an example, here, of how simple it is to overlook pre-conceptions.
  • The first GATE workshop took place at the beginning of October and got a good write-up from Markus Gärtner, here. Eusebiu Blindu's thoughts on it, here.
  • If you're in the vicinity Phil Kirkham was announcing an STC meetup in Chicago, here.
  • Jeff Nyman had some perspectives on quality assurance and what's obvious, here.
  • A comprehensive report from PNSQC by Nathalie de Vries, here.
  • Duncan Nisbet, here, gave a run-down of his last 12 months - actual against expectations. Interesting reflections.
  • A good post about the beta book program, here, from Michael Larsen.
  • Alan Richardson lists his inspiration for his model of software testing, here.
  • Retrospectives, de-briefs and reflections are important in any team or testing activity. Lisa Crispin describes how fortune cookies were used as a trigger in a team retrospective, here.

Unnatural?


  • Regression testing - done well, done badly, done at all, done with any thought? Anne-Marie Charrett explored some issues with aspects of regression testing, here.
  • "All that testing is getting in the way of quality"? A halloween prank? Eric Jacobson wrote his reflections on the STARWest talk, hereBen Kelly made a good walk-through of the points in Eric's post, here. Good comments in both!


Supernatural?

  • Halloween. Is quality dead or undead? Trish Khoo had some thoughts, here.
  • Men in black or bogey men? Mr TestSheep has some thoughts on them, here. (Well done, NZ in RWC 2011!)
  • And finally, to wrap-up October and all things ghoulish, Claire Moss wrote about the undead attributes of quality, here.

Until the next time....

Sunday, 2 October 2011

Carnival of Testers #26

" #softwaretesting #testing "

This month there were many flavours of communication...

Joe Strazzere encountered an interesting problem, here, with others re-communicating his words without crediting him. An important issue that all should be on the lookout for.

A mindmap is always an interesting communication medium - for example, James Bach's CAST2011 mindmap. Here, Adam Knight maps his Eurostar talk.

Rikard Edgren used book form to communicate his ideas and influences around test design, here. Well worth a read!

Good thoughts around conferences, gathering and communicating here from Phil Kirkham.

Communicating about the part testing plays is at the heart of Elisabeth Hendrickson's post, here.

Simon Schrijver compiled a list of free testing magazines, here.

Ben Kelly's take on framing and communication is worth reading, here.

Some interesting points that Eric Jacobson noted, here, on an internal presentation around more effective tester communication.

Testing asshole? Marlena Compton gives some hints to the communication imperatives that will be part of her talk at PNSQC.

Some interesting articles from Joris Meerts and how he relates them to software testing - some of the issues discussed are (at the root) about how we relate to (and communicate about) software testing.

Michael Bolton wrote about representing testing problems as testing results as a way to communicate about the product.

Until the next time...

Sunday, 25 September 2011

Our survey said...

" #interpretation #fun #context #cognition "

Whilst compiling material for some other work I stumbled across some old Family Fortunes and Family Feud 'funny/strange' answers on YouTube*.

I've recently been doing a lot of thinking around framing, ref [5], and the problems it can cause and solve, and I started thinking about different causes for the unexpected answers.

For communication analysis I use two types of exercise, (1) frame analysis and (2) word and meaning substitution.

Frame analysis
  • What are the aspects that might be important to each person involved in the communication? This usually revolves around situational context of either the one asking the question (presenting the problem) or the one answering the question (presenting a solution). Here there is scope for a range of cognitive and interpretation mistakes.
Word and meaning substitution
  • A well-known example of this is the "Mary had a little lamb" exercise, described in "Are Your Lights On?", ref [1]. It demonstrates how changing the emphasis of a word in a sentence, or replacing a word with one of similar meaning (from a dictionary or thesaurus), can change the meaning of the sentence (a small sketch of the exercise follows below). So, if the two parties in the communication intend different word emphasis (in word placement or interpretation) then there is a possibility of confusion.
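As a tiny illustration of the emphasis part of that exercise (a sketch only, not taken from the book), printing the sentence with a different word stressed each time makes the range of possible readings visible:

    # A minimal sketch of the word-emphasis exercise: stress a different
    # word on each line and read each version aloud - the implied
    # meaning shifts with the emphasis.

    sentence = "Mary had a little lamb"
    words = sentence.split()

    for i in range(len(words)):
        stressed = [w.upper() if j == i else w for j, w in enumerate(words)]
        print(" ".join(stressed))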
So, what can appear as confusing or even amusing answers can, with the right perspective, have a certain logic. In the list** below I've made an attempt at finding the perspective behind the answer, in red.

The questions are typically prefixed with "We asked 100 people to name..."

Q: Something a husband and wife should have separate of
A: Parents
Logical answer(?) but maybe not along the intended lines of the questioner.

Q: A planet you recognize just by looking at a picture of it
A: The Moon
Confusion of definition of planet with 'celestial body' (something in space with an orbit)

Q: A month of spring
A: Summer
Slip of the ear, of->after(?), ref [4]

Q: A word that starts with the letter Q
A: Cute
Q. Name a part of the body beginning with 'N'
A. Knee
Phonetic interpretation

Q: The movie where John Travolta gave his most memorable performance
A: The John Travolta Biography
'Most memorable performance' had a potentially different meaning to the questioner. The answerer either interpreted it as the film where he featured the most, or wanted to give an amusing answer.

Q: Something you wouldn't use if it was dirty
A: Toilet paper
Amusing and logical answer(?)

Q: A signer of the Declaration of Independence
A: Thomas Edison
Slip of the tongue, specifically noun substitution, ref [4]

Q: Something that comes in twelves
A: Dozens 
Could be logical interpretation but not something the questioner was intending(?)

Q: A sophisticated city.
A: Japan
Misinterpretation (or even slip of the ear) of city for destination.

Q: A kind of bear
A: Papa Bear
Recency effect(?) - had recently been reading or exposed to children's stories (?)

Q. Name a number you have to memorise
A. 7
Misinterpretation of 'memorise' as 'favourite' or 'memorable'(?)

Q. Name something in the garden that's green
A. Shed
Context-specific to the answerer(?)

Q. Name something that flies that doesn't have an engine
A. A bicycle with wings
'Logical' and specific answer - but the questioner could have maybe clarified the question with a 'commonly known item'.
Or, recency effect - flugtag, ref [6].

Q. Name something you might be allergic to
A. Skiing
'Allergic' -> 'don't like'(?)

Q. Name a famous bridge
A. The bridge over troubled waters
Interpreted as 'something well-known with bridge in it'(?)

Q. Name something you do in the bathroom
A. Decorate
Specific to the answerer's context.

Q. Name an animal you might see at the zoo
A. A dog
Generics. Potentially the answerer has not interpreted the question as 'generally seen and residing in the zoo'.

Q. Name a kind of ache
A. Fillet 'O' Fish (?)
Brain-freeze or 'slip of the ear'(?)

Q. Name a food that can be brown or white
A. Potato
Answerer framed the question as a food which could be presented as brown or white(?)

Q. Name a famous Scotsman
A. Jock
'Slip of the ear' -> 'a common nickname'(?)

Q. Name a non-living object with legs
A. Plant
Maybe thinking of a plant on a plant stand(?)

Q. Name a domestic animal
A. Leopard
Misinterpretation of 'domestic'(?)

Q. Name a way of cooking fish
A. Cod
'way' misinterpreted as 'type'(?)

Analysis Notes
  • Context - some answers are specific to the answerer and not the questioner. Example traps might be (1) understanding and interpretation, (2) word association problems or (3) relating everything to one's own experience or circumstances.
  • Recency effects, ref [3] - the interpretation associated with a word was used in a different context, giving a skewed answer. In testing this occasionally results in skewed emphasis of the risk determination - see tester framing problems in ref [5].
  • Skipping and changing words in sentences - to actually hear a different question - sometimes grouped under 'slips of the ear'. In testing this might result in an incorrect solution application, similar to framing problems but can also be 'straightforward' slips that result in some faulty analysis - missing some key input parameter for example.
  • Other framing effects can be caused by the previous question, previous answer or even some realisation that a previous answer was wrong/silly and so inducing more stress in the answerer.
  • Stress can mean that sometimes when you're trying to react you don't actually listen to the whole message or question. This can be time pressure or other stresses. Be aware of this potential problem.
  • Anchoring effects, ref [2] - focusing on a word and giving an association with that word (rather than focusing on the whole question). In testing this typically results in confirmatory testing.
  • Generic statements can create confusion. These are generic statements given as answers - where the answerer confuses giving a specific example with categorizing the answer into a grouping. Opposite of the answerer-specific problem. More on this in another post...
  • Don't rule out brain-freezes either - these can be multi-word substitution or paragrammatism, ref [4], which result in nonsense responses.
And finally...

This is a good exercise and quite instructive for those working in software testing - it's a good illustration of how what might be seen as an obvious or simple answer can actually diverge from the expectations of the stakeholder or even customer.

Be alert not just for confusing messages but also for the potential for confusing answers. In this way you might know when to re-affirm your interpretation back to the stakeholder or customer.


References

* If you want to see the clips you can search youtube for "family fortunes answers" or "family feud answers" or "game show stupid answers".
** Lists compiled from
http://www.funny-haha.co.uk/Joke.asp?J=283
http://www.businessballs.com/familyfortunesanswers.htm
http://www.stupidgsa.com/american/family-feud/

[1] Are Your Lights On?: How to Figure Out What the Problem Really Is (Gause and Weinberg, Dorset House, 1990)
[2] Anchoring effects: http://en.wikipedia.org/wiki/Anchoring
[3] Recency effects: http://en.wikipedia.org/wiki/Serial_position_effect
[4] Um...: Slips, Stumbles, and Verbal Blunders, and What They Mean (Erard, Pantheon, 2007)
[6] http://en.wikipedia.org/wiki/Red_Bull_Flugtag

Saturday, 17 September 2011

Book Frames and Threads Updated

" #softwaretesting #testing "

Earlier in the year I tracked some of the influences on my software testing learning, here. I say some, as I exclude blogs and online articles - that is an update for a later time.

The map is intended to track the influences - whether it is something I have in my active reference library or something that sits in the anti-library, reminding me that however much I absorb and use there is always a great deal more out there that I haven't looked at that could be useful. It's a very physical way of reminding me that I don't know everything - and that's good!

In the map I try to track
  • whether the publication was discovered during my own research or via the test community (either recommendation or discussion), 
  • if the book/article was directly influenced/referenced by another publication (linked by a blue line)
  • if it's ongoing whether it is active or not
  • changes from the last version (in pick/read) - whether it has moved (with a line to track) or is a new addition

This is a useful reference for me to
  • track some elements of my learning, influences and research
  • track what has been current (the changes under the latest period and what that might mean for me)
  • remind me that I don't track all my influences (blogs, some other online material and publications)
  • remind me that there is still plenty to research
  • show that items are added and removed from the anti-library

Future additions that I'm thinking about:
  • include some dates (addition and/or completion)
  • include more community influences
  • include more notes around the publication (what I learnt or felt about it - you can learn without agreeing with the publication)
  • include a frequency of re-use (how often I re-use it for reference) - a type of star rating for use/re-use

The map is below - I'm more than happy to receive comments on recommendations for publications and sources and even additional information that I might find useful to include.

If it encourages you to do something similar I'd love to hear about it.


Enjoy!

Sunday, 4 September 2011

Testing: Do you train like an athlete?

" #softwaretesting #testing "

I just read this analysis piece on the BBC site about the pressures involved in sprinting, especially the start and run-up to the start.

As I read through it I found myself mentally ticking off links to the testing world:
  • There is no perfect start
  • Appearance and presentation is part of the message
  • Pressure kills concentration
  • Reacting at the right time
  • Distractions affecting focus
No perfect start?
The interview contrasts two sprinters with different techniques, how their physical make-up presents different problems at the start, and how they handle that. In the software development world this translates to: there is no best practice. Every problem and solution is unique - what works for one athlete (or product) does not necessarily work for another.

Presentation of the message
In the article the example is given of Linford Christie displaying his superior physique to other competitors before a race. The intention was partly to say: I'm prepared and ready.

In the testing world, the message and form in which we give that message is important. Successful message styles are usually truthful, not unduly biased by numbers and consistent. Part of this goes towards building your brand.

Building this trust in your team and stakeholders is vital to successful story-telling.

Pressure kills concentration
All athletes respond and cope with pressure in different ways. Some pressure is exerted by other athletes, some is created by the athlete themselves (their own expectations) and their surroundings.

In the testing world - sometimes the pressure is external - created by teams or stakeholders, sometimes with unrealistic expectations of testing and sometimes because they don't handle/distribute the pressure so well.

This reminded me of the chapter in Pragmatic Thinking and Learning, "pressure kills cognition", which gives examples of overt pressure disrupting thinking. Be on the look-out for this - it's not the easiest thing to deal with when you're on the receiving end, but if anything, don't pass on the pressure - or at least understand what effect it might have.

Reaction times
In athletics the false start is feared - especially for the sprints. The athletes are coiled like a spring and are trying to react as quickly as possible. But, they don't want to react too quickly, and not too late either - they want something that's good enough for them. The interview demonstrates the difficulty here with the game of slapsies (look here and here).

In testing, the similarity might be when to raise or highlight a problem, or call in help for the investigation. You don't want to be crying wolf for every issue, just as you don't want to be doing some lone investigation for "too long" (and potentially getting stuck). Here is where pairing, or having a colleague (tester or developer) that you can run something by, is very useful. Even at a daily stand-up meeting, mention what you're currently investigating - sometimes there is someone who says "check X" or "have you got Y set in your configuration".

There is a routine and balance to find here - something that takes practice and goes hand in hand with your message/brand.

Distractions affect focus
Athletes get distracted by the antics of other athletes, sometimes by the crowd or acoustics in the stadium. They have different ways of dealing with this - some try to get into and stay 'in the zone', some go and lie down and stare at something, whilst others continuously move around to avoid tensing up.

Distractions play a big part in any work environment also, as well as the ways we try and remedy them. It might be the problem of multi-tasking - getting many high-priority demands on your attention and not being able to decide which to devote attention to - or really how to stay focussed on the task at hand.

Sometimes, it's more efficient to simplify the problem into smaller component parts, and so have a feeling of manageability and being able to see progress quicker. Small wins keep attention and focus rather than long drawn-out slogs wading through mud.

Sometimes it's about removing distractions - don't check email for the next hour, close unnecessary browsers (with their flash animations that catch the eye - or use a flash blocker) - reduce the number of open windows to the minimum needed for the task.

Sometimes it's right to de-focus on the problem, step back and look at the wider picture. This helps you relate the problem to its situational context, re-evaluate why you're doing something, maybe even bring in a fresh pair of eyes to help. This sometimes gives new information or re-affirms the original scope; then you can re-focus on the problem (with any new insights and information).

Are you a person that, whilst talking to someone, must always answer a phone call (no matter who it's from)? Or do you treat phone calls like someone coming up to you in the corridor whilst you're already in a conversation - usually they'd wait rather than interrupt your ongoing conversation - so why should it be different with a phone call? (The exception here is if you're waiting for some urgent or important information which would lead you to interrupt the conversation.)

Awareness
There are lessons to learn all around - especially from non-software-testing disciplines. The key is to be able to recognise your potential problem areas - whether it's to do with message presentation, knowing when to react, handling distractions or being aware that pressure can have a detrimental effect on performance.

Awareness is an important first step in problem solving - whether you can solve the issue or not - understanding factors that affect your "testing performance" is key!

Friday, 2 September 2011

Carnival of Testers #25

" #softwaretesting #testing "

Wow!

August was busy - partly due to CAST2011 and Agile2011 conferences triggering the creative and writing juices...

Journey and Influence
Many testers draw influences and experience from outside of the traditional software testing field.

  • Rob Lambert pondered the Peltzman effect and how it might apply to testing, here.
  • A journey so far in exploratory testing is given, here, by Albert Gareev.
  • John Stevenson reviewed different views on the need for product knowledge (or not) when exploratory testing.
  • The Bach brothers' Legion of Test Merit premiered this month, with the first recipients presented by James Bach here.
  • Brian Osman looked at some of the areas lacking in good testing, here.
  • Two interesting perspectives, here, on success and failure from Joel Montvelisky.
  • A crime series triggered Henrik Emilsson to start thinking about testers and criminals(!), here.
  • I started thinking about the chapter in Slack when I read Joe Strazzere's post on being fungible (or not), here. Good points!
  • The what-if heuristic from Daniel Berggren, here, shows how useful a tool it can be for a tester.
  • Good testing journey in search for mushrooms from Zeger Van Hese, here.
  • Original and very interesting ideas on creating testing perspectives from Shmuel Gershon, here.
  • Parimala Shankaraiah's continued journal on public speaking is here. Good observations!
  • Oliver Erlewein writes about a new proposed testing standard on a new collaborative blog, here. Interesting new site!
  • Important points from Alan Page on test design for automation, here.

CASTing an eye...
Many testers jotted down their thoughts and observations, some being:

  • The use of Ustream allowed Claire Moss to make some notes on Paul Holland's lightning talk, here. Daniel Woodward made some interesting observations, here.
  • Day 1 summaries from Pete Walen and Michael Larsen, here and here.
  • Good rapid blogging from Markus Gärtner, here and here.
  • Post-conference reflections from Jon Bach, here, Elena Houser, here, Ajay Balamurugadas, here, Michael Hunter, here, Ben Kelly, here, Matt Heusser, here, and Pete Walen, here. Some important observations in them all.
  • Some more important points were made by Eric Jacobson, here.
  • Important observations from James Bach on the test competition, here.
  • And finally, some tips on presenting at conferences, here, from Lanette Cramer.
Until the next time...

Monday, 22 August 2011

Framing: Some Decision and Analysis Frames in Testing


" #softwaretesting #testing "

What is a Frame?
The following is from Tversky and Kahneman's description of a decision frame, ref [1]:
We use the term "decision frame" to refer to the decision-maker's conception of the acts, outcomes, and contingencies associated with a particular choice.
The frame that a decision-maker adopts is controlled partly by the formulation of the problem and partly by the norms, habits, and personal characteristics of the decision-maker.
When a decision frame is used to analyze a problem and come to a decision, they call this framing. So, I'll refer to a frame as relating to a decision (or analysis) frame.

Factors at play
As mentioned, many different factors affect how we analyze problems, including:

Temperament/Emotions
  • Anger
  • Happiness
  • Optimism
  • Pessimism
  • Tiredness
  • Fear (e.g. of failure)
Experience
  • Lessons from past situations - own experience and feedback
  • What has been learnt recently
  • Complacency due to familiarity
Strategy
  • Your own vs someone else's
  • Aggressive
  • Military campaign - lots of detailed planning
  • Reactive
The factors and the weight given to them might be different for:
  • Stakeholder view ("Upgrade needs to be reliable", "Of the new feature set only x is live in the quarter")
  • Tester view ("Which risks are most important to look at now?")
  • Developer view ("I did a fix, can you test that first?")

The stakeholder, developer, tester and any other role in the project each have a set view on priorities and aims for the project - agendas maybe - and one challenge is in trying to tie these together, or at least understand the differences and how they impact our communication. They all may have similar product goals but their interpretations of their work may differ - their influences and micro-decisions will be different, meaning that transparency in communication is important.

But, there's a catch - the way we present information can affect its interpretation - depending upon the frame that a stakeholder is adopting.

Think of a frame as a filter through which someone looks at a problem - they're taking in lots of data but only the data that gets through the filter gets attention (the rest may end up in the subconscious for later or isn't absorbed): "I have my show-stopper filter on today so I don't notice the progress the team has made…"

So, being aware of the potential different types of frames that each project member might have as well as some traps associated with frame formulation is important.

Stakeholder Frames
Might include:
  • Emphasizing minimum time for product delivery
  • Emphasizing short iteration times and delivering quickly
  • Trying to minimize cost of product development (cost of testing?)
  • Emphasizing future re-use of the development environment (tendency to worship automation?)
  • Aiming for a reduced future maintenance cost
Tester Frames
Might include:
  • Emphasizing the favourite test approach
  • Emphasizing areas of greatest risk (to?)
  • Emphasizing the last successful heuristic that found a show-stopper bug
  • Emphasizing focus on the data configuration that found the most bugs the last time
  • Emphasizing conformance to a standard over a different aspect of the product
  • Emphasizing the backlog item that seems the most interesting
  • Emphasizing widespread regression as "fear of failure / breaking legacy" affects analysis
  • Emphasizing feature richness over stability
Note, in isolation, some of these frames may be good, but they might not necessarily be good enough.

Framing Problems in Testing

Functional Blindness or Selective Perception

Russo and Schoemaker called it Functional Blindness, ref [2]. This is the tendency to frame problems from your own area of work or study. Dearborn and Simon called this Selective Perception, ref [3], noting that managers often focus their attention on areas they are familiar with - sales executives focussing on sales as a top priority and production executives focussing on production.

In testing this may translate into:
  • Testers with mainly performance test experience focussing on those areas
  • Recent customer support experience leading to a preference to operational and configuration aspects
  • A generalist spreading effort evenly in many areas
Sunk-Cost Fallacy

This is the tendency to factor previous investments into the framing of the problem, link. A good example is James Bach's Golden Elephant Syndrome, ref [4].

In testing this may translate into:
  • The latest favourite tool or framework of the execs must be used as there has been so much investment in it.
Over-Confidence

As we've seen above there can be many different ways of framing the problem. It's important to be aware of this. There is a trap that testers can think they've done everything they need - their model/s was the most adequate in this situation. 

Here the warning is against complacency - re-evaluate periodically and tell the story against that assessment. It may be that an issue you find during testing affects some of your initial assumptions - the approach might be good, but maybe it could be better. 
(It might be that you can't change course/approach even if you wanted to, but that's good information for reporting to the stakeholder - areas for further investigation.)
Whatever your model, it's one model. Is it good enough for now? What does it lack - what product blind spots does it have?

Measurements and Numbers

Decision frames and framing sometimes use a way of measuring whether the frame is good or useful - or whether alternatives are equal. There is a danger here when numbers and measurements get involved.

In business and everyday life there are occasions when figures and measurements are presented as absolutes and other times when they're presented as relative figures. They can be misleading in both cases, especially when not used consistently.

Project stakeholders are probably very familiar with looking at project cost and overrun in absolute and relative terms - depending on how they want the information to shine.

So it's very easy for testers to be drawn into the numbers game - and even play it in terms of absolute or relative figures.
  • "This week we have covered 50% of the product"
  • "Our bug reports have increased 400% compared to the previous project"
  • "The number of tests to run is about 60% of the last project"
  • "5 bug reports have been implemented in this drop"
  • "All pre-defined tests have been run"
As you can (hopefully) see this is just data - not necessarily information that can be interpreted. So, beware of number usage traps in the problem analysis and formulation - both in those given to you and in those you send out.
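As a small worked example (the figures are invented), the same underlying counts can be made to sound dramatic or modest depending on whether they are reported relatively or absolutely - and neither form, on its own, says anything about severity, coverage or risk:

    # Invented figures for illustration: the same bug data reported two ways.
    previous_project_bugs = 2
    current_project_bugs = 10

    relative_increase = (current_project_bugs / previous_project_bugs - 1) * 100
    print(f"Bug reports have increased {relative_increase:.0f}% "
          f"compared to the previous project")   # "400%" - sounds dramatic
    print(f"Bug reports went from {previous_project_bugs} "
          f"to {current_project_bugs}")          # sounds modest

    # Both statements are 'true', but interpretation still needs context:
    # what was tested, how severe the bugs are, what changed between projects.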

Another aspect of problems with numbers and decision framing can be thought of as the certainty effect, ref [6]. This can affect how we frame problems - and even how we should communicate.

Frames and Framing Need Maintenance

Analyze and periodically check that your assumptions are correct. Sometimes the emphasis of the project changes - the problem to solve changes. Is the frame still right or correct? Are the parameters of the problem still the same? Are the reference points and the ways in which to measure or judge the frame still the same? If not, it's time to re-evaluate.

Working with Frames
  • What frames do you and your project / organization start with? (Subconscious default)
  • Are there alternative frames to consider? How many were investigated?
  • Look at what each frame includes and excludes
  • What is the best frame fit for the situation / project? (Do all involved agree on the 'good enough' frame?)
References
[1] The Framing of Decisions and the Psychology of Choice (Tversky & Kahneman, Science, Vol 211, No. 4481, 1981)

[2] Decision Traps: The Ten Barriers to Brilliant Decision-Making and How to Overcome Them (Russo, Schoemaker, Fireside, 1990)

[3] Selective Perception: A Note on the Departmental Identifications of Executives (Dearborn, Simon, Sociometry Vol 21, No 2, June 1958)

[4] James Bach "Golden Elephant" Syndrome (Weinberg, Perfect Software: And Other Illusions About Testing, Dorset House, 2008, p. 101)

[5] Calculated Risks: How to Know When Numbers Deceive You (Gigerenzer, Simon and Schuster, 2002)

Sunday, 14 August 2011

Taylorism and Testing

" #softwaretesting #testing "

I've had this in draft for a while, but was triggered to "finish" it by reading Martin Jansson's recent posts on working with test technical debt and the nemesis of testers.

Taylorism
When I think about Taylorism I'm referring to the application of scientific principles to management that direct division of labour, "process over person" and generally anything that is the result of a time and motion study (however that might be dressed up).

The result of this is categorising "work" into divisible units, estimating the time for each unit and the skills required to do each unit. Once you have these, plugging them into a Gantt chart is a logical management step.

Estimating
Estimating work using some estimation guide is not the problem here. The problem is when that guide becomes the truth - it becomes some type of test-related "constant". It's used as such and, more importantly, interpreted as a constant.

Problems with constants might occur if you discuss with your stakeholder items such as:
  • Time to write a test plan
  • Time to analyse a new feature
  • Time to review a requirement
  • Time to write/develop a test case
  • Time to execute a test case
  • Time to re-test a test case
  • Time to write a test report
Traps?
Stakeholders don't usually have time to go into all the details of the testing activity, therefore it's important, as testers, not to let the items that affect any estimation be treated as constants. This means highlighting to the stakeholder that the assessment depends on the specific details of the project, organisation and problem at hand.

So, re-examining the above items might give some additional questions to help, below. (This is just a quick brain-dump and can be expanded a lot)
  • First questions:
    • How will the answers be used?
    • How much flexibility or rigidity is involved in their usage?
  • Time to write a test plan
    • Do we need to estimate this time?
    • What's the purpose of the plan?
    • Who is it for?
    • What level of detail needs to be there now?
    • What level of detail needs to be there in total?
    • Am I able to do this? What do I need to learn?
  • Time to analyse a new feature
    • Do we need to estimate this time?
    • How much do we know about this feature?
      • Can we test it in our current lab?
      • New equipment needed?
      • New test harnesses needed?
    • Am I able to do this? What do I need to learn?
  • Time to review a requirement
    • Do we need to estimate this time?
    • Are the requirements of some constant level of detail?
    • How high-level is the requirement?
    • Are the requirements an absolute or a guide of the customer's wishes?
    • How often will/can the requirements be modified?
    • What approach is the project taking - waterfall or something else?
  • Time to write/develop a test case
    • Do we need to estimate this time?
    • Do we all have the same idea about what a test case means?
    • Do we mean test ideas in preparation for both scripted and ET aspects of the testing?
    • Do we need to define everything upfront?
    • Do we have an ET element in the project?
    • Even if the project is 'scripted' can I add new tests later?
    • What new technology do we have to learn?
  • Time to execute a test case
    • Do we need to estimate this time?
    • In what circumstances will the testing be done?
      • Which tests will be done in early and later stages of development?
      • Test ideas for first mock-up? Keep or use as a base for later development?
    • What is the test environment and set-up like?
      • New aspects for this project / feature?
      • Interactions between new features?
      • Do we have a way of iterating through test environment evolution to avoid a big-bang problem?
    • What are the expectations on the test "case"?
    • Do we have support for test debugging and localisation in the product?
    • Can I work with the developers easily (support, pairing)?
    • What new ideas, terms, activities and skills do we have to learn?
  • Time to re-test a test case
    • Do we need to estimate this time?
    • See test execution questions above.
    • What has changed in the feature?
    • What assessment has been done on changes in the system?
  • Time to write a test report
    • Do we need to estimate this time?
    • What level of detail is needed?
    • Who are the receivers?
      • Are they statistics oriented? Ie, will there be problems with number counters?
    • Verbal, formal report, email, other? Combination of them all?
Not all of these questions would be directed at the stakeholder.

The answers to these questions will raise more questions and take the approach down different routes. So, constants can be dangerous.
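One loose way to resist the 'constant' trap - a sketch only, with invented field names and numbers rather than a recommended template - is to let any estimate carry a range and its assumptions, so it can't quietly be read back as a fixed truth:

    # Hypothetical sketch: an estimate that carries a range and its
    # assumptions, rather than posing as a single test-related "constant".

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Estimate:
        activity: str
        low_days: float
        high_days: float
        assumptions: List[str] = field(default_factory=list)

        def describe(self) -> str:
            notes = "; ".join(self.assumptions) or "none stated"
            return (f"{self.activity}: {self.low_days}-{self.high_days} days "
                    f"(assumptions: {notes})")

    review = Estimate(
        activity="Review a requirement",
        low_days=0.5,
        high_days=3.0,
        assumptions=["level of detail in the requirement varies",
                     "the requirement may change during the project"],
    )
    print(review.describe())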

Stakeholders and Constants
When stakeholders think in terms of constants it's very easy for them to think in Taylorist terms, to think of testing as an activity that is not intellectually challenging, and ultimately to think of testing as a problem rather than an opportunity for the project.

Some questions that might arise from Taylorism:
  • Why is testing taking so long?
  • Why did that fault not get found in testing?
  • Why can't it be fully tested?
Working against Taylorism in Stakeholders

The tester needs to think and ask questions, both about the product and about what they're doing. Passive testers contribute to misunderstanding among stakeholders - the tester is there to help improve the understanding of the product.

The relationship of a tester to a stakeholder changes as the tester adds value to project, product and organisation. So, ask yourself if and how you're adding value. It's partly about building your brand, but it's also about understanding the problems of the stakeholder.

The stakeholder frames a problem and presents that to you in a way which might be different from how the customer intended. A good starting point with some questioning is to think in terms of context-free questioning (see Michael Bolton's transcript from Gause and Weinberg, here).

Build your brand, add value to the organisation and product and ultimately the problem with Taylorism will recede.


References
  1. Wikipedia intro on Scientific Management 
  2. Wikipedia intro on Time and Motion Studies
  3. Building your Test Brand
  4. Transcription of Gause and Weinberg's Context-Free Questions

Sunday, 7 August 2011

Reflections on "Done": Regression & Interaction Testing




" #softwaretesting #testing "

If you haven't read it then go and read Michael Bolton's post and comments regarding the usage of "done", here.

From some of the comments, discussions and opinions elsewhere there is a notion that "done" is the complete package. This was the point of Michael's post. But with each sprint, delivery or increment the product evolves - and so any definition of done will be susceptible to problems.

It's very easy to have a mission that includes "existing features work as before" - but there is also a risk of missing something here….

The product evolves - interactions between features and within the system change. Not all of this can be anticipated in advance.

Interactions
So, however you might define "done" for your delivery there will (inevitably) be an area of unknowns that may* be investigated.

It's not as simple as saying this is covered by the exploratory testing (ET) part of the sprint. The ET may be for the new feature, but there is also an element that could* be devoted to learning about interactions between the feature and existing product features or components, and even understanding how the whole behaves (again**).

Of course, a good first step here is to separate out the ET missions for the new feature and the ET missions for interactions and existing features.***

Regression
Some of this might be covered in some "regression" and characteristics measurement and monitoring. But the problem with "regression" is that it doesn't necessarily have an element of learning about how some existing test suite (especially applicable to automated suites) works with the "new" (evolved) system.

An automated regression suite usually has a notion of "this is a fixed reference point" - this script should**** usually work. But the testing that is missing is usually the evaluation of "if and how" it works in the new system. This evaluation is commonly limited to looking at failures and how they should be fixed.

Some shops wrap this up as a "regression" element in the increment's mission (or even the definition of done) - but wrapping things up in (implying them as part of) some other activity is exactly the original problem with "done" - it doesn't necessarily reflect the focus of the problem in front of you at a given time.

Now, being able to reflect "exactly" the problem in front of us is something that can't be done "exactly" - that's why we need testing to help evaluate. So, anyone dimensioning (or estimating) an activity should be wary of this.

Ultimately, a lot of this comes down to good testing - separating the assumptions (and implications) from the problem in front of you and illuminating these assumptions and implications with the stakeholder as early as possible.

Good testing starts early!

* even "should" or "must" depending on the context
** understanding how the system behaves is an iterative task.
*** Note, this is all depending on the specific context - it's not automatic to say "do this for every iteration" - each delivery must be assessed on its needs and usage.
**** "should" is one of the most over-and-misused words within software development!


Friday, 5 August 2011

How is your Map Usage?


" #softwaretesting #testing #models "

Preamble
On a recent trip abroad we hired a car which had satnav. The party that we joined also had a range of different satnav products - different makers and no doubt different map versions. 

On one occasion we had a convoy heading to a destination some 35km away, 3 cars (A, B & C) using their respective satnavs. The cars departed for the destination in order ABC but arrived in order BAC, each having taken a different route.

First Observation
Now there were a range of different parameters that could affect the route calculation:
  • Shortest route
  • Fastest route
  • Most economical route
  • Route avoiding certain points
  • Accuracy of map
  • Plus other factors that are not to do with the map itself - wrong turnings, not driving according to the assumed parameters (eg speed), traffic accidents, etc, etc
After more usage of the satnav I realised some other points...

Second Observation
We tended to notice less en route to different places when using (relying on) the satnav. If we hadn't had it we would probably have noticed landmarks and features in more detail. But now we were in a 'different world' - following someone else's view of the world.

Third Observation
Reliance on the map guidance decreased as familiarity with the surroundings increased. We were in the area for a week, and over time we relied on the map less and less.


These observations are directly comparable to my software testing experience. The map is just a model - a certain level of detail used for a certain purpose. Change the detail or the purpose and it is a different model.

Also, the use of a map (model) can change over time. On one occasion it might be used as the main guide to get to a destination; after a period of familiarity it is used as a waypoint en route to a destination - something to help orientate along the way. I've touched on some of these re-use ideas before when walking in the woods.

Notice the parallels with use and purpose between maps and model usage in software testing.

Some maps are old, some new, some for sightseeing, some for walking & hiking. Some use maps as a history - tracking where they have been - and they can even use someone else's map and make notes about features they have seen.


Points to note with your map (model) usage:

Purpose
  • Is it a normative or informative reference? Does it describe the contours of the landscape (main claims of the product) in a lot of detail, or is it a hand-sketch of the area in much less detail (key features to look at)?
  • Is it a record of progress (as in coverage of features)?
  • Is it a partial plan (X marks the spot - let's look for treasure and other interesting things along the way)?
  • Is it a mission objective?
  • Is it a sightseeing guide (testing tour)?
  • Is it a record of danger (places to avoid or see, marshes and bogs not suitable for walking - bug clusters in the product)?

Representation
  • One thing to note - maps (models) get old. Sometimes that's ok - some features do not change (ancient monuments, town locations, product features). 
  • Sometimes it's not - you want the latest sightseeing information (tourist attraction) or danger areas (buggy areas of the product).
  • Ultimately a map (model) is a view of a landscape (or product). There might be a lot of personal views on the map (model) - what is included, to what detail, what is left out or even omitted.
  • Whether new or old, the use of it should be dynamic - that is, your use of it is the aspect that adds value to the journey (testing session/s).


Caution for Map (Model) use in Software Testing 

From the first observation, and the points above, the models (of product or problem space) should always have a known purpose - fitting the right model to the right objective - and should (ideally) be used in a dynamic way. 

From the second observation this is a caution to question the assumptions connected with the model. If you're in unfamiliar territory then you may need to rely on the model for guidance, but use it dynamically. Is the information correct, what information can I add? Don't just follow - question (even internally) as you go. Even question about how the terrain would look if the map/model was wrong - to give yourself clues about what to look for.

If you rely too much on the map - whether it's someone else's or your own - then there is a potential to lose touch with your testing objectivity/eye/mindset - something that is needed for good testing.


Still to come

From the third observation - there are aspects of time perception and the amount of information processing we make (or shortcut) and aspects of focused and stimulated attention - this is an area I'm still researching with potential implications for scripted and exploratory testing (more to come). 


So, how is your map reading and usage? How is your model usage?

Wednesday, 3 August 2011

Carnival of Testers #24


" #softwaretesting #testing "

July was a month for a wide variety of blog posts, with many tidbits….


Ambiguous Words
  • Michael Bolton looked at problems with using the word "Done", here.
  • Pete Houghton on his 2 minutes using Bing Maps and thoughts about consistency and convention.
  • Good points from Eric Jacobson about what is testable, here.


Experience
  • There has been some good blogging on the STC. Jeff Lucas gives some views on scripted automation, here.
  • Joel Montvelisky with some experiences on finding the right testing effort for the product and the organization. Also, here, with some points about thinking before measuring.
  • Still on metrics, Rob Lambert gave a good example of how using measurements as absolutes will give the wrong conclusion.
  • Markus Gärtner's thoughts on a Pomodoro approach to testing.
  • The problem of spam and moderating discussions was highlighted by Rosie Sherry, here.
  • Are you "doing agile"? Some thought-provoking points from Elisabeth Hendrickson, here, on checking how you're agile implementation looks. (This was re-published in July)
  • Good learning experiences from Pradeep Soundararajan about rapid software testing in action, here.
  • Part 3 of Martin Jansson's look at testing debt is worth checking out.
  • Catherine Powell nearly rants, but then reflects on learning, here.


Other
  • Lisa Crispin's review of the Clean Coder sounds like an interesting read.
  • CloudTest Lite got some encouraging words from Scott Barber, here, and Fred Beringer, here.


And finally….



Until the next time...