SWET - Swedish Workshop on Exploratory Testing
This was the second installment of a peer conference on exploratory testing held by Swedish testers, organised by Rikard, Henrik E and Martin. Great job!
The main focus this time was the analysis, planning and status communication aspects of exploratory testing. I thought this was a great idea - an area that sometimes doesn't get enough discussion time, and one that is ripe for it.
Johan, Tobbe, Fredrik & Saam did a fine job with their presentations - I won't go into the details as they've been described in the references below.
Johan had the marathon session - about 15-20 minutes of presentation followed by nearly 4 hours of discussion. It's that discussion that really digs into the experience report, picks up on its different aspects and looks at them closely.
In this session there was an exploration of new testers versus experienced testers in exploratory testing. This got a lot of attention, and I think many drew or highlighted interesting lessons here. Some notes I made (a mixture of my own and others' thoughts and observations):
- Attitude (from project or team lead) towards testers/teams is decisive.
- Attitude of testers/teams is decisive (in good testing).
- Importance of the team lead that "protects" testers from the project and lets them test.
- But how to handle project leaders where that shield doesn't exist?
- Comfort zone warning signs in testers: Ways-of-working, templates, same-same attitude and identity.
- There's a danger that testers use test reporting as an advert for testing, i.e. a risk that we're missing the point of testing. (This was a good point from Martin!)
- Dialogue between tester/team and the project lead - no surprises.
- Test planning: Trust between tester/team and project leader is important (if not vital).
- Actively build up this trust.
- It's the planning that is important and not the plan!
- Domain expert vs Right attitude?
Tobbe was next up with an experience report on the role of a tester that goes outside the traditional and incorporates aspects of project management. Some observations:
- Is there a danger in the combined tester and project leader role? A risk of a double-personality complex (amongst others)?
- How was the conflict between lots of testing issues and reporting to a stakeholder on the project handled?
- Did the double-role affect the content and detail of the reporting?
Fredrik was third with a report on using some aspects of TMap as a starting point for their work. The story wasn't really about TMap but more how they'd used it as a starting point. Some of my notes/questions:
- Test analysis & planning has a small timebox - is this optimal only after a few iterations, or does it require some other basis or pre-requisite knowledge before application?
- How do you examine and question your working methods and approach for the purpose of improvement?
- Having a "high quality expectation" - what does this mean and imply? (what does it leave out?)
Saam gave the fourth and final presentation. Unfortunately, we only managed just over an hour of discussion afterwards. Some of my notes and observations:
- Working with distributed sites - there are different cultural aspects here. How did they impact the reporting?
- The idea of "1 requirement per test" - how is that counteracted?
- "Confidence levels" in reporting is a good starting point - but is there a need to normalize or baseline them for stakeholders? Education for stakeholders?
- How are faults in the coverage map handled?
- What was the stakeholder reaction to low coverage on parts of the map?
- Map: confidence vs security vs risk vs tool for communication (and not communicated).
- Map: Needs to be dynamic.
- Map: How are assumptions and limitations displayed? Do they need to be?
- Map: Does the map change with different assumptions and limitations?
- Use the map to solve the team problems and not as a method of communication.
I practised thinking about how I would behave and react in the situations presented in the experience reports.
The observations and questions above are really some of the questions I might pose in similar situations - so I'm learning from the presented experiences and testing my ideas out. And that's good practice - valuable.
It was fantastic being in a group of intelligent testers - some from SWET1 and some new acquaintances. All brought different perspectives on issues that many of us face on a daily basis. It was great to compare notes and discuss the topics of the day. Talking of which...
As usual there were plenty of late-night discussions. The topic of best practices arose. As I remember, the participants in the discussion were Rikard, Ola, Henrik A and myself (maybe I missed someone?).
The discussion was on the phrase "best practice" - notice it wasn't the topic of "best practices" but rather the phrase.
Rikard said he thought it was curious how those words can induce such strong reactions. There is a state in which someone can regard an idea or approach as the best they have encountered - "best" [to them, so far].
Henrik talked about how problematic it is to use those two words together.
My view was that the phrase is incomplete - labelling something a "best practice" is really a way of avoiding dialogue. The phrase carries no time frame or other contextual qualifier, so using it is lazy and imprecise.
This was a great initiative! Five minutes of open floor each - talk for five minutes, or less if you want to allow questions. All those not presenting got up and gave an energetic five minutes.
Passionate is a word that sometimes gets overused when describing testers ("... I have a proven track record ... yada yada ... oh and I'm a passionate tester also...") - but in this group, passion fits. Engaged discussion, relentless searching and, ultimately, improving one's thinking and reasoning. All in their spare time. Now that's passion!
Cool! Roll on SWET3...