Saturday, 7 September 2013

On Being Replaceable, Role Traps and Adding Value

I said to one of my managers several years ago that one of my main goals for the team and organisation was to make myself replaceable.

He nearly fell off his chair....

As I was a leader within the organisation, if I became replaceable, what would it mean for the position I have or the value of the work I'm doing? Oh, and I say this to most managers (partly as a test :) )

It is important to distinguish how people are viewed within teams, projects and organisations. In this context there are two main views:
  1. What they bring in terms of personal skill, knowledge and capability.
  2. What they bring in terms of operational responsibility or ability.
Where teams and organisations have roles or specialized people (e.g. testers), there is a risk of attaching someone's behaviour to that of the role. Or, the other way around, the skill of a role is often thought of in terms of the person (or people) who perform it, or have performed it. Experience is implicitly labelled as an exemplar.
In the past I have sat in a coordination meeting with a team and raised an incisive point such as, "Customer X has feature Y adapted/changed in their network, so your development of feature Z should also look into interactions between features Y & Z." This means that a potential risk is reduced - or at least some investigation is started to reduce a potential risk.

Now, if I miss a similar meeting with another team, such a question might go unasked - meaning a risk is not caught, or is thought about later, creating additional work. (Note: These coordination meetings might be some form of reference/project or expert aid to the teams.)
Question: Who was the target audience for my question? 
Many people would think it was the team developing feature Z. My target audience is everyone, though - it's not just about the question, it's also why it is asked (some might call that the context) - in this case, to help all realize something they hadn't seen beforehand.

I don't ever see myself as the person who asks questions others don't - but more as someone who might help others ask better/different questions next time. If I haven't passed on some of that capability then I've failed.

Adding Value
There are two main ways I add to the team, project or organisation.

  • I point out the differences between what I am bringing as me, and what I am bringing as "performing role X". They might overlap at times, and not at others - and it's important to help others with that difference.
  • I ask "why?" a lot. Not to be a pain (even if that's how it might sometimes be seen), but to help understanding.
But, how is this "adding value"? It's highlighting behaviours that others can adopt that are not "owned" by a role.

The "Why?" Question
One of the most common questions I ask is "why?". It's a sign of wanting to learn what someone else is thinking. It's a sign of wanting to flush out, and clarify, assumptions. The "why" question is one of the cogs of dialogue and understanding.

So, if I am the only one who is looking out for dialogue and understanding then there might be a problem in the organisation. However, in my experience, someone has attached the "why" question or "the types of question that I ask" to the role I had - so it becomes "role ABC asks those type of questions".

That's where I need to remind people around me that these are not my questions, I don't get territorial about such questions, and that if I'm not around and the question pops into your head, go ahead and ask it.

Learning Organisations
Organisations, teams or projects that want to grow and learn must be very careful about roles -> sure, if someone is the designated decision maker let them make it. Until that point, there's usually plenty that can be done without the decision-maker - including asking questions.

Sometimes people (teams) need to be given permission to think for themselves - strange as that may sound.

  • It's not what skills you bring to the table, it's what you leave behind for others after you've gone.
  • It's not what attitude you bring to the party, it's the positive change in attitude you leave behind that's important. (Sometimes, that means more people are prepared to ask, "why?") 
A typical question I get about achieving replaceability is: don't you do yourself out of a job - won't you no longer be needed once you've improved others?

In a team, or organisation, that has a constant ambition of improvement that is never a problem - there is always a new problem to work on. As Weinberg said (I think), when the most important problem gets solved, problem #2 gets a promotion. Sure, it's a different problem and may take you out of your comfort zone, but ultimately, that's how you improve.

Some people treat knowledge as power and hold onto it. Unfortunately, those are the folks that can become one-trick ponies or get bypassed by progress...

And finally...
So, to me, being replaceable is positive - it means I've added value - it means I've given others a tool for thinking more clearly - it means I can carry on improving, learning and adding value.

Are you adding value? 
Are you leaving something on the table for others when you move on?

Tuesday, 27 August 2013

Joining the ISST

Some weeks ago I was asked to join the ISST as a founding member. The group was presented as a force for the spread of context-driven testing* - its mission and objectives were something I saw as a force for good. (* I've never called myself a context-driven tester - and some may think I have my own cdt principles - anyway….)
Why Join?
I didn't know who else was in the organization beyond the board - it wasn't sold to me as, "Hey, Dave Merenghi is joining, so what about you?" - so why did I join?

I knew the board, I liked the mission - and I knew that these were probably guys that could achieve that mission - that was a differentiator (for me).
No brainer...
Yes, working for these causes was a no-brainer (for me), but as in most things I do, I occasionally let my tester brain off the leash. I asked questions about the mission, key differentiators for the organization, fees (their use and what a prospective member might get for them), how members and founding members could contribute, and the organizational structure.
Good enough...
I got answers - and answers that satisfied me. But the key part for me - I could be in this organization and so help drive the direction of it. 
Got a question?
I think the board has been very open about answering questions from members and potential members. Need an answer? Try the contact page.
Free lunch...
The fee? Well, it's a non-profit organization and - as I understand it - the fee will be used to facilitate work attached to the mission and objectives. I know the board, so it's maybe easier for me to accept that than some that don't know them. I've yet to see a real free lunch though - there's always payment (or an agenda/politics/favour) somewhere. I joined and paid (technically, paid first) so I can be on the journey.
Positive, not negative...
Do you want to win the lottery - well you have to be in it to win it. Do you want to engage and work for something that can be hugely positive - I'm in - I'm not going to bash anything non-cdt, I'm going to work for good testing - rooted in value advocacy for testing skill.
Knowledge work -> original problems...
I want to be onboard for the journey - the route is not mapped out -> just like most original problems. 

The board, founding members and new members (that I've seen via twitter so far) comprise a great set of minds - I want to work with those testers in tackling some of the key problems that testers have today -> whether that is highlighting the importance of tester skill, or helping communicate the value of testing to higher execs and so relating tester value to business value (as well as vice versa) [BTW, who else is working on that today?] - or why not introduce a bit of common sense along the way….
My take on one outcome I can envisage -> reframing the testing problem, reframing the business problem and seeing how testing and testers can help the business - an exec might call that "good common sense".

In the next year, I want to commit to the cause, be part of the energy that everyone joining brings. The organization needs the energy of the members to help achieve its mission and objectives. I'm in. 
Crystal ball time...
If this grouping hasn't achieved anything in a year's time then it could be called a failure, but right now, I'm confident in the people on board - and there's room for other intelligent testers that are maybe not part of it yet - that it won't be.
Already joined?
If you're part of the journey too, I'd be interested in your take.

Tuesday, 28 May 2013

Examining the Context-Driven Principles

At Let's Test 2013 James Bach gave a keynote about the Context-Driven Testing principles, some of their origin, their usage and their implications. An interesting piece of trivia that James revealed was that the basis of the principles was written down very quickly, with some refinement afterwards.

He discussed a little about the implicit meanings behind the 7 principles, but the slides were skipped through so I didn't see the details. However, by this point at least one question occurred to me.
My question in the open season went along the lines of, "The principles were mentioned as a litmus test to gauge whether someone is context-driven. However, adopting a scientific approach (ie one of searching for information and not settling on an assumption), there may well be more than 7. How do we add number 8?"
Underlying questions (for me) were:

  1. If the principles were written so (comparatively) quickly why haven't they been challenged from testers claiming to follow or adhere to them?
  2. Indeed, isn't this a good exercise for a tester - context-driven or otherwise?

Therefore, I thought I would take a look at them - as a (comparatively) quick exercise. James did mention that Markus Gärtner had extended the principles as an exercise previously. I remember seeing a discussion on the software-testing yahoo list some time ago and checked it out - and sure enough I'd discussed the possible extensions there, and thought it was a useful topic to revisit.

The 7 published principles, ref [1], are:
1. The value of any practice depends on its context.
2. There are good practices in context, but there are no best practices.
3. People, working together, are the most important part of any project’s context.
4. Projects unfold over time in ways that are often not predictable.
5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
6. Good software testing is a challenging intellectual process.
7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
My Approach
In assessing the principles I used two approaches: (1) critical/logical consistency - do the propositions support their conclusions or implications?; (2) testability - are the principles falsifiable?

#1 [Unchanged] I tried to extend this one, but I think it is rhetorically good - succinct and a good start to the principles. It cannot be tested but is inferred by induction.

#2 [Remove] I considered removing/replacing this one. It is a derivation of #1. The essential content here is that "practices evolve over time" - I thought of making this my #2, partly to remove the distraction of "best practices" (that would be a derivation of these principles), and partly to emphasize the evolution of practices. However, I finally settled on having no #2.

#3 [Rephrase] The principle is an assertion. For this one I want to emphasize that the project, the processes and people form a system and that they change over time. The importance here is to illustrate that the system is not static, it is complex because of people (and their emotions) and as such, working with such a system is not a trivial activity. Therefore I would re-phrase.

#4 [Remove] This is a derivation / implication of #3, an application of the principles.

#5 [Unchanged] Again, succinct and difficult to refute.

#6 [Unchanged] An assertion that where good software testing happens there is a high intellectual demand.

#7 [Remove] This is unnecessarily wordy and sticks out as too different from the previous principles. It also drifts into certainty and seems like an untestable principle, and so I would remove it.

#8 Here I want to tie together the system mentioned in the rephrased #3 and good software testing.

1. The value of any practice depends on its context.
2. -
3. Projects and people form part of a system that works together, where products and practices evolve over time.
4. -
5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
6. Good software testing is a challenging intellectual process.
7. -
8. Good software testing encapsulates the system of project, people and practices that work on building the product.
So, I ended up with 5 principles. Applying these would then produce variants and clarifications. For instance, the "best practices" item is derived from #1, and understanding systems of people, emotions and time is derived from these - and many, many more.

I stopped there, but if I spent more time I could probably refine them somewhat. But they are good enough for this exercise. And as James suggested - if you started from scratch, you might well not end up with the original 7 principles.

So, how would your interpretation look?


Friday, 10 May 2013

Peer Conferences -> Conferring -> Learning

This week I held my first internal (in-house/in-shop) peer conference. This is a new concept to many at my shop and I'd been promoting it via short promo talks, e-mail and some gentle prodding.

For those that have never participated in a peer conference, the concept is a cross between a roundtable discussion, an in-depth interrogation and an experience report. After a short presentation, statement or question, an open session of moderated discussion explores the topic. In my experience, the open session is the real examination and distillation of ideas and experiences, and it's there that many learnings and insights come forward.

The concept runs in different formats, including non-time-boxed, ref [5], and time-boxed variants, refs [6] & [7].

Concept in academia
I recently discovered some academic support for this type of learning, ref [1], which looks at learning and communication by posing questions and answering them. This can involve the talk-aloud protocol (a description and discussion of the experience) - one way to help the group get to the real cognitive processes behind the experience.

If the questioning is done in real-time (whilst someone is making a decision or performing a task) then this may be termed the think-aloud protocol. In an open season, sometimes the questions turn to "how did you choose that?" or "why did you make that decision?" - then there is a chance that we relive the think-aloud concept. These are powerful because they can help to show up unintended misunderstandings - either in the reporter or the audience - and so enhance learning.

The talk- and think-aloud protocols, ref [3], help all to discover the implicit ideas behind the experience and for all to build a mental model of the key turning points in the experience. This is important for use and understanding.

This is a way of making hidden knowledge explicit, and Ericsson and Simon, ref [1], refer to the work of Lazarsfeld, ref [2]. This talks about the differences (1) in response to the way a question is phrased and also (2) about tacit assumptions.

An example of (1) is given as, "why did you buy this book" - where the response will depend on where the emphasis is - on "buy" the response might contrast to borrowing the book, on "this" the answer might be about the author, on "book" it might be about opportunity cost, contrasting with a restaurant or cinema visit.

Lazarsfeld's example of Tacit Assumption is from Chesterton's "The Invisible Man", ref [4]:
"Have you ever noticed this--that people never answer what you say? They answer what you mean--or what they think you mean. Suppose one lady says to another in a country house, 'Is anybody staying with you?' the lady doesn't answer 'Yes; the butler, the three footmen, the parlourmaid, and so on,' though the parlourmaid may be in the room, or the butler behind her chair. She says 'There is nobody staying with us,' meaning nobody of the sort you mean. But suppose a doctor inquiring into an epidemic asks, 'Who is staying in the house?' then the lady will remember the butler, the parlourmaid, and the rest. All language is used like that; you never get a question answered literally, even when you get it answered truly."
Note, Lazarsfeld also remarks that people consider quality in products differently!

So what?
The lesson from these sources is that many and varied questions get to the root of the experience and reasoning! Responses will be interpreted differently by different members of the audience. Therefore, it becomes part of their interest to ask and clarify from their perspective.

When you look at it this way the experience report and discussion becomes so much richer and fuller than a written report could hope to achieve. The learning becomes tangible and personal.

A key experience of peer conferences, for me, is that skills (or key decisions) can be distilled from the experience report. Those skills (or decisions), once isolated, can be investigated and used in practice - once something is recognized it becomes observable and possible to use in future experiments (practice).

So how did it go?
We had a small group (7 of us), which was very good as the concept was new to all except myself, and we ran according to the LEWT format. The theme was testing skills and we dedicated an afternoon to it.

In the discussion part of my experience report (an experience of using combinatorial analysis for initial test idea reduction) we were able to diverge and find "new" uses for combinatorial analysis. We discussed using it to analyze legacy test suites - a white-spot analysis - and although this was something we already did, we hadn't recognized it as that. That means we can (potentially) do it differently, by tweaking input parameters to produce different levels of analysis.

I was very pleased to be able to demonstrate this learning / insight "live", as it reinforced the power of the peer conference concept -> ie we discovered it "live".
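As an aside, the kind of combinatorial reduction mentioned above can be sketched with a simple greedy all-pairs pick. This is my own illustration with made-up parameter names - not the analysis from the experience report:

```python
from itertools import combinations, product

def value_pairs(test, names):
    """All (parameter, value) pairs exercised together by one test."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

def pairwise_suite(parameters):
    """Greedy all-pairs reduction: repeatedly pick the candidate test
    that covers the most still-uncovered value pairs."""
    names = list(parameters)
    # Every pair of values that must appear together in at least one test.
    uncovered = set()
    for a, b in combinations(names, 2):
        uncovered.update(((a, va), (b, vb))
                         for va, vb in product(parameters[a], parameters[b]))
    # Candidate tests: the full cartesian product of all parameter values.
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names))]
    suite = []
    while uncovered:
        best = max(candidates,
                   key=lambda t: len(value_pairs(t, names) & uncovered))
        gained = value_pairs(best, names) & uncovered
        if not gained:
            break
        suite.append(best)
        uncovered -= gained
    return suite

# Hypothetical parameters, purely for illustration.
params = {"os": ["linux", "win"],
          "browser": ["ff", "chrome"],
          "locale": ["en", "sv"]}
suite = pairwise_suite(params)  # fewer tests than the full 2x2x2 product
```

Tweaking which parameters and values go in (or demanding coverage of triples rather than pairs) changes the level of analysis - which is the knob we noticed we could turn for the legacy-suite "white spot" case.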

We finished the afternoon happy and energized with the format and committing to be ambassadors for others. Job done!

Next steps
I started this internal conference as a monthly series of short talks and some deeper discussion. In June there will be another occurrence and I'll also try a variant joining a session in two countries with HD video comms - basically I'm spreading it around the world!

In just over a week there will be a peer conference prior to Let's Test, followed by the eagerly-anticipated Let's Test conference. Those will be experience and learning-rich environments. Pretty cool!

[1] Protocol Analysis: Verbal Reports as Data (1993; Ericsson, K.A., and H.A. Simon. ; Revised edition. Cambridge, MA: MIT Press.)
[2] The Art of Asking Why in Marketing Research (1935; Lazarsfeld)
[3] Wikipedia: The Think Aloud Protocol
[4] G. K. Chesterton, The Invisible Man:

Peer Conference References

[5] SWET: Swedish Workshop on Exploratory Testing

[6] DEWT: Dutch Exploratory Workshop on Testing

[7] LEWT: London Exploratory Workshop on Testing

Wednesday, 17 April 2013

Some notes from SWET5 - Small and Intense

Edition #5 of the Swedish Workshop on Exploratory Testing (SWET5)

Place: Högberga Gård, Stockholm
Date: 6-7 April 2013
Theme: Mindmaps and Mindmapping in Testing
Twitter tag: #SWET5

Participants: Henrik Andersson, Mikael Ulander, Simon Morley, Rikard Edgren, Martin Jansson, Michael Albrecht, James Bach

Due to injury, illness and other reasons SWET5 was depleted to 7 participants, but I think each of us had the same response when we saw who was still in the game - "yeah, I'd like to spend a day and a half talking testing with those guys!"

I think it was Martin that remarked that the room had a high density of testing thinkers...

We had one newcomer (Mikael), and the others knew each other from at least two SWET meetings each. During the check-in there was a common theme - yes, this is small, but we all know each other, so this is going to be a different (and maybe more intense) type of peer conference.

Talk I
First up was Rikard, talking about a testing feature interaction model that he had used to help in his testing of a product. As usual, the real depth came out during open season, where we discussed how the model/map/list was constructed, what made it onto the map and what didn't - this was the real use of the map/list. Rikard admitted he was a "list guy" - which was why he drew a map with limited levels (mostly one). The discussion also touched on how the map might look for different SW builds, how the map was personal, and that it was an aid to communication and not the communication itself. Compare "mapping" to the "map", "planning" to the "plan", "reporting" to the "report".

It made me think of the model in the communication equation, ref [1],
Communication = Artifact (model) + Discussion (dialogue)
Rikard also mentioned some heuristics, "one more test" and ?

Other parts touched on in the discussion - where we didn't discuss to consensus - were problems with drawing out tacit knowledge from a map. I also started thinking about multi-dimensional clustering of feature risks - but I need to explore this further...

Talk II
Next up was Mikael, who described per-sprint usage of xBTM and the usage of mindmaps to illustrate ideas for TBTM and SBTM. In open season there was some discussion about why the graphical method was particularly appealing. We maybe didn't get to the bottom of that, but we sensed it was a crucial discussion point. Why use a graphical method? Why use one form of a mind-map in contrast to a list? There's something there that we didn't conclude...

Another aspect that was touched on during open season was a discussion about when the mind-mapping and xBTM was discontinued (in Mikael's experience) - and here the discussion explored the aspects of PR - discussing and talking about what you're doing and why. If your manager/stakeholder/organisation doesn't buy into your method, then it won't really be adopted - and there is a difference.

Talk III
James talked about an experience of three teams (plus himself) producing mind-maps for a given exercise. We then looked at the resultant differences - four different maps were produced for, ostensibly, the same mission. There was some discussion on how four maps could be combined into one that maintained its readability/usability - probably by appointing a scribe. But the question remained: in which cases would this be a problem? Another area of the discussion touched on the personal/subjective aspect, and so whether it mattered how the resultant maps looked. This led to the report vs reporting issue again - that observing the construction of the mind-map might give more information than the final map.

The discussion moved on to mind-maps' place in the communication - from spoken word, written, lists, pictorial, what next? Specific tools or formats? Intermediary step to where? Do mind-maps do it or are they a gap-filler? No conclusions...

Talk IV
Finally, I gave an experience report on using a mind-map as a basis for a strategy discussion - with the positive and negative experiences from that. The open season delved into areas and reasons why the map or the communication didn't work, some aspects of why they might not have worked. The problems discussed were generic to communication and not specific to mind-maps.

One interesting observation that I made of my mind-mapping experience was that mind-maps tend to channel the description into inherently "positive" or "visible" ways of thinking. What does that mean? Well, my map (and in some ways many that I see), essentially lay a track of "this is the route we'll follow because these are the parts I want to show, discuss, record..." They don't intrinsically show, "these are the things we will do/record PLUS these are the things we won't do/record because they're not useful or productive". In my experience report this would've translated to "the strategy looks like this, because these are the problems it's trying to tackle PLUS these are the issues it will avoid..."

This may mean that - as with any communication means - they are more suited to some tasks rather than others. I think this is an area I need to dig into more.
An aspect not concluded was the cataloguing of those areas and limitations. This could be a useful follow-up.

During the evening/morning there was some discussion about testing and checking. The consensus we could reach was the wish to advocate testing as the starting point in a good testing practice.

Check-out was positive and there was a feeling that the smaller group had led to more involved discussions. I enjoyed it, well done to Michael for organizing SWET5. Roll on SWET6!

[1] The Tester's Headache: Challenges with Communicating Models II

Tuesday, 2 April 2013

Checking and Information Value

It's now over three years since I wrote my last post on "testing and checking" - and I wrote three in total. I consider them a little 'rough' but I knew they contained some gems, so I revisited them recently - refs [1], [2] & [3].

James and Michael recently wrote a blog post clarifying and redefining what they meant by testing and checking, ref [4].

After reading it - and thinking about it - I felt that some emphasis was missing - for me. This doesn't talk directly to the definitions but more the application of them.

Note, James & Michael seem to apply this refined terminology within the frame of the Rapid Software Testing methodology, ref [11], but I think others will use it outside of that also.

Some History re-cap
After reading the original post, ref [5], I started wondering "why?" - what was the motivation behind making the distinction? I didn't experience the problem being described and couldn't see how the distinction might help me. As part of that, I started wondering about how checks and checking were being framed and the extent to which checking was being wrapped in testing.

My original questions:
  • Why make a check when no testing is involved?
  • How or why would a check be made without testing?
- ie why make a check and not analyse the result; why make a check without any thinking/analysis (testing) beforehand?
Note, the definition didn't say that a check wouldn't be evaluated, but it could be misinterpreted that a check without testing was useful (although I didn't think that was the intention).

So, at the time, I couldn't detach checking from testing - I couldn't see a situation where a check would be made without testing. That was my perspective and how I would advocate checking.

But I'm also very aware that there will be people/organisations that do some form of checking without any testing... How might that happen?
  • "We always run this batch of scripts. When they pass then the SW is ready."
Note, I believe the definitions are trying to enable more accurate observation and discussion about elements that were testing or not.

Potential Trap
A check (or checking) is used in isolation without any test "thinking" (whether as analysis beforehand or afterwards). This case is a form of the trap mentioned in ref [6].

On to Information Value
So, what was I missing? My early posts were looking for reasons / needs for the distinction - and I accepted that the distinction was useful (although, I admit, not immediately as I didn't see the utility in them in the beginning - that came later.) My remaining question was around emphasizing the importance of testing and the selection of a check as being rooted in testing.

During the last year I started thinking more and more about emphasizing information value of testing. This is partly trying to establish congruence between customer needs and testing goals, ref [7] & [8], partly about introducing test framing, ref [9], and framing checking from a testing perspective, ref [10].

So, where does this bring me?

Checking in isolation
Talking about checking in isolation has value when identifying the activity - whether as something that is planned or an observation in a process or activity.
  • "On the upcoming release we'll reuse some checks from the last release - they will give feedback on basic data configuration - whether the data configured pre-upgrade is still visible after upgrade." (plan)
  • "You're reporting a fault due to an automated script flagging an issue?" (activity) "Ok, has the result and activity been double-checked - that the flagging is correct?" (testing)
  • "There is a bunch of automated scripts that run on every code change - and they never issue any problem indication. Is that ok, or is it time to review their content and purpose?" (process) 
Information Value of Checks and Checking
When considering whether to use a check or checking, there is an implicit activity of comparing what you'd like to know about a product, what you can achieve (and how), and what each instance of a check might give. At least, I claim (with only anecdotal evidence) that this is the case for many good testers. For example:

Wanted Information about product
Note, this example excludes people who make no evaluation about a check - whether or not to do any checking, or what the check outcome says...

Part of this evaluation (sometimes estimation) is to understand the value of making a check - something cheap and easy might be preferable, but something expensive that returns a large information value might also be worth the effort. The key thing is the decision to make the check - because it will provide some information.
Example: In the past I've advocated 169hr tests (checks) - the result might be good/no good (pass/fail), but the value of it might be part regulatory PLUS a lot of additional data that can be mined/explored.
Working out whether it might be useful to check, or perform checking, is then an attempt to work out the value of the information it might give. This is information about the product "under test".

Information value can be estimated beforehand and/or assessed afterwards.

Checking as part of Testing
When information value (or at least its intention) connected to a check or checking is captured then the check or checking becomes part of testing.

A model of testing, checking and information value could be expressed as:
Testing ≥ Checking + Information value

In mathematical terms it could be written:
Testing ⊇ Checking + Information value
This means every instance of "Checking + Information value" is part of Testing, but also that "Testing" does not equal "Checking + Information value".

Checks (with no associated information value) ∉ Testing
Meaning checks without associated information value (either decision to make a check or analysis of a check outcome) are not part of testing.
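One way to make the relation above concrete is a small sketch - my own illustration, with hypothetical names and fields, not anything from the RST material: a check only counts as part of testing when some information value is attached, before or after the fact.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Check:
    name: str
    intent: Optional[str] = None    # why we decided to run it (beforehand)
    analysis: Optional[str] = None  # what we made of its outcome (afterwards)

def part_of_testing(check: Check) -> bool:
    """A check belongs to testing only when some information value is
    attached: a prior intention/hypothesis, or an analysis of the outcome."""
    return check.intent is not None or check.analysis is not None

# A ritual check with no thinking attached - the "not element of" case.
bare = Check("run-legacy-batch")
# The same kind of check, framed by a prior intention - part of testing.
framed = Check("upgrade-data-visible",
               intent="is pre-upgrade config still visible after upgrade?")
```

Here the bare scripted check classifies as not-testing, while the check with a prior intention (or a later analysis) does - mirroring the "∉" rule above.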

Whether I think of checks, checking and testing inside the Rapid Software Testing methodology or not I think the terminology is useful. It is useful to identify artifacts of checking in the wild - whether in observed processes, activities or intended actions. It is useful to identify and categorize as any field scientist would categorize their observations.

In talking to people about processes and their actions it is very useful to describe and illustrate when a process was being followed, maybe even identifying assumptions that were not being stated and how they might confuse or muddy communication about what was really happening. It might be useful to highlight the extra work needed to make the activity into good testing.

In my worldview of testing there are no checks or checking in isolation; there is either an intention or idea beforehand about the usefulness of such a check, or an analysis afterwards about the usefulness of the check, or some other assessment of the information it yields. Often, in the cases of good checking (implying good testing), both the beforehand intention/idea/hypothesis and the resulting assessment will happen. That is my ambition where checks are used.

In the cases where checks are used without pre-determined intention or resultant analysis - and this does happen - my ambition is to raise the bar. In my worldview this is not testing. I want to raise the bar partly by awareness - noticing (and getting others to notice) that this happens - and partly by contrasting this with how it might look if we connect some information value to those checks. In so doing, we come to understand the real value (and maybe cost) of good testing. Note, adding information value to checks to get testing does not automatically raise cost; rather it reduces the overheads of "fake testing" (an activity which was maybe thought to be testing but wasn't).

I wish to raise the bar of checking by emphasizing the associated thinking attached to checking to identify information value (positive or negative), and so raise the bar of testing.

Thoughts, comments?

[1] The Tester's Headache: To test or not to test, just checking...
[2] The Tester's Headache: Sapient checking?
[3] The Tester's Headache: More notes on testing & checking
[4] Satisfice: Testing and Checking Refined
[5] Developsense: Testing vs. Checking
[6] The Tester's Headache: Another Regression Test Trap
[7] YorkyAbroad Slideshare: Testing Lessons From The Rolling Stones
[8] The Tester's Headache: Mapping Information Value in Testing
[9] Developsense: Test Framing
[10] The Tester's Headache: Framing: Some Decision and Analysis Frames in Testing
[11] Satisfice: Comment on Testing and Checking Refined

Saturday, 30 March 2013

Another Regression Test Trap

I mentioned in a previous post, ref [1], a trap to be aware of when thinking about regression testing. Time for another...

There is a lot of "rubbish" written about regression testing. As with other aspects of testing it is sometimes commoditized, packaged and "controlled".

Regression Testing is the act of re-running tests - that have previously been developed, designed or executed. Period.
The trap is that it stops there...
Sometimes a contextual aspect is included - addressed by stating that an analysis should be made of changes in the product, including the processes used to make those changes. But in many cases - and indeed in several pieces of literature discussing regression testing - the extent of this analysis is limited to setting the regression test scope from previously developed or designed test ideas.

Hmmm, seems reasonable? Or does it? Let's look at a couple of commonly available sources of information.

Other Sources on Regression Testing:

Wikipedia, ref [2]: "Regression testing can be used to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change."

So, selecting a minimum set of tests - systematically - is an efficient test of a system, if that set provides adequate "coverage" of a change in the system. Mmm, good luck with that. This is bordering on pseudoscience, ref [4] - it's an untestable and vague claim. Having said that, the statement is qualified by "can" - hmm, ok, not so helpful though.

ISTQB glossary, ref [3]: "regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed."

I can almost agree with this. Almost. Well, ok, not really. What's a big sticking point for me? "Ensure". My dictionary (New Oxford American to hand just now) states that ensure is either a guarantee or the action of making sure... Ok, so how can my testing guarantee or make sure that a defect has not been introduced? And why would it want to make sure that a defect hasn't been uncovered - wouldn't you really want to know about this?

Note, it's possible to find similar problematic definitions in testing text books.


Ok, am I splitting hairs? Am I dissecting these definitions more than I should? Well, what level of clarity should I hold them to? Should I be satisfied that they give "surface" explanations and not examine or look too deeply? The thing is, if a definition falls apart (or becomes ambiguous) under scrutiny, is it worth using? I can't use it, and I don't advocate it.

What's the problem?

My issues with typical regression scopes are connected to two often unstated assumptions:

  1. An assumption that the regression scope that "worked" once is sufficient now - where the system has changed. (This means that the set of test ideas covered previously - whether expressed as instances (test cases) or not - is assumed good enough for a legacy scope now.)
  2. An assumption that a previous regression scope is somehow a conformance scope. (This means that an instance of a test idea - a test case - is correct now, in the new/modified system.)

Note, if these assumptions are made they should be stated. It could be adequate in some cases to assess the previous scope and determine that it is sufficient for the current needs. It could be useful to state that the previous regression scope is being treated as a conformance scope (or reference scope, or oracle) - if that's what the assessment of the current problem/system produces.
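To make the point concrete, here is a minimal sketch (function and parameter names are invented for illustration) in which a previous regression scope can only be reused once those assumptions are stated explicitly, rather than slipping in as defaults:

```python
def reuse_regression_scope(previous_scope, sufficiency_assessed=False,
                           treat_as_conformance_oracle=False):
    """Reuse an old regression scope only with its assumptions stated.

    sufficiency_assessed: the old scope has been assessed as sufficient
        for the system as it is *now* (assumption 1 above).
    treat_as_conformance_oracle: previous expected results are being
        used as a conformance/reference oracle (assumption 2 above).
    """
    if not sufficiency_assessed:
        raise ValueError("Assess the old scope against the changed system first")
    stated_assumptions = ["previous scope assessed as sufficient for current change"]
    if treat_as_conformance_oracle:
        stated_assumptions.append("previous results treated as a conformance oracle")
    return list(previous_scope), stated_assumptions

# Reuse is allowed only after the sufficiency assessment is stated.
scope, assumptions = reuse_regression_scope(["login", "billing"],
                                            sufficiency_assessed=True)
```

The point of the sketch is not the code itself but the shape of the decision: the assumptions become visible outputs alongside the scope, instead of staying unstated.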

I think a post on how I'd approach a regression test problem is in order...

Tester Challenge
On a train journey returning from SWET4 to Stockholm I sat with James Bach and he tested me on "regression testing". He suggested there was one case where a previously executed test campaign could be repeated as a regression testing campaign. I gave him two.

I'll leave these as an exercise for the reader...

[1] Regression Test Trap #492
[2] Wikipedia: Regression testing (phrase checked 29 April 2013)
[3] ISTQB: Standard glossary of terms used in Software Testing, version 2.2
[4] Wikipedia: Pseudoscience

Bonus Reference
[5] Checklist for identifying dubious technologies

Tuesday, 5 February 2013

Regression Test Trap #492

I didn't call this "the" trap, as I think there are several. This is one aspect.

In June last year I had a little twitter rant/exploration (or transpective investigation, [1]) about regression testing. It was triggered when I saw a LinkedIn skill labelled as "Regression Testing". I wanted to know which skills people considered specific to regression testing.

Now, one can take some of the LinkedIn labels with a pinch of salt, but I wanted to test the water about this one.
Note, I really enjoy these "bursts of activity" I have on twitter. There are many great minds that immediately question, probe, ask for explanation or example. I love it and am very appreciative to anyone that asks me to explain myself. It makes me stronger and clarifies my thoughts!
The twitter discussion dump, via twimemachine, in reverse order:
@Arborosa you have that problem in non-regression-specific testing also Thu Jun 28 06:54:20 +0000 2012
@jamesmarcusbach Oh, the NAP is a great resource - was looking at this the other day Wed Jun 27 10:00:53 +0000 2012
@jamesmarcusbach That's great, thanks - I need to spend more time on it - reminds me of Göranzon's work-> must dig that out again Wed Jun 27 09:49:44 +0000 2012
@jamesmarcusbach True, it's one of my aids to avoid the trap of going from "application in context" to "application in general" Wed Jun 27 09:42:56 +0000 2012
@jamesmarcusbach I try to distinguish between skills and application of skills, reminds me of "Dialogue Skills and Tacit Knowledge" Wed Jun 27 09:25:44 +0000 2012
@jamesmarcusbach I call the workshops a feature walkthrough, see slides 20-27 in - usual health warning with ppt slides Wed Jun 27 08:50:05 +0000 2012
@jamesmarcusbach In my context, these skills are not specific to regression testing Wed Jun 27 08:37:44 +0000 2012
@jamesmarcusbach Test idea maintenance - not specific to regression testing 4me - can be triggered by feature or test harness change/growth Wed Jun 27 08:36:42 +0000 2012
@jamesmarcusbach Interviewing and analysis - I do this as part of feature interaction analysis - sometimes 1-2-1, sometimes as a workshop Wed Jun 27 08:36:00 +0000 2012
@DuncNisbet yes, start from a picture of what you want to learn + where risks are - then let that point to what's "interesting" to know Tue Jun 26 22:08:01 +0000 2012
@DuncNisbet I think of tests with a potential to find a problem as "interesting" - ideas to pursue/try out Tue Jun 26 22:00:29 +0000 2012
@rahul_verma me too - I thought your questions were good and relevant! Tue Jun 26 19:10:03 +0000 2012
@rahul_verma look forward to it Tue Jun 26 19:02:17 +0000 2012
@rahul_verma yes, that's what I aim for Tue Jun 26 19:01:48 +0000 2012
@rahul_verma and how many other types and application of testing were they no good at? :) Tue Jun 26 19:00:47 +0000 2012
@rahul_verma I think people mix up activities, skills, application and results - and makes our job (communicating about a product) harder :( Tue Jun 26 18:57:20 +0000 2012
@rahul_verma applying a skill successfully in an activity (RT) does not make that a skill specific to that activity Tue Jun 26 18:56:15 +0000 2012
RT @rvansteenbergen: Want tweets about the latest news from testblogs? Follow @testingref #softwaretesting Tue Jun 26 18:45:00 +0000 2012
@jamesmarcusbach I'd love to hear about the other things Tue Jun 26 18:14:31 +0000 2012
@rahul_verma agree! Tue Jun 26 18:11:32 +0000 2012
@rahul_verma on a certain level performance testing and regression testing do have an overlap of ambition Tue Jun 26 18:10:10 +0000 2012
@rahul_verma performance testers focus on an element of a product's characteristics - to measure, monitor, feedback, recommend and help tune Tue Jun 26 18:09:02 +0000 2012
@rahul_verma exact knowledge is tricky to define.... Tue Jun 26 18:03:28 +0000 2012
@rahul_verma I'd say the purpose distinguishes it as regression testing, but not necessarily the skills used Tue Jun 26 18:02:42 +0000 2012
@rahul_verma Are you asserting that regression testing is done by regression testers? Tue Jun 26 13:05:39 +0000 2012
@rahul_verma What skillset in "regression testing" is specific to regression testing? Tue Jun 26 13:03:13 +0000 2012
@rahul_verma Think skillset rather than domain knowledge Tue Jun 26 13:02:15 +0000 2012
@rahul_verma Is that separate from good testing? Tue Jun 26 12:55:56 +0000 2012
@thetestmile And is that separate from good testing? Tue Jun 26 12:55:34 +0000 2012
@rahul_verma Are you saying you can be a good tester and not a good regression tester, or vice versa? Tue Jun 26 12:55:08 +0000 2012
Now where's the "black viper testing" skillset button.... Tue Jun 26 12:42:55 +0000 2012
"Regression testing" is a "skill" in linkedin. Really? Discuss... #faketestingmeasurement Tue Jun 26 12:41:39 +0000 2012
The trap is to think of the skills associated with good regression testing as being separate from good testing.
In my context and experience, I consider good testing to take into account aspects of new features that interact with legacy (existing) features or systems. This implies that "regression testing" is included. But if I think of specific skills needed for regression testing then these are just as applicable to "non-regression testing":
  • Investigative skills about the domain (system)
  • Discussing development and the existing system with developers/BA's/stakeholders
  • Test Idea / Environment Analysis & Maintenance
  • Reporting and discussing the status of the system
  • etc, etc

So, is a "regression testing" skill separate or unique to regression testing? I assert not, and that it can be just as common in "non-regression testing" or any good testing.

To Be Continued
I have another trap about regression testing - coming soon.

[1] Satisfice: Transpective Dialogues for Learning


Thursday, 31 January 2013

Book Frames and Threads - Jan 2013 Update

This is an update to the map I use to track some of the book and reference material influences.

Changes since the last edition, ref [1], are marked in dark grey. Some changes I've put in this version:

  • Some online resources included
  • A re-use indication (books/texts that I refer back to are in larger font - the font size isn't proportional to popularity, but maybe for the next update)
  • Some author references are included
  • Some articles are referenced - but I'm aware that many are not included
This map will be printed out (to replace the previous version) on an A3 sheet by my desk. It sometimes generates curiosity and questions - which is then a useful medium for discussion about testing, and the many many influences from outside of software development.

Any recommendations for additions - whether books, publications or format?



Saturday, 5 January 2013

Identifying Information Value in Testing

In a previous post, ref [1], I wanted to re-make the case for focusing (or re-framing) testing on information that is valuable to the stakeholder and customer, and so, ultimately, of value to the business.

From a testing perspective trying to identify which information is important or valuable to a customer and/or stakeholder is difficult. This is complicated when there are several stakeholders involved - usually each with differing agendas and priorities. It can be complex when the customer isn’t visible or available for consultation.

Information that appears valuable to the tester might not appear valuable to a customer.

How to tackle these problems? One way is to make the needs (value) visible. This will help align needs of the stakeholder (and business) with the priorities of the tester and vice versa.

In many cases, this may happen organically via iterative correction, but there are risks that the priorities and needs of the business are not the same as the development organisation (teams and testing).

Therefore, to help make visible some of the decision making, the needs behind that decision making, and the value of those decisions to an organisation, I tried to model aspects of identifying information value. A model for this is below.

A Decision Model: FICL
Deciding which information to look for in a product (which ultimately dictates the testing goals) is helped by using the FICL model. This model is based on the work of Russo & Schoemaker, refs [2] & [3], and I use it in many situations.

The FICL model (as I use it) is a combination of activities for Framing (F), Gathering Information (I), Coming to Consensus (C) and reflecting on lessons learnt (L). I explore some of the frames common to testers and teams in ref [4].

These elements can run serially or in parallel. Sometimes the information gathering is the first element, followed by framing to align and adjust. I would like to think that the learning aspect is always present - to reflect and update the other three aspects (which may iterate between framing, information gathering and consensus or information gathering and consensus).
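One pass of the model could be sketched like this (all names here are invented for illustration; each stage is just a callable, so the stages can be reordered or iterated as described above, with the lessons feeding back into the next pass):

```python
def ficl_pass(frame, gather_information, reach_consensus, record_lessons):
    """Run one Framing -> Information -> Consensus -> Lessons cycle.

    The lessons are returned alongside the consensus so that a later
    pass can use them to re-frame, re-gather or re-align.
    """
    f = frame()
    info = gather_information(f)
    consensus = reach_consensus(f, info)
    lessons = record_lessons(f, info, consensus)
    return consensus, lessons

# A toy run: frame a risk question, "gather" two observations,
# agree on a priority, and note what to adjust next time.
consensus, lessons = ficl_pass(
    frame=lambda: "which upgrade risks matter most?",
    gather_information=lambda f: ["data migration untested", "rollback unclear"],
    reach_consensus=lambda f, i: f"focus first on: {i[0]}",
    record_lessons=lambda f, i, c: ["involve ops earlier in framing"],
)
```

A serial pass is only one arrangement; running information gathering first and framing second is just a matter of composing the same callables in a different order.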

Feature Walkthroughs
An important part of understanding aspects of the feature - which might be described at a high level use case - is to gather the team, with stakeholder and technical expert representatives - and walk through the feature.

There are many advantages of a feature walkthrough, including:

  • An early understanding of the feature development
  • A common understanding of the feature development
  • Overcoming the lack of available documentation
  • Helping to highlight assumptions, implied and unwritten requirements
  • Raising awareness about processes and strategies that live in the organisation rather than in a document (so called tacit knowledge).

Ref [5] gives an example of this feature walkthrough, which is partly based on the dialogue work of Göranzon [6] to help the organisation make tacit knowledge explicit. Collins, ref [7], gives a very good academic view on tacit and explicit knowledge.

A Model for Identifying Information Value


An Example Application
Let’s take an example of a feature for prototype development and demonstration to a customer (could be internal or external). The feature will not go live and doesn’t need robust upgrade possibilities.

An Example for Identifying Information Value

Problems or Opportunities?
In some ways these activities put extra overheads on the business - there is a lot more work, discussion, questioning before the testing starts...

That is a perception. Actually, testing (which can comprise a significant part of a software development budget) is starting early, focusing and aligning its goals to those of the business.

Something I want to explore is how a team (or tester) working with details during development handshakes or checks whether they are still contributing to business value. That is, how they answer the question, “how does this test idea add value to the business need?”

A more complex example is probably useful - at least to me.

As I mentioned in the first post, ref [1], this is work in progress. Comments and questions are very welcome!

[1] Tester’s Headache: Mapping Information Value in Testing
[2] Decision Traps: The Ten Barriers to Brilliant Decision-Making and How to Overcome Them (Russo, Schoemaker, Fireside, 1990)
[3] Winning Decisions: Getting It Right First Time (Russo, Schoemaker; Crown Business, 2001)
[4] Tester’s Headache: Framing: Some Decision and Analysis Frames in Testing
[5] Agile Testing Days 2011: Experiences with Semi-Scripted Exploratory Testing
[6] Dialogue, Skill and Tacit Knowledge (Göranzon, Ennals, Hammeron; Wiley, 2005)
[7] Tacit and Explicit Knowledge (Collins; University of Chicago Press, 2010)

Wednesday, 2 January 2013

A Test Strategy

Test strategies are in one sense personal - all testers should have one, or be able to relate to one; they should be able to identify one, or at least identify the components that make a test strategy important to them.

I am currently re-evaluating a test strategy and writing this down with appropriate influences, partly to document a work-in-progress and partly to aid reflection and feedback.

The strategy communication (which is different from the strategy) is simplified as a mind map below - with all the usual restrictions with mind maps, simplifications and presentations.

Test Strategy Representation

Do you have a personalized or adapted test strategy?

Feedback, comments and questions are very welcome.

Some References:
[1] Kaner: General Test Design references
[2] Tester's Headache: Mapping Information Value of Testing
[3] Tester's Headache: Framing: Some Decision and Analysis Frames in Testing
[4] Tester's Headache: The Documented Process Trap
[5] Tester's Headache: The Linear Filter Trap
[6] Tester's Headache: Silent Evidence in Testing
[7] Tester's Headache: Challenges with Communicating Models II
[8] Tester's Headache: Challenges with Communicating Models III