Sunday, 19 June 2016

Communication Brevity and Context

You might've seen something like this:
"it works != it will work"
"it doesn't work != it won't work"
"The product can work != the product [does|will] work"
Hmm, let's extend them a bit:
"The product doesn't work != the product will never work"
"The product doesn't work != the product won't work for you and how you use it"
"It works != it works forever"
"It works != release it, no worries"
"It works != no more work is needed with this product"
These statements seem to be very context-detached - it's not easy to see who is involved or what is really meant.

With so little context how much meaning can be drawn? Do airlines work like that? Do they say a plane can work or that it can reach the destination but refuse to say that it will reach the destination?

What's happening here? "it works" and "it doesn't work" are conveying headline messages - something I'd think of as feelings about a product. They may also be specific to what someone means - which means you can't read more into them if you don't know that person, or know what they intend with specific words and phrases.

Can they be interpreted as something else? Yes, of course! But that applies to a majority of statements - there's always a flip side that can be emphasised.

Communication & Brevity
The words we use, and with whom, will (or should) differ. The interpretation will also differ. Both are probably influenced by who we are talking with, our shared history, how synchronised we are, which trading zone we find ourselves in and what level of interactional expertise each party has.

I'd assert brevity of communication works best when the people are "tuned in to each other". I'd also assert that this type of communication is a heuristic summary / short-hand.

Brevity
Brevity of communication may have a place. But it's not a universal place. This doesn't mean it's bad, it means you should not lift it out of context and generalise as something always "good" or always "bad".

It's a form of short-hand communication and typically (to me) an invitation to discussion or dialogue.

Example
When I was working as a system integrator in the "noughties" (2005-9) I would test product deliveries, or partial deliveries, together in different combinations before they progressed to the next stage of work. I'd often remark on certain combinations to my project manager as "good to go" or "no, no good". These are quite similar to "this combination works" and "this combination doesn't". In the context of that relationship and product development that was perfectly ok - with that manager and the teams that I knew.


Note:
1. The two "OK"s sit in different trading zones of language - different contexts and different meanings.
2. "OK" & "Not OK" are headlines - they don't really contain all communication. They would be accompanied by some form of report - whether verbal or not - of what was seen to work, what was not tested and where concerns might lie.
The people I worked with wouldn't be satisfied with "ok" or "not ok" on its own. And I wouldn't be satisfied with giving such a simple message in total - i.e. it might be ok as a headline but not the whole story. Naturally, a team would want to know why something wasn't "ok" upon hearing "Not OK", and very often a team would want to know if something wasn't "ok" upon hearing "OK".

To me these are characteristics of each party in the communication chain.

This example might not look like it's really relevant in the "devops" world, but wherever there is a producer and receiver of a product (or information) I suspect the model applies. The content ("OK" or "Not OK" or "<something else>") is highly dependent on the context (who is involved, the different trading zones and different levels of interactional expertise).

Lessons:
1. Brevity of communication is very context-specific, and I'd assert has a personal element.
2. Communication content is driven by context - i.e. the vocabulary you use might well be different depending on how well you know someone.
3. Brevity of communication shouldn't be the totality of communication. It should be an intro to a discussion. So it can only ever be a headline.
4. Lifting brevity of communication into a generalised form is - to me - misusing and misapplying the example, i.e. it's incorrect to declare whether a particular form of words between two parties is safe or not.
5. Using a concise form of words detached from context as a good/bad usage example is misleading - and one should beware that it might be a form of straw man argument, or ludic fallacy.
Important: Content over Labels: Context drives Content
So, to me, in communication the words and phrases used are extremely context-dependent. They depend on the parties communicating. The exchange of ideas and intent should be dictated by the people in the conversation, discussion or dialogue, by an appreciation of their respective contexts and by their level of interactional expertise.

Related Posts
Communication, Paradigms and Interactional Expertise
Lessons in Clarity: "Is this OK with you?"
On Test Results and Decisions about Test Results
Testing Chain of Inference: A Model
There is no test for "it works, every time, always"


Sunday, 5 June 2016

Communication, Paradigms and Interactional Expertise

Over several years I have been looking at communication involving testing - here and here are examples from 2010 - and it never ceases to amuse (or concern) me how poorly we communicate; then come the knots we tie ourselves up in to cope with this lack of information and content (and indeed communication, discussion and relation to context).

Occasionally the problem is brevity - information (context) is omitted. Other times it is a different paradigm - a different way of thinking about contexts. Sometimes it is a framing problem - not realising the background argument (or concern) someone has.

I have constant contact with senior managers and I know that they are always interested in content and context - what does it (some piece of information) mean in our product's and company's situation? A stakeholder or manager wants to know how the information you have found relates to their business context.

The last thing I want is for a manager or stakeholder to perceive "no added value" because of brevity, lack of content or context, or because I don't engage with their language or understanding of the situation. Something along the lines of the Billy Madison clip:

[embedded video: Billy Madison clip]

I will look at some issues with stripped out information (brevity) in a following post. In this one let's take a look at the differing views and paradigms that can make us stumble or not notice how communication is failing.

Paradigms & Different Views
Paradigms - these are where different people hold different world views of something, for example what testing is, its place in product development and what it can, can't and should do. We have probably met people with very different views and appreciation of the work we each do and where it fits into the bigger scheme of things.

There are a few ways to approach this problem. One is to educate people into your way of thinking. This might be the optimal goal, but it isn't always realistic - especially with senior managers. It's not always possible if someone doesn't understand that there might be a different way to look at the topic.

Another approach is to find common ground - a trading zone for language.

Consider the case with three actors: how do three people communicate - one with an ISTQB view of "words", another with a non-ISTQB view of "words", and a stakeholder (business leader) with their own understanding / view of words? Do you convert everyone to one view, or do you look for an alternative?

To talk to a business manager about testing do you insist he must know all the terminology you wish to use, or do you look for an alternative?

In 2007 Collins, Evans & Gorman [1] explored the idea of trading zones and interactional expertise. On trading zones:
"Peter Galison introduced the term `trading zone’ to the social studies of science. His purpose was to resolve the problem of incommensurability between Kuhnian paradigms: How do scientists communicate if paradigms are incommensurable?"
I.e. how do the three actors communicate with each other in a reasonable way when they each operate in different views of the world (paradigms)? Hence the concept of a trading zone for language.
"Two groups can agree on rules of exchange even if they ascribe utterly different signifcance to the objects being exchanged; they may even disagree on the meaning of the exchange process itself. Nonetheless, the trading partners can hammer out a local coordination, despite vast global differences."
And one important form of these trading zones is interactional expertise.
"Interactional expertise is the product of a successful linguistic socialization. Although expressed as language alone, it cannot be too heavily stressed, interactional expertise is tacit knowledge-laden and context specific. "
Interactional Expertise
Applying Collins' view [2] of interactional expertise from the sociologist-scientist relation to a tester/developer-stakeholder relation [the bracketed substitutions are my application]:
“… where interactional expertise is being acquired, there will be a progression from “interview” to “discussion” to “conversation” as more and more of the science [business/stakeholder context] is understood.” 
“ Above all, with interactional expertise, conversation about technical matters has a normal lively tone and neither party is bored. As things develop the day may arrive when, in response to a technical query, a respondent [stakeholder] will reply “I had not thought about that,” and pause before providing an answer to the sociologist’s [developer/tester’s] technical question. When this stage is reached, respondents will start to be happy to talk about the practice of their science [business context] and even give studied consideration to critical comments. Eventually respondents [stakeholders] will become interested in what the analyst [developer/tester] knows about the field because he or she will be able to convey the scientific [business context] thoughts and activities of others in a useful way. ”
“Where there is no developing interactional expertise […] the conversations never become interesting to either party, the analyst [developer/tester] can never transmit information, take a devil’s advocate position or, crucially, distinguish between “pat” answers and real conversational interchange, nor between jokes and irony on the one hand and serious responses on the other. Worse still, though a field might be riven with controversy […] the analyst cannot understand what the protagonists are disagreeing about, nor how deep the disagreements run, nor, with any certainty, who disagrees with whom! ”
Note, this relationship goes both ways - stakeholder <-> developer/tester. Over time the stakeholders / business leaders worth their salt will develop their interactional expertise to talk with the product developers / testers.



Importance of these concepts in your daily work
Some important points to highlight:
  • If we acknowledge (or are aware) that language trading zones exist between different people or stakeholders, then a natural way to cope with this is to advocate for greater interactional expertise.
  • Plainly: testers need greater interactional expertise in dealing with stakeholders and business leaders. This means understanding their concerns, how they think about problems, and listening for signals in which pieces of information they react to (and which they don't) - why are some pieces interesting? To a certain extent it means translating information into their language. What do the risks in your testing mean to them and the customers? How might they word or translate it?
  • What type of information do they need translated, and why? It's not usually just about whether a product version can release in the next quarter; sometimes it's also about understanding their risk picture - and what other contingencies they might need to prepare for.
  • In fact, if you can talk with and listen to your business leaders more and more - understand how they think and talk and what their concerns are - you will get familiar with the trading zone language. For example, why do they emphasise certain ideas and concepts - and how does my work relate to that?
  • Tip: Ask your stakeholders or business leaders if they are using language or concepts you don't understand.
  • Corollary#1: Business leaders naturally need to listen to their product development organisation - for information about how they talk and which information they emphasise. In general, business leaders are more adept at (more used to) doing this - as they need to communicate cross-business.
  • Corollary#2: There is no one right way to talk about a subject if two people are located in different paradigms (or even namespaces) - without acknowledging the need for language trading zones and understanding which context the other person is grounded in.




Potentially Related Posts
Communication Patterns
The Conway Heuristic

References
[1] Collins, Harry, Evans, Robert and Gorman, Michael. 2007; Studies in History and Philosophy of Science Part A -- Trading Zones and Interactional Expertise
[2] Collins. 2007. Rethinking Expertise (Chap 1, “Origins of Interactional Expertise”)

Saturday, 28 May 2016

Some Communication Patterns

Communication is fundamental. I've been visiting it recently (see related posts below).

Good product or software development generally has good communication involved. Yes, you will always find outliers - specialists at something - who have poor communication skills. But, in my experience, the best managers, the best developers, the best architects and the best testers have good to very good communication skills.

In the late nineties I read "The User Illusion" [1] - a difficult read but one I found enlightening. One of the main points I took away from it was how we communicate, how we exchange and discard information and some of the pre-requisites to information exchange.

There is an implicit need to synchronise before we really exchange information and communicate. Synchronise - to understand (to some extent) the context of the person(s) one is communicating with.
Tip: Remember this!
In the last 6-7 years I've encountered other models - the Satir Interaction model [2], the idea of dialogues as a means to understanding [3] and even ideas around idea recording [4].

Communication Skills or System of Communication?
Communication skills are not about broadcasting messages, or being loud - prepared to stand on a soapbox or just being talkative or even argumentative.

Here, for communication skills read: the set of skills to help someone be understood - discussing an idea or message in a way appropriate to the other person/people, and also to listen and reflect on what the other person/people are saying.

A system of communication is the interaction - where an idea gets refined or examined and how. So people can have great communication skills but the communication (system) doesn't work for a number of reasons.

Spotting dysfunctional communication

Some Communication Anti-Patterns [and Antidotes]

- Being fixed on one explanation.
[Make alternative explanations visible or ask for alternative explanations. See Weinberg's "rule of three"]
- Listening for a gap rather than listening to understand. Waiting to speak or make your own statement rather than digging into what the person is saying, exploring and understanding it.
[Try, "Tell me more", "why do you say that?", "should I try to explain my reasoning?", "shall we follow my train of thought so we understand what it's grounded in?", "shall we take this discussion at another time?"]
- Not asking for explanations or giving examples.
[Try, "Can you give an example?", "Shall I try to give an example in use?", "Is it clear or could it be misinterpreted without example?" ]
- Hidden frames, assumptions or agendas. This can be the case that someone is following a particular line of thought or argument and doesn't want to divert from that.
[Try, "can you explain the concern or importance for this particular idea". Tip: what is your own set of frames, assumptions and implications? Are they clear or hidden?]
- Dismissive elements, e.g. "that's a rubbish/stupid idea/thought".
[This is probably a symptom or result of one of the above patterns. Tip: take a break, pause and reflect.]
A Communication Checklist

- Did I understand what the other means?
[Are we "synchronised", do I understand their context?]
[Tip: Ask for help: "I don't understand", "can you give an example?"]
- Is there a value (or risk) in this person's idea/opinion?
[What's their frame of reference? Can they contribute something I've missed?]
Wait, that's rather a short checklist?????? If you combine these and iterate on the communication anti-patterns above, it might be all you need. [Challenge: add & refine this!]

Implications
The motivation (goal) should be to understand rather than change. If you start from the premise that, "this person is wrong" you're probably not open to the signals (consequences) of a particular line of thought.

Communication is to exchange, spread and refine ideas. And I assert that "healthy" communication is subject to the scientific approach.

If you take a "scientific" approach then you are examining data/ideas and understanding if they change your own ideas or ways in which they should be expressed. There's no way you can know your ideas are correct for ever. Anyone can bring a point of view, perspective or consequence that hasn't been examined before.

If someone brings an idea that has "bad" consequences (from your perspective) then point it out - demonstrate it.

But, what about the "crazies" or people who won't listen to reason? Well, if you've pointed to your reasoning and demonstrated your case (check: have you?) and are still convinced that your idea/opinion is correct or better --> then either walk-away or stick around and put up with it (if the value/potential gain of sticking around is greater than the risk of walking away).
This applies to personal communication between friends/colleagues and between you and your manager/stakeholder in the workplace.  
Note: I listen out for bad ideas - not necessarily to confirm the correctness of my own ideas (there is always a risk for this). Rather, it can be a useful tool to look for flaws in ideas and arguments. That's feedback on your own ideas and the way that you have tried to spread them. It's useful feedback on whether your own system of communication works.

If you don't take a "scientific" approach - what are you doing? Are you creating a belief system, cult or echo chamber? There are plenty of those....

Related Posts

Sunday, 15 May 2016

A Thought Experiment on Definitions

Thought experiments are a very powerful tool. You probably use them a lot without realising! Any time you wonder "what would happen if..." or "what is a possible consequence of that?" then you're making your own mini thought experiment.

Einstein used them to develop his ideas around relativity. A recommended documentary on this here.

I recently posed a thought experiment on twitter

thought experiment: what happens when one agrees on purpose behind a definition but not agree on it's usage..

There are a number of layers to this question.
  • Purpose
  • Definition
  • Agreement
  • Usage
Purpose
This is the "why?" question. What problem are you trying to solve, and does the definition and usage examples help solve it?
So, what might happen if a usage of a definition doesn't appear to agree with it's intent, i.e. they are not congruous.

Definition
This is getting into the correctness and relevance area. Is the definition too narrow or broad? Is it circular? Is it complex to understand? Is there some guidance to help understanding?

Agreement
Complex or obscure definitions may be harder to agree with. Is the definition accessible, usable and congruous? Is there controversy or disagreement? Is that due to the purpose-definition-usage parts not being in sync? Is the definition generally accepted - de facto agreement?

Usage
Is it clear how such a definition would and wouldn't be used? Are there any examples, or patterns and anti-patterns of usage somewhere - or indeed any guidance at all?

It's not necessary for a definition to have usage examples or guidance. But it might help the case. Think about dictionaries - do they often, usually or seldom include examples of usage or guidance notes? (I think the answer would, of course, vary with the dictionary used.) This question would seem to be more relevant if the definition is complex or is difficult to accept.

What might symptoms of non-agreement between definition and usage look like?
  • Dislike of the definition (fit for purpose? relevance?)
  • Aversion or uneasiness with the definition (understanding, clarity?)
  • Misuse of the definition (understanding, clarity?)
  • Non-use (relevance, clarity, understanding?)
Conclusion
To me there are a number of consequences if such a contradiction crops up between usage and definition.
  • The definition is not clear or complete.
  • The usage of the definition is not clear or illustrated.
  • The definition is misunderstood.
  • The definition is communicated in a way that doesn't align with its intent.
  • There is resistance to the definition and/or usage - emotional response.
  • There is resistance to the definition and/or usage - different paradigm.
  • There is resistance to the definition and/or usage - different dictionary references.
  • There is resistance to the definition and/or usage - frames of reference.
  • There is resistance to the definition and/or usage - little value add visible.
  • A combination of the above or even something else.
So, good definitions are generally robust. Unfortunately in the world of software testing many definitions would fail a lot of these tests above. Go look in the ISTQB Standard Glossary of Terms used in Software Testing and try it. Do you find any terms that "don't add value"? 

Example?
Ok, so if I wanted to play the schoolground bully and pick on the weak I'd start with the ISTQB glossary, but I have higher intellectual ambitions, so...

I've been thinking about checking recently, let's try there.

Checking
I would say I have had a certain uneasiness with the definition - for reasons I don't think I've always been able to articulate. This could boil down to my understanding or the clarity of the definition or something else.

It could be that this feeling is also reflected elsewhere - as recently appeared on the Software Testing Club. The reasons others may give for their "icky feeling" may be unconnected with my observations, but it would be interesting for them to give their reasons.

Ok, so let's take the checking definition from RST:
Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.
  • “evaluations” as a noun refers to the product of the evaluation, which in the context of checking is going to be an artifact of some kind; a string of bits.
  • “algorithmic” means that it can be expressed explicitly in a way that a tool could perform.
  • “specific observations” means that the observation process results in a string of bits (otherwise, the algorithmic decision rules could not operate on them).
Now let's apply it to a stochastic process - e.g. speech recognition.

According to the definition I can make specific observations (samples of audio) and apply an algorithm to them (for example a speech recognition algorithm). The interesting thing here is that the result is non-deterministic (due to speech/accent/pronunciation variation - making the test data design problem difficult) and is going to need some engagement - both for input threshold parameters and for analysis of the output. I might get a boolean output (match/no match) or I might get a range (78% match) - and that is a function of the input parameters and the specific observations I ran the algorithm with.
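To make the scope concrete, here is a minimal sketch (my illustration in Python, not the RST authors' code) of a check in this sense: an algorithmic decision rule reducing a specific observation to a bit. The recognise() function and the 0.78 threshold are hypothetical stand-ins for a real speech recognition engine and a tuned input parameter.

    def recognise(audio_sample: bytes) -> float:
        """Hypothetical engine call: returns a match confidence between 0.0 and 1.0."""
        raise NotImplementedError("stand-in for a real speech recognition call")

    def transcript_matches(audio_sample: bytes, threshold: float = 0.78) -> bool:
        # The decision rule itself. Choosing the samples, picking the threshold
        # and deciding what a failure means all sit outside this function - that
        # surrounding work is the testing; only this comparison is the check.
        return recognise(audio_sample) >= threshold

The point of the sketch is how little of the work it contains - the sample selection and threshold tuning happen before it runs, and the interpretation happens after.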

Now the actual algorithm that is making the comparisons is the "checking" part of the process. But this becomes a very small part of the whole - because I need to put effort (more effort and time than the algorithm takes) beforehand and afterwards.

To make this example fit into the current definition I'd have to have all possible samples for certain speech snippets (infinite) or I'd have to define the sample population (this is the test design part of the process - by implication this is part of "testing"). (I won't get into the problems of the sampling mechanism I use.) So, I'm narrowing the checking part of the whole even more.

So, the question becomes (for me) - should I only use checks where I am certain of the wanted outcome - i.e. a binary answer (which might be "yes/no", "pass/fail", "above 78% threshold/not above 78% threshold"). And here's the problem - I'm quite happy to use scripts as change checkers - or early/leading indicators - they are a mechanism to draw my attention to a result and then ask a question, "should I investigate more or what does this result tell me?". As soon as I am paying attention to the result or thinking about it I am not checking anymore - that's testing.
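A sketch of what I mean by a change checker - again my own illustration, where run_recognition_suite() and "baseline.json" are hypothetical stand-ins - might look like this:

    import json
    from pathlib import Path

    def run_recognition_suite() -> dict[str, float]:
        """Hypothetical: returns {sample_name: confidence} for the current build."""
        raise NotImplementedError("stand-in for the real suite")

    def changed_results(baseline_file: str = "baseline.json",
                        tolerance: float = 0.05) -> dict[str, tuple[float, float]]:
        baseline = json.loads(Path(baseline_file).read_text())
        current = run_recognition_suite()
        # Report (old, new) confidence pairs for anything that drifted by more
        # than the tolerance. The output is not a verdict - it is a list of
        # places worth a human look; the moment someone starts interpreting it,
        # that activity is testing.
        return {name: (baseline.get(name, 0.0), score)
                for name, score in current.items()
                if abs(score - baseline.get(name, 0.0)) > tolerance}

The comparison is mechanical; deciding the tolerance, maintaining the baseline and acting on what drifted is not.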

In this example, checking becomes a very small part of the whole - compared with all the other parts of requirement and test analysis, test design, test set-up and result analysis that make up testing. Then I wonder what value it really adds.

Am I using the definition incorrectly? I don't see any usage examples anywhere, so maybe the definition is incomplete. Or maybe guidance is incomplete. Or maybe the terminology is just not useful for me.

Divergent thought: In the definition of checking it's not clear to me if the algorithm can be a non-deterministic algorithm. It could be read in that way - then here's another thought experiment --> what would the consequences of that be?

If I was to revisit the purpose and intent behind this definition I'm not sure that it achieves what it wanted. The checking part is quite small - the other activities in testing are not described so the importance of checking seems to be artificially increased. This is a problem! To me, it would be better to list different tactics of test execution and highlight that checking is one of them.

So, in this example, the "checking" is a very small part of the whole and falls into (for me) a very narrow definition, with a certain amount of ambiguity. (It's narrow as it is contrasted with testing. This is analogous to a "testing vs test design post".) The definition is incomplete and/or incongruous (no usage example and generates confusion and discussion) and fails to add value (as it seems to artificially inflate the importance of checking in relation to other testing activities).

Note, it's taken me quite a while to come to this conclusion - I have needed to put an amount of time thinking around this. It's certainly not an obvious conclusion. And I can also understand if others don't have the time, energy or inclination to do this type of thought journey and treat it as a heuristic to help in their communication. And I also understand that this term is helpful for some people and they have success in using it with their stakeholders - again if this heuristic communication works for you - fine.

Final word
It seems to me that there are many definitions around in the testing and software testing community that could benefit from this type of approach. Do you agree? Which would you try it on first?

Potentially Related Posts

The Conway Heuristic
Communication Heuristic: Use Cases
Thoughts around the label "Checking"

Sunday, 24 April 2016

Communication Heuristic: Use Cases

In the last year I've been working with different aspects of software development - many aspects related to Continuous Integration and Testing, but also other areas. Part of this work has involved developing new ideas to test, reflect on and potentially spread. A common challenge I've had from a number of colleagues when I'm talking about an idea or concept is:

"Can you give me or describe it in a use case?"

People want to relate to the idea. Sometimes a user story is really meant, but that doesn't matter. The real power here is the invitation to a discussion and dialogue about the topic, concept, way of working, product, etc. It's a way of saying, "let me understand your idea in use".

The use case doesn't necessarily describe the entirety of the idea but it starts the discussion - at least from one angle.

It's not always easy to do either - because sometimes it generates discussions in unexpected areas. This might be because people interpret the need differently, or they've framed the problem differently. But it is good, and indeed useful, to generate such a discussion. It helps weed out misunderstanding.

When people (and groups) have worked with (or thought about) the idea then they will naturally develop new ideas about it or generate new questions. Some of this is testing the idea or concept, sometimes it's information gathering and sometimes it's clarification. Usually the testing of the idea explores ways that it could be misunderstood or produce unwanted results.

This is a very useful tool not only for product development but communication in general. It is common to use this in product specification and requirement capture, but it's also very useful in concept/idea discussion.

It is a heuristic approach to communication.

Does this all seem abstract? I used this heuristic recently to discover that I didn't have a common understanding - at least via use case - for the usage of the word "checking".

Potentially Related Posts
The Conway Heuristic
Testing. What was the question?
Framing: Some Decision and Analysis Frames in Testing
Thoughts around the label "Checking"

Thoughts around the label "Checking"

Observant readers of twitter and my occasional (6) blogs related to checks, testing and checking will know that I have done some head-scratching over the term "checking".

Some of My Heuristic Triggers
Why, what has triggered this for me? I have a number of factors - mainly these heuristic sources:
Are Your Lights On? [1], Reason for testing (term usage) [8], Framing testing terms [9]
--> My questions: What problem are we trying to solve? And does this succeed?

Weinberg's Rule of Three [2] and the Conway Heuristic [5]
--> My questions: What alternative interpretation is there? How could this be interpreted, or what consequences might it create (unintended or not)? How could it be misinterpreted?

Communication, Exploring Requirements [3], Dialogue & tacit knowledge [4], Understanding Arguments [6] and Heuristic wear-n-tear / refinement [7]
--> My questions: Do the statements & arguments stand up? Does something get lost in interpretation?

Towards Clarity (for me)
These are usually with me in my toolkit as aids - some more than others. So, I read Michael Bolton's recent post [11] on checking and what isn't checking - this got me thinking on this topic again.

As I stated on his blog - after a challenge for clarity from Thomas Ponnet:
Checking "... is a label that should only be used (imo) for an observation of something that is happening or has happened. It’s a post event rationalisation (it’s “a posteriori” knowledge).  
If you state your intention to include checking in your activities you are really describing your testing – because the intent, analysis, selection and discussion of results is testing – even if checks were used. 
Then I would find it more accurate to talk about the testing that /will be/ aided by checks. Afterwards it might be accurate to describe the testing as having included checking. Planning checking, intending checking is by implication testing – and I (personally) don't see the added value and I question its accuracy. 
But, that doesn’t, of course, stop anyone using the term however they want and in ways they find useful."
I am making an assertion for how checking might be used in a less-ambiguous way.

Shallow-Agreement?
I was challenged that I might be in shallow-agreement with the RST meaning of checking. Of course, this is always possible - I went and re-read the testing-checking-refined [10] and Michael's post [11]. I didn't find any examples of the usage of "checking" (as of 23 April 2016) - i.e. examples of how it would be used in speech or written form.
Thus, there is a risk of shallow agreement with something that isn't demonstrated. Whether that risk is small or large, how would you know?
A couple of questions then occurred to me: 
  • What's the guide for shallow agreement on the definition and use of "checking"? 
  • What's the accepted form for agreement when the main post doesn't demonstrate it?
Does it matter?
What have I been doing? I've been testing the concept of "checking" - trying to understand ways it will work and risks associated with its usage. I've given examples (in the blog [11]) of potentially unintended consequences of its usage.

Would examples and the above guide for "checking" help? Maybe, maybe not. It might be that the problems I have highlighted "don't really matter" or aren't a "high priority issue". That's fine, I can live with that. It's possible I'm getting stuck in the semantics... Oh!

My context/background: 
I started writing, questioning and exploring these issues in September 2009. I was one of the vocal parts of the discussion that led to the re-drawing/refinement of the diagram in the testing-checking-refined post [10].

I've called out and questioned people - typically on twitter - that might be using "checks" and "checking" in unsafe ways. I don't typically use the word "checking" in my work - partly due to some worries I've seen in misunderstanding - and also that I can distinguish between testing and checks without "checking".

Not using "checking" myself doesn't mean I can't usefully "test" it, its usage and risks associated with its usage. Can you? And if so, what heuristic guides would you use?

References
[1] Are Your Lights On? (Gause, Weinberg)
[2] The Secrets of Consulting (Chapter 5; Weinberg)
[3] Exploring Requirements: Quality before Design (Gause, Weinberg)
[4] Dialogue, Skill & Tacit Knowledge (Göranzon, Hammaren, Ennals)
[5] Tester's Headache: The Conway Heuristic
[6] Understanding Arguments: An Introduction to Informal Logic (Sinnott-Armstrong, Fogelin)
[7] Tester's Headache: On Thinking about Heuristic Discovery
[8] Tester's Headache: Testing. What was the question?
[9] Framing: Some Decision and Analysis Frames in Testing
[10] James Bach: Testing And Checking Refined
[11] Michael Bolton: You Are Not Checking

Friday, 19 February 2016

The Conway Heuristic

If you have not read Polya's "How to Solve It", [1], and have an interest in heuristic approaches to problem solving then I'd recommend it.

There are many heuristic approaches to problem solving that you probably use without realising it. The book may just help you spot and discover new problem-solving heuristics in the wild, [2].

It was whilst reading a version of Polya's book, with a foreword by John Conway, that I read this passage:
It is a paradoxical truth that to teach mathematics well, one must also know how to misunderstand it at least to the extent one’s students do! If a teacher’s statement can be parsed in two or more ways, it goes without saying that some students will understand it one way and others another, with results that can vary from the hilarious to the tragic.
This has a clear application (to my mind) to testing. Think of specifications, requirements or any type of communication. We are generally very good at imprecise language so the risk for miscommunication, misunderstanding, hidden assumptions remaining hidden or unsynchronised priorities is real. I'm sure we all have our favourite misunderstanding war stories.

In about 2012, [3], I re-stated this paragraph specifically for communication as:-

[image: the Conway quote re-stated for communication]

I called it the Conway Heuristic. It's something I use from time to time - to remind me that there might be a loose agreement or understanding - or a shallow agreement. I tend to think of this when there are weasel words or phrases about. For example, saying something is "tested" - which can be interpreted as fault-free, where in fact there may be several issues remaining.

This is a heuristic for a couple of reasons: (1) it's a guide, and not fool-proof; (2) it's impossible to think of all the ways something would be misinterpreted. So, this is like a useful check question, "is there or could there be a problem here?" If it helps remove the big mistakes then it's done its job.

So, anywhere where there are context-free questions or context-free reporting - or really, generalised or context-free communication, then keep this heuristic in mind. There be dragons....

References
[1] How to Solve It: A New Aspect of Mathematical Method; Polya
[2] On Thinking about Heuristic Discovery
[3] Testing Lessons from The Rolling Stones

Sunday, 14 February 2016

Testing. What was the question?

Details, details, details. It seems sometimes people get sucked into debates and discussions on details and forget a bigger picture - or that there is a bigger picture. 

TL;DR -> Jump to the Thoughts and Observations part

Do you recognise any of these?
  • Is it right to say I am automating a test?
  • Is it ok to have lots of automated tests?
  • Is it right to have a standard for software testing?
  • Do I need to distinguish between scripts that I code or encode and my testing?
Notice any pattern? The discussion usually focusses on the detail. This might be ok... But sometimes it's useful to have a tool, aid or heuristic to help judge if this detail is needed or not. So, here's my control question for you, if you end up in such a discussion. 
  • What set of questions are you working with that you want your testing to help answer? 
Meaning, what is the relation to your team's, project's or program's development activity? So, really,
  • WHY are you testing?
  • HOW are you testing?
  • WHAT are you testing?
Yes, it starts with "why". How you view or frame your testing, [1], will help you understand whether the detail is needed, needed now, or not at all.

The WHY
This could circle around ambitions, visions or goals of a team, company, product, campaign, program etc. 
  • Who is the receiver of my test information? Are there safety-critical or contractual aspects involved? (e.g. visibility of testing, whether by artefact or verbal report? Have I made a stakeholder analysis?)
  • How long will the product live and what type of support/development will it have? (e.g. support needs, re-use of scripts as change checkers? Supporting tools and frameworks to leave behind?)
  • What are the over-arching needs for the information from testing? (e.g. see reference [2])
  • Are there other factors to work with? (e.g. product or company strategic factors)
  • Check-question: 
    • Does answering the "why" help describe if testing is needed, in which cases it supports the product or team? Answering this question usually helps working with the "how" and "what" of testing.
The HOW
Once the team, program or project has worked out the WHY - the factors that mean some testing will be needed - the HOW usually gets into the detail and the factors affecting the approaches to testing.
  • What type of test strategy is needed, or what factors should I think about? (e.g. see references [4])
  • What items or types of information do my stakeholders really want? (e.g. see references [3])
  • How does the development and/or deployment pipeline look? Staging areas? Trial areas?
  • Does the team have dedicated equipment, share or use CI machinery to support? Orchestration factors?
  • Will the test strategy need tool support?
  • Is the test target a GUI, a protocol layer, some back-end interface, combination or something else?
  • How do I and my team iterate through test results, assessing where we are now, and where we need to be? Do we have the right skills? (e.g. see references [6] & [7])
  • Check-question: 
    • Does answering the "how" help describe where the testing fits into the development of the product (i.e. an end-to-end view)? If not, then you might not be done answering this question.
The WHAT
If and when you have worked out the "why" and the "how" then the artefact-heavy part of testing might be the next set of questions to work with. 
  • Split between existing script re-use and new script development
  • What heuristic inputs do I use? Are they sufficient? Do I notice new ones or refine some? (e.g. see references [5])
  • Now you can think about test design techniques (pick your favourite book, blog or list).
  • Extra tooling to create to support the testing (e.g. test selection and filtering aids).
  • Check-question:
    • The techniques and components chosen here should support the "how" and the "why". Do they? What might be missing?
Thoughts and Observations
Notice that the "details" usually fall into the "what" or sometimes the "how" of testing? But - if you're not connecting it to YOUR "why" of testing then you might be getting yourself into a rabbit-hole. You might be - from a team, product or product perspective - locally optimising the effort.
  • So, details are fine as long as you're explaining the why, the how and the what together - and for what (product, company, team). This is the context.
Another way to look at it - if you're getting caught in details or not connecting the "why", "what" and "how" of testing in the context of your product, program, company or team then you might just be trying to solve the wrong problem. For help on trying to work out the right problem I'd recommend starting with Gause & Weinberg [8].

References

Sunday, 10 January 2016

Some Software Development & Testing Challenges 2016

So it's 2016 and I have been reflecting on some of the challenges I see for Software Development, with emphasis on Software Testing.

Continuous Integration

Everyone knows what this is, right? The concept has been around a while* - everyone has been there and done that, if you read the hype. But who is innovating?

A lot has been written about CI and its place in support of testing... Or has it?

Some Challenges

  1. Massively parallel script execution against the same target drives a re-think on test framework design, modelling and creation - impacting data modelling and the need for flexibility in frameworks and harnesses
    • This is a move away from single isolated and independent "tests" on a stateful application. It will trigger a change in test script approaches. Where is the work on that? Pointers gratefully received...
    • I have seen some research on "multi-session testing of stateful services", but more is needed.
  2. CI script execution and studies showing the effectiveness of dynamic test script selection strategies for team or testing support
    • I see that as a rule-driven approach to setting a series of checks on commits, e.g. (1) which checks cover my updated code base (execute and result), (2) which white spots (untested areas) are in my code base now (report) - see the sketch after this list
    • Where are the studies or experience reports, where is the work?
  3. There are socio-technical challenges with CI use and implementation. Technology is the easy part; the socio-technical part comes in when organisational issues and preferences distort the technology choices. This might range from "we have always done it this way" to "the language or framework of choice here is X so everyone must adapt to that".
    • CI is a development approach, and is distinct from testing. It's like an extension to compiler checks**. Thinking around selecting and adding to those "compiler checks" needs to be dynamic. Experience reports, empirical studies for this?
    • There is a danger that "testing" is driven into a TDD/Acceptance Test-only mode.
    • I would like to see more research on organisational and socio-technical challenges around software development...
  4. Are people really going all-in on cloud and virtualization technologies to solve their CI related bottlenecks? Mmmm...
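To illustrate the rule-driven selection mentioned in point 2 above, here is a minimal sketch. The coverage map and the changed-file set are hypothetical inputs - a real CI job would derive them from coverage data and the version-control diff - so treat this as an illustration of the idea, not a real CI plugin.

    def select_checks(changed_files: set[str],
                      coverage_map: dict[str, set[str]]) -> tuple[set[str], set[str]]:
        # coverage_map: check name -> set of source files that check exercises.
        # (1) Checks that cover the updated code base: execute these and report.
        checks_to_run = {check for check, covered in coverage_map.items()
                         if covered & changed_files}
        # (2) White spots: changed files that no existing check touches - report
        # these rather than treating the commit as "covered".
        covered_files = set().union(*coverage_map.values()) if coverage_map else set()
        white_spots = changed_files - covered_files
        return checks_to_run, white_spots

    # Example with made-up data:
    # select_checks({"parser.py", "feature.py"},
    #               {"check_parser": {"parser.py"}, "check_io": {"io.py"}})
    # -> ({"check_parser"}, {"feature.py"})

Dynamic selection like this is only as good as the coverage map behind it - which is itself a candidate for the empirical studies asked for above.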
Software Testing

Some Challenges

  1. Detachment from Software Development
    • This can be seen in various forms
      1. Distillation down to process on "testing" only - the ISO 29119 papers are a classic example of this. This is the "reductionist" approach to a wicked organisational problem - not very sophisticated and with a high risk of solving the wrong problem.
      2. Other examples are some/many software-testing-only books - yes, it can be good to highlight testing and the tester's role and the special challenges there, but unless you start from software development as a whole (a systems thinking approach) there is a high risk that you are making a local optimisation. Another reductionist approach, liable to solving the wrong problem.
        • So, software testing focus -> good; software testing interplay and interaction and CONNECTION to the creative process -> better.
  2. Misunderstanding of the software testing challenge - how to link a creative process (software creation and positive and confirmatory tests and checks) to a challenging process (testing for issues, highlighting problems, testing for risks)
    • Many organisations focus on confirmatory tests - a proxy for customer Acceptance Tests - as an MVP (minimum viable process), i.e. a proxy "get out of gaol card". See Myers [2] for an example of testing in an XP approach.
    • Myers [2] first wrote about the psychology of software testing. However, Martin et al [4] make the case for reframing this as an organisational approach/problem. Rooksby et al [5] observe the cooperative nature of software testing.
      • More studies on satisficing the organisational needs please!!
  3. Lack of socio-technical studies and research into software testing and its part in software development. Rooksby & Martin et al [4] & [5] performed ethnographic studies of software testing to highlight its cooperative and satisficing nature. Their work calls for further research.
    • Sommerville et al [6]:
      • "An over-simplification that has hindered software engineering for many years is that it is possible to consider technical software-intensive systems that are intended to support work in an organization in isolation. The so-called ‘system requirements’ are the interface between the technical system and the wider socio-technical system yet we know that requirements are inevitably incomplete, incorrect and out of date."
The sooner we stop treating software development, and especially testing, with reductionist approaches and consider the socio-technical aspects - especially for large and complex systems - the better. And, today, most systems are inevitably complex.

Got any pointers to recent advances? I'm all ears...

References
[1] Li, Chou, 2009, IEEE; A Combinatorial Approach to Multi-session Testing of Stateful Web Services
[2] Myers, 2011, 3rd ed; The Art of Software Testing
[3] Rethinking Experiments in a Socio-Technical Perspective: The Case of Software Engineering
[4] Martin, Rooksby, Rouncefield, Sommerville, 2007, IEEE; ‘Good’ Organisational Reasons for ‘Bad’ Software Testing: An Ethnographic Study of Testing in a Small Software Company
[5] Rooksby, Rouncefield, Sommerville, Journal of CSCW; Testing in the Wild: The Social and Organisational Dimensions of Real World Practice
[6] Sommerville, Cliff, Calinescu, Keen, Kelly, Kwiatkowska, McDermid, Paige, 2011, Communications of the ACM; Large Scale Complex Systems Engineering

*I led the architecture work on a multi-stage CI system in ~2010
**yes, a big simplification.

Saturday, 9 January 2016

Understanding Arguments... and "Trolls"

In the autumn of 2013 I took an online Coursera course called "Think Again: How To Reason and Argue".

Many readers here would be familiar with a number of the concepts but the course was useful to me to help structure some concepts around statements and arguments, strategies in analysing them and eventually trying to understand the viewpoint of the person making the statements (arguments).

The course was good and something I'd recommend as an exercise in helping to understand and categorise ones own approach to argument analysis and deconstruction.

The course also gave online access to the book Understanding Arguments: An Introduction to Informal Logic, a comprehensive and useful reference book.

An Important Lesson

One element that was reinforced and stood out early in the course was to treat all arguments and statements sympathetically. This acts like a safety valve when you see or hear a statement that might infuriate, irritate or wind you up.

It's an approach to help one get to the root meaning of a statement and understand the motives of the person (or group).

I used this approach when first looking at the ISO 29119 documents and supporting material, ref [2] [3].

Challenges & Trolling

I often get challenged about my reasoning and thinking. I welcome this, it's usually very good feedback not just on my thinking but also the way I communicate an idea. So, I try to treat all challenges and criticism sympathetically.

But, when does it become "trolling"?

I saw a post this week from Mike Talks (@TestSheepNZ) - I liked the post - but I also recognised the source that triggered the post.

Troll?

Well, when it comes to software development - and especially software testing (due to expert-itis, amongst other things - I need to update that post) - there might be some tells to help judge:

  1. Does the person claim to be an expert/authority in the field, but without evidence or catalogue of work? (this is a form of avoidance)
  2. Whether an expert or not, do they listen and engage in discussion - not just avoidance? (hat tip to James @jamesmarcusbach)
  3. Twitter - due to the 140 char limit - can make people appear to be buzzword, soundbite or even BS generators. Do they resort to soundbites or other indirect responses when questioned or challenged? (this is a form of avoidance)

Essentially, the keyword in the above is avoidance. If so, you might have a hopeless case on your hands.

Treatment?

You can google how to deal with internet trolls, but my approach:

  1. Start with sympathetic treatment. If this doesn't help you understand the statements, arguments or motives (and see the list above), then
  2. Detach, ignore, unfollow.
  3. Give yourself a retrospective. Was there a potential feedback element - can you communicate your message differently, is there some fundamental insight that you could use? (this is a topic for a different post). I.e. "trolls" can have their limited use even when their interpersonal skills let them down...
  4. Learn and move on. 
  5. I'm also a fan of humorous treatments & critique. I'm reminded of The Not The Nine O'Clock News approach to changing attitudes in World Darts (you can google it...). Sometimes these are forms of reductio ad absurdum.

If you have other perspectives on understanding arguments I'm all ears!

References
[1] Understanding Arguments: An Introduction to Informal Logic
[2] ISO 29119 Questions: Part 1
[3] ISO 29119 Questions: Part 2
[4] Internet Troll