Sunday, 17 September 2017

Testing as Feedback

Reflections and a thought experiment connected to testing

I saw this tweet from Martin that triggered my thinking. I noticed it because it was getting several responses, and Martin poses an interesting question, as he usually does.

https://twitter.com/vds4/status/908270611465728001


I looked at a few of the replies. The testing community is full of insightful thinkers. These two caught my eye:



Of course, I had to comment on this because it goes to the heart of any hypothesis-based development and the social interactions within it.



Then I wondered how the replies were being formulated, and even how they were being received.


Maybe I'm overloading Martin's original intent, but I'm trying to extend his line of thinking, so bear with me.

Based on Martin and Jari's questions, an approach to analysing a situation is to take an item (artefact), activity or set of interactions and ask what the situation/system would look like if you flipped the meaning 180 degrees into an opposite, or antonym. Then look at the resulting situation and ask whether you learn anything about the original.

Stumbling block?
The idea of what testing is and isn't, and who does what for what purpose, is not clear. That was part of the reason for my "data question": who is answering from which perspective, and how does one know? If this were an anthropological or qualitative analysis study, the original question might have been set up differently - but then it might not have gotten the same engagement or attention.
Actually, I think the biggest stumbling block with the replies is that the respondents are not indicating what they think of as testing when giving their "opposite of testing" answer - which is always going to be limited on Twitter.

As interesting as many of the replies are, it's difficult for me to use them to understand "testing" or the system of software development.

That triggered my own thought experiment.



For this experiment, I will not try to define testing directly; instead, I will use another element of software and product development to understand the potential impact of "no testing". So, the simplistic assumption is that the opposite of "testing" is "no testing", and that its effect can be observed by looking at other characteristics of a product development lifecycle - in this specific case, feedback.

Approach
A system can be analysed by altering the meaning, or purpose, of one of its components and seeing how the resulting new system might perform. Another way is to remove that component.

Compare it to a picture of a group of people: what does the picture tell you about the group, their interactions, their context or their potential? Now remove a person or an item. What changes in the whole system or story (search for photoshopping, green screening or photo manipulation for examples)? For instance, do you now see things that were obscured by the removed object or person?

Why Feedback?
It is difficult to look at the system of interactions that makes up product development and isolate components, activities and elements of output. For me, feedback is an interaction between people, or information about a system or product - it can be internal within the system (company, teams) or external (information received or gleaned from outside the company or teams). Internal feedback - to me - has value, care and thought attached to information or data points, maybe even discussion or talking points.

Compare, "login doesn't work" to "login doesn't work under circumstances X, Y & Z" or "login performance has changed since version a.b.c". These are all potential forms of feedback - I claim that the second two have more value than the first. A team or product owner can react on all of these, but my claim is that it's easier to make decisions based on #2 & #3. Note, I'm making no claims on who or what generates this type of feedback.

Note, my experience is that good testing generates good feedback in systems of product development, which aids a variety of decisions and analyses - sometimes that comes from individuals and sometimes from systems that individuals have put in place to facilitate feedback and analysis.

And so, riffing on testing being removed from software development and the impact on feedback:






Reflections
There are other aspects of product development that one could look at besides feedback.

Timing is implied in these tweets: if it takes longer to get feedback and to be able to make a decision about it, then you (team, product owner or business manager) miss the opportunity to adjust course, make a different decision, or conclude that an experiment didn't work and adjust investment, etc.

My implication in these tweets is that testing provides valuable feedback to teams and companies. 

Another implication is that feedback is part of the system of product development, and that ultimately it is a social interaction between individuals.

I do not make a claim that testers own feedback. Good testers can contribute to good feedback systems and delivery.
Extending that: the proximity of feedback to when code is written is vital for hypothesis or experiment-based development.

I think I've probably generated a few question marks that need explanation and extension, but I will leave that to another post.



Sunday, 6 August 2017

Quality-Value: Heuristic or Oracle?

The other week James Thomas wrote down some thoughts related to “Quality is value to some person”. I added some diverse thoughts - a mini-brain dump. This post is an expansion on one of those thoughts.

Start point
I made the assertion that the heuristic/statement (“quality is value to some person”) had been primarily used in a software testing context as a tool to illustrate that stakeholders are the gatekeepers of quality and not necessarily testers.

To me this is ok - but ultimately it is not a proactive approach that helps the team, company or organisation work out how to delight its current and potential customers with good/acceptable/superior value and quality.

Point to explore
The point I made on James’ blog that I want to expand on is:
“4. In what scope is the “quality is value to some person” used in SW testing? I don’t know if it really matches Jerry’s original intent. I think it has been used to find a responsible stakeholder to discuss test results & objectives with, and probably also to aid testers to explain that they are not the sole arbiters of quality.  
4.1. I read the intent (from Jerry’s QSM) as highlighting a relationship and a perspective (i.e. subjective rather than objective) - which by nature can’t (usually) be static. I haven’t really seen/read of anyone applying it from this perspective to SW testing. I wonder how it would look…”
So, the question I will explore: if this is a transient and subjective statement, what use can it have for software testing, or software development as a whole? I make some claims (statements and opinions) and pose some questions - partly re-anchoring the context in which the statement* could be used:
  1. Who is making the statement*?
  2. When is it useful to make the statement*?
  3. To what scope does it apply?
  4. It’s bigger than software testing


Who is making the statement*?
I assert that it makes no sense for only certain people in a development team or project to hold this view ("quality is value to some person"). Therefore, this is a team or project view of quality - or, preferably, a view on quality held by the team together with a product owner or even a customer.

This starts to imply a synchronised view of acceptable goals for a product, feature, or vertical slice of a feature - as a goal for the team/group, rather than individuals using it as a reminder (or, in the worst case, as a defensive and passive position).

When is it useful to make the statement*?
I assert it is a useful reminder at the start of work on the goals of a product or feature - these may be preliminary goals used in a prototyping activity, or a hypothesis about what a customer might want. Doing this at the start establishes a common goal for the team (or project or program, etc). This is not a statement useful for gatekeeping, but it is useful for goal alignment - alignment of subjective views, if you will.

To what scope does it apply?
I assert this applies to the whole development and delivery chain, which implies that synchronisation between development and delivery teams (or even a DevOps set-up) would be desirable. Again, the alignment is about common goals, not gatekeeping.

It’s bigger than software testing
Hopefully, this point is obvious?
Applying the statement* to product development (and delivery), I assert that it soon becomes clear(?) that it is much more than software testing and should run through the whole chain. It doesn't need to be a defence mechanism used by testers if alignment on goals has been achieved within the team, program, organisation or company.

Flip side? From Heuristic to Oracle
Suppose individuals or teams are using the statement* as a reminder/defence mechanism to illustrate that one or more stakeholders need to take a view on quality - what could this mean? I'd interpret it as a symptom of the team's/organisation's maturity with regard to delivering synchronised development & deployment quality.

Another way to look at it: it's a canary call for silos and local optimisation. You could say it can be used as an oracle for spotting an organisational problem! More on that in another post…

*Statement: "Quality is value to some person"

Tuesday, 25 April 2017

Where is Testing ...?

It's in people's nature to notice change and differences. It's also in people's nature to make assumptions (or stop and ask questions) when they expect something and, to them, it is missing.

So, when trying to optimise a number of teams working together for the benefit of customers, if something people are familiar with is not obviously there, people can get tripped up.

Because of this, it can become difficult to discuss and explore new models or approaches: questions arise about what is missing - or, really, what is not visible, not obvious, not understood.

TL;DR -> Aim (Why): Delight Customers; Plan (How): Where to invest effort, observe and act; Execute (What): Experiments to perform and data to gather - to match the "How". These are universal "test skill" attributes - i.e. "testing" is everywhere in well-functioning product development, delivery and operation.

Example: 
Consider a model for product development, ideally optimised to get feedback from users of the product and to work on customer needs. It can span many software components, many software systems and many configurations. Assume the aim of the company producing the product is: to delight the customers & users of the product.

Product Inception, Development, Delivery and Monitoring


Note that this model can be applied at a team or company level.

Potential problem: Interpretation
If 10 people look at and study this model, there will be more than 10 interpretations. Very often this is a result of how someone frames the object of study (or the problem to solve), which can be a factor of where they "sit" in the model/company, what influence they have and what they "want" to do.

Potential question: Where's testing? 
Some people might look at the model and wonder where "testing" is done. This can be a leading question - sometimes a function of thinking of "testing" as separate from other activities, sometimes a function of what someone is used to anchoring to. Sometimes it might be a worry: how do we understand whether the customer is getting what they asked for?

I don't see that this model reflects a particular development model, or even a type of "testing school". Conversely, I'm not sure any testing school has put the work into supporting such a model (one optimised for feedback from customers and for working on customer needs).

Potential Approach: Find a place for testing.
One way is to work out how testing contributes to each box of the model. There is a trap here - if the whole is not also considered (or at least the adjacent boxes), this approach tends towards a local optimisation within each box and not necessarily between boxes. It's an approach that tends to place testing in boxes - in the extreme, it creates separate testing boxes. In the ultra-extreme, it creates a standard for SW testing detached from SW development.

Potential assumption: Specialised testing is not needed.
If you can't see it in the model then it's not needed, right? But note - I haven't spelled out product architecture and system design either, and that doesn't mean they are not needed. So, that leads to the question: what is the model conveying? This is not a WYSIATI (what you see is all there is) model - or rather, it sits above the level of practices.

Ok, so what use is such a model?

My take?
Yes, the model is at a very high level, but that's the point - use an example where it appears as though the thing you want to talk about is absent...

When I discuss a picture such as the one above and a discussion about testing comes up, my approach is: "think how testing helps each of the above boxes" - or, really, think of each box as containing:


Observation-Hypothesis-Experiment-Sensemaking

To give a number of questions by way of example:

Product in Use & User Feedback
- How to observe or get data about the product in use, and about customer opinions, complaints or new wishes and needs?
- How to make judgements and derive opinions about the data (form hypotheses)?
- How to create experiments to gauge and observe the product performance in use or product usage?
- How to evaluate the data and results from those experiments?
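By way of illustration, here is a minimal sketch of the sensemaking step, assuming usage events are collected somewhere as simple records. The field names, actions and version numbers are all invented for the sketch:

from collections import defaultdict

# Hypothetical usage events - in practice these would come from product
# telemetry or logs; the field names and versions are invented here.
events = [
    {"version": "1.2.0", "action": "login", "success": True},
    {"version": "1.2.0", "action": "login", "success": True},
    {"version": "1.3.0", "action": "login", "success": True},
    {"version": "1.3.0", "action": "login", "success": False},
]

def success_rate_by_version(events, action):
    """Aggregate outcomes per version for one user action."""
    totals = defaultdict(lambda: [0, 0])  # version -> [successes, attempts]
    for event in events:
        if event["action"] == action:
            totals[event["version"]][1] += 1
            if event["success"]:
                totals[event["version"]][0] += 1
    return {version: ok / n for version, (ok, n) in totals.items()}

# An observation like {'1.2.0': 1.0, '1.3.0': 0.5} is raw material for a
# hypothesis ("login regressed in 1.3.0") and a follow-up experiment.
print(success_rate_by_version(events, "login"))

The point is not the code but the loop: an observation becomes a hypothesis, which motivates the next experiment.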

Product Backlog & Development
- How and under what circumstances will observability of the product and product usage be prioritised and developed?
- How should the product architecture and supporting environment look in order to be observable?
- How should the product architecture support (fast) feedback on product changes? (hypothesis)
- How should the supporting environment support product changes? (hypothesis)
- How should experiments be created to observe and gather data on the product, its usage and performance?
- How to understand the results, and whether the experiments are giving data on the hypotheses?
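As a small sketch of what "observable by design" can mean at the code level - everything here (the event format, the version constant, the login stub) is an assumption for illustration, not a prescription:

import json
import time
from functools import wraps

PRODUCT_VERSION = "a.b.c"  # in a real system this comes from the build

def observable(action):
    """Wrap a function so every call emits a structured event.

    Emitting version, outcome and duration with each event is one way to
    make a question like "has login performance changed since version
    a.b.c?" answerable from the data.
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            outcome = "error"
            try:
                result = fn(*args, **kwargs)
                outcome = "ok"
                return result
            finally:
                # Stand-in for a real metrics/logging pipeline.
                print(json.dumps({
                    "action": action,
                    "version": PRODUCT_VERSION,
                    "outcome": outcome,
                    "duration_ms": round((time.monotonic() - start) * 1000, 1),
                }))
        return wrapper
    return decorator

@observable("login")
def login(username):
    return f"session-for-{username}"  # placeholder for real login logic

login("test-user")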

Product Delivery
- How to observe and understand product delivery and deployment?
- How to understand if a product delivery will delight or disappoint a customer (new or old)?
- How to create experiments to gather data on product delivery and potential response from customers?
- How to evaluate the data from product delivery experiments? What does the experimental data indicate about product delivery and potential (or actual) customer reaction?
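And a hedged sketch for the delivery questions - the metric and the tolerance are assumptions of this illustration, and the tolerance itself is really a hypothesis about what customers will accept:

def canary_looks_healthy(baseline_error_rate, canary_error_rate,
                         tolerance=0.01):
    """Compare a new deployment's error rate against the current one.

    Returns True if the canary is no worse than the baseline plus a
    small tolerance.
    """
    return canary_error_rate <= baseline_error_rate + tolerance

# Example: baseline at 0.2% errors, canary at 0.5% - within a one
# percentage point tolerance, so the rollout experiment continues.
print(canary_looks_healthy(0.002, 0.005))  # True

Whether such a check is owned by a tester, a developer or an operations team matters less than the fact that somebody designed it to produce decision-ready feedback.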

And finally.... the whole:

Product Inception, Development, Delivery and Operation
- How do we observe product usage and customer satisfaction?
- How do we create an understanding of what the customer wants and is happy with?
- How do we create experiments to test our understanding during development, delivery and operation? Do we have consistency of hypothesis through development & delivery?
- How do we evaluate the data from development, delivery and operation into a consistent picture? Do we have data to help understand delivery to customers, customer perception, and feedback to the product development teams? Do we have data to understand what can be improved?

Why->How->What
Most of these questions are "how" questions. They are predicated on supporting a model that optimises feedback from a customer and provides a product that a customer wants - the why. The "what" - the implementation - is the least important, although it is still important.

Sometimes "where's testing?" questions are really about "what" rather than the purpose and meaning. This is a check observation to keep in mind.

And So....
  1. Notice that all of the above might be more recognisable as test and fact-based advocacy (observations), test and fact-finding analysis and design (hypotheses & experiments), test and fact-finding execution (experiments and iterating on experiments), and test and (qualitative) data advocacy and reporting (sensemaking).
  2. Notice that it is everywhere in the SW development, delivery and operations loop. You might want to be ultra-specialised and constrain your "test" advocacy-design-execution-reporting skills to a small subset of the whole. Or you might realise that those same observation-hypothesis-experimentation-sensemaking skills are needed (and can be used) everywhere. The trick is to realise that, and then to balance the amount of time you want to spend in a small subset of product development activity - whether as a team, a separate team or an individual - against applying those skills elsewhere.

So - the testing skill set tied to observation, hypothesis forming, experimentation, evaluation and sensemaking is vitally important all through the product inception, development, delivery and operations flow! In my experience, successful teams and organisations have these skill sets in multiple places, not isolated in one.

Of course, if your practical skill set (or comfort zone) limits you to a small subset, you might want to work on expanding those boundaries - at least for the good of the people and teams around you.
