Sunday 14 February 2016

Testing. What was the question?

Details, details, details. It seems people sometimes get sucked into debates and discussions about details and forget the bigger picture - or that there is a bigger picture.

TL;DR -> Jump to the Thoughts and Observations part

Do you recognise any of these?
  • Is it right to say I am automating a test?
  • Is it ok to have lots of automated tests?
  • Is it right to have a standard for software testing?
  • Do I need to distinguish between scripts that I code or encode and my testing?
Notice any pattern? The discussion usually focusses on the detail. This might be ok... But sometimes it's useful to have a tool, aid or heuristic to help judge if this detail is needed or not. So, here's my control question for you, if you end up in such a discussion. 
  • What set of questions are you working with that you want your testing to help answer? 
Meaning, what is the relation to your team's, project's or program's development activity? So, really,
  • WHY are you testing?
  • HOW are you testing?
  • WHAT are you testing?
Yes, it starts with "why". Depending on how you view or frame your testing, [1], answering these questions will help you understand whether the detail is needed, needed now, or not needed at all.

The WHY
This could circle around ambitions, visions or goals of a team, company, product, campaign, program etc. 
  • Who is the receiver of my test information? Are there safety-critical or contractual aspects involved? (e.g. visibility of testing, whether by artefact or verbal report? Have I made a stakeholder analysis?)
  • How long will the product live and what type of support/development will it have? (e.g. support needs, re-use of scripts as change checkers - sketched after this list - and supporting tools and a framework to leave behind?)
  • What are the over-arching needs of the information from testing? (e.g. see reference [2])
  • Are there other factors to work with? (e.g. product or company strategic factors)
  • Check-question: 
    • Does answering the "why" help describe whether testing is needed, and in which cases it supports the product or team? Answering this question usually helps when working through the "how" and "what" of testing.
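One of the factors above mentions re-using scripts as change checkers. Purely as a loose sketch - the keys, values and baseline file name here are invented for illustration - such a checker might record a baseline on its first run and report differences on later runs:

```python
# Sketch of a simple "change checker": record a baseline the first time,
# then report which values have changed on later runs. The keys and the
# baseline file name are invented for this illustration.
import json
from pathlib import Path


def check_against_baseline(current, baseline_path):
    """First run stores a baseline; later runs return the keys whose values changed."""
    if not baseline_path.exists():
        baseline_path.write_text(json.dumps(current, indent=2))
        return []
    baseline = json.loads(baseline_path.read_text())
    return [key for key in baseline if current.get(key) != baseline[key]]


if __name__ == "__main__":
    # Pretend this dictionary came from running an existing test script.
    current_output = {"status": "OK", "items_processed": 42}
    changed = check_against_baseline(current_output, Path("baseline.json"))
    if changed:
        print("Changed since baseline:", changed)
    else:
        print("No change against baseline.")
```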
The HOW
Once the team, program or project has worked through the WHY and the factors showing that some testing will be needed, the HOW usually gets into the details and factors affecting the approach to testing.
  • What type of test strategy is needed, or what factors should I think about? (e.g. see reference [4])
  • What items or types of information do my stakeholders really want? (e.g. see reference [3])
  • How does the development and/or deployment pipeline look? Staging areas? Trial areas?
  • Does the team have dedicated equipment, or does it share or use CI machinery for support? Are there orchestration factors?
  • Will the test strategy need tool support?
  • Is the test target a GUI, a protocol layer, some back-end interface, a combination, or something else? (A small sketch of a back-end-level check follows this list.)
  • How do I and my team iterate through test results, assessing where we are now, and where we need to be? Do we have the right skills? (e.g. see references [6] & [7])
  • Check-question: 
    • Does answering the "how" help describe where the testing fits into the development of the product (i.e. an end-to-end view)? If not, then you might not be done answering this question.
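If the answer to the test-target question is a protocol layer or back-end interface rather than the GUI, the checks can be pitched there directly. As a sketch only - the endpoint, port and fields below are assumptions for the example (using the requests library), not any real product - it might look like:

```python
# Illustrative only: the endpoint, port and fields are assumed for the
# example. The point is that the check talks to a back-end interface
# directly rather than driving the GUI on top of it.
import requests


def check_order_endpoint(base_url):
    """Fetch a (hypothetical) order from the back end and apply basic checks."""
    response = requests.get(base_url + "/api/orders/123", timeout=5)
    assert response.status_code == 200, "Unexpected status: %s" % response.status_code
    order = response.json()
    assert order["id"] == 123
    assert order["status"] in {"open", "shipped", "cancelled"}


if __name__ == "__main__":
    check_order_endpoint("http://localhost:8080")  # assumed local test deployment
```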
The WHAT
If and when you have worked out the "why" and the "how" then the artefact-heavy part of testing might be the next set of questions to work with. 
  • What is the split between re-using existing scripts and developing new ones?
  • What heuristic inputs do I use? Are they sufficient? Do I notice new ones or refine some? (e.g. see reference [5])
  • Now you can think about test design techniques (pick your favourite book, blog or list).
  • What extra tooling needs to be created to support the testing (e.g. test selection and filtering aids)? A small sketch of such an aid follows this list.
  • Check-question:
    • The techniques and components chosen here should support the "how" and the "why". Do they? What might be missing?
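As an example of the kind of test selection and filtering aid mentioned above - the test names and tags are invented for illustration, and a framework's own mechanism (such as pytest's markers) may already cover this - a tiny tag-based selector could be:

```python
# Tiny sketch of a test selection/filtering aid: each test carries tags,
# and a filter picks the subset relevant to the question at hand.
# Test names and tags are invented for this illustration.
TESTS = {
    "login_happy_path": {"smoke", "gui"},
    "order_api_contract": {"smoke", "backend"},
    "bulk_import_performance": {"nightly", "backend"},
}


def select_tests(required_tags):
    """Return the names of tests whose tags include all of the required tags."""
    return [name for name, tags in TESTS.items() if required_tags <= tags]


if __name__ == "__main__":
    # e.g. run only the backend smoke checks before deploying to a staging area
    print(select_tests({"smoke", "backend"}))
```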
Thoughts and Observations
Notice that the "details" usually fall into the "what", or sometimes the "how", of testing? But - if you're not connecting them to YOUR "why" of testing then you might be getting yourself into a rabbit-hole. You might be - from a team, product or company perspective - locally optimising the effort.
  • So, details are fine as long as you're explaining the why, the how and the what together - and for what (product, company, team). This is the context.
Another way to look at it - if you're getting caught in details or not connecting the "why", "what" and "how" of testing in the context of your product, program, company or team then you might just be trying to solve the wrong problem. For help on trying to work out the right problem I'd recommend starting with Gause & Weinberg [8].

References

1 comment:

  1. I love this article! Being a newly assembled automation team, we are attempting to answer these questions all the time. Ideally, we use automation as a tool to remove the pain points the manual test team is going through. They would be embedded with the developers figuring out the constantly in flux new code. At first, we would automate from the regression test suite any high value repetitive tasks.

    Now, though, we are experimenting with front-loading all testing, so we are writing automated tests for everything, while the manual test team is doing their testing. We end up testing to a moving target, writing tests for a piece of functionality that is still evolving, or writing tests for something that may not get done on time. Oh, it's a learning curve, let me tell you, and requires constant attention to detail throughout the entire dev process!

    -T.J. Maher
    http://www.tjmaher.com/
