Are you questioning your questioning?
- What do you do when you have limited access to the product to be tested?
- What if your test lab is expensive - meaning you have little time there, or little preparation time on site?
- What if you have (in the past) had limited compute time to perform calculations?
- How good is my test cheat sheet or template?
Once upon a time…
…access to computing power was limited. Mainframe access (as it sometimes was) was time-limited or budgeted.
And even today…
…some test lab set-ups are expensive, meaning preparation and execution time (or access) can be limited.
This may lead (and has led) to
…extensive preparation of what is to be done. Sometimes in terms of steps, sometimes in scripts, sometimes in code.
I’ve worked in all these scenarios - during formal education, during research and during software testing.
There are skills connected to working in this type of environment in particular, and to investigation in general. Whether it is testing or performing a calculation - in some of my cases, a complex algorithm - there is the calculation to perform (a piece of code to produce some output). There might be various input data - which might be fed in sequentially, or have another algorithm (program, or wrapper) iterate through the data and config combinations.
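As a rough illustration (the file layout, parameter names and calculation here are all made up), such a wrapper might look something like this in Python:

```python
import itertools
import json
from pathlib import Path

# Hypothetical stand-in for the real calculation (e.g. a complex algorithm).
def run_calculation(data_file: Path, config: dict) -> dict:
    # ... the actual computation would go here ...
    return {"input": data_file.name, "config": config, "output": None}

# Assumed layout: input files in ./data, two hypothetical config parameters.
data_files = sorted(Path("data").glob("*.csv"))
configs = [{"tolerance": t, "mode": m}
           for t, m in itertools.product([0.01, 0.001], ["fast", "exact"])]

# The wrapper: iterate the calculation over every data/config combination.
results = [run_calculation(f, c) for f, c in itertools.product(data_files, configs)]

# Keep the raw results for later analysis, calibration and comparison runs.
Path("results.json").write_text(json.dumps(results, indent=2))
```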
Then there might be complementary tests (calculations) to check the output, calibrate it, look for patterns in it, or compare it with other data sets. There is the coding of these calculations (scripts & programs), construction of data and investigation of configuration. There is the paperwork and head-scratching (scribbling and calculation) over which combinations make sense, the derivation of new algorithms, the exploration for meaning, and then how and what to communicate and discuss.
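A complementary check might be as simple as comparing each output against a trusted reference set within a tolerance - a minimal sketch, with made-up data:

```python
def check_against_reference(results: dict, reference: dict, tolerance: float = 1e-6):
    """Flag outputs that drift from a trusted reference data set."""
    mismatches = []
    for key, value in results.items():
        expected = reference.get(key)
        if expected is None or abs(value - expected) > tolerance:
            mismatches.append((key, value, expected))
    return mismatches

# Purely illustrative data:
results = {"case_a": 1.0000001, "case_b": 2.5}
reference = {"case_a": 1.0, "case_b": 2.4}
print(check_against_reference(results, reference, tolerance=1e-3))
# -> [('case_b', 2.5, 2.4)]
```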
Then there is the evaluation of the results. Do they make sense? Do they answer the original question? Which new questions do they raise? Are the results reasonable? How do I know?
Whither Testing Skill?
Where is the testing - and testing skill? It’s everywhere - but especially in the parts around working out the types of questions (or calculations, or algorithms) to ask, interpreting the data, evaluating the results and deciding if complementary work is needed.
How to work out the types of questions? Sometimes people will answer that this falls into the category of test techniques, but before that there may be experience and discussion helping to direct which techniques to try and which types of questions are useful (or interesting) - before even discussing what type of implementation to use (and what limitations it might have).
All of this broadly applies to scientific research as much as software testing.
Questions, questions
I was reminded recently by a colleague of a seminar I gave some years ago. He said that something I’d said stuck with him:
“A key skill of good testing is about asking good (relevant) questions and assessing the answers - either to help with more questions or to help understand how good the questions (and testing) were.”
I.e., you must know how to evaluate the quality of the testing - when the results still make sense, what their “shelf life” is, and when to ask new questions (when the gap in knowledge about the product grows).
Investigation activities
Notice that the investigation activities and skills fall into some broad categories:
(1) working out the good, useful and relevant questions,
(2) working out how to get answers for them,
(3) getting the data for the questions,
(4) documenting - lab notes, procedures,
(5) analysis of results,
(6) new questions to ask,
(7) communication of results and assessment.
This type of pattern can be seen in scientific research and in good software testing. In both, all these steps need to “work” - great execution of the wrong question (test), or misinterpretation of a good test, might be useless.
Note: none of this means that all these activities have to be done and controlled by a single tester (or only by testers) - they could be distributed in a team - or really - the team has the same goal as these activities and keeps the product development AND software testing true to its purpose. In scientific research this is the equivalent of peer review.
Testing & Tests
So, in this story…
Tests are very much related to the questions that might be asked about a product. They could take different implementation forms (types of test design or test techniques) and could be encoded (or instantiated) in different ways - a script or a data/configuration set or set-up, evaluating some data points.
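For instance, one question about a product can be encoded once and then instantiated by a data set - a sketch using pytest’s parametrisation, where the function under test is hypothetical:

```python
import pytest

# Hypothetical function under test - not from any real product.
def normalise_price(raw: str) -> float:
    return round(float(raw.replace(",", "")), 2)

# The question ("does price normalisation hold?") is encoded once;
# the data set instantiates it as many individual tests.
@pytest.mark.parametrize("raw, expected", [
    ("1,234.50", 1234.50),
    ("0", 0.0),
    ("999999.999", 1000000.0),
])
def test_normalise_price(raw, expected):
    assert normalise_price(raw) == expected
```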
Testing is the activity of working out which questions (tests) are useful or relevant in a given situation, what to make of the data and results for the tests, evaluating how to change or proceed and what type of information is useful to communicate.
The starting points above were typical scenarios using test scripts and scripting. However, the testing skill was not purely reliant on, or a result of, the test scripts.
Does this apply to me?
Suppose you don’t work with expensive test labs, or that you have extensive access to the software to be shipped. Does any of the above apply to you? You might think - “I have a handy cheat sheet, a list of things to do, a nice reference book, or a standard to follow.”
Then ask yourself:
1. How do I improve or test my own cheat sheet, list of heuristics, guide, template or standard?
2. How do I know they are relevant or useful?
3. What do I need to create or practice for myself to help improve the value I add to my team, group, company or customer?
4. What are my (or my team’s) weak areas? And how do I get help on them?
In my experience, good testers and teams ask themselves these types of questions.
And finally… To script, or not to script, that is the question.
Wrong - that’s a distraction!
Some people think that a script being used (or not) is an indication of testing skill or test quality. There might be correlation, but not usually causation!
Some people think that good testing is distinguished by how good the artefacts are that are left behind. To some extent this is true, but a major part of what drives which artefacts are left behind (or avoided), including their content and quality, lies in the thought and investigation that go in upfront, and in the analysis, discussion and evaluation afterwards.
If you (or your team) are focusing on one of the categories above (investigation activities) at the expense of another, then that’s a warning sign…
If you (or your team) can account for and question the categories above (investigation activities) then you have a chance of doing good testing. Keep questioning your questioning!