Some thoughts on the ideas presented in Michael Bolton's post.
Firstly, I like it. A lot. I commented on the post about a day after it was published, but I knew there was some more thinking I needed to do on the topic. So here we are, over two-and-a-half months later. (Yes, I think slowly sometimes - sometimes one-dimensionally and sometimes in a very divergent way, meaning I create lots of strands/threads to follow up - but I do continue thinking, usually!)
Showing your thinking is crucial to any activity that involves questioning, observation, hypothesis forming, evaluation, reporting, learning and feeding back under constraints of time, cost and equipment availability (this is testing, if you didn't recognise it already).
Showing the provenance of your test ideas is a powerful communication tool - for yourself, your team and any stakeholder. Doing this with many of your ideas will naturally show whether the thoughts/considerations you'd intended to include were indeed included. (What?) Ok, example time.
A simple example here is to construct a test idea around a "happy path" of a function, the most basic and usual case for that function. This will then lead to thoughts around:
- The use case itself (variants and alternatives);
- Platform (how many?);
- Environment (small, with many simulated elements, right through to a full customer acceptance configuration);
- Application configuration (Based on what? observed or projected?);
- User data (Based on what? Supposed, observed or projected?);
- User behaviour (Based on what? observed or projected?);
I work in situations where these are distinct variables - not much overlap - but in some shops some of these could be less distinct. Evaluating and selecting which of these elements to use, include and vary is part of the test design of the test idea, in conjunction with the test strategy. So, in this example we can see (hopefully) that there is a range of test ideas that could be distilled.
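To make that fan-out concrete, here is a minimal sketch of how one "happy path" seed idea, crossed with a few of the dimensions above, distils into a range of test ideas. The names and values are invented for illustration - this isn't from any particular tool or shop:

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical values for three of the dimensions; real shops
# would have their own (and more of them).
platforms = ["desktop", "mobile"]
environments = ["simulated", "full acceptance"]
data_bases = ["observed", "projected"]

@dataclass
class TestIdea:
    use_case: str
    platform: str
    environment: str
    data_basis: str

# One seed idea crossed with the dimensions above.
ideas = [
    TestIdea("login happy path", p, e, d)
    for p, e, d in product(platforms, environments, data_bases)
]

print(len(ideas))  # 2 * 2 * 2 = 8 distilled ideas from one seed
```

The point is not the code but the shape: each dimension multiplies the idea space, which is exactly why evaluating and selecting (rather than running everything) becomes a test design activity.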
In the example above I alluded to elements of choice/decision based on risk, priority, test design technique, strategy and availability. Sometimes this is about scoping the area for test: what to include (for now) and exclude (for now).
When I am telling the story of my testing I like to emphasise the elements that could be missed - areas I've skipped and the reasoning behind that, areas that maybe were discovered late that could have an impact on the product and for which it might be useful to recommend additional testing.
So, I want my story to be very explicit about what I didn't do. There is nothing new in this - traditional test reports included this information over 20 years ago. But I want to link it into my test framing in a more visible way. If I start talking about all the things I didn't do, it's easy for a stakeholder to lose focus on what I did do.
So, as well as the explicit chaining of test framing, showing the roots and reasoning behind the test idea, I also want to include (explicitly) the assumptions and decisions I made to exclude ideas (or seeds of new ideas). In the context of the test frame this represents the a priori information in the frame/chain - all of the framing (so far) is a priori information.
But there might also be scope to include elements of ideas (or seeds for new ideas) that would affect the test frame, based on information discovered during the testing. From a test frame perspective it could be very useful to include this a posteriori information.
Examples of a posteriori information would be feature interactions that were discovered during testing (not visible before) which didn't get covered before testing stopped. There might be aspects where test ideas couldn't be 'finished' due to constraints on time, feature performance (buggy?), third-party software issues (including tools) that couldn't be resolved in time, or some other planned activity that didn't get finished (a feature element that needed simulation, for example).
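A sketch of how a priori and a posteriori entries might sit side by side in a recorded frame - again purely illustrative, with invented labels and notes:

```python
from dataclasses import dataclass

@dataclass
class FrameEntry:
    note: str
    kind: str  # "a priori" (decided before testing) or "a posteriori" (discovered during it)

# A hypothetical frame mixing up-front decisions with mid-test discoveries.
frame = [
    FrameEntry("platform Y excluded for now - low usage, strategy call", "a priori"),
    FrameEntry("assumed observed user data is representative", "a priori"),
    FrameEntry("feature interaction found mid-test, not covered before stop", "a posteriori"),
    FrameEntry("tool issue blocked the simulated-environment run", "a posteriori"),
]

# Keep the discovered-during-testing entries visible in the story.
discovered = [e for e in frame if e.kind == "a posteriori"]
print(len(discovered))  # 2
```

Tagging each entry this way keeps the excluded and unfinished items from quietly disappearing when the story is told.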
I usually think of these aspects that are not 'included' in the testing story as the silent evidence of the test story. Making this information available and visible is important in the testing story.
There are aspects of these assumptions and choices that are implicit in the test framing article, but for me it's important to lift them forward. (Almost like performing a test framing activity on the test frame post.)
As a side-note: Test framing as a form of chaining/linking around test ideas fits well into a mind map, threads and test idea reporting. But that exploration is for another time...
So... Have you been thinking about test framing?