This train of thought started at Let's Test 2012, a
fabulous conference that has been written about in many places, ref [1].
A theme I identified during the conference, and even before it in the LEWT-style peer conference, was the discussion of models, mainly mental ones. These models are used for a variety of purposes:
- Trying to understand the testing problem space (mission)
- Trying to understand the procedures and processes in use within a project
- Trying to understand and document the approach to the testing problem
- Trying to understand progress and map coverage of the testing space and of the product space
- Trying to communicate information about the testing and about the product
Some of these models are implicit, undiscovered, unknown or tacit - or a combination of these. Some models are understood to different degrees by their users, some are communicated with varying accuracy, and some are understood with varying accuracy by the receiver.
Some people translate (and communicate) these models into mind maps, some into tabular format, some into plain text, some verbally, and some in a combination of these.
Problem?
All models are wrong, but some are useful. Ref [2].
Another way to think of this: all models leave out elements of information. But I think the inherent challenge with models (mental or otherwise) is how we communicate them. My frames of reference for a problem, ref [3], may be different from those of my stakeholders and my stakeholders' stakeholders.
At Let's Test there was much discussion about the use and applicability of models, but not so much about the translation and communication of them, IMO.
It's this translation of models between people or levels that is an interesting, and perhaps underrated, problem.
If you have a model that is communicated and understood by only some of your stakeholders then there may be a problem with the model, the communication or both.
Models often start out as a personal tool or don't capture the frames of all those involved in the information flow.
My questions in the keynotes of Michael Bolton and Scott Barber, and in Henrik Emilsson's session, at Let's Test 2012 were along the lines of "how do we overcome the translation problems with models between different layers in an organisation or between different groupings?"
Recently someone showed me a pictorial representation of a complex set of data, followed by the question, "do you see what I see?" To which I replied, "I'm sure I do, but I have no idea whether I interpret it the way you do."
Trap?
The trap that I see is that we often put a lot of effort into the capture and representation of data and information, but the effort in the communication and dialogue that must accompany it isn't always considered, or not to the same degree.
The trap is that we start to think of the model as an artifact and not a tool (or enabler) for communication.
I often refer back to the SATIR interaction model (which I discovered via Jerry Weinberg); an online example is in ref [5]. The model breaks an interaction into intake, meaning, significance and response. If we're missing the significance and response parts then there's a danger that our intended message won't get across.
Examples
OK, this all sounds theoretical, so it's time for some examples of my own usage of such models.
Mindmaps. I use mind mapping tools (and pen and paper) for lots of different activities - I don't use them as artifacts, and that's an important distinction.
I have a couple of A3 mindmaps pinned up by my desk, variants of refs [6] & [7], and occasionally someone will ask about them: "oh, that's interesting, what does it show?" An interesting discussion will usually follow about their purpose and content, my reasoning behind them and potential future content. But it's the discussion and dialogue that makes the communication - there will usually be comments such as, "I don't understand this", or "why do you have this here and not that?", or "aren't you forgetting about..." - some points will be explained by me and not by the piece of paper, and some will be taken on board for future amendment.
But it's the information I leave out - the parts that NEED my explanation - that makes the communication work, and hopefully makes it successful.
Presentation material. I purposely write presentation material to be presented - rather than writing a document in presentation format. This means it can be quite de-cluttered, empty or abstract - the slides are meant to be items that the audience can attach to while hearing the story or meaning behind them.
The presentation material is only an enabler for a presentation (and hopefully dialogue) - it is not the presentation. In my day-to-day work I occasionally get asked for presentation material from people who missed it - they may get something, but not everything I'd intended. So I usually supply the standard health warning about the material not being the presentation.
How, and how well, I present the story is another matter....
Dashboards and kanban boards. I like low-tech dashboards, see ref [4] for an example, and kanban boards are tremendously useful. But don't mistake a status board/chart for a status report - it's the person describing elements of the chart who is doing the reporting. It's those descriptions that allow the audience (the receivers) to grasp, and check, the significance of the information with the presenter.
Test analysis. I work with many teams on a large and complex system. It's very typical that I will look at a problem from a slightly different angle than they do - they take a predominantly inside-out approach whilst I tend to look outside-in. Together, those approaches help cover most of the angles needed.
Discussions occasionally happen around the team's, or my own, approach to and understanding of the problem. "Why are only feature aspects considered and not the wider system impacts?", or "why are we so worried about this system impact for this particular feature?" These are symptoms that the models we're using to analyse the problem are not visible, transparent, or communicated and understood by all involved. If the team is not familiar with my model then I should be describing it: "these are the areas I've been looking at because..."
Test case counting. Sometimes stakeholders want to see test case counts or defect counts. I then usually start asking questions about what information they really want and why. I might throw in examples of how really good or really bad figures can be totally misleading, ref [8]. Their model for using such figures is not usually apparent - sometimes they think they will get some meaning and significance from the figures that can't really be deduced. It might be that they really do need a defect count (for their stakeholders), but then it's my duty to see that the relevant "health warnings" about the figures, and the limits within which they can be used (if any), are understood by the receiver - for further communication up the chain.
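As a hypothetical sketch (all team names and numbers below are invented, not taken from any real project), here's the kind of illustration I mean: two teams report identical headline figures, yet their situations are very different - the context is the "health warning" that needs to travel with the numbers.

```python
# A hypothetical illustration (all names and numbers invented) of how
# identical headline figures can describe very different testing stories.

runs = {
    "Team A": {"tests_run": 200, "tests_passed": 190, "areas_covered": 2,
               "open_defects": 10, "blocking_defects": 0},
    "Team B": {"tests_run": 200, "tests_passed": 190, "areas_covered": 12,
               "open_defects": 10, "blocking_defects": 4},
}

for team, r in runs.items():
    pass_rate = r["tests_passed"] / r["tests_run"]
    # The headline figure a stakeholder might ask for...
    print(f"{team}: {pass_rate:.0%} pass rate, {r['open_defects']} open defects")
    # ...and the context (the "health warning") that changes its meaning.
    print(f"  ({r['areas_covered']} product areas covered, "
          f"{r['blocking_defects']} of the defects blocking)")
```

Both headlines read "95% pass rate, 10 open defects"; only the accompanying context reveals that one team has barely scratched the surface while the other has found blocking problems across the product.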
Way forward?
Awareness of the problem is the first step.
Referring back to the SATIR interaction model, think about whether all parts of the model are being considered. The significance and response parts are the most commonly forgotten: do you and all other parties understand the information, and feel the same way about it? If not, then there might be a problem. Time to check.
Communicating models through different layers of an organisation or between groups? In some cases a model will only be applicable to certain groups of people - it's important to know how and where the information is to be used. In certain cases this may mean making "health warnings" or "limitations of use" available (and understood) along with the information.
I think there will be more thoughts to come in this area...
References
[1] Let's Test: Let's Test 2012 Re-cap
[2] Wikiquote: Quote by George E. P. Box
[3] The Tester's Headache: Framing: Some Decision and Analysis Frames in Testing
[4] James Bach: A Low Tech Testing Dashboard
[5] Judy Bamberger: The SATIR Interaction Model
[6] The Tester's Headache: Communication Mindmap
[7] The Tester's Headache: Book Frames and Threads Updated
[8] Slideshare: Test Reporting to Non-Testers