Friday, 27 July 2012

Challenges with Communicating Models III


The previous posts, refs [1] and [2], looked at some of the problems and traps with the mental models we create, and how those models might (or might not) show up when we try to communicate and use them.

Then I was reminded of a passage in a book that might help...

A Heuristic For Communication?
George Polya wrote a marvelous book, ref [3], which addressed heuristic reasoning in mathematical problems. Much of it is applicable to software testing - indeed, the book could be treated as an analogy for heuristic approaches in software testing, and deserves separate treatment.

However, just as useful in the specific edition of the book I reference is the foreword by John Conway, a mathematician, ref [4]. He made some great observations about Polya's work, and I will raise one of them here:
"It is a paradoxical truth that to teach mathematics well, one must also know how to misunderstand it at least to the extent one's students do! If a teacher's statement can be parsed in two or more ways, it goes without saying that some students will understand it one way and others another..." 
And from this I derive something I've been calling the Conway Heuristic:
"To communicate something well (effectively), one must be able to misunderstand the information (report, result, interpretation, explanation) in as many ways as the other participants in the communication (discussion or dialogue) process."
The beauty of this is that it reminds me that no matter how well practiced my method is, or how well polished my talk or presentation is, there is likely to be someone in the crowd, stakeholder group, or distribution list who "doesn't get it". The chance of that is greater if I haven't spoken with them before, or if they are unfamiliar with the methods and procedures used in the communication.

This is a difficult heuristic to apply - it requires effort, training and awareness to do consistently and successfully. I think we need more training in how we report testing information (and stories), with emphasis on clarity, devil's-advocate role-playing, and awareness of rat-holes and how to handle them.

To be continued...

References
[1] The Tester's Headache: Challenges with Communicating Models
[2] The Tester's Headache: Challenges with Communicating Models II
[3] How to Solve It (Polya, 2004, Expanded Princeton Science Library)
[4] Wikipedia: John Horton Conway, http://en.wikipedia.org/wiki/John_Horton_Conway

Tuesday, 24 July 2012

Challenges with Communicating Models II


Anders Dinsen posted some great comments and questions on my previous post. One comment was "if I get your model", i.e. if I understand you correctly. I loved the irony here - I was writing a post about potential problems with communication (or trying to) - and I was getting questions about what I meant.

I thought the comment exchange was great. Anders was doing what good testers do - unsure whether he had understood me, he posed questions and stated the assumptions he could see.

A Recent Example
I'm reminded of a course I took a couple of months ago - it was called Communicating Value - and it covered a range of topics and techniques, including presentation content, presenter style and negotiation techniques. One of the exercises was to give a five-minute presentation in which you tried to sell something or get "buy-in" for an idea. I chose the horrendously difficult task of trying to sell the increased use of qualitative information to execs in my organisation (which is quite large), and to do it in five minutes. This was a challenge for me because:
  • the topic goes against a lot of learned reliance on, and intuition about, purely quantitative methods;
  • the topic is quite complex to explain;
  • my own explanation techniques can be quite "wordy" and abstract;
  • execs are used to executive summaries, so introducing the topic and content and getting agreement in five minutes was always going to be hard.

For the exercise, I modeled my presentation on a group of execs (that I knew) and on information I knew they were familiar with, and targeted the goal of getting buy-in for follow-up discussions that would lead to a trial. In my experience this model worked with people familiar with the topic, the content and some of the related problems. For those less familiar it became more abstract, and they lost the connection to reality.

Lessons
  1. Models of communication (or content) do not translate automatically. 
  2. Some models need more time to digest or understand. 
  3. Some models need more familiarity than others.
  4. No model is perfect, so facilitation and guidance are needed to aid communication and dialogue.


A Model for Communication
When I started thinking more about this I thought of this relation:

Communication = Artifact (model) + (Discussion and/or Dialogue)


Thinking about the example above, the good exchange with Anders, and the previous post, I attempted to jot down some thoughts about modeling as a mind map, below. It might be useful, but, as I've been saying, I also expect it to be fallible...

[Mind map image: thoughts on modeling]

Thursday, 12 July 2012

Challenges with Communicating Models

This train of thought started at Let's Test 2012, a fabulous conference that has been written about in many places, ref [1].

A theme I identified during the conference, and even before it at the LEWT-style peer conference, was the discussion of models, mainly mental ones. These are models that are used for a variety of purposes:
  • Trying to understand the testing problem space (mission)
  • Trying to understand the procedures and processes in use within a project
  • Trying to understand and document the approach to the testing problem
  • Trying to understand progress and map coverage of the testing space and of the product space
  • Trying to communicate information about the testing and about the product
Some of these models are implicit, undiscovered, unknown or tacit - or a combination of these. Some models are understood to different levels by the user, some are communicated to different levels of accuracy and some are understood to different levels of accuracy by the receiver.

Some people translate (and communicate) these models as mind maps, some in tabular format, some in plain text, some verbally, and some in a combination of these.


Problem?
"All models are wrong, but some are useful", ref [2].

Another way to think of this: all models leave out elements of information. But I think the inherent challenge with models (mental or otherwise) is how we communicate them. My frames of reference for a problem, ref [3], may differ from those of my stakeholders, and from those of my stakeholders' stakeholders.

At Let's Test there was much discussion about the use and applicability of models, but not so much, IMO, about their translation and communication. It's this translation of models between people or levels that is an interesting, and perhaps underrated, problem.

If you have a model that is communicated to, and understood by, only some of your stakeholders, then there may be a problem with the model, the communication, or both. Models often start out as a personal tool, or don't capture the frames of all those involved in the information flow.

My questions at Let's Test 2012 - in the keynotes of Michael Bolton and Scott Barber, and in Henrik Emilsson's session - were along the lines of "how do we overcome the translation problems with models between different layers in an organisation, or between different groupings?"


Recently someone showed me a pictorial representation of a complex set of data, followed by a question: "do you see what I see?" To which I replied, "I'm sure I do, but I have no idea if I interpret it the way you do."


Trap?
The trap that I see is that we often put a lot of effort into the capture and representation of data and information, but the effort needed for the communication and dialogue that must accompany them isn't always considered, or not to the same degree.

The trap is that we start to think of the model as an artifact and not a tool (or enabler) for communication.

I often refer back to the Satir interaction model (which I discovered via Jerry Weinberg); an online example is in ref [5]. If we're missing the significance and response parts, then there's a danger that our intended message won't get across.


Examples
OK, this all sounds theoretical, so it's time for some examples from my own use of such models.

Mindmaps. I use mind-mapping tools (and pen and paper) for lots of different activities - I don't use the maps as artifacts, and that's an important distinction.

I have a couple of A3 mindmaps pinned up by my desk, variants of refs [6] and [7], and occasionally someone will ask about them: "oh, that's interesting, what does it show?" There will usually follow an interesting discussion about the purpose and content, my reasoning behind them and potential future content. But it's the discussion and dialogue that makes the communication - there will usually be comments such as "I don't understand this", or "why do you have this and not that in a certain position?", or "aren't you forgetting about..." - some will be explained by me and not by the piece of paper, and some will be taken on board for future amendment.

But it's the information I leave out - the information that NEEDS my explanation - that makes the communication work, and hopefully makes it successful.

Presentation material. I purposely write presentation material to be presented, rather than writing a document in presentation format. This means it can be quite de-cluttered, empty or abstract, because the slides are meant to be items that the audience can attach to while hearing the story or meaning behind them.

The presentation material is only an enabler for a presentation (and, hopefully, dialogue) - it is not the presentation. In my day-to-day work I occasionally get asked for presentation material by people who missed the presentation - they may get something from it, but not everything I'd intended. So I usually supply the standard health warning that the material is not the presentation.

How, and how well, I present the story is another matter...

Dashboards and kanban boards. I like low-tech dashboards, see ref [4] for an example, and kanban boards are tremendously useful. But don't mistake a status board or chart for a status report - it's the person describing elements of the chart who is doing the reporting. It's that description that allows the audience (the receivers) to grasp, and check, the significance of the information with the presenter.
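As a rough sketch of what I mean (the column headings loosely paraphrase the dashboard in ref [4], from memory; the data and the Python rendering are entirely invented), a board like this carries the raw information, but not the report:

    # A hypothetical low-tech dashboard rendered as plain text.
    # The columns loosely follow the example in ref [4]; the rows are made up.
    rows = [
        # (product area, test effort, coverage 0-3, quality assessment, comments)
        ("Login",    "high", 2, ":-|", "intermittent timeout under load"),
        ("Checkout", "low",  1, ":-(", "blocked - waiting on test data"),
        ("Reports",  "none", 0, "?",   "not started"),
    ]

    print("{:<10}{:<8}{:<6}{:<9}{}".format("Area", "Effort", "Cov.", "Quality", "Comments"))
    for area, effort, cov, quality, comments in rows:
        print("{:<10}{:<8}{:<6}{:<9}{}".format(area, effort, cov, quality, comments))

Even with this in front of them, the audience still needs someone to explain why Checkout is blocked and what the ":-(" is really signalling.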

Test analysis. I work with many teams on a large and complex system. Typically, I will look at a problem from a slightly different angle than they do - they take a predominantly inside-out approach, whilst I tend to look outside-in. That combination helps cover most of the angles needed.

Discussions occasionally happen around the team's, or my own, approach to and understanding of the problem: "why are only feature aspects considered and not the wider system impacts?", or "why are we so worried about this system impact for this particular feature?" These are symptoms that the models we're using to analyse the problem are not visible, transparent, or communicated and understood by all involved. If the team is not familiar with my model, then I should be describing it: "these are the areas I've been looking at because..."

Test case counting. Sometimes stakeholders want to see test case counts or defect counts. I then usually start asking questions about what information they really want and why. I might throw in examples of how really good or bad figures can be totally misleading, ref [8]. Their model for using such figures is not usually apparent - sometimes they think they will get some meaning and significance from the figures that can't really be deduced. It might be that they really do need a defect count (for their stakeholders), but then it's my duty to see that the relevant "health warnings" about the figures, and the limitations within which they can be used (if any), are understood by the receiver - for further communication up the chain.
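As a toy illustration of how misleading bare counts can be (the test names, data and severities below are entirely invented, not a real report):

    # Two hypothetical test runs with identical pass rates but very
    # different stories once the severity of the failing areas is considered.
    from dataclasses import dataclass

    @dataclass
    class TestResult:
        name: str
        passed: bool
        severity: str  # "critical" or "minor" - severity of the area covered

    run_a = [TestResult("login", True, "critical"),
             TestResult("checkout", True, "critical"),
             TestResult("tooltip text", False, "minor"),
             TestResult("footer link", False, "minor")]

    run_b = [TestResult("login", False, "critical"),
             TestResult("checkout", False, "critical"),
             TestResult("tooltip text", True, "minor"),
             TestResult("footer link", True, "minor")]

    def pass_rate(run):
        return sum(r.passed for r in run) / len(run)

    def critical_failures(run):
        return [r.name for r in run if not r.passed and r.severity == "critical"]

    for label, run in (("Run A", run_a), ("Run B", run_b)):
        print(f"{label}: pass rate {pass_rate(run):.0%}, "
              f"critical failures: {critical_failures(run) or 'none'}")

Both runs report a 50% pass rate, but only the second has broken critical paths. A stakeholder who only sees "50% passed" has no way to tell the two apart - that's exactly the gap the health warnings have to fill.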


Way forward?
Awareness of the problem is the first step. Referring back to the Satir interaction model, think about whether all parts of the model are being considered. The significance and response parts are the most commonly forgotten - are you and all the other parties understanding, and feeling the same way about, the information? If not, then there might be a problem. Time to check.
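One way I can imagine applying this is as a simple checklist over the stages of the model (the stage names follow the intake / meaning / significance / response breakdown described in ref [5]; the question wording is my own paraphrase, not Weinberg's or Bamberger's):

    # A minimal checklist sketch based on the Satir interaction model stages.
    # The questions are my paraphrase of each stage, not an official wording.
    SATIR_STAGES = {
        "intake":       "Did the receiver actually see or hear the information?",
        "meaning":      "What interpretation did the receiver make of it?",
        "significance": "Do all parties feel the same way about what it implies?",
        "response":     "What will the receiver say or do as a result?",
    }

    def unchecked_stages(checked):
        """Return the stages of an exchange that haven't been confirmed yet."""
        return [stage for stage in SATIR_STAGES if stage not in checked]

    # Example: the report was read and understood, but significance and
    # response - the most commonly forgotten parts - were never confirmed.
    for stage in unchecked_stages({"intake", "meaning"}):
        print(f"Unchecked: {stage} - {SATIR_STAGES[stage]}")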

Communicating models through different layers of an organisation, or between groups? In some cases the models will only be applicable to certain groups or groupings of people - it's important to know how and where the information is to be used. In certain cases this may mean making "health warnings" or "limitations of use" available (and understood) alongside the information.

I think there will be more thoughts to come in this area...


References
[1] Let's Test: Let's Test 2012 Re-cap
[2] Wikiquotes: Quote by George E. P. Box
[3] The Tester's Headache: Framing: Some Decision and Analysis Frames in Testing
[4] James Bach: A Low Tech Testing Dashboard
[5] Judy Bamberger: The Satir Interaction Model
[6] The Tester's Headache: Communication Mindmap
[7] The Tester's Headache: Book Frames and Threads Updated
[8] Slideshare: Test Reporting to Non-Testers

Wednesday, 11 July 2012

Communication Mindmap

About two years ago (May 2010) I started preparing a presentation for Iqnite Nordic 2010. During the preparation I mentioned that I'd prepared a mindmap, which I printed in A3 format to be able to read it.

I recently had a very good question (thanks Joshua) about the availability of this mindmap. Well, I'd forgotten about the source file, as I generally just use the paper copy for reference. But I dug it out and found that it is all still very relevant (to me).

I still give the presentation "Test Reporting to Non-Testers" fairly regularly, ref [1]. It speaks to some of the points about metric obsession, and to some of the cognitive biases behind why stakeholders might like metrics - such as cognitive ease and availability bias. Some stakeholders find it a useful eye-opener. I may try to record a future session and make that available.

For an example of cognitive ease in software development and testing see ref [2].

The picture of the mindmap is below; I'll try to make the source files available in the future. The usual health warning for all mindmaps and presentation material applies - they only give my perspectives, and are only fully unambiguous when accompanied by dialogue and discussion - but they have a certain amount of value as they are, so enjoy!

[Communication mindmap image]

References
[1] Test Reporting to Non-Testers
[2] Best Practices - Smoke and Mirrors or Cognitive Fluency?