Monday 23 August 2010

Strung up by the Syntax and Semantics

Ouch! Sounds unpleasant, doesn't it?

I've just re-read Eats, Shoots & Leaves; I originally read it a few years ago, then forgot about it after leaving my newly finished copy on an airport coach in some foreign land.

So, it was a joy to re-discover this book and read again over the summer.

What's all the fuss about? It's just punctuation!
That's the thing! A lot of testing communication - whether the requirement specification of the product or the reporting of results and findings - usually happens in written form, and it's the written form that holds potential pitfalls for misunderstanding and confusion.

So, when you receive a document/description/mail, is it always unambiguous? Is it ambiguous on purpose (i.e. unknown behaviour not documented) or by accident? Is your own writing similarly unambiguous?

True, it's possible for the spoken word to be ambiguous, but then we usually have the luxury of putting in a question to clarify our understanding.

So, the moral is: ambiguous? Start digging - is it accidentally unclear, or is there a source of potential problems in your testing - the 'swampy area' on your testing map!

Superfluous hair remover
I'm getting into that age range where more hair is developing in "strange" places - ears - and less on the top of my head. I'd love to hear an evolutionary explanation for hairy ears - that were not needed until now... (Oh, I'm not being exclusive either: any creationist can chip in with the design explanation also. Agnostics: press the "don't know" button now.)

So when I read the phrase (superfluous hair remover) it immediately struck me on two levels - the age-related one that I just alluded to and the tester in me that wonders "what does that mean?" or "how do I interpret that?".
Do you see what I mean?
Is it talking about a hair remover device too many (a device that is superfluous) or is it talking about a remover of hair when you have more hair than is wanted (superfluous hair)?


What's the solution to this written problem? Well, it's a hyphen. A "superfluous hair-remover" might be something I'd see advertised in an "Unwanted items" newspaper column. A "superfluous hair remover" or "superfluous-hair remover" might be something seen in a "For Sale" column or store. Note, there's still room for ambiguity though!

Lesson?
Don't always assume that the writer of a product description, specification or requirement outline is writing exactly what they mean.

What, I hear you say - why wouldn't they write what they mean? Well, that comes down to ambiguity in the way someone (maybe not a technical writing specialist) describes the product - both the wording and the punctuation.

If it looks fishy/suspect - there's a good chance there is a need for testing investigation (and/or a bunch of questions).

Also, if you're writing reports - whether test progress or bug reports - make sure that the written word is saying what you think it is (or at least matches your interpretation - true, your interpretation may be skewed ...).

My sympathies to anyone who is writing or speaking in their non-native language. I make the odd slip in Swedish - although not as bad as a colleague who whilst giving a presentation got the emphasis wrong, so what should have been "you can see from the following six pictures" came across as "you can see from the following sex pictures" - that's one way to catch the attention of the audience!

Watch those hyphens and pauses!

Wednesday 18 August 2010

Good Testing & Sapient Reflections

It's been a good while since I read James Bach's posts about sapient testing, here, plus some of the earlier ones (here and here). I even contributed some observations on it in the testing vs checking discussion.

I have an understanding (my interpretation) of what is meant by sapient testing and it's something I can tune in with completely. In my book there is no room for a human tester to be doing non-sapient testing. Agree?

So, do I call myself a sapient tester?

No.

I work with a lot of testers and non-testers. Sapient testing wouldn't be any big problem for the testers I work with to understand and accept. The non-testers might be a different story - for the past 3 years in my current role I've been working on introducing intelligent test selection (initially applied to regression testing) within a part of the organisation.

I've made a big deal about thinking - about the testing we're doing, what we're doing and when, what we're not doing - and trying to get a grasp on what this means. For me, I've been contrasting this with the absence of these aspects. I haven't called it non-intelligent testing, I've just made a point of not calling it intelligent testing.

At the same time I've also started introducing the phrase "good testing" and implying that if we're not thinking about what we're testing (and why, what it means, what we're not covering, what the results say/don't say etc, etc) then we're not doing "good testing".
Of course, there's scope for people to say "I think about my testing" when they might not - so I observe, question and discuss, and together we come to a consensus on whether they're thinking about their testing: what assumptions are involved, what the observations are saying, what the observations are not saying, whether the test approach should be changed, what else they learned...
By using the phrase "good testing" I'm also priming the audience - especially the non-testing part - they want the testing to be good and now they're learning that this implies certain aspects to be considered. Eventually I hope they automatically connect it with the opposite of "just" testing.

So, changing an organisation (or even a part of it) takes time - just like changing direction on an oil tanker - there's a long lead-time. That's one of the reasons why I don't use sapient or sapience in my daily vocabulary - I'm using the "keep it simple" approach towards non-testers.

Pavlov or Machiavelli?
If that means using "good testing", "intelligent testing" or "thinking tester" to produce a type of Pavlov reaction from non-testers then I'm happy with that. Does that make my approach Machiavellian?

So, although I might be sapient in my approach and identify with a lot of the attributes of being a sapient tester, I do not call myself sapient. Does this make me a closet sapient tester?

Are you a closet sapient tester? Have you come out? Are you Machiavellian or otherwise nicely-manipulatively inclined?

Are you a 'good tester' or do you just do 'good testing'? Do you know or care?

Problems and Context Driving

I got involved in a small twitter exchange about problems, bugs and perception of the problems the other day with Andy Glover and Markus Gärtner.

Problem Perception
There was a view expressed that a user's perception of a problem is a problem.

I'd agree with this in most "normal" cases. But then there was a knock at the door and Mr. D. Advocate lent me his hat.

So putting the devil's advocate hat on I thought about how this view might not be enough. Or borrowing De Bono's phrase, "good, but not enough".

Cars
My way of looking at this for a counter-example was to think of driving a car. If I press a button, or depress a lever, expecting a certain response or action, and something totally different occurs, is this a bug? It might be, depending on what the button/lever was and the response.

If I'd pressed the button marked AC and the radio came on - I might think, "there's a problem here".

If I'd pressed a lever for the windscreen wipers and the indicator blinkers started then I might double-check the markings on the lever.

Blink Testing
BTW, Anne-Marie Charrett labelled this lever mix-up as an alternative form of "blink" testing!

Further Ramblings
I then started having flashbacks to my own encounters with cars in different countries and how I'd understood the issues:
  • Being stuck in an underground carpark at SFO, engine running, facing a wall, unable to reverse as I couldn't release the parking brakes - there were two and one was hidden. Manual to the rescue. Perception issue about where I'd thought the parking brake release would be.
  • Driving to San Jose (same car) and a heavy rain shower starting - how the heck do I switch the wipers on? Pull over and get the manual out. Perception problem about where the lever normally is.
  • Opening a taxi car door in Japan - taboo. Soon followed by trying to tip the driver. Applying the customs of one culture to another. My problem.
  • Driving someone round a roundabout (left hand side of road) when they'd only ever experienced driving on the right hand side. Were the muffled screams a "problem" or just a perception issue?
  • Trying to park in Greece - not on the pavement, unlike everyone else. Problem with local customs using my perception of a norm.
  • Sitting in the back of a taxi on the way to the office outside Athens going round a mountain pass whilst the driver is reading a broadsheet. Is my anxiousness a cultural thing?
  • Being surprised by overtaking customs in Greece. Flashing lights before the manoeuvre starts. Irritation and confusion. My problem, not being familiar with the local customs.
  • Driving round the motorways of France and Italy - minimal gaps between cars - my problem, perception problem?
  • Driving in Sweden - coming face-to-face with a moose at speed. My problem. Obviously the moose has right of way!
Moose encounter (image by Steffe via Flickr)


Problem vs Perception?
Problems can have fixed interpretations (this is an agreed issue) and areas of vagueness. Is this my perception or interpretation?

As testers I think we try to root out and understand what our perceptions are and then understand whether they are reasonable, on track or in need of further investigation.

Some of this might be working against a well documented expectation or at other times not knowing what to expect - I deal with both extremes.

One way I handle this is to keep an open (and skeptical) mind and then work out what my hypothesis (interpretation according to the observation) is and compare that with the product owner's.

Sometimes there's no clear-cut right or wrong interpretation.

But it helps to keep an open mind with perception (and borrow Mr. D.A.'s hat now and then).

Have you had any problems with perception lately?


Tuesday 17 August 2010

The Testing Planet has a site!

I read a post from Rob Lambert the other day about The Testing Planet, about the next submission deadline, a short Q&A and how it's open for all types of contributions.

What I didn't really notice was that The Testing Planet has its own site now. I didn't really see this announced - although I've been on holiday, so it's very possible that I missed it. I think I also saw a pointer in a tweet from @testingclub.

If you haven't seen it yet go and check out the new site for The Testing Planet. There you can look through both editions (the first being the software testing club mag from February), click through the contents list (for the 2nd edition) and even click through the different tags in both editions.

Very nice looking site and layout!

Have you seen it yet? If not I'd recommend a look.

Creative Thinking Challenge Follow-Up

During the summer I read a fairly interesting book, Think! Before It's Too Late, by Edward De Bono. Brief review of the book below.

In one section on creative thinking in 'Knowledge and Information' - just after being quite negative about multiple choice exams, something I've reflected on before, (here) and (here) - he poses a couple of challenges. One of these challenges was the basis for the Creative Thinking Challenge I posed.

Go read the comments to the above challenge to see the different approaches - I give a brief view on the approaches later in this post.


My Approach
My approach was to adapt the "Gauss method" and apply it to even numbers. After experimenting a little I was able to see the pattern for even numbers. So, my approach to the problem...


First listing the two rows, one in ascending and the other in descending order:-

      2   4   6  ...  298  300
    300 298 296  ...    4    2

Finding the pair sums (300 + 2, 298 + 4, etc.), all summing to 302; noting that there are 150 such pairs and that this total is double the required amount, so I must divide by 2. Giving:

(302/2) * 150 = 151 * 150

As for the mental arithmetic: I took 15*15 to equal 225, then added a couple of zeros (one for each zero I'd removed from 150*150 - i.e. I divided by 100 before, so I must multiply by 100 afterwards), giving 22500; then I must add the final 150 (to get from 150*150 to 151*150), giving 22650.
(We all know our 15-times table, right? I, for some reason, just know it's 225, but doing it long-hand in your head would be 15*15 = 15*10 + 5*10 + 5*5 = 150 + 50 + 25 = 225.)
This method is the so-called Gauss 'trick' or method. 
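The pairing argument can be sketched in a few lines of Python (my own illustration, not part of the original challenge answers):

```python
# Gauss pairing trick for the even numbers 2..300:
# write the series forwards and backwards, pair the terms up,
# note every pair sums to the same value (302), then halve the
# total because each number has been counted twice.
evens = list(range(2, 301, 2))      # 2, 4, ..., 300

pair_sum = evens[0] + evens[-1]     # 2 + 300 = 302
num_pairs = len(evens)              # 150 pairs (each term used twice)
total = pair_sum * num_pairs // 2   # 302 * 150 / 2 = 151 * 150

assert total == sum(evens)          # cross-check against a direct sum
print(total)                        # 22650
```

The assert is the "check your hypothesis" step: the clever method and the brute-force sum must agree.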



The Gauss Method?
I saw a suggestion (on twitter I think) that this was the famous Gauss problem, a problem that he is supposed to have solved as a school child.

I found a reference that apparently shows the first known solution to this type of problem was presented by Alcuin of York; a translation from the Latin of the problem and solution can be found (here) - it's the one about the doves on ladders!


The Challenge Answers
Firstly, many thanks to those that took up the challenge and were willing to submit their answers! It was really interesting to see the different trains of thoughts. There were some different and inventive solutions - and that's what it's all about.



The first effort came from Trevor Wolter. I think he took the approach that I would've taken if I hadn't known the 'trick'. He wrote out a few observations, looked for a pattern, came up with a hypothesis and presented it. Right answer!

The next entry, from anonymous, either knew the formula for arithmetic progressions or looked it up. He/she then presented the modified version. I assume they tried it out. There is potential for misinterpretation in the formula though - the 'n' value must be the highest even value in the series. If I'd said to add up the even numbers from 1 to 301, there would be some pre-filtering needed.
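For comparison, here's a sketch of the arithmetic-progression route using the standard sum formula S = n/2 * (first + last) - my own reconstruction, so I'm guessing at the exact form the anonymous answer used, including the pre-filtering caveat:

```python
# Arithmetic progression sum: S = n/2 * (a1 + an),
# where n is the number of terms, a1 the first and an the last term.
first, last, step = 2, 300, 2
n = (last - first) // step + 1   # 150 terms
s = n * (first + last) // 2      # 150 * 302 / 2 = 22650

# The pre-filtering case: "even numbers from 1 to 301" means the
# last even term is 300, not 301 - round the bound down first.
bound = 301
last_even = bound - (bound % 2)  # 300
print(s)                         # 22650
```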

Next up was Timothy Western, who presented a simple logic case and then did the actual calculation with the aid of excel. Short and sweet - right answer!

Abe Heward gave a close rendition of the "Alcuin of York" method. Creative thinking and right answer!

It looked like the arithmetic progression angle was the basis for Stefan Kläner's answer. Right answer!

The "Alcuin" approach was in use by Ruud Cox - well demonstrated steps and the right answer!

The arithmetic progression approach was used by utahkay. Right answer!

Finally, Parimala Shankaraiah gave a detailed walk-through of the initial observations, spotting patterns, following a hunch about arithmetic progressions, looking that up, modifying the formula, trying that out and matching with the observation to present the final hypothesis. Good analysis, research and explanation. Right answer!


Summary
What's interesting about these answers is that there are different methods of approaching and solving the problem - there are no right or wrong approaches. I found this problem intriguing purely from the "creative thinking" angle - this is something that can be practised, to add as another tool in the testing toolbox. Something I'm going to explore more.

Another good point about showing your thinking is that it's a natural step for writing bug reports. So I reckon a lot of the answerers can write bug reports with enough detail.


Book Review
The book is part complaint about the current state of thinking, part re-hash of previous work and part self-promotion. A fairly OK read, but I do have quite a few niggles with it. The book has had mixed reviews - although I read it quite avidly, which is slightly unusual for me!

There is a repeating statement through the work: Good but not enough. After a while it grates a bit. I understand his point and perhaps the continued repetition is the "drilling it home" method, but I'm not always receptive to those types of approaches.

The book gives short summaries of his previous work on Lateral Thinking, Six Thinking Hats and Six Value Medals. The summaries are fairly superficial - in theory not enough for someone to pick up and use in depth, although not impossible to grasp and use at some basic level.

Another niggle I have with the book is the lack of references or bibliography. De Bono states that this is because all his ideas are his own - however he uses Gödel's theorem to state a point about perception without referencing the theorem. Some references and bibliography would be nice, otherwise it just comes across as opinions without the back-up.

There are some reasonable points made, but when they're not backed up by references to the source material the reader has reached a dead-end in this book (if he wants to dig deeper into the source material).

I haven't read any previous De Bono work - I'll probably delve into the Lateral Thinking work at some point, but for now I have some pointers.

More on some creative thinking reflections connected to this book in another post.


Getting inspired about creative thinking for testing?

Sunday 1 August 2010

Carnival of Testers #12

In the northern hemisphere it's been a hot July. That's both weather and blog output!

The month started out with introspection (or was it just a talking-to-yourself fad?)

Rikard Edgren was asking himself questions in the corridor whilst Jon Bach was having a team meeting with himself. Two very insightful posts worth reading!

In this time of humid weather and lightning strikes it was good to see some advice on lightning talks from Selena Delesie.

Michael Larson wrote about the "Law of Raspberry Jam" and "Army of One" and what they mean to him.

Abe Heward was asking good questions related to survivorship bias of bugs and quiet evidence.

Most bloggers hit a writer's block once in a while. Rob Lambert describes how he regained his mojo, here.

Stephen Hill drew some interesting parallels to software testing after visiting an English courthouse.

In case you missed it the Software Testing Club announced the launch of the Testing Planet.

Making the release call for software, the problems and parallels with mysticism, were pondered by Curtis Stuehrenberg.

If you'd tried Ben Kelly's puzzle then you can read his reflections and what he learnt here.

Trish Khoo drew an interesting analogy between blog writing and shipping software, here.

Test case counting was a popular subject this month. James Christie wrote about his objections, several others pitched in and John Stevenson did a worthy round-up and addition to the subject.

More on the metrics angle was explored by James Bach, here.

Last, but not least this month was Markus Gärtner. He's been exploring aspects related to testing and quality with his latest post here.

Until next time...