Saturday, 25 July 2009

A Plague for Critical Thinkers?

#softwaretesting

Are you a critical thinker? Do you know? Do you care?

If you’re a software tester then the answer to the first question is probably yes.

Plagues
It’s with critical thinking in mind that I’ve been reading James Whittaker’s series on the 7 plagues of software testing. I enjoy reading James Whittaker – whether in book or in blog form. He has great insights and viewpoints.

I have become increasingly perturbed by the “7 Plagues…” series of posts. Yes, there has been research, analysis and insightful observation, but the thing nagging at the back of my mind has been the use of “plague” in the titles.

Critical Thinking
I believe most testers are critical thinkers – they need to be, given the nature of the job – whether they realise it or not. It’s with this in mind that I feel uneasy about the use of “plague” – it’s more of a rhetorical device (in the parlance of critical thinking).

Critical thinking (to paraphrase) is the study of arguments and suggestions – analysing the underlying propositions and conclusions to determine whether the argument “holds together”. In this field, a rhetorical device is a loaded or weighted phrase used to advance an argument without contributing anything to the proposal. That is, its emotional weight is intended to sway the decision rather than any clear reasoning.

Clarity
Clarity is king in software testing. Yes, there is room for analogy, but when a series of articles hammers home the idea of a plague affecting software testing, it is reasonable to assume that some people will think of software testing as being “plagued” by certain problems…

I am not going to go through the individual plagues here – I’ve made comments on some of the posts (both agreeing and disagreeing with the contents). However, in summary, I can say that James highlights some potential pitfalls that testers (and the industry at large) can fall into.

I suspect the intended meaning was “plights” that can affect testers and the industry rather than “plague” or “epidemic” – maybe even that something was endemic. However, I kept tripping over the use of “plague”, and that partly distracted from the content.

“7 plagues of software testing” sounds much sexier and makes a better sound-bite than “7 pitfalls for testers and the software testing industry”. So that’s probably the main reason for the titles.

A Plague for a Critical Thinker?
The problem for testers is anything that distracts from the root cause, the underlying problem - the "reduced" argument. Rhetorical devices "get in the way": they cloud the argument and don't help the analyser understand what a person or statement is trying to say.

So to repeat, "clarity is king".

Critical mass?
Sound-bites should not be what we strive for in the industry. Consistency, clarity and honesty will win more friends and supporters in the long run.

Software testing needs to achieve a critical mass of critical thinkers to carry the profession forward.

So: critical thinking – if you didn’t care before, do you care now?

Wednesday, 22 July 2009

Moonlighting #2

#softwaretesting

Reflections on the Test Republic Test Challenge

(Not the '80s series with Bruce Willis...)

This is my look back at the testing challenge on Test Republic. It was interesting for a number of reasons: it re-affirmed the need for some key skills and highlighted that learning opportunities are all around.

I liked the challenge because it gave me a couple of learning opportunities. One was to try something new (explicitly testing a website) and the other was to write an example of a bug report that would be good/useful to others. It's easy to preach about technique, but a simple demonstration is often equally instructive.

I approached it from the perspective that it would give me some reminders about things to have in mind during our daily work.

The challenge didn't give any guideline on how the testing should be done (its aim was the fault reporting and the motivation for fixing faults), so I approached the test subject from a usability angle using an ad-hoc/exploratory method.

Usability gave me the starting point; the exploratory approach time-boxed my test session (30 minutes) and gave me the thread for developing the test steps. My raw test notes (to be published later - I want to add some explanation of my thinking) show the test steps, with observations and comments, and how they led to the successive steps.

The challenge didn't give me any groundbreaking moments but it did throw up some very useful reminders of skills and techniques to have in mind.


Take a break
#1. Emphasised the importance of walking away, taking a break, unplugging

When you finish a piece of work during your working day (a test session, writing a report or reading a document), take a break. This might be grabbing a coffee, a five-minute leg-stretcher, talking to a colleague or walking the dog.

This take-a-break activity allows you to re-charge (rest), re-focus on your next activity and do a little non-focussed thinking - this might mean drifting into problem solving whilst doing something else (for me it was pushing a pram with my sleeping daughter). Some of my best brainstorming happens this way.

The challenge didn't make me do this - but it's something I consciously employed as the challenge was an "on-the-side" activity.


The Power of Review
#2. Emphasised the importance of review, inspection and re-review

This is something I've been thinking about recently in preparation for a different post - the amount of time spent on reviews and analysis of work, reports and feedback. It's a huge resource for any testing professional.

Two examples from this exercise demonstrated the importance of review and re-review. One was to do with the actual fault description/reporting and the other was to do with reviewing the test session notes.

Fault/Bug Report

Reviewing the reporting. When writing a report - whether a bug/issue report, a test session/execution summary or some presentation of findings - it's always useful to take a break after it's "finished" and then go back and re-read. Look at it from two perspectives: 1) proof-reading - does it make sense, does it say what I intended?; 2) will my intended audience/receiver understand it?

Test Session Notes

A re-review of the testing notes threw up a fault that I hadn't picked up as obvious the first time around. Whilst writing a report for another fault and explaining the steps involved (for reproduction purposes), the fault became obvious.

This emphasises the need to review test notes, environment changes and lab set-up now and again. There is always information to be found. Sometimes it might show that all is as expected. Other times it might raise a question: "why is this step skipped, that step inserted, this environment variable set?", and so on.
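
To make that concrete, here's a tiny Python sketch of one such check - diffing two snapshots of environment variables taken before and after a test session. The snapshot contents are invented for illustration:

    # Compare environment snapshots from before and after a test session
    # and report anything that changed, appeared or disappeared.
    before = {"APP_MODE": "test", "CACHE": "on"}
    after = {"APP_MODE": "test", "CACHE": "off", "DEBUG": "1"}

    for k in sorted(before.keys() & after.keys()):
        if before[k] != after[k]:
            print(f"changed: {k}: {before[k]} -> {after[k]}")
    for k in sorted(after.keys() - before.keys()):
        print(f"added:   {k} = {after[k]}")
    for k in sorted(before.keys() - after.keys()):
        print(f"removed: {k} (was {before[k]})")

Anything the diff throws up is a question worth asking: did I set that deliberately, or has the environment drifted?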


Keep it clear and understandable
#3. Illustrated that there are useful learning opportunities for testers in the most common of places.

A well-known website. Go find the errors... Ok, so you find an error, now describe it...

Sounds simple? Yes, but doing the simple simply is something everyone should be practising now and then. Describe something in a clear and straightforward way that's directed at your audience.

In this challenge that meant describing the faults found in a way that gets the message across - making the fault understandable, with all the basic data, the steps to reproduce and some motivation for implementing the fix.
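
To make that concrete, here's a minimal sketch of the shape such a report might take (the details are invented for illustration - they're not from my actual challenge entry):

    Summary:     Search results overlap the navigation column at 800x600
    Environment: Firefox 3.5 on Windows XP
    Steps:       1. Open the site's search page
                 2. Resize the browser window to 800x600
                 3. Search for any term
    Expected:    Results remain readable at all supported resolutions
    Actual:      Result titles overlap and obscure the navigation links
    Motivation:  Any user on a small screen hits this on their first search

Everything the receiver needs is in one place: what happened, where, how to see it for themselves and why they should care.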

I'm a great advocate of "keeping it simple" or "simplify, simplify,..." - it's very easy to preach, but putting your words into practice is something we should all be able to do.


Push your own boundaries
#4. Learning is (should be) fun. Don't be afraid of mistakes.

Don't be afraid of doing something new, different or unfamiliar. You can always find a learning opportunity here. You might "fail", achieve or exceed your expectations - but you'll always learn something.

Whether this is to do with a new test technique, approach or a way of analysing a problem there are always opportunities to pick something up and improve.

Maybe the test technique didn't work for you. Why? Was there some preparation missing or do you need to go back through the basics?

Tip: Think about how you might explain that technique to someone not familiar with it. This challenges you to understand what you're talking about and is a great way to clarify your own thoughts. The next step is to do it in person - then you might get challenged for real.

If you never make a mistake you'll never demonstrate that you've improved. The Japanese work ethic is very open to making mistakes as long as you learn from them. If you're learning to ski or ride a horse you'll be told that if you're not falling over / falling off then you're not trying hard enough.


The "What's in it for me?" Priniciple
#5. Influencing factors - the perspective of the receiver of the report

The challenge called it bug advocacy. Some might know it as motivating your request. But however you understand it you can think of this as an example of the "what's in it for me?" scenario. I wrote a post recently, here, where I touched on influencing skills that are useful for all testers.

The bottom line for any motivational argument is how to answer the other person (the one you're selling to) when they're thinking "what's in it for me?" If you just tell them about your problems (bug #n fails due to xyz, please fix) without any motivation then you're likely to be disappointed.

(For best results do the influencing indirectly - let them come up with the idea.)

So the problem might then be presented as: bug #n fails due to xyz, impact of failure "abc", estimated cost of solution "def", benefit to customer "ghi", benefit to supplier "jkl". If the supplier/developer then turns around and says "we'll fix this as a priority, to be delivered by xxx", then you've done your job - you've convinced them of the need for the solution. Then the bargaining can start about actual delivery times, emergency/interim fixes, etc.
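
Filled in with purely invented figures (for illustration only), that might read:

    Bug #42: checkout fails when the basket holds more than 20 items.
    Impact: roughly 2% of orders are abandoned at the final step.
    Estimated cost of fix: two developer-days.
    Benefit to customer: recovered orders. Benefit to supplier: fewer support calls.

The developer can now weigh a concrete cost against a concrete benefit - you've answered "what's in it for me?" before they've asked it.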

This principle is an influencing skill - it's something all testers should have (to a greater or lesser extent) and it's something that needs continuous practice to stay sharp.


The above five areas occurred to me during the course of the challenge and during the subsequent assessment. They are skills/techniques useful for all testers in their day-to-day work.

I'll post my actual entry at a later date with some extended notes on the approach and steps for deduction and reporting.

When did you last think about the skills you're using and which need improving?

Sunday, 19 July 2009

Clear out the attic!

At home
The other day I was clearing out a cupboard in the kitchen. A cupboard that had started to accumulate miscellaneous objects other than kitchen items. I wanted to make more space for more food-related items and to sort out what wasn't needed anymore. Therapeutic...

You discover things that have been kept "just in case" or because "I'll need this one day", and other forgotten items. Some items I couldn't decide on either way, so I packaged them up for further storage in the attic. Off to transport them to the attic...

The attic contains more items that haven't been used in a while - some important items that are used seldom, but also other things kept "just in case" or because they "will come in handy one day...". Hmmm, deja vu!

And at work
I've noticed this pattern in my work environment. Before going on holiday or a leave of absence I'll clear my desk. Inevitably there are items, papers and documents that haven't been used in a while, haven't been useful in a while - "this will come in handy one day" items... Deja vu!

Yes, I'm a gatherer. Not so much a hoarder - I'll organise things so that they can be found for later use - but that can mean I gather a lot of things. In one sense this isn't a problem: the periodic clean-up will take care of it.

This is an example of what can happen in our work practices. I've been reflecting recently on reflecting - thinking about the time we use to review, re-review and go over past work.

This is part of my daily work - I read past reports as an input into the next work phase, I review what the teams and I have learnt and how we're applying those lessons. I look to see if the current strategies need modification due to what we've learnt.

The Product Attic
But what isn't always automatic is the clearing out of the product attic.

Where there are legacy systems we don't always budget for some of the housekeeping. An overhead will always be applied in the budget - but the continuous housekeeping is something that sometimes just sits on the side.

Remember to budget for the feature interactions and the analysis/review that they need.

Example?
How many test cases have been superseded?

This is a problem that can exist in large test bases: a new feature is added that interacts with several other features. Some of those other features may no longer operate in isolation - i.e. there will always be an interaction with another feature. Are those "old" test cases that look at a feature in isolation re-evaluated? (I'm not thinking of unit tests here.)
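
As a toy illustration of the kind of housekeeping check I have in mind, here's a small Python sketch. The test-base model and the feature names are entirely my own invention - a real test management system would need its own query:

    # Flag test cases that exercise a feature in isolation when that
    # feature, after a new design, always interacts with another one.
    # Flagged cases are candidates for re-evaluation, not deletion.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        features: frozenset  # the features the test case exercises

    # Hypothetical: call_forwarding now never operates without voicemail.
    ALWAYS_INTERACTS = {"call_forwarding"}

    def possibly_superseded(test_cases):
        return [tc for tc in test_cases
                if len(tc.features) == 1 and tc.features <= ALWAYS_INTERACTS]

    tests = [
        TestCase("forwarding_basic", frozenset({"call_forwarding"})),
        TestCase("forwarding_with_vm", frozenset({"call_forwarding", "voicemail"})),
    ]
    for tc in possibly_superseded(tests):
        print("Re-evaluate:", tc.name)

The point isn't the code - it's that the check can be made routine and fed into the next planning exercise rather than left to chance.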

Of course, this re-evaluation should be undertaken as part of the new design (elements of it can be treated as maintenance/housekeeping) and captured.

The next planning exercise (in the next phase/project) uses that experience.

The experience of those feature interactions is important, and the feature-interaction mindset should be kept in focus. Reflecting on those experiences is a key part of keeping it there.

Are you keeping up with your product housekeeping? And are you reflecting enough?


Friday, 3 July 2009

A Non-Testing Book for a Tester

#softwaretesting

I have a couple of presentations/discussions coming up after the summer. The peers and colleagues to whom I present are very intelligent and "on the ball", so I thought it was time to give myself a refresher on "rhetoric" - or at least a layman's version.

With this in mind I went and picked up Dale Carnegie's classic "How to Win Friends and Influence People". The book came out in the 1930s but has many topics and lessons that anyone can use today - you'll probably recognise them in any of today's TV series on child behaviour or books on team building.

It's not exactly a guide to presentations and rhetoric but it has some important points for anyone wanting to make a case...

Making Friends?
For my area of interest, the parts about "making friends" were not so relevant to lectures/presentations - although they're still useful inter-personal skills.

It's the second half of the book that is interesting - the influencing people part. Although the book is aimed more at inter-personal use I find some of the elements in it essential for anyone wanting to make a presentation or make a case for something.

Influencing
One of the most basic lessons is the "what's in it for me?" scenario - most people are chiefly interested in themselves and their own interests. There are genuine critical thinkers, altruists and "devil's advocates" out there - but they're not always in the majority.

Don't sell anything in terms of how great a product/tool/service is; sell it in terms of the other person's issues/problems and how it will help with them.

Don't ask for a change as a favour - "It would be really appreciated if you could do..." - (unless it really is just a favour); instead, state the case in terms of what the other person would get from this "favour".

A Book for Testers?
If you've ever wanted to put a case over for something - whether it's trying out a new test technique, investigating a new tool, altering some methods or introducing radical changes into an existing system - then these techniques are for you.

No matter how well you can describe the technical merits of something, or how much of a problem you're having with your existing tools/ways-of-working, when presenting a case for the new tool/method/system you need to start out by playing "devil's advocate".

Ask the "what's in it for me?" but from the perspective of the person you're talking to - whether that's one person, a team or an audience of strangers. Why should they want to hear about your problems first?

As testers we should always expect our judgements to be open to "test", questioning and analysis. If you have put yourself in the other person's shoes (looked at it from their perspective) then you have a much greater chance of answering any and all questions.

Another point made in the book is to accept when you're wrong and admit it. This isn't any sign of "weakness", but a way of influencing people, building your brand through genuine openness to feedback and a great way to learn.


Summing-up
Don't start an argument for a new "widget" by saying "these are the problems I'm having with the current widget". Turn it around: understand how the other person is experiencing the widget today, and show how a new widget can improve that experience...

Don't make requests for change without phrasing them in terms of the other person's benefits from that change.

Do you play devil's advocate? Do you look at a problem from the other person's perspective?