Monday, 31 August 2009

To test or not to test, just checking...


To test or not to test, that is the question....

There was a very good and incisive post from Michael Bolton, here, the other day, distinguishing testing from checking. A lot of good points and items to trigger reflection.

The evidence of a good post is that it gives you something to think about. I started thinking about my own experience, specifically in areas of performance, load & robustness testing and automated regression suites.

I asked myself a couple of questions - with Michael's post in mind:
When is a test not a test?
Does it matter to distinguish a test from a check?
Note: when I talk about a "check" below, I'll use it in the sense that Michael used.

A Test Case

In my particular working area (telecoms) a test case is a generic term. It's a label that can be attached to a group of test steps - to be used in a test, a "check" or a measurement.

In performance testing we have scenarios that execute against the system under test and from which measurements are gathered. These scenarios are labelled as test cases even if they don't explicitly test or check.

The output is used for comparison and tracking against established benchmarks, or even creating benchmarks. The results are analysed with stakeholders to determine whether further investigation is needed.
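A minimal sketch of that benchmark comparison, in Python. The scenario name, benchmark value and tolerance are all illustrative assumptions, not anything from a real tool - the point is only that the raw measurement becomes useful once set against an established benchmark, and the flagged result is an input to analysis with stakeholders, not a verdict.

```python
# Hypothetical benchmark comparison for a performance scenario.
# A measurement on its own is just raw data; comparing it against an
# established benchmark (within a tolerance) turns it into information
# worth discussing with stakeholders.

def within_benchmark(measured_ms: float, benchmark_ms: float,
                     tolerance: float = 0.10) -> bool:
    """True if the measurement is within +/-10% of the benchmark."""
    return abs(measured_ms - benchmark_ms) <= tolerance * benchmark_ms

# Illustrative: a call-setup latency scenario against a 120 ms benchmark.
print(within_benchmark(128.0, 120.0))  # within 10% of benchmark -> True
print(within_benchmark(140.0, 120.0))  # drifted beyond 10%      -> False
```

A `False` here doesn't mean "failed test" - it means "investigate further", which is exactly why these measurement scenarios sit comfortably under the test-case label without being checks.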

There is no problem in the organisation in calling this measurement a test case - it's purely a way of grouping some steps that contribute to an activity.

So test cases, checks or performance measurements can be designed and executed but they are collectively referred to as test cases - or even just tests. The test case, check or measurement relates to the activity and not the unit container.

Test (Case) or Not Test (Case)

The need to define, distinguish and be clear about terminology is a natural part of a critical thinking approach.
So, is there a conflict in using a general label such as test or test case?
A test or test case is a container. It doesn't say very much unless we know something about how it is applied - e.g. whether it's a unit, functional, robustness or other type of test case.

With most testing, the point at which a check emerges is when it has gone through one successful iteration - even a unit check is a unit test until it "works", meaning the correct behaviour is verified (or agreed). The first few times it is executed may reveal information about the unit design, or about the design of the unit test itself, that may need modification to get an accepted result.

So from first inception it is a test case - once the correct behaviour has been verified (and agreed, if need be) then it can be used for regression purposes - at which point it might be thought of as a check. However, I'll probably still think of it as a test, see below.
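To make that lifecycle concrete, here's a hedged sketch in Python. The function and its expected results are entirely hypothetical: imagine the expected values were agreed during the first exploratory runs, after which the case is re-run as a regression check.

```python
# A hypothetical unit case after its first successful iteration.
# The expected results below were verified/agreed once by a human;
# from then on the assertions simply confirm the behaviour still holds,
# which is what makes re-running it feel like a "check".

def normalise_msisdn(number: str) -> str:
    """Toy function under test: strip spaces and a leading '+'."""
    return number.replace(" ", "").lstrip("+")

def test_normalise_msisdn():
    # Expected results agreed during the first (exploratory) runs.
    assert normalise_msisdn("+46 70 123 4567") == "46701234567"
    assert normalise_msisdn("0701234567") == "0701234567"

test_normalise_msisdn()
print("regression check passed")
```

The code is identical before and after that agreement point - only our relationship to its results changes, which is why the test/check distinction lives in the activity rather than in the container.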

To Test

When I think about software testing I like Gerry Weinberg's definition of testing as information gathering. In lay terms I like Michelle Smith's description of the tester as an investigative reporter, here.

Taking a dictionary definition of test (not specific to software - from the Oxford Concise dictionary): "A critical examination or evaluation of the qualities or abilities of a person or thing." I can agree with this also.

So is a test case written with lots of expected results a test or a check?

An expected result from the test is defined. However, the determination that the regression test/suite is suitable to give input to the stakeholders is the critical evaluation step, i.e. the decision that the outcome will give valid input to the stakeholders.

A regression suite should be kept under review for relevance and value of the information that it provides. Note, not all of these tests will be automated and this will always throw up an interesting dimension.

So, in this context, it's a test and not a check.

Checking vs Testing?

In Elisabeth's example, here, the difference between performing tests for the GUI and exploring the system for usability is really a problem of communication - it wasn't clear (maybe to the developer, tester or both) that the type of testing being talked about was different.

Labelling it as checking and testing is one way to distinguish. Another way, using an agile example, is to say that one type of testing belongs to quadrant 1 and the other is quadrant 3 (referring to agile testing quadrants.)

So, what's in a name? Or what's in a label?


Q: When is a test not a test?
A: When it's a performance measurement, but even then I group it as a test. Test is just a label for grouping.

Q: Does it matter to distinguish a test from a check?
A: In general, no. Items are grouped as tests. In my experience regression tests are selected for the information about the system they will provide.

If we need to distinguish between checks and tests then there may be other issues that are at the root cause of the problem.

Are you putting the evaluation into your regression selection criteria?

Do you or your co-workers know how to distinguish between different types of testing?

Are you "checking" that you're "testing"?

Tuesday, 25 August 2009

My Latest Metric


There has recently been a debate over metrics when applied to software testing. Stories of usage and examples have run the whole range from the good, the bad, the ugly to the incomprehensible.

Linda Wilkinson has bravely decided to start a series on metrics and her viewpoints, here. I anticipate a good discussion - if there is any negative feedback I'm sure it'll be given in a professional way - rather than just being unfriendly feedback.

My latest metric
The latest metric that I have used is the pedometer - or step counter. It shows me how many steps I have taken and total elapsed time with a conversion to distance and calories used.

But that's just raw data - I need a certain history and environment in which to interpret the data.
It doesn't tell me about the terrain I was walking in.
Was it hilly - so that some steps should count double for the exertion?
Was I walking fast/slow?
Was I walking fast and taking lots of breaks or did I do the whole thing in one go?
Was I carrying any baggage?
Was I pushing or pulling something?
How comfortable were my shoes?
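The conversion the pedometer does is trivial arithmetic, which is exactly why it's only raw data. A minimal sketch, where the stride length and calorie factor are assumed illustrative values (a real device would calibrate them per person):

```python
# Minimal pedometer conversion: raw step count to distance and calories.
# STRIDE_METRES and KCAL_PER_STEP are assumptions for illustration only -
# none of the context (terrain, pace, baggage, shoes) appears anywhere.

STRIDE_METRES = 0.75   # assumed average stride length
KCAL_PER_STEP = 0.04   # assumed flat-terrain burn rate

def summarise(steps: int) -> dict:
    return {
        "steps": steps,
        "distance_km": round(steps * STRIDE_METRES / 1000, 2),
        "kcal": round(steps * KCAL_PER_STEP, 1),
    }

print(summarise(10_000))
```

Every question in the list above is invisible to this calculation - the numbers only become meaningful once I supply that context myself.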

Test Metrics
If using them, then understand the data - what it's representing, what it's an instance of and question what it's not telling you.

Know and work with examples of what might be deducible from the data.

Know and work with examples of what cannot be deduced from the data.

Know how to deal with data that's missing - sometimes nothing more than to acknowledge it - but that's an important step in itself.

Some would say that the answer to my pedometer problem is to get a better pedometer or ultimately some sort of human-tachometer. However, I just want a simple comparison - something that gives me some background data. It's just data until I can set it in a context with some meaning and I'm happy to do that step.

Even a super-duper-human-tachometer couldn't tell me about my motivation on a given day. The final story always needs the narrative!

Have you thought about the problems and limitations of your metrics?

Feedback: Friend or Foe?

#testing #feedback

I'm currently making notes on learning opportunities and one aspect that I thought needed separate treatment was on how testers give feedback to each other.

If you think about your experience in the work environment, on testing forums, on Twitter and in blog comments, are you keeping it professional - with the emphasis on professional courtesy?

I've seen examples on forums where sometimes a "seemingly innocent" comment or question is met with annoyance, condescension, rebuttal or frustration. Let's call it "unfriendly feedback".

Some Unfriendly Feedback Types
Annoyance: Maybe the responder had a bad day?

Condescension: What's the responder trying to show? Hopefully they are not falling into the "knowledge is power" syndrome. This is not a team-player response.

Rebuttal: This can be a legitimate "critical thinking" response. However, remember that the essence of critical thinking is about understanding and clarifying communication and not about being "critical".

Frustration: Maybe the question was something that could have been googled or someone has just asked a re-formulation of a previous question.

My Take?
One of my professional mottos is that "I'll help anyone but won't do the work for them."

This can be a fine line - especially for newbies - sometimes they don't know where to start, don't know which questions to ask first, how to formulate the question and are just keen to "start". That enthusiasm is good; it's an ingredient for future interest and development.

So, yes, answering questions from newbies can sometimes be hard work - the key is to direct and discuss threads for investigation. But remember, questions from non-testers can also sometimes be hard work!

These feedback types are equally visible in many people's work environments. They are not limited to tester<->tester; they occur tester<->non-tester and non-tester<->non-tester as well.

I've seen consultants treated less than courteously precisely because they were consultants - to their credit they responded professionally. This falls into a wannabe-superiority response and I don't think of it as a feedback type - more a cooperation problem.

Give it to 'em
If you're giving feedback keep it professional, courteous or light/humorous. If you can't then count to ten and try again. The important thing is not to withhold feedback just because you're biting your lip or counting to 10, 100, 1000,.... Give the feedback - it helps the receiver and it helps you!

If you're on the receiving end of the above type of feedback then rise above it. Don't bite back and lead by example. Give it back to 'em - an example of professionalism that is!
Note. When I was a new tester I was on the receiving end of the condescension type of response from a fellow tester in my organisation. I persevered - and saw that tester subsequently sidelined. If you're not a team player then you're heading for a team of one!
I know! Easier said than done at times!

Have you been on the receiving end of unfriendly feedback? How did you respond?

Have you recently given unfriendly feedback? How do you think it made the receiver feel?

Thursday, 13 August 2009

Expert-itis, Diagnosis and Treatment!


I started threading a few thoughts together before my holiday – more as a cautionary tale. However, after the recent outbreak of plagues I thought I’d give it a medical slant….

Who knows, I might start a whole thread of testing pests, diagnosis and treatment!

This is a condition affecting testers and test leaders, commonly in projects/iterations experiencing stress in timescales, build releases or test execution completion. The inducers of these effects come from outside the test team/organization.

This is the occurrence of test experts who are not practicing testers but who want to dictate/direct the testing effort.

A non-tester (typically PM or developer) expresses their opinions (sometimes forcefully) about how the test execution (and even design and follow-up) should progress.

Occasionally, this is done in an oppressive manner meaning that they are dictating the testing effort – micro-managing (instead of the test leader).

PMs have the “most” right to say how something should be done. They’re responsible for the completion of the project/phase, including all parts within it.

Do not antagonize or dismiss opinions. Take the opinions on board – within the scope of your remit. Developers have very valuable input into aspects of the testing effort – but you’re the one responsible for the test effort - your team may have a joint dev/test responsibility in some areas and separated in others.

If you can’t reconcile differences with a developer over how an aspect of the testing should progress then discuss with the team leader, test leader (if there is one) and PM, if needed.

If you can’t reconcile differences with the PM then you need to discuss divisions of responsibility – where the test responsibility ends – include the line boss, if needed, to get things clarified.

If you’re unfortunate enough to end up in this situation then think out your game plan first – motivation, problems to your work, problems this causes for the team and project, suggested way of working (emphasis on the team and project benefits) – before discussing with PM or line responsible.
If you can, bring in a real test expert to help put over your case. A real test expert, in this context, being a tester in your organisation that all sides will listen to.

Where the cause is an "old-style" PM (not rating the tester very highly) then the outlook can be quite poor. Perseverance is the key here, but usually this type of PM is not going to change very quickly.

Occurrences of testing expertitis have reduced in recent years, partly (but not exclusively) due to:
Testers have a weightier role in development projects – they are not the second-class citizens that they might have been 10 years ago.

Many organizations are more “enlightened” these days – open to many different development and testing approaches. This isn't just a nicety; the projects have to be efficient - meaning everyone is working from the same storyboard.

Many projects/iterations work in more cohesive units meaning there is less of the blame game.

Other Comments
Although I haven’t seen this for a few years – the ingredients for it to occur have not disappeared. A PM from the old school/pre-enlightenment era – they still exist – is probably the main catalyst for this type of event.

NB. Comments, observations and input from non-testers are valuable and should always be appreciated – it’s where the dynamic turns into a directive (that doesn’t come from or with the ok of the team/test leader) that it can be a problem.

Do you suffer from testing expertitis? Do you have any other testing affliction?

Wednesday, 5 August 2009

Summer Reading: Busman's Holiday

#softwaretesting #books

A long holiday this year – with plenty of traveling and sometimes sporadic internet access, so I decided to take a few books with me.

I changed focus from my usual pick of biography, philosophy and science to something a little closer to the profession and ones I know will probably help in the next couple of months – so a little bit of homework and refresh at the same time.

The selection of books was some that I’ve read and dipped into many times and a couple of new reads - some classics and some potential classics.

Software Test Specific

Agile Testing: A Practical Guide For Testers and Agile Teams, Crispin & Gregory, Addison Wesley, 2009
First-time read.
A very good and comprehensive guide to hands-on agile testing for testers and team leaders. Can be used as a dip-into read (using the summary as a guide). The authors use both non-Agile and Agile-specific observations and take in other well-established work (e.g. Marick's).
A good collection of research, observation and experience – follows one of the golden guidelines for good test literature (see below).

Lessons Learned in Software Testing, Kaner, Bach & Pettichord, Wiley, 2002
A re-read.
A potential classic. The series of lessons includes experiences that many readers will recognize – “yes, done that, seen that, been burnt by that etc, etc.” This is a “dip-into” book for me – almost a coffee-table book.

I don’t agree with all the observations – but that’s not the point of reading it – it’s there for the “on-tap” observations – having an observation that you don’t agree with can help clarify your own thoughts.

Perfect Software and other illusions about testing, Weinberg, Dorset House, 2008
First-time read.
A book that highlights the importance of information in testing – both information used in testing and how testers represent information. Some great thought-provoking comments around the typical everyday questions that testers face, e.g. “why don’t we test everything?”


An Introduction to General Systems Thinking, Weinberg, Dorset House, 2001 (Silver Anniversary Ed.)
A popular recent classic. This is a great introduction to how science looks at different types of problems and ultimately gains from the system approach – giving an important different perspective to the problem/model. Read this for the first time and enjoyed it.

Critical Thinking: A Concise Guide, Bowell & Kemp, Routledge, 2005
A re-read.
This is a great book for outlining and defining structures of arguments and how to distinguish valid contributors from erroneous ones. Not a “dip-into” book, but worth the work. Going over certain chapters as a refresher and for the exercises.

How to Win Friends and Influence People, Carnegie, Vermilion, 1981
A classic from the 30’s that’s always worth browsing and re-browsing. Some of the ideas around influencing people and teams are valid today even when some of the references and quotations are hundreds of years old. You’ll see similar principles being taught in modern-day child psychology books and docu-soaps on childcare and relationships…

I paraphrase some of the lessons and call them the “What’s in it for me?” principle. Before asking anyone (or a team) to do something different you must look at the proposition from their perspective and understand how that change/difference is going to benefit them (from their perspective, not yours!)

Secrets of a Buccaneer Scholar, Bach, Simon & Schuster, 2009
I managed to squeeze in a copy of the free download. Interesting read. I pick up the essence of a challenge here: “Dare to fail”.

The Mythical Man-Month: Essays on Software Engineering, Brooks, Addison Wesley, 1995
I had read parts of the original classic but bought the revised version for the additional essays – however, haven’t had the chance to open it so far.

I like this book for some of the great lessons that you can hear in SW engineering – “Throwing people at projects does not give the numerical pay-back” and “Avoid over-engineering”. I don’t find all the book’s thoughts relevant today, but there are still some thought-provoking observations. Looking forward to the newer essays when I get the chance.

A Golden Rule?
One of the golden rules about test literature relating to “handbooks or guides” is the way in which observations, lessons and ideas are presented.

Sometimes good literature actually re-states the obvious, whether it’s something you’ve heard or experienced before or something new that “clicks” with you; and sometimes it’s a great collation of ideas and information – that is, the research has been done, collated and summarized in a useful way.

If a handbook does this it has a greater chance of being more accessible. Think of this as the simplicity idea: “State the ideas clearly”, “State the relevant research and ideas around them”, “State the observations and arguments connected to the ideas” and “Summarize.”

When did you last take a busman’s holiday? What other books would you take and why?