Wednesday, 27 January 2010

So, you're a tester? What's that?

 #qa #testing #softwaretesting

This was an internal blog post that I thought I'd share.

So, you're a tester? What's that?

Have you ever been asked this question? Where to start the answer...

Are you into Quality Assurance (QA), Quality Control (QC) or just Quality?

I think about this question from time to time, depending on the articles, blogs or tweets that I'm reading. There's been plenty of tester twittering about quality in the last couple of days, so the question resurfaced for me.

If you think of yourself as being in QA, do you really "assure" quality?

Isn't the assurance really taken care of by the developers/designers (that implement changes) and the project leaders that direct the effort?

Ok, QC then:
Do you control quality? Do you make it better or worse? Your testing effort surely feeds back into the design/development loop - and this is a valuable (sometimes I'd use the word crucial) contribution. But is it the tester that's controlling quality?

In some senses, "quality control" is borrowed from production-line use, where it is essentially a sampling and checking exercise. Sampling definitely fits into the activity of testing. Checking is one element of testing - it's a comparison against a static (unchanging) expected output. However, this only covers a subset of testing.

Production lines also produce the same output every time (or at least intend to), so software does not fit this model. By definition, a development project is producing something new/unique - so the idea of a production-line check only works for test cases that are already known to pass - i.e. regression test cases.
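To make that "checking" distinction concrete, here's a minimal sketch in Python (the `format_price` function and its expected values are hypothetical, invented for illustration - not from any real project) of the comparison-against-static-output that a regression check performs:

```python
def format_price(amount):
    # Hypothetical function under test: render an amount as a price string.
    return "£{:,.2f}".format(amount)

def run_regression_checks():
    # A "check": each case compares actual output against a static,
    # pre-agreed expected output. Nothing new is learned or explored here.
    expected = {
        0: "£0.00",
        1234.5: "£1,234.50",
        999999.999: "£1,000,000.00",
    }
    failures = [(amount, got, want)
                for amount, want in expected.items()
                if (got := format_price(amount)) != want]
    return failures  # an empty list means every check passed

print(run_regression_checks())  # → []
```

The point is that every expectation is fixed in advance - the learning and exploring side of testing has no place inside such a check.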

Ok, what about new test cases and the learning (and even exploring) side of testing? Yes, QC doesn't quite do it here either...

Ok, so are you just into quality?
Well, there's a problem. Quality is not a judgement that you as a tester can make about a product! Some fault-prone products may need to ship during a certain quarter to keep the cash flow ticking over - without that release there's a risk of less R&D, fewer jobs, fewer products, less testing needed, etc.

So how do you think about quality? I like Weinberg's definition that "quality is value to some person". This might ultimately be your project sponsor or product owner - and their values are "usually" a reflection of what they think the customer's values are / will be.

So, when you're maybe so far detached from the customer how do you make sense of quality?

Even if you are making reports against your project sponsor's definition of "good enough" quality, your findings (reports) are still only one factor in the "release" equation.

Diamonds in the rough?
It's those reports, investigations and the approach to digging for that evaluation that are the real key to a tester's role. I think of it as being part investigative journalist (credit to Michele Smith for that analogy, which I'm happy to borrow) and part researcher. You're digging for the story, the real story, the story behind the facade - it's serious and not a tabloid activity - the story/evaluation is a professional finding.

But, it's a report (or set of findings). Don't get emotionally involved with the product so that you're "disappointed" if a product releases with what you think is "bad quality".

Your reports are used to frame the product - put the story of the product in a setting - you're certainly not any type of police (hopefully all are agreed on that.)

Remember, if you are "disappointed" then take the team approach and sit down to look at improving that investigation (test execution & design) and reporting next time - this is tester + developer + project manager together.

Downbeat? No!
The best approach you can have towards testing and quality is that your testing is providing information about the product (telling a story, as I usually say) that will help your project leaders / sponsors understand if their idea of quality (their values) is being backed up or disproved by your findings.  [Credit to @michaelbolton for parts of this sentence!]

The tester is providing very valuable information about which areas of the product are meeting expectations (or not), issues and questions about the product's usage, and suggestions for further areas of investigation.

It's only the tester that's providing this vital information - remember when the project sponsor is making a decision about releasing a product they don't fret over the results of the code desk check...

You still don't know what a tester does? Well, take a smattering of investigative journalist, professional researcher, scientific experimenter and philosopher, put it into a devil's advocate mould, and you're getting close....

What would you add into the mix?


  1. I think you will like some of the writing I've done recently. In the first issue of the STC magazine there will be an article on testers in the gatekeeper or quality police role that should aid here.

    Second I'm currently working on a tester's fable, which I don't know yet what to do with. The first episode took quite some time to develop, and is about 2000 words in size now, which feels a bit long.

    Your downbeat paragraph reminded me of my picture of testers:
    Software development is problem-solving in the sense that the customer knows where he's at now, thinks he knows where he wants to be, and has the problem between these two positions - which the software shall solve. When the software is finished, the customer has a new problem: he does not know whether the software really solves his problem, and if not, to what degree. This is where testers come into play. "Testers are the headlights", as mentioned in one of the lessons in Lessons Learned in Software Testing. We light the way, but we also provide the information necessary to find out if the software really solves the problems of our users.

  2. The issue with the role of the tester is very much a case-by-case thing. What it means to one tester in one part of an org doesn't necessarily mean the same to another tester in a different org in the same company.

    Some testers are closer than others to the "real" customers - every time a layer comes in between, there is another potential interpretation of what the results/findings mean to the report receiver, which can differ from what they mean to the end receiver...

    But I think it's fairly safe to say that all testers contribute through the value of their feedback and the ways in which it is presented.

    I think evaluating whether a customer's problem is solved to their expectation is a different ball-game. In principle, it boils down to the same necessity for cooperation and negotiation - "our findings are x, y & z. We think that fulfils your requirements, and we can also report that performance and usability findings are a, b & c. Issues to note/discuss are j, k & l... Let's discuss."

  3. Simon,
    How about bridging gaps? Building relationships?
    Can we agree that testers do more than testing?...

    Cue the drum roll (dum, dum, tssh)...
    and as if by magic, watch out for the first issue of the STC magazine - sounds like it's going to be a cracker!


  4. Hi Peter,

    Yes, agree that testers do more than testing.

    Some of their best work is not test execution - it's the investigation beforehand and afterwards, as well as the reporting, that is so valuable and makes them such an asset in the team.

    This side is underrated - or hasn't had enough PR until now - so it needs testers to step forward and tell the story, and writing for a magazine like the upcoming STC mag is perfect practice!

    (How's that for a segue?)


  5. It's true that testers can add a lot of value before testing even begins. Ambiguity reviews on documents, pestering questions that uncover something no one has thought of yet, and a different approach to software (user perspective, quality perspective) make QAs very useful throughout the whole development cycle.

  6. Spot on Devon!
    Asking incisive questions is a great asset to the organisation.