Tuesday, 29 September 2009

Are you an Inquisitive Tester?

#softwaretesting #testing #qa

I read an article yesterday, here, about a study of innovators - including characteristics of the main drivers behind Apple, Amazon & eBay.

They distilled their findings into 5 "discovery skills":
"The first skill is what we call "associating." It's a cognitive skill that allows creative people to make connections across seemingly unrelated questions, problems, or ideas. The second skill is questioning - an ability to ask "what if", "why", and "why not" questions that challenge the status quo and open up the bigger picture. The third is the ability to closely observe details, particularly the details of people's behavior. Another skill is the ability to experiment - the people we studied are always trying on new experiences and exploring new worlds. And finally, they are really good at networking with smart people who have little in common with them, but from whom they can learn."

They then summarized these into a single characteristic - inquisitiveness. A good summary, I think.

Initially I was just browsing, but the more I read, the more I thought "yes, this is exactly what I'd look for in a tester". A tester has to be inquisitive. Throw communication and networking skills into the mix and you have a very sound foundation.

So, as far as skill sets go: Great Innovators = Great Testers.

Have you thought about the skills that make you successful? You are inquisitive, right?

Sunday, 27 September 2009

Is the message getting through?

Sat on a bus this weekend on the way to Stockholm airport.

Approaching the airport the driver announced that the bus would stop at all terminals. He did this in both Swedish and English.

Straight after the announcement, and coming up to the first terminal stop, the chap in front of me reached up and pressed the "next stop" button. Hmm. Got me thinking:

Did he not understand Swedish or English?
Did he understand the language but not the message?
Did he understand the message but have an ingrained/reflex reaction?
I don't know, but I got curious, as this is behaviour I can sometimes identify in my daily work. A message is transmitted (sometimes several times - both in writing and verbally). However, the expected change doesn't necessarily occur.

A typical example might be a change in a routine. After doing X and before doing Y we should now do ...
But the message about the change doesn't always sink in. Why?

I suspect the change fails to happen when it's only a small tweak to a routine, process or activity - a case of someone not really listening or paying attention.

So, if it's a small change -> get the attention first. A big change from a standard practice or routine will probably get the attention anyway - but don't take it for granted!

Similar experience anyone?

Tuesday, 22 September 2009

More notes on testing & checking

Background
Read Michael's series on Testing vs Checking. I've also made observations here and here.

Why?
Since seeing the first post I've been thinking, why?

Why the need for the distinction? I've been getting along OK without it. I'm by no means a perfect example of how a tester should communicate - but one thing I do quite well in my work environment is tailor the message to the receiver (or at least I try!)

That may be one reason that I haven't been bitten by the testing/checking bug. Or has my mind been blown? Ahhhh....

Digging
Anyway, about 5 days ago I wanted someone to point out the root problem, the root cause for the distinction (or at least one of the root problems). My idea was that the distinction arose from communication problems, and so I twitter'd to Michael about this:-

YorkyAbroad: @michaelbolton Nice latest post. The root cause of the need for the distinction hasn't been addressed yet. Or?
about 6 days ago

michaelbolton: A test, for you: what *might* be the cause for the need of the distinction?
about 5 days ago

michaelbolton: That is, I'm asking you: if you *see* the distinction (between testing and checking), what use could you make of it yourself?
about 5 days ago


YorkyAbroad: @michaelbolton Understand the diff bet. checking & testing as presented. A symptom of a problem is being addressed not the cause. Waiting...
about 5 days ago


michaelbolton: Sorry that you're not willing to respond to my request. You'll have your answer within a few days; please bear with me.
about 5 days ago


YorkyAbroad: @michaelbolton See checking akin to part of a test question. Little scope for checking that is not preceded or followed by sapience.
about 5 days ago


YorkyAbroad: @michaelbolton Am willing. Distinction needed in the communication (a root issue) of observations & feedback into a questioning approach.
about 5 days ago


michaelbolton: I'm finding it hard to read you in < 140. Why not send an email; I'd appreciate that.
about 5 days ago


YorkyAbroad: @michaelbolton I was "testing" if "the woods from the trees" was being distinguished. Will work on my own answer. Look forward to yours.
about 5 days ago


If anything is obvious from the exchanges, it's how limiting 140 characters can be when trying to communicate more complicated questions... You might also notice the Socratic fencing/avoidance - each of us wanting the other to answer his own question. So, time to resort to email.

Bad checking? Communication?
The following is extracted from a mail I sent to Michael detailing some thoughts on bad/good checking and communication; I thought it would be interesting to insert here:-

In my own area of work we have a fair amount of both testing and checking.

I can think of good checking (the methodology, not the action) and bad checking.

Bad checking is where a regression suite is used to give a "verdict" on a software build. I liked your last post questioning whether the tester is looking at the whole picture rather than just waiting for a "pass" result and moving on. But it can also be bad checking if we set off a suite of tests without asking the questions: "What is the goal here with these tests? What areas of the product am I more/less interested in? Should I pay particular attention to feature X?"

These questions have to be asked before executing a test, manually or by automation - hard work maybe, but if we don't ask them then we slip into the "regression coma" or your green bar pacifier. This is bad checking - or maybe just checking, but "bad" testing.

So, to distinguish "good/bad checking": good checking is preceded by a bunch of questions - some more vague than others - but they are there for a purpose. They have to be there: why test something without any purpose? Learning, going on gut instinct, "I've seen a problem in this area in earlier releases..." are all valid purposes for targeting a test session. But testing without thinking is a problem, and dangerous - the "question/goal" mutates into "I don't want to find a problem" and becomes a self-fulfilling goal (in the long run!)

That's why I think that checking and sapience can never get very far apart. The "check" is an element of the test - where the test is preceded by the question or goal of the test (pre-defined or not). So understanding the distinction (between testing and checking) also means needing to understand whether a tester is asking any questions, and appreciating that a check is an element of a test.
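
To make that concrete, here is a minimal sketch of how I picture a check sitting inside a test - my own illustration, not something from Michael's posts; the DiscountCalculator class and the figures in it are invented for the example:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountRegressionTest {

        // Invented stand-in for the production code under test.
        static class DiscountCalculator {
            double priceFor(int units) {
                double unitPrice = 10.0;
                double discount = (units >= 100) ? 0.10 : 0.0;   // 10% bulk discount
                return units * unitPrice * (1 - discount);
            }
        }

        // The question behind the test (the sapient part, asked before any run):
        // "The pricing code was reworked for the new feature - does the existing
        //  bulk-discount rule still hold for an order of 100 units?"
        @Test
        public void bulkDiscountStillAppliesAfterPricingRework() {
            double price = new DiscountCalculator().priceFor(100);   // the observation

            // The check: one observation plus one decision rule, which a machine
            // can apply without any further human judgement.
            assertEquals(900.0, price, 0.001);
        }
    }

The assert is the check; the comment above it is the part that can't be automated - and, for me, a test isn't a test until that part has been done.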

On to one of the root problems that I alluded to: Communication, specifically between tester and non-tester (although it can very easily be tester-tester also.) I'm thinking of the issue where people are talking past each other - a good example was in a post by Elisabeth: http://testobsessed.com/2009/05/18/from-the-mailbox-fully-automated-gui-testing/

Communication is a must in testing - whether it's understanding a requirement, a scope of work from a stakeholder, relating the observations made during a test session, making recommendations about areas for further work, etc. BUT also communicating what's going on at any stage of testing - the different types of test and their different purposes - maybe even the different goals/questions going along with each.

If testers (or test leaders) can't get that information over to the receiver then there will always be the "testing trap" - the testing information black hole. If a tester now thinks they can answer a dev or stakeholder by throwing "check, test, sapience" terminology at them, they're on ground just as shaky as before (if they didn't understand the differences in test types and how to communicate them.)

The check vs test distinction can help but only if you had a problem communicating about your testing activities and understand that you have/had that problem. The distinction goes part-way but I think it'd be even better to start with an "enabler": a question that a tester can use to discover/realise that this problem exists for them.

If you read Michael's posts you'll see that he's answering questions that are coming into him left, right and centre. It really is a discussion (rather than a monologue) - exactly in the spirit of a good blog. That's good to see, and commendable!

More digging
I wanted to follow up the communication angle by starting a discussion on STC & TR about communication experiences where the testing/checking distinction might have been a factor (a help or a hindrance), but I think it's all getting a bit obscure now - either people don't follow the whole thread, don't understand (mine and others' bad communication), or they just don't have the need/motivation to be interested (a big proportion, I suspect!)

Communication Follow-Up
However, I'm fascinated by communication and information flow - so I might re-jig the question to be more general and follow up on how testers experience communication issues and get around them...

I'm very much in the Tor Norretranders school of information flow - that the sender and receiver have to share perspectives (context even) before they understand each other!!!

This synch'ing of perspective takes time! So if someone doesn't understand your message the first time around it could be poor articulation, but also that you're both not sharing common ground. Yet!

Pause
I'm not quite signing out of the testing/checking discussion, but I have other things in the pipeline: blog posts I want to finish, a couple of conference presentations, plus my regular day job :)

Have you ever experienced difficulty getting your message across?

Saturday, 5 September 2009

Sapient checking?

#softwaretesting

The recent discussions about checking vs testing have been interesting. My spontaneous response, here, looked at the label (or container) of a test/check but also alluded to the fact that an automated batch of tests is much more than a set of checks.

Yesterday, I saw a clarification of checking from Michael Bolton, here, and a tweet from James Bach:-

jamesmarcusbach: We think a check is "an observation with a decision rule that can be performed non-sapiently" example: junit asserts.
about a day ago
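
To ground the definition, this is roughly what I picture when James says "junit asserts" - a small sketch of my own (the Inventory class is invented for the illustration):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InventoryCheck {

        // Invented stand-in for the code under test.
        static class Inventory {
            private int items = 0;
            void add(int n) { items += n; }
            int count()     { return items; }
        }

        @Test
        public void countReflectsAddedItems() {
            Inventory inventory = new Inventory();
            inventory.add(3);

            // The check: an observation (inventory.count()) and a decision rule
            // (it must equal 3), applied non-sapiently by the test runner.
            assertEquals(3, inventory.count());
        }
    }

What the assert doesn't capture is why that observation was selected in the first place - and that was the thread I wanted to pull on.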
After reading Michael's post and then seeing James' tweet, I wanted to test putting sapience (the conscious thought process) and checking together, resulting in the following tweet discussion:-

YorkyAbroad: @jamesmarcusbach RT: check is "an observation with a decision rule ... performed non-sapiently". ME: Selection of the observation=sapience?
about 4 hours ago

jamesmarcusbach: Sapience is whatever cannot be automated about human intellectual activity.
about 4 hours ago

jamesmarcusbach: I apply to term specifically to focus on things my programs can't do, but that I need to get done.
about 4 hours ago

YorkyAbroad: @jamesmarcusbach I was thinking of sapience as in http://bit.ly/10797H . You shouldn't make an observation with some thought to select it.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach Freudian slip? You shouldn't make an observation //without// some thought to select it.
about 3 hours ago

jamesmarcusbach: That's what I'm talking about, too. So, are you saying that if I don't think about selection, I should keep my eyes closed?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach I'm saying if you don't think about it then don't make the observation.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach If you make the observation without thinking then what question is it answering?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach If you have no question in mind then why make the observation?
about 3 hours ago

jamesmarcusbach: I'm simply asking then: are you telling me to keep my eyes shut until I specifically know what to look at?
about 3 hours ago

jamesmarcusbach: I may make observations because they are inexpensive and they supply my mind with fodder for generating questions.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach I'm thinking of 'checking' and sapience. Would you make a check without applying any thought beforehand?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach The thinking weeds out and refines which checks you might want to make.
about 3 hours ago

jamesmarcusbach: Good point. That is one of the little traps of checking-- GOOD checking still must be wrapped in what Michael & I call testing.
about 2 hours ago

YorkyAbroad: @jamesmarcusbach Precisely! I wouldn't advocate checking without thinking beforehand. I just advocate testing - implying thinking first...
about 2 hours ago
I had two things in mind with the sapience and checking combination. The first was to show that, according to the descriptions of sapience and checking in the above links, they are incompatible.
Did I do this? Well, if you accept that you wouldn't run a test/check for no reason then the answer is yes.

In this respect, the title of the post, sapient checking, is nonsense.

The other item was an example. I was thinking of the re-running of test cases - in my view this requires the objectives of running those cases to be understood beforehand.


Thinking before Testing?
I'm still of the view that all testing requires thought beforehand - otherwise how can you interpret results and try to help with their meaning? This has several implications.

One implication is that any automated batch of test cases to be used for re-running should always go through a selection process (a review for relevance and effectiveness). I know this might sound radical to some people, but it shouldn't be!
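
Before the fuller example below, a quick sketch of what I mean - my own simplified illustration, with all the class names (PricingRegressionTest, InvoiceLayoutTest, LegacyImportTest) invented for the purpose. In plain JUnit 4 the suite class itself can be the place where that selection decision is recorded:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // The suite *is* the selection: each entry has to survive a review -
    // why is this test still relevant for the build we are about to exercise?
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        PricingRegressionTest.class,   // pricing reworked in this release - keep
        InvoiceLayoutTest.class        // unchanged area, low risk - candidate for thinning out
        // LegacyImportTest.class      // feature removed two releases ago - dropped after review
    })
    public class ReleaseRegressionSuite {
    }

The point isn't the mechanism - it's that every entry (and every removal) is the trace of a thought process, not a default.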


Example
Suppose an existing/old test case is to be executed to ensure/check/verify that a new feature has not negatively impacted the existing feature.

Possible reasoning that was an input to the selection of the test case:
1. If the old feature was expected to be impacted then the test case would be opened for review/modification (I'm not distinguishing between new design and maintenance here.)

2. If the old feature had no expected impact then the test case is expected to "work" as is, and that expectation is the measure of evaluation that makes this a test. (The underlying software of the product has changed - the new feature - and the same inputs and outputs are to be verified in this new configuration/build.)

3. If we don't know whether it's reason 1 or 2 then it may be for one of two main reasons: 3a) it's an exploratory/learning exercise -> testing; 3b) it's a test case that has "always" been run or has never been reviewed for relevance -> time for reassessment?


Room for Testing?
For old/legacy test cases, the tester is making an assumption/assertion that a test case (or group of test cases) should work on a new build/configuration/environment. The tester tests that hypothesis.

It's very important to keep the test base up-to-date and relevant! Filter out redundant cases and keep the rest under review. For legacy cases this work should be budgeted as part of the maintenance of the product.

All of this work with legacy test cases is what I consider testing - it isn't anything without the thought process to guide it.


Cloudy? Foggy?
The problem with trying to define checking is that it is so tightly intertwined with testing that I don't think the distinction adds value to the communication about a product or project.

If, as a tester, you're trying to help your project understand its risks and communicate the issues found, then I don't think there is any need to say x issues occurred during testing and y issues were observed during checking.

I can think of checking as an element of testing, just like a test step might be an element of a test case. But on its own there is no value in communicating it to stakeholders.

I think there have been many tester<->non-tester communication problems in the past (and they probably continue today) - the information needs to be tailored to the audience, which is a great skill for a good tester!


And Finally...
I can't see an example where a test/check would be selected without any thought process. Distinguishing a group of test steps, under a given context, as either a check or a test is totally unnecessary in my working environment.

Keeping the storytelling good, and the quality of the reporting and feedback high and consistent, will reduce misunderstandings and improve communication. I know, easier said than done!

I suspect communication problems (and not a lack of terminology) have been the root cause of some of the cases where a distinction between testing and checking has been perceived to be needed.

Checking without sapience doesn't hang together. Why would you do something without a purpose?



This post is my effort to understand the discussion - using my combination of learning, critical thinking and past experience.

What does your experience and analysis of the discussions tell you?