Saturday, 5 September 2009

Sapient checking?

#softwaretesting

The recent discussions about checking vs testing have been interesting. My spontaneous response, here, looked at the label (or container) of a test/check but also alluded to the fact that an automated batch of tests is much more than a set of checks.

Yesterday, I saw a clarification of checking from Michael Bolton, here, and a tweet from James Bach:-

jamesmarcusbach: We think a check is "an observation with a decision rule that can be performed non-sapiently" example: junit asserts.
about a day ago
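
To make the "junit asserts" example concrete, here's a minimal sketch of such a check - the class, method and values are my own invention for illustration, not from the discussion:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A "check" in the sense of the tweet: an observation plus a decision
// rule that a machine can apply without any thought at run time.
public class CheckExample {

    // Hypothetical function under test, inlined so the sketch is
    // self-contained.
    static double totalWithTax(double net, double taxRate) {
        return net * (1.0 + taxRate);
    }

    @Test
    public void totalIncludesTax() {
        double total = totalWithTax(100.0, 0.20); // the observation
        assertEquals(120.0, total, 0.001);        // the decision rule
    }
}
```

Note that the sapience was spent before the run - in choosing what to observe and what the expected value should be; at run time the pass/fail decision is purely mechanical.
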
After reading Michael's post and then seeing James' tweet I wanted to test putting sapience (the conscious thought process) and checking together, resulting in the following tweet discussion:-

YorkyAbroad: @jamesmarcusbach RT: check is "an observation with a decision rule ... performed non-sapiently". ME: Selection of the observation=sapience?
about 4 hours ago

jamesmarcusbach: Sapience is whatever cannot be automated about human intellectual activity.
about 4 hours ago

jamesmarcusbach: I apply to term specifically to focus on things my programs can't do, but that I need to get done.
about 4 hours ago

YorkyAbroad: @jamesmarcusbach I was thinking of sapience as in http://bit.ly/10797H . You shouldn't make an observation with some thought to select it.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach Freudian slip? You shouldn't make an observation //without// some thought to select it.
about 3 hours ago

jamesmarcusbach: That's what I'm talking about, too. So, are you saying that if I don't think about selection, I should keep my eyes closed?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach I'm saying if you don't think about it then don't make the observation.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach If you make the observation without thinking then what question is it answering?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach If you have no question in mind then why make the observation?
about 3 hours ago

jamesmarcusbach: I'm simply asking then: are you telling me to keep my eyes shut until I specifically know what to look at?
about 3 hours ago

jamesmarcusbach: I may make observations because they are inexpensive and they supply my mind with fodder for generating questions.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach I'm thinking of 'checking' and sapience. Would you make a check without applying any thought beforehand?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach The thinking weeds out and refines which checks you might want to make.
about 3 hours ago

jamesmarcusbach: Good point. That is one of the little traps of checking-- GOOD checking still must be wrapped in what Michael & I call testing.
about 2 hours ago

YorkyAbroad: @jamesmarcusbach Precisely! I wouldn't advocate checking without thinking beforehand. I just advocate testing - implying thinking first...
about 2 hours ago

I had two things in mind with the sapience and checking combination. The first was to show that, according to the descriptions of sapience and checking in the links above, they are incompatible.
Did I do this? Well, if you accept that you wouldn't run a test/check for no reason, then the answer is yes.

In this respect, the title of this post, "sapient checking", is nonsense.

The second was an example: the re-running of test cases, which in my view requires the objectives of running those cases to be understood beforehand.


Thinking before Testing?
I'm still of the view that all testing requires thought beforehand - otherwise how can you interpret the results and help others understand their meaning? This has several implications.

One implication is that any automated batch of test cases to be used for re-running should always go through a selection process (a review for relevance and effectiveness). I know this might sound radical to some people, but it shouldn't be!
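
As a hedged sketch of what such a selection step might look like, suppose each automated case carries a record of a conscious review - the @ReviewedFor annotation and the selection policy here are my own inventions, not anything prescribed in the discussion:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical marker recording that a human has judged this case
// relevant and effective for the named release.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ReviewedFor {
    String release();
}

public class RegressionBatchSelector {

    // Keep only the cases someone has consciously reviewed for this
    // release; everything else is routed back for reassessment.
    static List<Method> select(Class<?> suite, String release) {
        List<Method> selected = new ArrayList<Method>();
        for (Method testCase : suite.getMethods()) {
            ReviewedFor reviewed = testCase.getAnnotation(ReviewedFor.class);
            if (reviewed != null && reviewed.release().equals(release)) {
                selected.add(testCase);
            }
        }
        return selected;
    }
}
```

The mechanism doesn't matter - an annotation, a spreadsheet, or a suite review meeting would all do. The point is that re-running a batch is preceded by a conscious selection.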


Example
Suppose an existing/old test case is to be executed to ensure/check/verify that a new feature has not negatively impacted the existing feature.

Possible reasoning that was an input to the selection of the test case (a code sketch follows the list):
1. If the old feature was expected to be impacted, then the test case would be opened for review/modification. (I'm not distinguishing between new design and maintenance here.)

2. If the old feature was not expected to be impacted, then the test case is expected to "work" as is, and that expectation is the measure of evaluation that makes this a test. (The underlying software of the product has changed - the new feature - and the same inputs and outputs are to be verified in this new configuration/build.)

3. If we don't know whether it's reason 1 or 2, then it may be for one of two main reasons:
3a) It's an exploratory/learning exercise -> testing.
3b) It's a test case that has "always" been run or never been reviewed for relevance -> time for reassessment?
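
As promised above, here is a purely illustrative encoding of that triage - the enum names and the method are mine, not from the post:

```java
// Purely illustrative encoding of reasons 1-3 above.
enum Expectation { IMPACT_EXPECTED, NO_IMPACT_EXPECTED, UNKNOWN }

enum Action { REVIEW_AND_MODIFY, RERUN_AND_VERIFY, EXPLORE_AND_LEARN, REASSESS_RELEVANCE }

public class TestCaseTriage {

    static Action triage(Expectation expectation, boolean exploratoryExercise) {
        switch (expectation) {
            case IMPACT_EXPECTED:
                return Action.REVIEW_AND_MODIFY;  // reason 1
            case NO_IMPACT_EXPECTED:
                return Action.RERUN_AND_VERIFY;   // reason 2
            default:
                // Reason 3: a deliberate learning exercise is testing (3a);
                // a case run purely out of habit needs reassessment (3b).
                return exploratoryExercise ? Action.EXPLORE_AND_LEARN
                                           : Action.REASSESS_RELEVANCE;
        }
    }
}
```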


Room for Testing?
For old/legacy test cases, the tester is making an assumption/assertion that a test case (or group of test cases) should work (on a new build/configuration/environment). The tester tests that hypothesis.

It's very important to keep the test base up-to-date and relevant! Filter out redundant cases and keep the rest under review. For legacy cases this should be budgeted as part of the maintenance of the product.

All of this work with legacy test cases is what I consider testing - it isn't anything without the thought process to guide it.


Cloudy? Foggy?
The problem with trying to define checking is that it is so tightly intertwined with testing that I don't think it adds value to the communication about a product or project.

If, as a tester, you're trying to help your project understand the risks associated with it and to communicate the issues found, then I don't think there is any need to say that x issues occurred during testing and y issues were observed during checking.

I can think of checking as an element of testing, just like a test step might be an element of a test case. But on its own there is no value in communicating it to stakeholders.

I think there have been many tester<->non-tester communication problems in the past (and they probably continue today) - the information needs to be tailored to the audience, which is a great skill for a good tester!


And Finally...
I can't see an example where a test/check can be selected without any thought process. Distinguishing a group of test steps, under a given context, as either a check or a test is totally unnecessary in my working environment.

Keeping the storytelling good, and the quality of the reporting and feedback high and consistent, will reduce misunderstandings and improve communication. I know, easier said than done!

I suspect communication problems (and not a lack of terminology) have been the root cause of some of the cases where a distinction between testing and checking has been perceived as needed.

Checking without sapience doesn't hang together. Why would you do something without a purpose?



This post is my effort to understand the discussion - using my combination of learning, critical thinking and past experience.

What does your experience and analysis of the discussions tell you?

2 comments:

  1. Hi, Simon...

    The problem with trying to define checking is that it is so tightly intertwined with testing that I don't think it adds value to the communication about a product or project.

    I've taken a first stab at addressing this issue here, and I'm working on more. I hope you continue to critique the idea. Criticism will strengthen it in the long run.

    Cheers,

    ---Michael B.

  2. Hi Michael,

    I browsed the new post last night and thought there were some good points in it. Will dig into more...

    Enjoying the discussion/analysis.

    Cheers,
    /S
