Saturday, 30 March 2013

Another Regression Test Trap

I mentioned in a previous post, ref [1], a trap to be aware of when thinking about regression testing. Time for another...

There is a lot of "rubbish" written about regression testing. As with other aspects of testing it is sometimes commoditized, packaged and "controlled".

Regression Testing is the act of re-running tests - that have previously been developed, designed or executed. Period.
The trap is that it stops there...
Sometimes, there is a contextual aspect included - addressed by stating that analysis of changes in the product, including the processes used for that change, should be made. But in many cases - and indeed, in several pieces of literature discussing regression testing - the extent of this analysis is limited to setting the regression test scope from the previously developed or designed test ideas.

Hmmm, seems reasonable? Or does it? Let's look at a couple of commonly available sources of information.

Other Sources on Regression Testing:

Wikipedia, ref [2]: "Regression testing can be used to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change."

So, selecting a minimum set of tests - systematically - is an efficient test of a system, if that set provides adequate "coverage" of a change in the system. Mmm, good luck with that. This is bordering on pseudoscience, ref [4] - it's an untestable and vague claim. Having said that, the statement is qualified by "can" - hmm, ok, but not so helpful.
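To see why the claim is so slippery, here's a minimal sketch of what "systematically selecting" tests for a change might look like in practice. All the names and data here are hypothetical - and notice that the selection is only as good as the coverage map it's given, which is exactly the unstated assumption the Wikipedia wording glosses over:

```python
# Toy sketch: select the tests whose recorded coverage intersects the
# changed code units. The hard (and usually unstated) part is building
# and trusting the coverage map in the first place - stale or coarse
# maps silently drop relevant tests.
def select_regression_tests(coverage_map, changed_units):
    """coverage_map: test name -> set of code units it touches.
    changed_units: set of code units modified in this change.
    Returns, sorted, the tests whose coverage overlaps the change."""
    return sorted(
        test for test, covered in coverage_map.items()
        if covered & changed_units
    )

coverage_map = {
    "test_login": {"auth", "session"},
    "test_report": {"reports", "pdf"},
    "test_logout": {"auth"},
}

print(select_regression_tests(coverage_map, {"auth"}))
# ['test_login', 'test_logout']
```

Even this trivial version shows the problem: whether the selected set "adequately covers" the change depends entirely on how the change and the coverage were modelled - claims of efficiency or adequacy ride on assumptions the definition never states.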

ISTQB glossary, ref [3]: "regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed."

I can almost agree with this. Almost. Well, ok, not really. What's the big sticking point for me? "Ensure". My dictionary (New Oxford American, to hand just now) states that ensure is either a guarantee or the action of making sure... Ok, so how can my testing guarantee or make sure that a defect has not been introduced? And why would it want to make sure that a defect hasn't been uncovered - wouldn't you really want to know about this?

Note, it's possible to find similar problematic definitions in testing text books.


Ok, am I splitting hairs? Am I dissecting these definitions more than I should? Well, what level of clarity should I hold them to? Should I be satisfied that they give "surface" explanations, and not examine or look too deeply? The thing is, if a definition falls apart (or becomes ambiguous) under scrutiny, is it worth using? I can't use it, and I don't advocate it.

What's the problem?

My issues with typical regression scopes are connected to two often unstated assumptions:

  1. An assumption that the regression scope that "worked" once is sufficient now - where the system has changed. (This means that the set of test ideas covered previously - whether expressed as instances (test cases) or not - is good enough for a legacy scope now.)
  2. An assumption that a previous regression scope is somehow a conformance scope. (This means that an instance of a test idea - a test case - is correct now, in the new/modified system.)

Note, if these assumptions are made they should be stated. It could be adequate in some cases to assess the previous scope and determine that it is sufficient for the current needs. It could be useful to state that the previous regression scope is being treated as a conformance scope (or reference scope, or oracle) - if that's what the assessment of the current problem/system produces.

I think a post on how I'd approach a regression test problem is in order...

Tester Challenge
On a train journey returning from SWET4 to Stockholm I sat with James Bach and he tested me on "regression testing". He suggested there was one case where a previously executed test campaign could be repeated as a regression testing campaign. I gave him two.

I'll leave these as an exercise for the reader...

[1] Regression Test Trap #492
[2] Wikipedia: Regression testing (phrase checked 29 April 2013)
[3] ISTQB: Standard glossary of terms used in Software Testing, version 2.2
[4] Wikipedia: Pseudoscience

Bonus Reference
[5] Checklist for identifying dubious technologies


  1. Hi, Simon...

    You might be interested in this:

    In there, I propose some refinements to the Wikipedia definition, and emphasize that there are two concepts of regression testing. One is focused on repetition of tests; the other is focused on making sure that the quality of the product has not got worse. It's a mistake, I would argue, to pay attention to only one of these and to ignore the other.


    ---Michael B.

    1. Hi Michael,

      Thanks for the link.

      Yes, I see we highlighted one of the same wikipedia paragraphs. In my refined definition I would have trouble keeping much of the wikipedia definition - I've italicized the problematical ones:-

      "Regression testing can be used to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change."

      That's most of the paragraph… Stripping these out would leave a paragraph devoid of any information.

      I'm not sure I follow your two concepts of regression testing. The one that focuses on repetition of tests - this is presumably an early warning system. Or? But it doesn't really state what the assumptions about the conditions of usage or analysis are. Repetition of tests will work better under some conditions than others, but is problematical when those conditions for usage are not stated. I would love to see figures that speak to how important the test selection part of test repetition is. A test that never fails might give some positive confirmation that is desired - indeed, that might be the scope. But repetition has many unstated assumptions, and leaving them unstated is problematical for me.

      The other concept - a focus on making sure the quality of the product has not got worse - is problematical (to me) for two reasons. (1) There are conditions under which this would apply - possibly acceptance tests or customer-agreed tests that would be re-used - but other conditions where it might not (e.g. a test that is repeated but doesn't add anything to the picture of product quality - it might even be obsolete for the purpose of detecting regressions).
      (2) Making sure of quality not getting worse via a test is problematical as it implies that the test(s) or testing will "make sure of quality not getting worse". I'm not sure you meant that interpretation, but I can imagine there are some who will let the test be the arbiter of quality. This /might/ be ok if the assumptions about the scope, purpose and usage of those tests are stated - indeed, if the tests are declared as conformance/acceptance tests and an agreed arbiter - but that is often missed - sometimes not even thought about.

      On the link - I liked the allusion to information value -> that's an area I want to emphasize more.