Monday 21 March 2011

Automation: Oh What A Lovely Burden!

" #softwaretesting #testing "

Do you have a suite of automated test cases? Have you looked at them lately? Do they seem to be growing out of control? Have they needed updating? Do you occasionally see a problem and think, 'why didn't the test suite catch that?'

If so, then maybe you have a 'lovely burden'.

One of the things about automated suites is that they are not guaranteed to maintain themselves... Another is that they do not always tell the tester (or stakeholder) exactly what they're doing (and not doing). The information (results) they give can sometimes be interpreted as something more than it actually is.

Automated test suites can give very good information about changes you've made in the system; a lot of the time they give very good feedback, and sometimes they catch a catastrophic change.

However, they can sometimes lull 'people in software development' into a false sense of security. Wait! The test suites are not evil, as such, so how can that be?

Well, in addition to automated test suites not maintaining themselves and not guaranteeing a lot of things, they get combined with people (whether testers, project, line or other stakeholders) into the idea of a 'holy suite'.

Why are they 'holy' and untouchable, and why must they be maintained (as though we were forming museums and living exhibits of test suites and frameworks)? Well, part of it is the "Golden elephant" problem (James Bach, in Weinberg's "Perfect Software and other illusions about testing"). Another part is that people (testers, developers and stakeholders) can become detached from what the test suites are doing - something that has been around for a long time might be 'left alone' until it breaks.

Oops!

Sometimes test suites are not maintained or refactored, for several reasons. It may be a judgement call; sometimes it's not easy to see where the point of diminishing returns is reached; sometimes it's vanity (yes, we didn't see that they had an 'end-of-life' 5 years ago, but even so, I don't want to look like I couldn't plan 5 years ahead...). Projects usually have difficulty seeing clearly to the end of the project (at the beginning), so why should it be any different with artifacts that are produced along the way (like test suites)?

If I were not aware of a lot of the above problems, I (Mr Stakeholder) might say that we need to plan better... But, as testers interested in contributing to working software products, we should contribute to a better understanding and use of automated test suites.

How?

Look at test suites regularly (or at least more than never) for:
  • Relevance
  • Signs of reaching the point of diminishing returns
  • The Test Suite "Frame"

Relevance

  • Is the test suite doing what it needs to do? Are there redundant test cases/scripts? Possibly - do you know where or which ones?
  • Are there cases (scripts) that never (or hardly ever) fail? Are there scripts that only ever fail when certain others also fail? This might show a pattern in (1) the system architecture - weak links are highlighted, which is good, but how do you react to it?; or (2) the test suite and the way it is configured - different tests funnel through the same route (is this intentional?). A small sketch for spotting these patterns follows this list.
  • The test suite is just one view of the software - maybe it's a real view or an artificial view (due to behaviour changes). Which is it?
  • Is it static - same data and behaviour model - or is it dynamic? If it's not dynamic, do you (or someone) inject dynamism in some way (e.g. swap cases in and out, rotate the cases used, vary the ordering, fuzz the data, etc.)? Do you have any refresh or re-evaluation routines for the suite?
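
One practical way to check the 'never fails' and 'always fail together' points above is to mine the suite's own result history. Below is a minimal sketch (Python), assuming a hypothetical CSV export of past runs with columns run_id, test_name and outcome - adapt the names and format to whatever your test framework or CI server actually produces.

    import csv
    from collections import defaultdict

    def load_results(path):
        """Collect all runs, all test names, and the runs in which each test failed."""
        runs, tests = set(), set()
        failed_in = defaultdict(set)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                runs.add(row["run_id"])
                tests.add(row["test_name"])
                if row["outcome"] == "fail":
                    failed_in[row["test_name"]].add(row["run_id"])
        return runs, tests, failed_in

    def never_failing(tests, failed_in):
        """Tests with no recorded failures - candidates for a relevance review."""
        return sorted(t for t in tests if not failed_in[t])

    def always_fail_together(failed_in, min_failures=3):
        """Pairs of tests whose failing runs are identical - possible duplication,
        or different tests funnelling through the same route."""
        names = [t for t, f in failed_in.items() if len(f) >= min_failures]
        return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
                if failed_in[a] == failed_in[b]]

    if __name__ == "__main__":
        # "results.csv" is a stand-in for whatever history export you have.
        runs, tests, failed_in = load_results("results.csv")
        print(len(runs), "runs,", len(tests), "tests")
        print("Never failed:", never_failing(tests, failed_in))
        print("Always fail together:", always_fail_together(failed_in))

Neither finding is a verdict in itself - a test that never fails may still be guarding something important - but output like this gives you concrete items to review rather than a vague feeling about the suite.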

Point of diminishing returns

Think about the current cost of maintaining the test suite.

How many backlog items are there? Do the items that 'need' implementing grow at a greater rate than can be supported with the current budget (time or people)? Is the architecture reaching its viable limit? Do you know, or have you thought about, what that viable limit is?

  • Who's the test suite 'product owner', and how are the decisions about what goes in made?

It's important to understand what the automation suite is costing you now - this is an ongoing cost-benefit analysis, and one that is probably not done in a very transparent way. Not only should the current costs of maintenance be balanced against the benefits that the suite gives, but so should some more subtle items.

These more subtle items include the cost of the assumptions made about the suite - from the stakeholder perspective. How many decisions are based on inaccurate or incomplete information about what the test suite is giving? This is an area that is rarely appreciated, never mind understood or researched. Oops!
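
To make that cost-benefit conversation a little more concrete, here is a deliberately crude back-of-envelope sketch. Every number and name in it is an illustrative assumption - plug in your own estimates per sprint, release or quarter.

    def suite_balance(maintenance_hours, manual_hours_saved,
                      backlog_growth, backlog_burn):
        """Rough per-period picture: is the suite paying its way, and is the
        maintenance backlog under control?"""
        return {
            # negative => the suite currently costs more effort than it saves
            "net_hours": manual_hours_saved - maintenance_hours,
            # False => the backlog grows faster than it is worked off
            "backlog_under_control": backlog_burn >= backlog_growth,
        }

    print(suite_balance(maintenance_hours=30, manual_hours_saved=45,
                        backlog_growth=8, backlog_burn=5))
    # {'net_hours': 15, 'backlog_under_control': False}

The arithmetic is trivial; the value is that writing the numbers down forces the assumptions (how many manual hours the suite really saves, how fast its backlog grows) out into the open where stakeholders can challenge them.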

The Test Suite Frame

Thinking about the test suite involves several frames (models or filters through which people see the problem and interpret the information connected with it). Some of these angles might be:

1. What's the intention with the automated suite? Expectations?

  • Is it the same intention that the stakeholders have? If not, do they realize this? Do the stakeholders think it is all-singing-and-dancing - and if so, how do you bridge that expectation gap?
  • What about new stakeholders who 'inherit' a legacy suite? How do they get to know all the intricacies of the automated suites? They probably won't ask about them, so how does the tester communicate them?
2. Are there gaps to be filled? Planned or known about? (Maintenance plans)

3. The test suite will only give a limited view - do you actively counter this in some way?

4. Risks associated with the test suite - in terms of what it says and doesn't say? (How do you translate results into information further downstream?)

  • What assumptions are built into the test suite?
  • Happy path only?

These are just some of the most obvious frames.
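
As one small illustration of the 'happy path only' assumption in point 4, compare the two checks in this sketch. The function and its behaviour are entirely hypothetical; the contrast between the checks is the point, not the example itself.

    import pytest

    def parse_amount(text):
        """Toy stand-in for production code."""
        value = float(text)
        if value < 0:
            raise ValueError("amount must be non-negative")
        return round(value, 2)

    def test_parse_amount_happy_path():
        # What many suites contain: the success case only.
        assert parse_amount("12.50") == 12.5

    @pytest.mark.parametrize("bad_input", ["", "abc", "-1"])
    def test_parse_amount_rejects_bad_input(bad_input):
        # Checks like this make the suite's assumptions explicit:
        # malformed or negative input must be rejected, not silently accepted.
        with pytest.raises(ValueError):
            parse_amount(bad_input)

If a suite only contains the first kind of check, that is a frame worth communicating, because 'all tests green' then means rather less than stakeholders may assume.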

And finally...

Ask yourself some basic questions (and just because they are basic doesn't mean they are easy to answer):

What assumptions are built into the test suite? What does it tell you, and what doesn't it tell you? What expectations exist on it, and how are they matched or mitigated? How much reliance is placed on the suite? What risks exist with it, and how are they monitored (evaluated)?

If you don't have the answer (or a view on these questions) then you have a potential burden.

You might think 'oh what a lovely burden', or you might think 'I'm a tester, get me out of here', or alternatively you might start thinking about which types of question need tackling now (or soon) to ensure that the stakeholders are getting the information they need - and, importantly, understand the information that they are not getting. Then you/they can start wondering how much it will cost to get the extra (other) information and whether it's worth it.

But ultimately, you'll be working with automation in a responsible way.

Yes, sometimes it can be a 'lovely burden'...

2 comments:

  1. We depend on our automated regression tests, and so we have invested time in making sure they are designed for ease of maintenance. We continually refactor them, and when we get the chance we take a step back and do BIG refactorings to keep them valuable and maintainable. We're in an "Engineering Sprint" right now. We are refactoring our FitNesse suites to extract duplication, a continual effort, and we are refactoring our Canoo WebTest GUI suites to make them consistent with a major change in application terminology, so they will be meaningful and understandable later.

    These suites tell us within 90 minutes whether something has been broken. They free up our time so we can do meaningful exploratory testing. Over the past 8 years we have found our investment has delivered a great ROI.

  2. Hi Lisa,
    Very good that you are designing for maintainability and making refactoring a regular part of your activity. The regular refactoring and stepping back are definite ways to keep in touch with the automated suite and keep it relevant.

    As always, having an idea about ROI is good - but the key thing with ROI is to evaluate it regularly.

    Thanks for the comment.
