Friday, 29 May 2009

Are you a divergent thinker?

#softwaretesting #teamwork
Almost sounds unpleasant, doesn't it? Something that would have been frowned upon in Victorian times? Well, I am a divergent thinker, and more...

Problem solving and learning fascinate me - especially the interaction within teams: how and why people learn in different ways, and how they approach different problems. Luckily, sitting in a software test engineering environment, I get to think about this, put it into practice and continue to learn and develop daily.

Whenever I sit down with someone to look at a problem - whether it's brainstorming, bouncing some ideas around or to help someone out - I'm intrigued by the different approaches and paths people take to tackle a problem.

Problem solving is like going on a journey - there are lots of different ways to do it (paths to take), hopefully arriving at the same destination (the problem in question is solved). This is closely analogous to the way people think and learn - lots of different ways to do it, hopefully achieving the same result.

So, what am I?

Many years ago I was involved in a self-managed learning programme. Out of that, my instructor and I came to the conclusion that I was a divergent thinker. Okayyyy, I thought...

Is that all?

No, we also concluded that I was an emergent (constructive) learner. Not bad, but what does all this mean?

Testing relevance?

Some key attributes are:

Emergent Learners

- Building knowledge from the bottom-up
- Diving-in to learn from experience rather than from a book
- Evolutionary - adapting to change and being self-organising
- More DIY-learning...

Divergent Thinkers

- Brain-storming
- Creativity
- Looking at the problem from several angles -> "big picture" & "out of the box"

Sounds great to have these qualities in a test team.

Of course, there are drawbacks with these. Emergent learners may have holes in their knowledge as they haven't gone via the classroom - but I think the ability to adapt and learn makes up for this.

This doesn't mean that one type of learning/approach is better than another - as with any team, it needs a healthy mix of all types - then everyone is backing each other up... That includes mixing high-flyers and less-able members - it's the team that counts, not the team hero.

Similarly, I wouldn't want a team of only brainstormers (or even brain showers!) - all that whiteboard and post-it activity - they'd never do anything!

In practice, is it true? Has it worked?

In early 2005 when I joined an organisation as a system integrator - with part of the role being to establish a system integration process and organisation - I was the only tester not to be sent on a training course for the HW/platform we were using. My manager's explanation: "we're time stretched and you're the sort of person that will learn more without the course..."

Well, I didn't completely agree - it was like a bit of tightrope walking without a safety net - but I pulled on my waders (or was it diving suit) and jumped in at the deep end.

Horrible mix of metaphors!

True enough, I adapted and evolved both a strategy and a process - this attitude fitted nicely, as the org was an incremental development project in start-up, so there was plenty of change, modification and adaptation.

Does this mean these traits are more suited to incremental development? Yes, they're good to have but again I think it's the healthy mix of competences that makes the project work. Everyone has elements of all learning styles - just that some are more practiced than others...

Is that all then Ted?

Recognising and understanding the differences is what's important. As a team leader it's very handy to know (or guesstimate) these styles in different team members, because adapting the presentation to the individual wins every time! Tasks can also be shuffled around to match strengths, or even to exercise/improve weaker areas.

As an aside, if you're interested in mind-mapping (will appeal more to the divergent thinkers) there's a lot of good software tools out there. One I've been playing with recently is bubbl. Quite nice.

Don't confuse emergent learning with emergence - this is another term I'm interested in - more to do with how teams can self-organise...

Tuesday, 19 May 2009

Pain? Test Automation w/ Incremental Development

#sw-test #sw-auto

I am a fan of test automation - when used correctly - with the right planning, motivation and expectations (see posts When to use Test Automation part 1 & part 2).

However, if you really want to put the concept (planning, motivation and expectations) to the test then try using it during an incremental development project.

This provides a whole set of additional factors to bear in mind - usually they're in the form of constraints and obstacles. Below is a snapshot of some thoughts in this area.

Typically, an incremental development project will have several teams (combined design & test) working on separate packages of work (ideally with limited interdependencies). The aim is to have discrete packages that continually add to the base application/system, increasing the functionality of the system in a controlled manner.

There are several different flavours of incremental development technique, with agile techniques being the latest umbrella grouping.


Ok, so what's the issue with test automation and incremental development compared to more traditional techniques?

The decision to go for test automation should be based on a grouping of factors (pros & cons, benefits and costs) as part of a business case for the go decision.

When planning the automation side, factors to bear in mind include:
  • The turn-around time for the development package.
Is it 2-4 weeks or 2-3 months?
  • The complexity of the development package.
Typically, a study/analysis should highlight this. It's vital that automation experience is brought in here - automation issues should be flagged at the same time as other design issues.
  • Having the automation experts ring-fenced to consider impacts across the board - not just for one team.
The benefits are that the architecture of the automation framework is maintained, and that improvements can continuously be fed into the pipeline.
  • Interdependencies with other teams.
There will be some - most of the time. This becomes a planning issue for the project leaders, technical coordinators and automation experts. The goal is to find the best fit for the different efforts and delivery dates.


This all seems like a lot of work, so what's the pay-back?

Well, the goals of any incremental development effort hold true: smaller increments are delivered and verified to build the system in a more controlled way (more checkpoints along the way, ensuring that the system is not broken).

This means that the automation scripting (or code) is built up in increments, with underlying changes to the framework incorporated as they come. The regression testing performed ensures that the deliveries enhance the total test base - i.e. not breaking any legacy (or, if it is broken, it's known and documented).
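As a sketch of that idea (the test names here are illustrative, not from any real framework), each increment adds its tests to a single regression suite, and running the whole suite flags any broken legacy rather than hiding it:

```python
# Minimal sketch: an incremental regression suite. Each increment's tests
# stay in the suite, so a delivery that breaks legacy is caught and named.

def legacy_login_test():      # delivered with increment 1
    return True               # passes: legacy behaviour intact

def legacy_search_test():     # delivered with increment 2
    return False              # fails: this delivery broke legacy!

def new_export_test():        # added with the current increment
    return True

REGRESSION_SUITE = {
    "legacy_login_test": legacy_login_test,
    "legacy_search_test": legacy_search_test,
    "new_export_test": new_export_test,
}

def run_suite(suite):
    """Run every test; broken legacy becomes known and documented."""
    results = {name: test() for name, test in suite.items()}
    broken = [name for name, ok in results.items() if not ok]
    return results, broken

results, broken = run_suite(REGRESSION_SUITE)
```

The point is the discipline, not the mechanics: the suite only ever grows, so every delivery is checked against everything that came before it.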


Plan, plan and plan again. Usually plans become out-of-date quickly - so keeping them updated and communicated is important.

Communication. Get the right people involved at the right time. The automation effort should be driven/coordinated by the expert/s - these are involved at the early stages - so that issues affecting the test/automation effort are identified early.

Automation competence. Keep it and ring-fence it. During incremental development this is a competence that may need spreading over several teams, so it's important not to dilute the work with non-automation related test work.

Thursday, 14 May 2009

When To Use Test Automation (part 2)

This post looks at some of the cost considerations for using test automation within a project or organisation.

Costs of Test Automation

There are costs associated with any test automation effort - some are obvious and some are not, depending on one's perspective/experience with automation. Let's start with some of the obvious (in-your-face) costs:


It will become increasingly apparent that automation is not a part-time/on-the-side activity. It's true that all SW testers need to be able to script to a greater or lesser extent, as the amount of information to process is sometimes huge (for some testers this might just be some manipulation of data in a proprietary tool, or an office tool like Excel).

Finding the balance for testers is important. It might be that the test framework is supported by full-time staff and other testers just make use of the APIs in their own specific test cases.


Whether to use a framework in an automated test environment is very much related to the flavour of the project and the extent to which the test automation will be re-used. This question needs analysis before the test automation effort is committed.

It may be that the framework is so lightweight as to be just some basic templates for how the test scripts should work, fitting into a standard execution environment. In this case the framework needs only a one-off start-up cost, with limited maintenance afterwards.
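As a sketch of such a lightweight "template" framework (the class and test names are hypothetical), each test script just fills in a few standard steps that a common runner executes:

```python
# Minimal sketch: the "framework" is nothing more than a template plus a
# standard runner - the one-off start-up cost described above.

class TestScriptTemplate:
    """Every test script fills in these four standard steps."""
    def setup(self): pass           # establish the controlled conditions
    def execute(self): pass         # trigger the system under test
    def verify(self): return True   # check the result
    def teardown(self): pass        # restore the environment

def run(script):
    """The standard execution environment: run the steps in order."""
    script.setup()
    try:
        script.execute()
        return script.verify()
    finally:
        script.teardown()           # always clean up, pass or fail

# An example script written against the template (purely illustrative):
class CounterIncrementTest(TestScriptTemplate):
    def setup(self): self.counter = 0
    def execute(self): self.counter += 1
    def verify(self): return self.counter == 1

passed = run(CounterIncrementTest())
```

Everything beyond the template and runner lives in the individual scripts, which is exactly why the ongoing maintenance stays limited.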

On the other side of the scales the framework could be something quite comprehensive enabling test cases/scripts to be written on varying degrees of abstraction. In this case the maintenance and support of a framework is a dedicated/full-time activity.

Where the test effort falls between these two extremes dictates the amount of effort required up-front on framework development, as well as on supporting processes and strategy.

Processes, Strategy & Documentation

This is needed in all automation efforts. The degree to which it requires up-front analysis and investment varies. Ideally, all processes, strategy and documentation updates should be in place before teams are asked to use them. The reality can be that all of these develop and are updated in parallel with team development.

In both cases there is a bigger cost on the first teams using the new/adapted automation strategy. This is the cost associated with an "early adopter".


The automation tools/framework are either bought in, developed in-house, or some hybrid of the two. Whichever way it falls, there will be a training cost associated with it. With any external tool/framework that is brought in there is a potential saving (re-using testers that have experience with those tools) - but there is still a cost, as the test cases/scripts have to be "flavoured" for the product/system under test.

Test Organisation vs. Project

When estimating costs of an automation effort the long term needs to be considered. An automation effort (whether it is framework development or new test case development) will have a certain long term cost / maintenance need.

A less obvious factor to bear in mind in relation to test frameworks is that they usually need to be long-lived - it is no use having to re-write the test framework every project or every year, so the long term needs of the framework need analysis - partially covered in the business case analysis.

Business Case for Test Automation

Many talk about the return on investment (ROI) when considering automation business cases. This can be notoriously difficult to estimate in a standardised way. Below are some points to consider when looking at the case for automation - analysis of these will give different answers on the ROI question.


Consider the uses of automation (see previous post on benefits for some) and which problems (in the test strategy) that automation might try and solve.


Automation can get a bad press because of the perception that it will solve many testing needs - this can be a combination of misunderstanding and over-expectation. So in any analysis it's important to be realistic and specific to the current test environment - and manage expectations where necessary.

Underlying Implementation

Consider the need for a framework - do this early - as it could rule out large scale automation early on. It could be that for what's needed it isn't economical to automate on a large scale. Reasons for this could be immaturity on the product/system under test (especially true in early development stages) or limitations in available tools to integrate into a new automation regime.

Costs: Strategy & Effort

Consider and estimate the costs associated with any automation effort - as broadly as possible. This should cover documentation, process updates, training, and allocation of dedicated support persons, as well as strategy changes. The strategy aspects should take into account what happens with the framework after the coming project and how (or if) the base framework and test suites should be future-proofed - what's the cost either way (to future-proof or not)?


  • Test automation should be considered an activity in its own right (even if automation is done by non-automation test engineers)
  • When looking at aspects to consider with test automation it becomes clear that the use of automation within a project or maintenance unit should be considered as part of the Software Development process in its own right - and not an offshoot of the testing effort.
  • Understand the costs of automation.
  • The first three points will help determine a better business case for test automation.

Friday, 8 May 2009

Blue-sky thinking with Social Media (part 1)

Good communication in test teams has long been essential to the smooth functioning of the activity - whether that's distributing activities, test campaigns, troubleshooting, collating progress reports, the latest known problems, etc, etc.

With the move towards increased use of social media there are many opportunities for improved effectiveness in the team structures and dynamics.

Information exchange via informal and more formal networks in test groups has always existed.
  • Formal: Due to the project/team/unit organisation forming a semi-rigid reporting/information chain.
  • Informal: For example whiteboards, emails and wiki pages.
With the increased use of social media there are many opportunities that test engineers (whether in teams, intra-team or management) can grasp to aid communication exchange.

Increased effectiveness will probably come from a hybrid use of the formal and informal channels.

In this team working context the social media could be wikis, webpages, some collaborative software, amongst others. Many of these have existed for some time but are quite often just used as a dumping ground for information - libraries, records, how-tos etc.

So, with all these different ways of communicating what do we do? Let's do a bit of blue-sky thinking...


A more fluid/dynamic team set-up? Something more evolutionary? This is an idea that will probably scare the socks off project managers used to more rigid team set-ups.

Suppose team A & B are working on two different areas. During the course of their execution they hit a problem (affecting both teams). Traditionally, there are several ways this can be approached - a sub-team (from both teams) is set-up to trouble-shoot, one team focusses on the problem whilst the other works around it or the troubleshooting is handed over to someone/team external to A or B.

Well, suppose the trouble-shooting group formed dynamically. This happens either via IM, wiki post, blog, email etc - certain team members (spanning the two teams) decide that the problem should be worked on jointly.

The idea here is that the network exists (the pool of test engineers) and forms a team dynamically (eg two people discover they're stopped by the same problem and join forces, even calling in external help) - this joining of forces aids the problem solving (in most cases).

This type of team dynamic probably needs to be ok'd/coordinated - but as the teams/network gets more and more used to it then the team formation should become more "natural" or "self-selecting".
This dynamism is meant to only create temporary teams for as long as they are needed!

Another example of dynamic team formation is a group forming from a given network to do some brainstorming on improving a process - the interested parties initiate the activity, setting up the group dynamically and kick-starting the work.

Tools for all of this - to help facilitate it?
I think some of the best tools are the collaborative SW tools (project web pages with directives, status, discussion boards and team calendars), linked in with blogs (eg for thoughts/opinions on what went well/badly in a project), wikis & web pages (resources for how-tos) and email (ad-hoc questions/requests or inspiration).

When I say evolutionary I don't mean that the "weak" are left behind. This activity has to be inclusive - adoption and usage rates will always vary, and there is no place for exclusion.

Dynamism continued?

What about fluid team structures dictated by the problems in front of them? This is about moving towards collective intelligence! More of that later.

Other benefits & problems with social media and team dynamics in a later post.

Thursday, 7 May 2009

When To Use Test Automation (part 1)

Test automation has many benefits in test organisations but there are costs as well as benefits associated with it. This post looks at what test automation is and some of its benefits.

What is it?

Test automation can be a documented script or executable that can be distributed to several persons for repetition (under controlled/defined conditions) of certain test steps/activities.

The controlled conditions may be set up as part of the automation script or may be a defined pre-requisite before executing the script - this pre-requisite could be another automation script.

The test steps/activities performed may be the triggers against the system under test (SUT) - whether this be a website or other piece of software that can be triggered by some action to change its state. The test steps should ideally involve some form of validation of the results, but this is not always the case (i.e. sometimes automation scripts are used to generate the same input data/activity and then the result/s are checked separately.)

Why use it?

By generating the same activity repeatedly (potentially hundreds of times) you gain the following benefits:
  • Reduced labour costs to execute the test activities, including execution overnight or at weekends.
  • Reduced labour costs to analyse results (if the checks are built into the scripts).
  • Reduced human error when the test activities are repeated - i.e. if they are executed 100 times, then the steps and their order are the same all 100 times.
  • The scripts can be incorporated into a framework to speed up integration cycles.
Should it be used?

It seems like a no-brainer: With all of these benefits when do I start using test automation?

Well, there are some other considerations. These are:
  • Consider the disadvantages/costs.
  • Is there a business case for test automation?
A following post will look at some of the costs of test automation as well as other points to consider for the business case. There will be an additional post on test automation within incremental development projects, dealing with some of the issues and opportunities that raises.

Wednesday, 6 May 2009

Agile and The Emperor's New Clothes

Have you ever wondered if someone is jumping on the Agile bandwagon, or the buzzword bandwagon?

When you hear a comment like "I can't tell you what it will cost because we're using Agile techniques," then it's time to be suspicious.

Agile doesn't replace planning and estimation, but there can be a tendency to use a buzzword for the sake of it, or to avoid/divert attention.

Using buzzwords in this way (whether it's Agile or something else) is akin to the story of the Emperor's New Clothes - someone is trying to dazzle you with buzzwords/terminology in the hope of avoiding further questions.

Buzzwords & the right terminology are good and useful, but they carry the same weight as all our day-to-day communication.

Now, business buzzwords - they're sometimes used to impress, to appear superior or just to amuse.... (see 50 office-speak phrases you love to hate).

I almost winced when I was being encouraged to pick or look for "low hanging fruit" in a workshop earlier in the year....