Tuesday, 30 June 2009

Moonlighting

#softwaretesting

I've decided to try a bit of testing in my spare time. (I know how some colleagues will react... But this is a learning exercise that I also want to document.)

I'm going to have a look at the Test Republic challenge, here. I saw the reference to the challenge in one of Pradeep's blog posts on blogging, here.

I haven't really thought about testing a website before (it's not in my area of work), so it's an interesting diversion for me.

There was no defined approach or strategy to use in the challenge. The approach I'll use is general usability testing, combined with aspects of exploratory testing.

Why exploratory & usability?
The usability approach is a means of looking at the human interfaces, and I'll take the role of someone with very limited experience of the website.

The exploratory element will dictate the course of the testing - which leads I follow up and focus on - and will also help me decide when to stop (both in terms of the feedback from the test session and by adhering to a timed/defined test session.)
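To make the timed session concrete, I'm thinking of something along these lines (my own rough sketch, not part of the challenge brief): a 60-90 minute session with a short mission - explore the site as a first-time visitor trying to complete a typical task - rough notes on issues and questions as I go, and a few minutes at the end to turn those notes into observations worth reporting.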

Summing-up
I like the ultimate aim - to describe the problems (issues or faults) in a legible and useful way. This is something all testers (at whatever level) should be able to do, so it's a very accessible challenge.

I'll publish my report in due course - both the one I enter for the competition and also the notes about how I went about it...

When did you last do some testing moonlighting?

Monday, 29 June 2009

Blogs, Darwin and the Testing Eco-System

#softwaretesting

A bit of divergent thinking...

I started out by thinking about testing blogs recently, progressed through evolution and ended up thinking about the testing eco-system (or natural habitat) - or really the testing evolutionary tree.

Blogging
I wondered what testers get out of blogging and, in a "rare" moment where my glass was half full (as opposed to half-empty), I could only come up with the positives. Here.

I like and agree with both James's and Matthew's comments. Blogging is definitely one way of getting a body of knowledge (or views and opinions) out there - true, sometimes they are, or have to be, bland, as James says.

But it was Matthew's reference to a sponge that triggered my Darwinian thinking...

Evolutionary trail...
There is room for sponges in the testing eco-system - everyone has to start somewhere. Jumping on a Darwinian theme: it's quite OK for someone to absorb information and get a few ideas under their belt.

James wrote an interesting post where he was saying that those methods/techniques will always be someone else's methods/techniques - they won't be the tester's (who has sponged/absorbed them) until they develop them further and make them their own.

Hence Darwin - this is the evolution of the sponge into something more advanced (not sure what the next step up the evolutionary scale is - biology was never a hot topic for me). But the important thing is that they do "evolve" and don't stay stagnant...

Following the evolutionary analogy, I guess the spammers are lower down on the scale, maybe the hackers are more advanced. The sniping commentators are probably on another evolutionary side-branch and evolution will ultimately take care of them.

Maybe the evolution analogy is overdoing it -> people learn, develop and progress (hopefully.)

Phylogenetic Tree
However, all this thinking about evolution made me wonder about an evolutionary tree for software testing.

I really like to think of the testing eco-system/habitat as something comparable to an evolutionary tree - there are many branches, many ideas that have developed, transformed and evolved.

Such a tree could chart the different strands of test techniques and test approaches - one tree for each view. It's something I'll continue to work on.

Suggestions for the test evolutionary tree?

Tuesday, 16 June 2009

Building Your Test Brand

#softwaretesting #SocialMedia

Or... "Everything you wanted to know about being a trusted tester but were afraid to ask!" (nearly)


Is this something for me?

If you’re a test engineer – at whatever level of experience in whatever field of work – I’d say yes.

What?

Building your brand can be interpreted in many different ways. Some of them being:-

  • Self-promotion
  • Becoming a trusted source of information
  • Bragging

I'm really only interested in the 2nd point – there's a fine, but distinct, line between the 1st and 3rd points, but those are probably more useful if you're looking for a job or selling something – and that's another story.

So, I’d think of building my (or anyone else’s) testing brand as point 2 – being a reliable, consistent and, ultimately, trusted source of information – i.e. opinions, test results, reports, data, interpretation along with conclusions and recommendations.

Being a trusted source is not about being an expert (experts can build their "expert" brand) – it's more about describing the world as you see it and being known and accepted for doing exactly that.

Why?

Every tester can probably tell an unhappy story involving tight timescales, project/customer pressure, etc, etc.

Pressure is always placed on the “test organization” – whether it’s a development project, customer roll-out activity or even customer demonstrations – you could say it almost goes with the territory.

As the tester is usually placed in this situation, it is very important that they are a constant in a sea of variables – that their assessments and reports (even problem reports and observations) are consistent and reliable.

The reports from the test organization are an input (nothing more grand than that) to the decision makers (project and product owners) about whether sufficient criteria have been met to progress to the next phase with this release.

Therefore, it is very important that the results and reports from the tester/test organization are consistent and reliable. It’s just as important that your views are known to be consistent and reliable.

How?

Consistency and reliability are key factors.

Consistency

Consistent in approach, reporting of results, observations and issues.

Reliability

Telling the "truth" as you see it – or what you believe. It doesn't matter if no one agrees with it (although if you're outnumbered 100 to 1 it doesn't hurt to double-check your conclusions) – you'll be trusted more for an honest opinion than for stating what you think someone wants to hear.

Lack of information?

If you're unsure about something, then say so. If a recommendation can't be made due to lack of data, say so, and if further investigation is needed, state it. If the information is incomplete, then communicate this.
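For example (a made-up report line, just to illustrate): "the ordering flow passed the agreed regression cases on this build, but it hasn't been exercised under load yet – I can't say anything about performance until a further session is run."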

Open Questions?

It's better to give an answer caveated with a lot of open questions (if that's what you think is relevant) – it might be frustrating to the recipient to get more questions than answers – but describing the world as you see it will bring you more "long-term" friends.

Remember: Question to clarify, question to investigate, question to progress and question to learn.

Social Media?

In today’s world of increased use of social media it’s possible to ask if there is a separate dimension here. In some respects your brand is what it is “perceived” to be – i.e. your interaction in/with social media can affect how you are perceived.

Different electronic media can be used to build a brand – in terms of self-promotion.

However, the bottom line remains true – consistency and reliability – whether you are interacting by email, message boards, wikis, online forums or any other electronic media.

If you apply the ideas of consistency and reliability to your communication then you'll also be perceived in terms of those principles (maybe not by everyone, and not all at the same time.) That's the future-proof route for self-promotion.

These ideas can equally apply to areas and roles beyond testing…

How do you build your brand?

Tuesday, 9 June 2009

Can we do it differently now?

#softwaretesting #process 

This is a question I hear now and again. 

Got a car serviced this week and it reminded me of a much-used phrase - in many walks of life - which can be applied as an initial approach: if it's not broken, don't fix it.

Some might see this as a defence against change or improvement, others that it's just not ambitious.

My Experience?
I get approached quite a lot with different tools/methods in different testing projects. I encourage this creative thinking, investigation and appraisal - of what we're doing, how we're doing it and how it might be improved.

It's quite common that I'm asked to consider a new tool to replace an existing one...

My Take?
What's my take on this phrase in the context of my own work environment? Well, I'd say it's more analytical...

"Is it broken?"
"What is it that's broken?"

These are the first check questions - establishing that all are agreed on the problem. It's worth remembering that any assessment (whether of a tool, strategy, method or process) considers the whole problem by looking at:

  • The past - Pros/cons of how it's worked up to now. (Reasoning behind some of the decisions taken in getting to where you are today.)
  • Current - What's the situation today. 
  • Next - What are the main improvement areas and how will we achieve them?

Going through these steps in the past has usually thrown up several telling insights.

Sometimes the original process has not been followed correctly (leading to erroneous reasoning.) This is a good pointer for where education or refreshers are needed (either for the tool or process.)

Occasionally it's thrown up issues with a tool that could be modified (rather than replaced), and at other times new process/tool improvements have come forward - and we've done the analysis to back this up and to feed into the "business case" for the new tool/process.

So, yes, question everything, even the questioner. Do it with the right intentions and get the culture of change and improvement in the team working in the right way.

Do you question enough?



Tuesday, 2 June 2009

Simplify, Simplify, Simplify!

#softwaretesting

History is an amazing resource from which to learn. There is a principle originating in the 14th century that can help us when reviewing documentation - whether requirements, user guides, test documentation or whatever else.

Occam's principle of simplification (also known as Occam's razor) is something we need to see more of in today's software testing circles... It's something I apply to descriptions, specifications and guides.

I think of it as the medieval KISS (keep it simple stupid) - or maybe just KIS (to use the simplification principle.)

Essentially it means removing duplication in the language - this is done by breaking something down into its component parts and removing the duplicated parts (or even the parts that are not necessary.)

It's this act of analysis (whether there are duplications or not) that gives us the simplification/clarification - so it's almost an indirect use of Occam's razor.

A typical example would be to break a convoluted requirement down into 2 (or more) requirements - its constituent parts. Remember, one of the earliest steps in a testing phase is to inspect/review the input documentation...
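For instance (a made-up requirement, just to illustrate): "the system shall log all failed login attempts and lock the account after five consecutive failures" is really two requirements - one about logging failed attempts and one about locking the account - and once split out, each part can be reviewed, questioned and tested on its own.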

So, the next time you see fuzzy requirements or some other fluffy descriptions you can calmly tell the author/review-circle that you're invoking one of the following (make sure they know this is slightly tongue-in-cheek):-
  1. a 14th century principle
  2. the law of parsimony
  3. Occam's razor

Where once we had "education, education, education" - then "location, location, location" - Occam gives us "simplify, simplify, simplify" - at least indirectly!