Tuesday, 15 December 2009

Carnival of Testers #4

What a period the last three weeks have been in the testers' blogosphere. Really interesting, with two noticeable themes this time around: conferences and "honesty".

Below is a taster - and I'm trying not to burden the same people with traffic to their blogs the whole time ;-) As usual you can volunteer any post worthy of a shout in the comments for the period (25 Nov - 14 Dec 09).

There were a bunch of posts relating to EuroSTAR 2009. You can check out the twittering via the #esconfs hashtag.
  • It was whilst looking at this hashtag that I noticed one of several posts that Nathalie (@FunTESTic) made.
  • Shrini Kulkarni (@shrinik) gave a nice roundup of the conference.
  • Michael Bolton (@michaelbolton) wrote about the test lab set-up. He also put up the slide show of his "burning issues" talk on his site. Well worth a look.
  • Rikard Edgren gave a full smörgåsbord of flavours (or maybe it should be julbord for this time of year) from the event, here.

Exploratory Testing popped up in a few guises in this round, with a couple of different slants.

Other posts covering a range of topics were:
  • Albert Gareev was working through testing and checking questions in a questioning piece.
  • Amit Kulkarni (@mumbaitesting) reflected on a "popular" question about testing something completely, here.
  • Gojko Adzic (@gojkoadzic) made a nice film analogy about the dangers of releasing early.
  • As someone wanting to do my own Pecha Kucha I was interested to read Markus Gärtner's (@mgaertne) post. Nice pics!
  • uTest (@utest) did an interview with Matt Heusser; part 2 is here.
  • For those wondering if you're an unconsciously incompetent tester check out Anne-Marie Charrett's post here.
  • Lists, lists, lists... Jay Phillips (@jayphilips) gave a list covering 100+ software testing blogs, here. You can check if your own is there!
  • Elisabeth Hendrikson (@testobsessed) gave a reminder about keeping the deliverables of Agile in focus.
  • Catherine Powell was insightful and to the point with a couple of posts on randomness and being thick skinned.
  • Justin Hunter (@Hexawise) gave an example of a "subtle" defect that has been seen by many without being corrected. Could you see it before it was pointed out? (Warning: it's to do with English grammar!)
  • Naomi Karten (@NaomiKarten) gave a cautionary tale about managing expectations - although I think of it more as understanding expectations. Take a look, here.
  • Trish Khoo gave some advice on delivering bad news - for testers and wannabe testers.
  • Lanette Creamer (@lanettecream) wrote a bunch of thoughtful articles, but this thought-provoking one stood out.
  • There was a Gordon Ramsay theme to a couple of Rob Lambert's (@Rob_Lambert) posts! Firstly in a nice analogy about ingredients for a tester, here, and then another where I was expecting the expletives to fly, here - although it ended up as applause for perseverance! Tongue-in-cheek but with a serious point about "fairness".
  • This leads on to some other recent calls for niceness, honesty and fairness by various people: Lisa Crispin (@lisacrispin) wrote a piece encouraging people to treat each other with respect, and Matt Heusser (@mheusser) highlighted some "copied" work and rounds off the carnival with a piece on virtue.
Hope you find some interesting posts in the above - I lost count of the number of posts I read but had good fun doing so. More after the holidays...

Any gems that slipped under my radar?

Sunday, 13 December 2009

Submission done, Backlog & Inbox Zero struggle - roll on Christmas break

I finally got a submission posted for consideration for the STC magazine.

A few weeks ago I thought this was going to be easy - I had two articles planned for submission (one serious and one tongue-in-cheek) but then a combination of illness, kids-party-planning-turning-into-military-operation, workload and November weather-and-light-deficiency malaise got in the way.

So, what I'd thought of (but tellingly hadn't planned for...) as two articles being ready for peer review turned into one article in a last-minute dash without any peer review. Yes, the "fault" pickings are likely to be high - lots of low-hanging fruit on the bug tree!

Reflecting on it reminded me of my own failings when it comes to DIY (home decorating) estimates! Typically, when a re-decoration activity starts there is always some unseen element - which might mean an additional trip to the shop, attic or drawing-board to work out how to solve the problem. (Think of wallpaper stripping where the paper has multiple layers and the bottom layer is painted with waterproof paint, making even a steamer much less effective...) So a 1-day job easily turns into 2-3 days.

Luckily, unexpected derailments to my test-related estimates are rarer. They still occur, and it's just a case of working through them.

But this also started me thinking about backlogs - typically email inbox and rss reader backlogs.

Information Overload and the Holy Grail of "Inbox Zero"
I think the struggle towards "inbox zero" (the struggle to manage your email or rss reader inbox) is an interesting one. It's like some type of holy grail that is almost impossible to achieve. I just read of Phil's struggle to get through a backlog - this is something I can relate to.


Reading Jagannath's summary of Huxley's vs Orwell's view of too much vs too little information gives an interesting perspective to the information overload conundrum.


These two articles reminded me of an article I'd read in the HBR a while back (I think access to it is limited) about information overload. Some tips there were actually to delegate "information responsibility" - so you rely on a circle/group of colleagues/friends to disseminate & digest the information - this is something I implicitly do.

Follow-up question: Do you read email in the bathroom?

Putting "inbox zero" into context though - should you try to achieve it? I'm a little bit on the "lean-culture" part of the discussion here. Delaying the decision (to read or work on the backlog in this case) can sometimes actually reveal more information to make the decision better.

What? Delaying reading something can help decide if you should read it - how can that work? Well, there are a couple of tips below, but in essence you can often glean some information from the summary, heading, sender, receiver, etc., and use that to prioritise when and what to read.

From an rss perspective this might mean I'll read at least the first couple of posts from a first-time poster, as I'm inquisitive (and an inquisitive tester) and it sometimes takes time for both the writer and reader to find their focus. But it might also mean that I'm just as likely to read (or decide not to read) "big" names as well as "little" names...


How do I work with such backlogs?
#1. I mentioned above that I implicitly delegate information digestion. What does that mean? It means I don't automatically jump into email debates/threads. I'm not interested in being the first to reply to a topic - unless it's only addressed to me! In my work environment this might mean several people are on the TO: line - and I'm not always at my desk monitoring email. This can mean the question has already been answered before I've even read the mail. Cool! If that reduces my need to spend time answering, that's great in my book.

#2. Even if I'm at my desk I sometimes consciously decide to ignore email, twitter or rss updates. I leave that for "email, twitter or rss update"-checking time. This is usually the case if I need to get focused on a particular piece of work.

#3. If they're work related and I've been offline for a while (on holiday, for example) then I will just take the latest mails in the threads and see if there's something I need to act on. I aid this with colour-coded auto-formatting of emails depending on whether I'm on the TO or CC line (I even have different colours depending on whether I'm the only one on the TO line, whether I'm there as part of a distribution list, or whether the mail is from certain managers). This helps me decide which emails to look at first.

This might lead to me replying or asking a question if a response is still needed (typically if it's gone over, say, 2 weeks). If anything falls beyond this bracket then I'll probably just mark it as "read" - this helps me reset my baseline for items I should act upon...
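As a rough illustration of those colour-coding rules, here's a minimal Python sketch. The addresses, categories and colours are invented for the example - a real mail client (Outlook, say) would express this as conditional-formatting rules rather than code:

```python
# Hedged sketch of the colour-coding idea from #3 - illustrative only.
ME = "me@example.com"              # hypothetical address
MANAGERS = {"boss@example.com"}    # hypothetical always-read senders

def colour_for(mail: dict) -> str:
    """Pick a display colour for a mail, mirroring the rules above."""
    to = mail.get("to", [])
    cc = mail.get("cc", [])
    if mail.get("from") in MANAGERS:
        return "purple"            # mail from certain managers
    if to == [ME]:
        return "red"               # I'm the only one on the TO line
    if ME in to:
        return "orange"            # on TO with others or via a list
    if ME in cc:
        return "grey"              # CC only - informational, read last
    return "black"                 # default

print(colour_for({"from": "colleague@example.com", "to": [ME]}))  # -> red
```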

Any other tips for "inbox zero" out there?

Now, talking about backlogs I have a carnival waiting... Oh, I'm looking forward to the Christmas break - and the inevitable inbox backlog when I go back online afterwards :-) Bubbly or cocoa? Mmmm...

Sunday, 29 November 2009

Swedish Product Documentation Testing

Today is the first Sunday in Advent. In Sweden that usually means the start of glögg drinking (some know it as Glühwein and some as mulled wine).

Well, we made an early start by one day - always looking to integrate early and find the faults upstream!

The traditional fare to accompany glögg is almonds and raisins - they're supposed to go in the person's cup and get infused with the drink. Well, we'd bought a ready-made assortment of blanched almonds and raisins.

Whilst warming the glögg I read the back of a packet:
Ingredienser: Kaliforniska russin 70%, mandel 30%. 
Kan innehålla spår av nötter och torkade frukter.

Translation (emphasis mine):

Ingredients: Californian raisins 70%, almonds 30%.
May contain traces of nuts and dried fruit.

I don't think I have to comment on this (for fear of overdosing on sarcasm and irony!) If someone had a peanut allergy, would this information help them? Are the traces of nuts related to non-almond nuts? Who knows? Same question for the dried fruit...

Lesson
Just as testing requirements before implementation is important, it is equally important to test the customer documentation that goes with the product - even if it is just a fruit 'n' nut packet.

The "testing" or questioning of the customer documentation should try and establish if it is clear, unambiguous and ultimately useful.

Nuff said!

I'm going to inform the company involved and we'll see where it leads...

I'm sure there are hundreds more real-life examples out there with every-day product packaging and documentation. Or?

Tuesday, 24 November 2009

Carnival of Testers #3

The past couple of weeks have made for very interesting reading in my Google Reader. Here are some of the highlights.

  • Marlena Compton had an interesting introspective, here, about PNSQC and managed to compare James Bach with Santa!
  • Phil Kirkham compared Mary Poppendieck's "deliberate practice" and Geoff Colvin's list, wondering whether they feature in a tester's areas of deliberate practice.
  • Michael Bolton looked at "merely" testing and "merely" checking, here.
  • Matt Heusser opened the floodgates when asking testers to find grammatical errors in his chapter of Beautiful Testing, here. For the second fortnight in a row he announced a challenge winner, here. Matt also wrote a very interesting post on test training needs with a lot of interesting responses, here.
  • Michael Kelly, on Quick Testing Tips, made a post about templates for a testing session. Also on Quick Testing Tips, Anne-Marie Charrett made some observations about learning and delegating, here.
  • For the fans of mnemonics, Karen Johnson provided one for regression testing, here.
  • Pradeep Soundararajan posted some comments about Rahul Verma and fuzzing in software testing.
  • Ben Simo has been driving very fast recently or was there a problem with the GPS? He also gave food for thought about testing in a later post.
  • Yvette Francino looked at the matrix idea of rating priority and severity of bug fixes.
  • In this time of bacon fever and other pandemics, Markus Gärtner had a look at some other things you can catch or be exposed to in the work environment: cultural infections.
  • Lisa Crispin wrote about making time to learn in the work environment.
  • Dave Whalen wrote a post on Priority vs Severity, finishing off with mentions of unicorns & Bigfoot, here!
  • I was spoilt for choice with Lanette Creamer's posts! Here are three, mentioning a test case bloat presentation, what a test case is and being context driven.
  • Parimala Shankaraiah also wrote some interesting posts in the last couple of weeks; this one talks about what she got out of the BBST foundation course.
  • Alan Page gave food for thought, as usual, in a post about integrity of test result data and what it implies.
  • Rikard Edgren made some interesting observations about scripted vs ET and vice versa.
  • Peter got in references to epistemology, empiricism, babies and vortices all in one post.
  • Adam Goucher wrote a very interesting post about public speaking skills.
  • Of course I have to finish with a self-plug to my Monty Python post, here.

Hope you find the reading out there as interesting as I do!

Sunday, 15 November 2009

Monty Python and Software Testing Myths

After reading a post on Matt Heusser's blog mentioning a discussion on "Zero Defect Software" I immediately started thinking of this as one of the "Holy Grails" of software development.

This was a natural progression (for me) to then start thinking about the film "Monty Python and the Holy Grail", holy hand-grenades to battle the things stopping us from reaching "ZDS", Monty Python in general and other silly analogies, etc.

There! You've just seen divergent thinking in action!


Use of analogy?
The Monty Python films & sketches tackle sacred beliefs, assumptions & taboos - things you shouldn't question the truth of.

But, can you say that to a software tester?

Yes, but don't expect them not to question what's put in front of them - as long as the questioning is about learning and understanding different perspectives/applications, and not just being critical for the sake of it!

So, Monty Python, ZDS and holy grails started me thinking about "testing myths" and how I might apply elements from their work to help dispel those myths.

It seems unlikely to say that software testers can learn from Monty Python, but I think I'm seeing some parallels (it's not exactly proof by contradiction, more dis-proof by absurdity!)

For anyone not familiar, I've attached some Monty Python YouTube search options so you can view at your leisure...


Myths and the Monty Python Answers

  • Zero Defect Software
MP: The holy hand-grenade can be used to combat obstacles to getting there!
(youtube search:  monty python and the holy grail holy hand grenade)

  • Quality can be tested into the product
MP: This is the dead parrot sketch. "No it's not dead, it's just resting" - no matter how serious the bugs/flaws, they can be denied or "improved" afterwards.
(youtube search: monty python dead parrot sketch) 

  • Test Certifications are necessary for testers
MP: This is King Arthur's discussion with the peasants on systems of government - he is king because of the Lady of the Lake, with no other logical reason or demonstrated ability.
(youtube search: monty python and the holy grail systems of government) 

  • Testing is Easy / Anyone can be a tester
MP: Maybe this is the Life of Brian syndrome - an innocent passer-by is mistaken for the messiah. Ignorance and misinformation play a part in incorrect conclusions.

  • It's possible to test everything
MP: Could be the same answer as for "testing is easy". As a different answer: This is the Mr Creosote approach to eating - he doesn't get full, eating and eating until he eventually explodes. (Yes, it's not possible to "test everything")

  • Testers are the Gatekeepers of Quality / Quality Police
MP: "Nobody expects the Spanish Inquisition!" If you try and ship early we'll get out the comfy chairs!
(youtube search: monty python spanish inquisition)

And finally...
For all those unicorn questions Monty Python's The Life of Brian has a good answer for tolerance: "Blessed are the cheesemakers!"

Remember, the ideas are silly, but then so are the myths - what a perfect match!

I extracted some of the myths from Renjini S. & TestingGeek.

Got any "new" myths? Maybe there's a need for a follow-up...


Sunday, 8 November 2009

Carnival of Testers #2

Over the last 2 weeks I've read and dipped into a lot of different testing blogs, and even left comments on a few. My Google Reader gets updates from the blogs of 66 different testers (some are group blogs) and various user and testing forums.

I've done my best to avoid information overload and stay sane ;-)

So, here are my highlights of some of those blogs (covering 25 Oct - 7 Nov). A whole range of topics have come up, with quite a few references to Exploratory Testing, Agile, the STC magazine and the Testing.StackExchange site:
  • Farid raised a good point about the need for continuity in Exploratory Testing: Self-Discipline in Exploratory Testing
  • James Bach & co produced a new version of their ET Dynamics List.
  • Just over a week ago the book Beautiful Testing came out and Linda Wilkinson gave us a fairytale, here.
  • Fred Beringer gave a good overview, here, of the GTAC 2009 conference.
  • Eric Jacobson didn't get any great inspiration from Blink, here.
  • Catherine Powell highlighted the need to think about the other fence after development, moving towards customer deployment, roll-out and support, here.
  • The results of Matt Heusser's test challenge were announced, here.
  • Phil Kirkham, amongst others, announced the new STC magazine, with a call for volunteers and contributors, here.
  • Michael Bolton made some comments on an older StickyMinds article (talking about ET & scripted testing) that he wasn't so keen about, here.
  • Peter wondered if he'd found a link between bugs and full moons, here. Howl!!!!!
  • Rob Lambert shared some of his recent reading (both test & non-test related), here.
  • Alan Page announced that his blog was now an ex-blog, here, also giving directions to his new blog.
  • Justin Hunter talked about Cem Kaner's presentation on ET vs scripted testing, here. Justin also talked about the first month of TestingStackExchange, here.
  • Ben Kelly and Markus Gärtner also wrote about exploratory testing, here and here.
If you haven't already read any of these then happy reading.
You can drop me a line about noticeable posts - in case they slip under my radar :-) or submit a link.

Carnival #3 in a couple of weeks (aimed at blogs)...

Saturday, 7 November 2009

Carnival of Testers #1

I got this idea from my company's internal blog site - where there's a weekly round-up of interesting internal posts. It's encouraged to be a rotating review - someone different does the round-up each week - a digest filtered by the reviewer's personal preferences...
I expect this to be nick'd - re-used & industrialised by some of the bigger testing forums soon :-)
I'm going to start with forums...

Over on the Yahoo SW testing group there was a question this week: What does the back of a hall of fame tester baseball card look like?
This started out as a silly/playful question but developed into a more serious and interesting topic, considering how (and if) testers can be compared across the industry, how to do it, and the inherent problems in comparing testers even within one company.

The beginning of October saw the launch of the testing.stackexchange.com site for questions related to testing. I dip in there daily as the site is quite active and worth a look - whether you have a question or an opinion/answer.
Topics discussed this week have ranged from "ET dynamics", tester/dev ratios, cost of bug fixes, Agile/Agile-testers to good books for absolute beginners. Definitely worth a look.

Just over a week ago the STC announced the idea of an e-magazine, asking for contributions for article submission and help with the admin, here. They now have enough volunteers for the admin/help but submission of articles is still open - and it's open to everyone of whatever level of experience!
Questions discussed this week ranged from Test managers who have never tested to Testers as Developers. I recommend everyone to take a look and contribute.

It's been a bit quieter on Test Republic this week - I'm hoping for some reports/posts from the Next Generation SW Testing II conference soon!

For the Swedish testers out there a new forum has just celebrated its first month, TestZonen.se (only in Swedish).

Discussions, blogs and questions seem to follow patterns where software testing is concerned - sometimes it's the sheep/lemming mentality and sometimes the goldfish mentality - but never really boring!

Next time I'll dip into some of the blog posts I've found interesting - hopefully rotating between blogs and forums.

Have you noticed anything interesting on one of the forums recently?

Monday, 19 October 2009

Roundtables and Being Promiscuous

This post was triggered by a post on "007 Unlicensed to Test" about "bounce buddies" - people to bounce ideas off.

I really appreciate bouncing ideas off others. It's a good way to learn and refine your thoughts, and I do it in my daily work. Blogging is another form of idea bouncing - putting thoughts and ideas out to generate a discussion (I very much believe in blog discussion rather than blog preaching!)

I have two approaches to my "idea bouncing" in my work environment - one is informal and the other is slightly less informal (but not formal).

The Round Table
The seed for the idea came from my manager - a regular meeting of test leaders. I developed this idea into a roundtable discussion.

If you google for "round table discussion" you'll see different flavours of how they should work. The pattern we follow closest is here.

One of the ideas behind the roundtable discussion is to have a discussion of equals - all having an equal contribution - anyone can contribute with a discussion topic and all have the option to have a say.

The semi-informal gathering is an enabler for bouncing ideas - everyone is busy with different meeting schedules, so having a small oasis in which to bounce ideas around is great. We only started this a few weeks ago, but it's been very positive so far.
I'd started a similar weekly meeting - with similar intentions - in a different group some years ago. Due to the nature of the discussions this became known as the "weekly complaints meeting". The current version is staying truer to the idea of a round table discussion so far.

Promiscuous?
I don't use this term in the sexual context but rather in the sense of having an indiscriminate approach to who I bounce ideas off. This is infectious (again, not sexually!) If you behave in a very open way about asking questions or views on ideas then the receiver is more ready to do the same back - ask their own questions or say "what do you think about xyz?"

That is a very positive result!

Firstly, you might not have a polished idea when you start talking to somebody - but by the end of the discussion you're going to have clarified some things: maybe it was a bad idea, there were some points overlooked, you get a lot of positive feedback or you come up with a new idea on an unrelated topic.

Secondly, the other person is encouraged to ask you their own questions (assuming you respond) - reinforcing the importance of a two-way flow in any discussion.

For me, this type of discussion typically happens by the coffee machine, in the corridor, a visit to somebody's desk or an email.


These two ways of bouncing ideas around and getting feedback have a very positive impact on my working life (even if the feedback isn't positive!) They are great ways to communicate and great ways to encourage communication.

Blogging is another way - although it's less sure-fire for getting feedback.


So, do you actively bounce ideas around and if so what methods do you use?


Tuesday, 13 October 2009

Real-life maps & Testing

Two things triggered me to write this post...

I saw an amplified post from Jerry Weinberg on maps, here, and the ongoing Agile Testing Days in Berlin. I had a weekend in Berlin a couple of weeks ago - and I made a note in my notebook at the time.

After one day/night there I made a note "Map chaos: Scale and position on many Berlin maps do not correspond". Yes, I also had a testing perspective in mind.

I had looked at 4 different maps (from different sources) and none seemed to tie in with the others - they all gave me problems of different kinds.

  • On the plane I had been reading a flight magazine with a feature on Berlin nightlife - unfortunately its map didn't correspond to any of the other Berlin maps I had! I guess the map maker of that one was still under the influence when making the map - so yes, that one was useless.
  • After landing I decided to "go native" and use as much local transport as possible. However, interpreting the map for the link between the airport and the U-/S-Bahn system was not straightforward (to my non-Berlin-trained eyes!) I'd eventually got the message by the time I was leaving!
  • In my Time Out guide to Berlin all their detailed maps of the city centre (I was staying in Mitte - the centre) didn't have the street that my hotel was on - even though it was just around the corner from a main U-Bahn station. This made even getting to my hotel tricky, as I did the last bit on foot. I got there by deducing which of the streets it must be from the information I had about the surrounding streets. Yes, I would've asked a passer-by, but there weren't any...
  • Then there was the map I got from the hotel reception - at least that one had the street of the hotel on it! This was the most heavily used map. But even this one had problems - when trying to get to a local landmark we asked a taxi driver (you'd think they'd have a good grasp of the sights) and he was nonplussed. When we showed him the map and pointed to the place with the name, he called it something else as though it were obvious!

So what's the testing tie-in?

Yes, this works on many levels. Perspective (or context): you understand a map quickly if 1) you understand how the map maker made it, and 2) you share a common understanding of the mapping principles.
Take contours as an example - if you don't know anything about contour lines then they're not going to mean anything on a map - and you could end up trying to climb some big hills unintentionally!

Underground/subway maps are typically drawn with a neat/even distribution of the lines & stations so they can be shown in a "nice/tidy" way. They're not meant as guides to "how to get to the station if you're walking above ground".

So, what were the bad assumptions with each of my maps?

Map #1: It turned out that this map was not to scale and only vaguely correct where the rivers, junctions and districts were concerned - it was very scant on detail. Lesson: If the detail isn't there (and you need it to make sense of the situation) be suspicious - start asking questions.

Map #2: No real guide to using it for the bus-U-Bahn connections, for the uninitiated. I don't mind "using to learn", but it would be good if the map told me that's how I'd learn it... Lesson: Sometimes you just have to suck it and see!

Map #3: Although the book was a 2009 edition, my hotel's street lay on the line of the "wall" - meaning the street wasn't usable on the earlier maps they'd based their guide on. Lesson: Use your own "fuzzy belief" or "fuzzy logic" - use the information you can see and confirm; where there are gaps, use the surrounding information to help interpolate/deduce.

Map #4: This seemed to be the most accurate and useful map, but it still wasn't understood by some of the locals. They obviously had their own map (whether real or mental). Lesson: Synch'ing your perspective with someone else's is important (and probably underrated in communication.)

What more did I learn from the exercise?
These apply everywhere, including our daily testing lives:

  • Don't take anything for granted - especially when 4 different sources are giving 4 different versions of the information. Don't take for granted that whoever you communicate with has a shared perspective - check that you're both "in synch".
  • Always try to make your own assessment of what the situation means. Take the data you're getting and make your own story - how you'd re-tell all the information you've just taken in.
  • Lack of data in a situation where you need it to make sense of things? Whether it's map reading or interpreting a customer requirement - it's time to start asking questions!

I'm sure there are more map-making & testing analogies out there. Anyone?

Monday, 12 October 2009

What's Your Testing Motto?

Do you have a motto in your testing work?

What's a motto?
Do you have a general approach to your work? Maybe it's an attitude or general starting position. Maybe it's something that sums up your team approach, your problem approach, the approach to test issues - it could be a separate approach for each or a single approach that works on many different levels.

When I first started as a trainee function tester (that was the job title) one of my first team leaders said to me, "Anyone will help you, but you will have to do the work."

Personal Motto
I liked that, adopted it and personalised it. I reframed it for my use as, "I'll help with anything, but it's you who needs to do the work."

This fits into the teamwork approach grouping of mottos, or phrases that sum up my approach.

On the face of it this could sound negative. I never interpreted it that way when I first heard it and have never meant it that way when using it. To me it is an enabler for teamwork. If used within a team it means that everyone supports everybody else within the team and also that everybody contributes to the team. So, from that perspective, it's very inclusive.

I don't remember the last time I used the phrase (maybe 2-3 years ago), but everybody that works with me (including managers) knows about it, buys into it and I occasionally hear it used in front of me - so it's an idea that is easy to adopt and spread.

People like and see the value in it, whether it's called a motto, a phrase or an attitude.

Other Mottos?
Can I find a phrase that sums up my attitude/approach to problem solving and test approaches? Mmm, let's see:

Problem solving: I'm a divergent thinker and emergent learner, so I think my attitude has got to be broad coverage - initially shallow, with follow-up on key areas.

Test approach: This is a combined top-down and bottom-up approach, trying to understand the big picture as well as digging into the details (a typical answer from a divergent thinker and emergent learner!)
Note, it doesn't mean that these approaches always work for me - but they are starting points.

Why bother?
You could ask the question, "why bother trying to sum up my testing approach?" Maybe you have a list of things you use that you could categorise - something taking a page or a chapter to discuss.

Well, think of the example of Twitter. Sometimes when trying to get a message across (in 140 characters) you need to re-think what you want to say, cut out the noise and try to distill the message.

It doesn't always work - sometimes you remove part of the message/meaning as well as the noise. But, try it as an exercise - what do you do, why and can you describe it? It's a very powerful exercise.

Do you have any mottos?

Can your testing approach or attitude be summed up in a few words? In your own words, or somebody else's...

Tuesday, 29 September 2009

Are you an Inquisitive Tester?

#softwaretesting #testing #qa

I read an article yesterday, here, about a study of innovators - including characteristics of the main drivers behind Apple, Amazon & eBay.

They distilled their findings into 5 "discovery skills":
"The first skill is what we call "associating." It's a cognitive skill that allows creative people to make connections across seemingly unrelated questions, problems, or ideas. The second skill is questioning - an ability to ask "what if", "why", and "why not" questions that challenge the status quo and open up the bigger picture. The third is the ability to closely observe details, particularly the details of people's behavior. Another skill is the ability to experiment - the people we studied are always trying on new experiences and exploring new worlds. And finally, they are really good at networking with smart people who have little in common with them, but from whom they can learn."

They then summarised these into a single characteristic - inquisitiveness. A good summary, I think.

Initially, I was just browsing but the more I read the more I thought "yes, this is exactly what I'd look for in a tester". A tester has to be inquisitive. Throw the communication and networking skills into the mix and you have a very sound foundation.

So, as far as skill sets go: Great Innovators = Great Testers.

Have you thought about the skills that make you successful? You are inquisitive, right?

Sunday, 27 September 2009

Is the message getting through?

Sat on a bus this weekend on the way to Stockholm airport.

Approaching the airport the driver announced that the bus would stop at all terminals. He did this in both Swedish and English.

Straight after the announcement, and coming up to the first terminal stop, the chap in front of me reached up and pressed the "next stop" button. Hmm. Got me thinking:

Did he not understand Swedish or English?
Did he understand the language but not the message?
Did he understand the message but have an ingrained/reflex reaction?
I don't know, but I got curious, as this is behaviour I can sometimes identify in my daily work. A message is transmitted (sometimes several times - both written and verbally). However, the expected change doesn't necessarily occur.

A typical example might be a change in a routine. After doing X and before doing Y we should now do ...
But the message about the change doesn't always sink in. Why?

I suspect that the changes don't happen when there is only a small tweak to a routine, process or activity. It's a case of someone not really listening or paying attention.

So, if it's a small change -> get the attention first. A big change from a standard practice or routine will probably get the attention anyway - but don't take it for granted!

Similar experience anyone?

Tuesday, 22 September 2009

More notes on testing & checking

Background
Read Michael's series on Testing vs Checking. I've also made observations here and here.

Why?
Since seeing the first post I've been thinking, why?

Why the need for the distinction? I've been getting along OK without it. I'm by no means a perfect example of how a tester should communicate - but one thing I do quite well in my work environment is to tailor the message to the receiver (or at least I try!)

That may be one reason that I haven't been bitten by the testing/checking bug. Or has my mind been blown? Ahhhh....

Digging
Anyway, about 5 days ago I wanted someone to point out the root problem, the root cause for the distinction (or at least one of the root problems.) My idea was that the distinction arose from communication problems, so I twittered to Michael about this:

YorkyAbroad: @michaelbolton Nice latest post. The root cause of the need for the distinction hasn't been addressed yet. Or?
about 6 days ago

michaelbolton: A test, for you: what *might* be the cause for the need of the distinction?
about 5 days ago

michaelbolton: That is, I'm asking you: if you *see* the distinction (between testing and checking), what use could you make of it yourself?
about 5 days ago


YorkyAbroad: @michaelbolton Understand the diff bet. checking & testing as presented. A symptom of a problem is being addressed not the cause. Waiting...
about 5 days ago


michaelbolton: Sorry that you're not willing to respond to my request. You'll have your answer within a few days; please bear with me.
about 5 days ago


YorkyAbroad: @michaelbolton See checking akin to part of a test question. Little scope for checking that is not preceded or followed by sapience.
about 5 days ago


YorkyAbroad: @michaelbolton Am willing. Distinction needed in the communication (a root issue) of observations & feedback into a questioning approach.
about 5 days ago


michaelbolton: I&#39m finding it hard to read you in < 140. Why not send an email; I&#39d appreciate that.
about 5 days ago


YorkyAbroad: @michaelbolton I was "testing" if "the woods from the trees" was being distinguished. Will work on my own answer. Look forward to yours.
about 5 days ago


If anything is obvious from the exchanges it's how limiting 140 chars can be when trying to communicate more complicated questions... You might also notice the Socratic fencing/avoidance - each wanting the other to answer their own question. So, time to resort to email.

Bad checking? Communication?
The following is extracted from a mail I sent to Michael detailing some thoughts on good/bad checking and communication; I thought it would be interesting to insert here:

In my own area of work we have a fair amount of both testing and checking.

I can think of good checking (the methodology, not the action) and bad checking.

Bad checking is where a regression suite is used to give a "verdict" on a software build. I liked your last post questioning whether the tester is looking at the whole picture rather than just waiting for a "pass" result and moving on. But it can also be bad checking if we set off a suite of tests without asking the questions: "What is the goal here with these tests? What areas of the product am I more/less interested in? Should I pay particular attention to feature X?"

These questions have to be asked before executing a test, manually or by automation - hard work maybe, but if we don't ask them then we slip into the "regression coma" or your green-bar pacifier. This is bad checking - or maybe just checking but "bad" testing.

So, to distinguish good from bad checking: good checking is preceded by a bunch of questions - some more vague than others - but they are there for a purpose. They have to be there: why test something without any purpose? Learning, going on gut instinct, "I've seen a problem in this area in earlier releases..." are all valid purposes for targeting a test session. But testing without thinking is a problem, and dangerous - the "question/goal" mutates into "I don't want to find a problem" and becomes a self-fulfilling goal (in the long run!)

That's why I think that checking and sapience can never get very far apart. The "check" is an element of the test - where the test is preceded by the question or goal of the test (pre-defined or not.) So understanding the distinction (between testing and checking) also means needing to understand if a tester is asking any questions and appreciating that a check is an element of a test.

On to one of the root problems that I alluded to: Communication, specifically between tester and non-tester (although it can very easily be tester-tester also.) I'm thinking of the issue where people are talking past each other - a good example was in a post by Elisabeth: http://testobsessed.com/2009/05/18/from-the-mailbox-fully-automated-gui-testing/

Communication is a must in testing - whether it's understanding a requirement or a scope of work from a stakeholder, relating the observations made during a test session, making recommendations about areas for further work, etc. BUT it also means communicating what's going on at any stage of testing - the different types of test and their different purposes - maybe even the different goals/questions going along with each.

If testers (or test leaders) can't get that information over to the receiver then there will always be the "testing trap" - the testing information black hole. If a tester now thinks they can answer a dev or stakeholder by throwing "check, test, sapience" terminology at them they're on equally shaky ground as they were before (if they didn't understand the differences in test types and how to communicate them.)

The check vs test distinction can help but only if you had a problem communicating about your testing activities and understand that you have/had that problem. The distinction goes part-way but I think it'd be even better to start with an "enabler": a question that a tester can use to discover/realise that this problem exists for them.

If you read Michael's posts you'll see that he's answering questions that are coming into him left, right and centre. It really is a discussion (rather than a monologue) - exactly in the spirit of a good blog. That's good to see, and commendable!

More digging
I wanted to follow up the communication angle by starting a discussion on STC & TR about communication experiences where the testing/checking distinction might have been a factor (a help or a hindrance), but I think it's all getting a bit obscure now - either people don't follow the whole thread, don't understand (mine and others' bad communication), or they just don't have the need/motivation to be interested (a big proportion, I suspect!)

Communication Follow-Up
However, I'm fascinated by communication and information flow - so I might re-jig the question to be more general and follow up on how testers experience communication issues and get around them...

I'm very much in the Tor Norretranders school of information flow - that the sender and receiver have to share perspectives (context even) before they understand each other!!!

This synch'ing of perspective takes time! So if someone doesn't understand your message the first time around it could be poor articulation, but also that you're both not sharing common ground. Yet!

Pause
I'm not quite signing out of the testing/checking discussion but I'm following other things in the pipeline, blog posts I want to finish, a couple of conference presentations plus my regular day job :)

Have you ever experienced difficulty getting your message across?

Saturday, 5 September 2009

Sapient checking?

#softwaretesting

The recent discussions about checking vs testing have been interesting. My spontaneous response, here, looked at the label (or container) of a test/check but also alluded to the fact that an automated batch of tests is much more than checks.

Yesterday, I saw a clarification of checking from Michael Bolton, here, and a tweet from James Bach:

jamesmarcusbach: We think a check is "an observation with a decision rule that can be performed non-sapiently" example: junit asserts.
about a day ago
After reading Michael's post and then seeing James' tweet I wanted to test putting sapience (the conscious thought process) and checking together, resulting in the following tweet discussion:

YorkyAbroad: @jamesmarcusbach RT: check is "an observation with a decision rule ... performed non-sapiently". ME: Selection of the observation=sapience?
about 4 hours ago

jamesmarcusbach: Sapience is whatever cannot be automated about human intellectual activity.
about 4 hours ago

jamesmarcusbach: I apply to term specifically to focus on things my programs can't do, but that I need to get done.
about 4 hours ago

YorkyAbroad: @jamesmarcusbach I was thinking of sapience as in http://bit.ly/10797H . You shouldn't make an observation with some thought to select it.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach Freudian slip? You shouldn't make an observation //without// some thought to select it.
about 3 hours ago

jamesmarcusbach: That's what I'm talking about, too. So, are you saying that if I don't think about selection, I should keep my eyes closed?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach I'm saying if you don't think about it then don't make the observation.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach If you make the observation without thinking then what question is it answering?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach If you have no question in mind then why make the observation?
about 3 hours ago

jamesmarcusbach: I'm simply asking then: are you telling me to keep my eyes shut until I specifically know what to look at?
about 3 hours ago

jamesmarcusbach: I may make observations because they are inexpensive and they supply my mind with fodder for generating questions.
about 3 hours ago

YorkyAbroad: @jamesmarcusbach I'm thinking of 'checking' and sapience. Would you make a check without applying any thought beforehand?
about 3 hours ago

YorkyAbroad: @jamesmarcusbach The thinking weeds out and refines which checks you might want to make.
about 3 hours ago

jamesmarcusbach: Good point. That is one of the little traps of checking-- GOOD checking still must be wrapped in what Michael & I call testing.
about 2 hours ago

YorkyAbroad: @jamesmarcusbach Precisely! I wouldn't advocate checking without thinking beforehand. I just advocate testing - implying thinking first...
about 2 hours ago
I had two things in mind with the sapience and checking combination. The first was to show that, according to the descriptions of sapience and checking in the links above, they are incompatible.
Did I do this? Well, if you accept that you wouldn't run a test/check for no reason, then the answer is yes.

In this respect, the title of the post, sapient checking, is nonsense.

The other item: an example. I was thinking of the re-running of test cases - in my view this requires the objectives of running those cases to be understood beforehand.


Thinking before Testing?
I'm still of the view that all testing requires thought beforehand - otherwise how can you interpret the results and help establish their meaning? This has several implications.

One implication is that any automated batch of test cases to be used for re-running should always go through a selection process (review for relevance and effectiveness). I know this might sound radical to some people but it shouldn't be!


Example
Suppose an existing/old test case is to be executed to ensure/check/verify that a new feature has not negatively impacted the existing feature.

Possible reasoning that was an input to the selection of the test case:
1. If the old feature was expected to be impacted then the test case would be opened for review/modification (I'm not distinguishing between new design and maintenance here.)

2. If the old feature had no expected impact, then the test case is expected to "work" as is - and that expectation is the measure of evaluation that makes this a test. (The underlying software of the product has changed - the new feature - and the same inputs and outputs are to be verified in this new configuration/build.)

3. If we don't know if it's reason 1 or 2 then it may be for one of two main reasons: 3a) It's an exploratory/learning exercise -> testing, 3b) It's a test case that has "always" been run or never been reviewed for relevance -> time for reassessment?
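As a sketch only (the field names and actions are invented for illustration, not taken from any real tool), the three reasons above map onto a small decision function like this:

```python
def triage_legacy_case(case: dict, feature_impacted: bool) -> str:
    """Map reasons 1-3 above onto an action for an existing test case."""
    if feature_impacted:                        # reason 1
        return "open the case for review/modification"
    if case.get("impact_assessed"):             # reason 2: expected to work as-is
        return "run as-is against the new build"
    # reason 3: we don't know whether 1 or 2 applies
    if case.get("exploratory"):                 # 3a: learning exercise -> testing
        return "run as an exploratory/learning exercise"
    return "reassess relevance before running"  # 3b: "always run" -> review it
```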


Room for Testing?
For old/legacy test cases, the tester is making an assumption/assertion that a test case (or group of test cases) should work (on a new build/configuration/env.). The tester tests that hypothesis.

It's very important to keep the test base up-to-date and relevant! Filter out redundancies and keep the cases under review. For legacy cases this should be budgeted as part of the maintenance of the product.

All of this work with legacy test cases is what I consider testing - it isn't anything without the thought process to guide it.


Cloudy? Foggy?
The problem with trying to define checking is that it is so tightly intertwined with testing that I don't think it adds value to the communication about a product or project.

If, as a tester, you're trying to help your project understand the risks associated with a project and communicating issues found then I don't think there is any need to say x issues have occurred during testing and y issues have been observed during checking.

I can think of checking as an element of testing, just like a test step might be an element of a test case. But on its own there is no value in communicating it to stakeholders.

I think there have been many communication problems tester<->non-tester in the past (and probably still continue today) - the information needs to be tailored to the audience, which is a great skill for a good tester!


And Finally...
I can't see an example where a test/check can be selected without any thought process. To distinguish a group of test steps, under a given context, as either a check or a test is totally unnecessary in my working environment.

Keeping the storytelling good and the quality of the reporting and feedback high and consistent will lead to reduced misunderstandings and communication improvement. I know, easier said than done!

I suspect communication problems (and not a lack of terminology) have been the root cause of some of the cases where a distinction between testing and checking has been perceived to be needed.

Checking without sapience doesn't hang together. Why would you do something without a purpose?



This post is my effort to understand the discussion - using my combination of learning, critical thinking and past experience.

What does your experience and analysis of the discussions tell you?

Monday, 31 August 2009

To test or not to test, just checking...

#softwaretesting

To test or not to test, that is the question....

There was a very good and incisive post from Michael Bolton, here, the other day, distinguishing testing from checking. A lot of good points and items to trigger reflection.

The evidence of a good post is that it gives you something to think about. I started thinking about my own experience, specifically in areas of performance, load & robustness testing and automated regression suites.

I asked myself a couple of questions - with Michael's post in mind:
When is a test not a test?
Does it matter to distinguish a test from a check?
Note: when I talk about a test check or "check" below, I'll use it in the sense that Michael used.


A Test Case

In my particular working area (telecoms) a test case is a generic term. It's a label that can be attached to a group of test steps - to be used in a test, a "check" or a measurement.

In performance testing we have scenarios whose execution gathers measurements about the system under test. These scenarios are labelled as test cases even if they don't explicitly test or check.

The output is used for comparison and tracking against established benchmarks, or even creating benchmarks. The results are analysed with stakeholders to determine whether further investigation is needed.

There is no problem in the organisation in calling this measurement a test case - it's purely a way of grouping some steps that contribute to an activity.

So test cases, checks or performance measurements can be designed and executed but they are collectively referred to as test cases - or even just tests. The test case, check or measurement relates to the activity and not the unit container.
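One way to picture that is the minimal sketch below (in Python, invented purely for illustration - it's not from any real framework): the test case is the container, and the activity is a property of how it's applied.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """A container label - the activity, not the container, says what it is."""
    name: str
    steps: List[str] = field(default_factory=list)
    activity: str = "test"   # "test" | "check" | "measurement"

# A performance scenario: grouped and labelled as a test case,
# even though the activity is a measurement.
latency = TestCase("call-setup latency",
                   ["apply load", "sample timers", "stop load"],
                   activity="measurement")
```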


Test (Case) or Not Test (Case)

The need to define, distinguish and be clear about terminology is a natural part of a critical thinking approach.
So, is there a conflict in using a general label such as test or test case?
A test or test case is a container. It doesn't say very much unless we know something about how they are applied - e.g. whether it's a unit, functional, robustness or other type of test case.

With most testing, the point at which a test check emerges is when it has gone through one successful iteration - even a unit check is a unit test until it "works", meaning the correct behaviour is verified (or agreed). The first few times it is executed it may reveal information about the unit design, or about the design of the unit test itself, either of which may need modification to get an accepted result.

So from first inception it is a test case - once the correct behaviour has been verified (and agreed, if need be) then it can be used for regression purposes, at which point it might be thought of as a check. However, I'll probably still think of it as a test; see below.
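Here's that distinction in miniature, assuming a pytest-style layout (the function and values are invented): the assert is the "check" - an observation plus a decision rule that can run mechanically - while deciding that this is still the right assertion to run against a new build is the surrounding testing.

```python
def add(a: int, b: int) -> int:
    return a + b

def test_add_regression() -> None:
    # The check: a mechanical observation + decision rule. It only became
    # a regression check once this behaviour was verified and agreed;
    # choosing to keep running it against new builds is the testing part.
    assert add(2, 2) == 4
```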


To Test

When I think about software testing I like Gerry Weinberg's definition of testing as information gathering. In lay terms I like Michelle Smith's description of the tester as an investigative reporter, here.

Taking a dictionary definition of test (not specific to software - from the Oxford Concise dictionary): "A critical examination or evaluation of the qualities or abilities of a person or thing." I can agree with this also.

So is a test case written with lots of expected results a test or a check?

An expected result from the test is defined. However, the determination that the regression test/suite is suitable to give input to the stakeholders is the critical evaluation step - i.e. the decision that the outcome will give valid input.

A regression suite should be kept under review for relevance and value of the information that it provides. Note, not all of these tests will be automated and this will always throw up an interesting dimension.

So, in this context, it's a test and not a check.


Checking vs Testing?

In Elisabeth's example, here, the difference between performing tests for the GUI and exploring the system for usability is really a problem of communication - it wasn't clear (maybe to the developer, the tester or both) that the type of testing being talked about was different.

Labelling it as checking and testing is one way to distinguish. Another way, using an agile example, is to say that one type of testing belongs to quadrant 1 and the other is quadrant 3 (referring to agile testing quadrants.)

So, what's in a name? Or what's in a label?


Q&A

Q: When is a test not a test?
A: When it's a performance measurement, but even then I group it as a test. Test is just a label for grouping.

Q: Does it matter to distinguish a test from a check?
A: In general, no. Items are grouped as tests. In my experience regression tests are selected for the information about the system they will provide.

If we need to distinguish between checks and tests then there may be other issues that are at the root cause of the problem.


Are you putting the evaluation into your regression selection criteria?

Do you or your co-workers know how to distinguish between different types of testing?

Are you "checking" that you're "testing"?

Tuesday, 25 August 2009

My Latest Metric

#softwaretesting

There has recently been a debate over metrics when applied to software testing. Stories of usage and examples have run the whole range from the good, the bad, the ugly to the incomprehensible.

Linda Wilkinson has bravely decided to start a series on metrics and her viewpoints, here. I anticipate a good discussion - if there is any negative feedback I'm sure it'll be given in a professional way - rather than just being unfriendly feedback.


My latest metric
The latest metric that I have used is the pedometer - or step counter. It shows me how many steps I have taken and total elapsed time with a conversion to distance and calories used.

But that's just raw data - I need a certain history and environment in which to interpret it.
It doesn't tell me about the terrain I was walking in.
Was it hilly - so some steps counting double for the exertion?
Was I walking fast/slow?
Was I walking fast and taking lots of breaks or did I do the whole thing in one go?
Was I carrying any baggage?
Was I pushing or pulling something?
How comfortable were my shoes?
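As a quick worked example of why the raw count isn't enough: the same 10,000 steps converts to quite different distances and energy use depending on assumptions the pedometer can't see (the stride and calorie figures below are illustrative guesses, not real calibration data):

```python
steps = 10_000
# (stride length in metres, kcal per step) for two plausible walkers
for stride_m, kcal_per_step in [(0.6, 0.03), (0.8, 0.05)]:
    km = steps * stride_m / 1000
    kcal = steps * kcal_per_step
    print(f"stride {stride_m} m -> {km:.1f} km, ~{kcal:.0f} kcal")
# Same step count, but 6 km/300 kcal vs 8 km/500 kcal - context decides.
```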

Test Metrics
If using them, then understand the data - what it's representing, what it's an instance of and question what it's not telling you.

Know and work with examples of what might be deducible from the data.

Know and work with examples of what cannot be deduced from the data.

Know how to deal with data that's missing - sometimes nothing more than to acknowledge it - but that's an important step in itself.


Extensions?
Some would say that the answer to my pedometer problem is to get a better pedometer, or ultimately some sort of human tachometer. However, I just want a simple comparison - something that gives me some background data. It's just data until I can set it in a context with some meaning, and I'm happy to do that step.

Even a super-duper-human-tachometer couldn't tell me about my motivation on a given day. The final story always needs the narrative!


Have you thought about the problems and limitations of your metrics?

Feedback: Friend or Foe?

#testing #feedback

I'm currently making notes on learning opportunities and one aspect that I thought needed separate treatment was how testers give feedback to each other.

If you think about your experience in the work environment, on testing forums, on Twitter and in blog comments, are you keeping it professional - with the emphasis on professional courtesy?

I've seen examples on forums where sometimes a "seemingly innocent" comment or question is met with annoyance, condescension, rebuttal or frustration. Let's call it "unfriendly feedback".

Some Unfriendly Feedback Types
Annoyance: Maybe the responder had a bad day?

Condescension: What's the responder trying to show? Hopefully they are not falling into the "knowledge is power" syndrome. This is not a team-player response.

Rebuttal: This can be a legitimate "critical thinking" response. However, remember that the essence of critical thinking is about understanding and clarifying communication and not about being "critical".

Frustration: Maybe the question was something that could have been googled or someone has just asked a re-formulation of a previous question.

My Take?
One of my professional mottos is that "I'll help anyone but won't do the work for them."

This can be a fine line - especially for newbies - sometimes they don't know where to start, don't know which questions to ask first, how to formulate the question and are just keen to "start". That enthusiasm is good; it's an ingredient for future interest and development.

So, yes, answering questions from newbies can sometimes be hard work - the key is to direct and discuss threads for investigation. But remember, questions from non-testers can also sometimes be hard work!

These feedback types are equally visible in many people's work environments. They are not limited to tester<->tester; they occur tester<->non-tester and non-tester<->non-tester too.

I've seen consultants treated less than courteously precisely because they were consultants - to their credit they responded professionally. This falls into a wannabe-superiority response and I don't think of it as a feedback type - more a cooperation problem.

Give it to 'em
If you're giving feedback keep it professional, courteous or light/humorous. If you can't, then count to ten and try again. The important thing is not to withhold feedback just because you're biting your lip or counting to 10, 100, 1000... Give the feedback - it helps the receiver and it helps you!

If you're on the receiving end of the above types of feedback then rise above it. Don't bite back; lead by example. Give it back to 'em - an example of professionalism, that is!
Note: When I was a new tester I was on the receiving end of the condescension type of response from a fellow tester in my organisation. I persevered - and saw that tester subsequently sidelined. If you're not a team player then you're heading for a team of one!
I know! Easier said than done at times!


Have you been on the receiving end of unfriendly feedback? How did you respond?

Have you recently given unfriendly feedback? How do you think it made the receiver feel?