Thursday, 28 January 2010

Taking the pulse of the Carnival of Testers

A short overview of the last 6 carnivals.

For all those fans of metrics out there, sorry, there are none - just a smattering of stats with little assessment!

There have been 6 carnivals so far. The first took a look at some forums and Q&A sites. Testers' blogs made their first appearance in edition #2.

Originally, I had intended to alternate between blogs and forums - to try and get a rounder picture - or do a bit more sampling when I was "taking the pulse". However, the blogs have been popular and they are slightly easier to group for upcoming posts, so I stuck to them.

So, back to blog posts. Yes, my dip is a fairly random sample - there is no DoE (design of experiments) or actual goal set out beforehand. I just set out, almost aimlessly, and see where I end up - I'm allowed to use that phrase when not testing!

A look back/reflection is in order to try and get a feel for some of the ground/variety of posts and testers that have been represented.

A total of 90 different authors have been mentioned in the last five carnivals, covering 99 blog posts. The authors have ranged from prolific and occasional bloggers to brand-new ones (I enjoy finding interesting posts from new bloggers).

The topics have ranged from technical to flippant, the odd mini-rant, advice, experiences, thoughtful and thought-provoking to slightly off-the-wall and humorous pieces.

The most common topic area has been learning and team skills - very appropriate for any tester. The next two most represented topics have been exploratory testing and conferences.

Taken as a whole it's intended to be a smörgåsbord (a broad-selection buffet verging on pick'n'mix) - sometimes accurately taking the pulse of the test blogs, sometimes missing the pulse - the "only-human" disclaimer applies!

Carnivals so far:


Something for nearly everyone with more to come...

Wednesday, 27 January 2010

Carnival of Testers #6

Three weeks since the last carnival, and there has been lots of activity - maybe people are refreshed after the holiday break or the retrospectives are out of the way and it's full steam ahead with 2010!

A two-sided approach this time. First, a selection of some of the more serious posts:

  • Pradeep Soundararajan highlighted the feedback loop required in test estimation.
  • At the same time Ainars Galvans was also writing about test estimation, here.
  • Matt Heusser looked into ideas around reducing costs for attending conferences, here.
  • Parimala Shankaraiah wrote about team types that she had experienced.
  • Ben Kelly was a bit irritated by some of the testing questions from people "who should know better." He makes some good observations, here.
  • Weekend Testing landed in Europe with a flock of posts from Anna Baik (link), Phil Kirkham (link), Markus Gärtner (link), Anne-Marie Charrett (link), Zeger Van Hese (link) and a first post from Markus Deibel (here), who was joining in with the weekend testers too.
  • Thomas Ponnet made a first post on issues around tester integrity and result orientation.
  • Yours truly wrote a piece on silent evidence in testing, here.
  • If you're having trouble running your automated test suites Catherine Powell suggests trying them in reverse order, here.
  • Michele Smith wrote several posts in January, one of them about the revelation in her new read, "The Inmates Are Running the Asylum" (can't we all relate to that now and then?).

That allusion to insanity almost gives a nice segue into the second half....

On a lighter note
There was a distinct thread of humour bubbling up to the surface in the last 3 weeks.

  • Interview questions doing the rounds this month... Ladybug popped up onto the radar with a post on interview questions, here.
  • Then Eric Jacobson's lightswitch testing questions had me smiling. Enthusiastic interviewee or a bit barking? Tough call...
  • Lanette Creamer's idea of certifications gets my backing any day. No pants jokes please...
  • Andy Glover serialised himself onto the blogging circuit with a refreshing cartoon approach to testing topics. For a "tester in supermarket check" have a look here.
  • Matt Heusser floated (pun intended) the idea of conferences on a boat in a Friday fun post, here.

Until the end of February for the next carnival...

So, you're a tester? What's that?


This was an internal blog post that I thought I'd share.

So, you're a tester? What's that?

Have you ever been asked this question? Where to start the answer...

Are you into Quality Assurance (QA), Quality Control (QC) or just Quality?

I think about this question from time to time, depending on the articles, blogs or tweets that I'm reading. There's been plenty of tester twittering about quality in the last couple of days, so the question resurfaced for me.

QA:
If you think of yourself as being in QA, do you really "assure" quality?

Isn't the assurance really taken care of by the developers/designers (who implement changes) and the project leaders who direct the effort?

Ok, QC then:
Do you control quality? Do you make it better or worse? Your testing effort surely feeds back into the design/development loop - and this is a valuable (sometimes I'd use the word crucial) contribution. But is it the tester that's controlling quality?

In some senses "quality control" is borrowed from production lines, where it is essentially a sampling and checking exercise. Well, sampling definitely fits into the activity of testing. Checking is one element of testing - a comparison against a static (unchanging) expected output. However, this only covers a subset of testing.

Production lines also intend to produce the same output every time, so software does not fit this model. By definition, a development project is producing something new/unique - so the idea of a production-line check only works for test cases that are already known to work, i.e. regression test cases.

Ok, what about new test cases and the learning (and even exploring) side of testing? Yes, QC doesn't quite do it here either...
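To make that "checking" distinction concrete, here's a minimal sketch in Python (all function names and numbers are hypothetical, invented purely for illustration):

    def add_vat(net: float, rate: float = 0.25) -> float:
        """Hypothetical function under test: add VAT to a net price."""
        return net * (1 + rate)

    def check(actual, expected) -> bool:
        """A production-line style check: compare against a static, known output."""
        return actual == expected

    # Regression case: the expected output is already known to be correct,
    # so a simple comparison is enough.
    assert check(add_vat(100), 125.0)

    # For a brand-new feature there is no recorded expected output yet -
    # working out what it *should* return is the learning/exploring side
    # of testing, and no static comparison can do that for us.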

Ok, so are you just into quality?
Well, there's a problem. Quality is not a judgement that you, as a tester, can make about a product! Some fault-prone products may need to go out during a certain quarter to keep the cashflow ticking over - without that there's a risk of less R&D, fewer jobs, fewer products and less testing needed, etc.

So how do you think about quality? I like Weinberg's definition that "quality is value to some person". This might ultimately be your project sponsor or product owner - and their values are "usually" a reflection of what they think the customers' values are / will be.

So, when you're maybe so far detached from the customer, how do you make sense of quality?

Even if you are making reports against your project sponsor's definition of "good enough" quality, your findings (reports) are still only one factor in the "release" equation.

Diamonds in the rough?
It's those reports, investigations and the approach to digging for that evaluation that are the real key to a tester's role. I think of it as being part investigative journalist (credit to Michele Smith for that analogy, which I'm happy to borrow) or researcher. You're digging for the story, the real story, the story behind the facade - it's serious, not a tabloid activity - the story/evaluation is a professional finding.

But, it's a report (or set of findings). Don't get so emotionally involved with the product that you're "disappointed" if it releases with what you think is "bad quality".

Your reports are used to frame the product - put the story of the product in a setting - you're certainly not any type of police (hopefully we're all agreed on that).

Remember, if you are "disappointed" then take the team approach and sit down to look at improving that investigation (test execution & design) and reporting next time - this is tester + developer + project manager together.

Downbeat? No!
The best approach you can have towards testing and quality is that your testing is providing information about the product (telling a story, as I usually say) that will help your project leaders / sponsors understand whether their idea of quality (their values) is being backed up or disproved by your findings.  [Credit to @michaelbolton for parts of this sentence!]

The tester is providing very valuable information: areas of the product that are meeting expectations (or not), issues and questions about the product's usage, and suggestions for further areas of investigation.

It's only the tester that's providing this vital information - remember, when the project sponsor is making a decision about releasing a product, they don't fret over the results of the code desk check...

Confused?
You still don't know what a tester does? Well, take a smattering of investigative journalist, professional researcher, scientific experimenter and philosopher, put it into a devil's advocate mould and you're getting close....

What would you add into the mix?

Monday, 25 January 2010

Mind the Information Gap (Black Swan-style)

Over Christmas I picked up The Black Swan (by Nassim Taleb). It was tipped in the Guardian's list of top non-fiction podcasts/books of the noughties. But I think my awareness of the book came some time ago via Michael Bolton's writing (I don't remember if it was blogs or tweets where I first saw it...) - so a special tip of the hat to him!

I'm only about a third of the way through it but so far this is an interesting look at unexpected events and why people are susceptible to the unexpected.

There's a whole range of interesting ideas but one I found immediately interesting was that of silent evidence - the information that is missed or not included in the analysis. The first example that Taleb uses is from Cicero:-
Diagoras, a nonbeliever in the gods, was shown painted tablets bearing the portraits of some worshippers who prayed, then survived a subsequent shipwreck. The implication was that praying protects you from drowning.
Diagoras asked, “Where are the pictures of those who prayed, then drowned?”
This idea could also be a form of the availability heuristic or fallacy of exclusion.

When I was reading this section I started thinking about Bill Bryson's A Short History of Nearly Everything, where some of the problems with palaeontology are described - one being that fossilisation requires favourable conditions sustained over many millions of years, so the fossils we find represent only a small subset of the dinosaurs and other species that ever existed.

Actually, Taleb includes the problem with the fossil record as an example of silent evidence but I think Bryson's description was better. (Interestingly, my copy of Bryson's book was published by Black Swan publishers - fate, coincidence?)


Silent Evidence in Testing
I identify with the idea of silent evidence in my day-to-day testing work. Whether it's a test plan, analysis, report or other finding, one of the areas I consider first is what has been excluded, not considered or not covered.

This part of the picture is very important as there is a different motivation and reasoning behind each.

  • Excluded: This can be deemed to be outside the scope of the test area - maybe covered by another team or agreed as not relevant to the test purpose.
  • Not considered: In a report this might reflect an area that was missed, overlooked or discovered late, without any judgement having been made about investigating it.
  • Not covered: Very common for a time/tool/config constraint to fall into this category, i.e. the area is considered relevant to test/investigate but it wasn't covered for reasons x, y & z.


Note, I'm not trying to be exhaustive with these categories!
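As a concrete illustration, here's a minimal Python sketch of how a report might capture these categories alongside what was covered - the structure, area names and reasons are all invented for the example:

    from dataclasses import dataclass, field

    @dataclass
    class CoverageGap:
        area: str
        reason: str

    @dataclass
    class TestReport:
        covered: list[str] = field(default_factory=list)
        excluded: list[CoverageGap] = field(default_factory=list)        # out of scope, by agreement
        not_considered: list[CoverageGap] = field(default_factory=list)  # missed or overlooked
        not_covered: list[CoverageGap] = field(default_factory=list)     # in scope, but constrained

    report = TestReport(
        covered=["login", "checkout"],
        excluded=[CoverageGap("billing", "owned by another team")],
        not_covered=[CoverageGap("IPv6 configuration", "no lab equipment available this cycle")],
    )

Giving the gaps named fields, rather than leaving them implicit, is the whole point - the silent evidence gets an explicit place in the report.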

Whether you categorise in this or a different way, it's very important to understand what has not been covered, tested or analysed. This helps in making a judgement/recommendation about any risk associated with this "non-coverage".

I commonly use this type of information to weigh up future test direction - maybe something needs following up soon or putting on the backlog.

It's also this type of information that makes the findings more complete - building the picture of the testing of the product. This information can be reported with or without judgement and recommendation (preferably with some commentary), as this creates a picture of what has been considered, analysed or thought about during the activity.

Considering the information that we are excluding (or not considering) also helps us understand some of the assumptions (modelling and model reduction) that are being applied to the testing problem - so this is a natural part of the feedback loop, helping to improve future test activities.

So, keep a look out for the silent evidence and "Mind the Information Gap"!

Do you actively think about the areas you're not looking at?

Thursday, 21 January 2010

Analogy! He's got an "ology" (almost)!

I find myself using analogy more and more these days. I like it for a number of reasons:-

  • It helps get the message across
  • I like humour
  • I like to approach the problem from different angles


Getting the message across
Sometimes the message is difficult. Not everyone is tuned-in to your way of thinking. Tor Norretranders wrote in The User Illusion that people need to go through a "synching" process before "real" communication takes place. This is a bit like a protocol handshake to establish that both sides are ready to transmit and receive.

Analogy plays a part here. It helps find common ground, a shared history, context, experience or whatever you might want to call it.

Humour and off-the-wall ideas
I'm a divergent thinker. It's in my nature to look at problems from different angles. I work a lot with strategy decisions - sometimes that means taking a new/radical/different view about something or wanting to try something "new" out. One of these "problems" is communication: getting people on board, getting your message across and gaining acceptance for it.

Humour can be like an ice-breaker for new ideas, a door-opener. It allows the salesman to get his foot in the door and start a dialogue. Humour is not the subject or main attraction - it's just the warm-up routine or the introduction.

Traps
There are pitfalls or traps associated with analogy. It should be clear what the subject/purpose is - that it isn't the analogy itself - "why is he talking about comfy chairs again?" There is a danger with being too divergent, too tangential, that you actually lose the audience....

This has happened in the past and will probably happen in the future. This is where perseverance comes in (and trying alternative approaches.) If you want to get your message across you have to work at it!
I sometimes joke that if someone isn't agreeing with me then I haven't discussed the topic with them enough.
Sounds arrogant?
Absolutely not - you find common ground - the discussion becomes a dialogue (in the ancient Greek sense, where the group discovers insights that the individual cannot achieve alone) - and everyone learns together.

So, when I think about testing problems and concepts I use analogy a lot. I've used it in the past when thinking about testing myths, communication and I'll be using it (probably coupled with a bit of left-field humour) in the future.

I have plenty of humour candidates and problems/concepts that could do with a new look - so keep an eye out for the analogy-ometer ticking away!

So, when did you last look at something from a different angle?

Thursday, 7 January 2010

Carnival of Testers #5

This is a short summary of some of the highlights of the past couple of weeks (from just before Christmas to a little after New Year). Things were a bit quieter than usual but there was still plenty of output.

As usual, with this time of year, there was a range of well-wishing, reflections and assessments of the year-gone-by and looking into the coming year.

  • Seth Eliot wrote a great commentary on the debate/discussion that has been ongoing around code coverage, involving Matt Heusser, Alan Page & BJ Rollison.
  • Chris McMahon reflected on the past decade and took a look ahead.
  • Quick Testing Tips was churning out plenty of good ideas - as though there was no holiday - they pointed out a useful resource for some cheatsheets, here.
  • Matt Heusser floated the idea of writing about testers, here.
  • On the subject of writing, Lanette Creamer wrote about respectful discourse here and here.
  • BJ Rollison wrote an interesting piece on thinking about test design, with some interesting discussion in the comments, here.
  • Cem Kaner summarised a collection of his articles, with some recent additions, here.
  • uTest gave a recap on their guest-blogger series.
  • Mark Waite mentioned his use for Google Alerts, here.
  • rocketkeis posted an online list of some of the Beautiful Testing chapters, here. If you haven't read it (I'm hoping to do a review soon) then you can check out some of the chapters here.
  • Nathalie reflected on Eurostar2009 and some of the learnings she was putting to use.
  • Rob Lambert wrote an article about "old" being the new "new" here.

The list is more or less in reverse chronological order and, though I don't usually say it, I have saved the best for last. Jon Bach wrote a great article here. I'm not saying any more - if you haven't read it, go do that now.

All are worth a read, so if you've missed any of these then take a peek.

Until next time...

Wednesday, 6 January 2010

I foresee problems...

It was a toss-up between a couple of topics to start off the new blogging season (Series X, episode I) - one was quite heavy and another was lightweight-serious-amusing - but I haven't got my act sorted out on them yet - notetaking is still ongoing.

Then, a flyer popped through the letterbox today. It seemed to chime with the constant background "noise" of snake oil, best practices, my dad's bigger than your dad, etc, etc. The translation reads:
Born with spiritual powers. I am known around the world. I can solve all of Your problems, for example with love, health, family problems, business, legal matters, financial transactions and weight loss. You will know how to defend yourself and your family from enemies and how to get your nearest and dearest back into your life. Guaranteed results.
French and English speaking.
Just when I'm pondering some strategic changes in my work area this pops up!

Cool!

I'm going to call the stakeholders and get this guy in to predict where the bugs are! Or even better, to tell the developers before the code is written... Guaranteed results? "Everyone's a winner", as Del Trotter would say!

On the other hand, he might have spiritual powers, but maybe I should follow up on his protocol experience. Damn, there's that questioning tester alter-ego popping up again...

I can't predict the future but I can be fairly sure of one thing for the coming year: Challenges, problems, bugs to be found and maybe the odd giggle along the way. Bring it on!

Got any good predictions?