Sunday, 19 December 2010

Thoughts on Test Framing

 #testing #softwaretesting

 Some thoughts on the ideas presented in Michael Bolton's post.

Firstly, I like it. A lot. I commented on the post about a day after it was published, but I knew there was more thinking I needed to do on the topic. So here we are, over two-and-a-half months later (yes, I think slowly sometimes - sometimes one-dimensionally and sometimes in a very divergent way, meaning I create lots of strands/threads to follow up - but I do continue thinking, usually!)

Powerful Elements
Showing your thinking is crucial to any activity that involves questioning, observation, hypothesis forming, evaluation, reporting, learning and feeding back under constraints of time, cost and equipment availability (this is testing, if you didn't recognise it already).

Showing the provenance of your test ideas is a powerful communication tool - for yourself, your team and any stakeholder. Doing this with many of your ideas will naturally show whether the thoughts and considerations you'd intended to include were indeed included. (What?) Ok, example time.

Pseudo-Example
A simple example here is to construct a test idea around a "happy path" of a function, the most basic and usual case for that function. This will then lead to thoughts around:
  • The use case itself (variants and alternatives);
  • Platform (how many?);
  • Environment (Small with many simulated elements going right through to a full customer acceptance configuration);
  • Application configuration (Based on what? Observed or projected?);
  • User data (Based on what? Supposed, observed or projected?);
  • User behaviour (Based on what? Observed or projected?);
I work in situations where these are distinct variables - not so much overlap - but in some shops some of these could be less distinct.
Evaluating and selecting which of these elements to use, include and vary is part of the test design of the test idea, in conjunction with the test strategy. So, in this example we can see (hopefully) that there is a range of test ideas that could be distilled - the sketch below gives a feel for how quickly that range grows.
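As a rough sketch of my own (not part of the original example), here's how those distinct variables multiply into candidate test ideas - the dimension values below are invented purely for illustration:

from itertools import product

# Hypothetical values for the dimensions listed above - illustration only.
dimensions = {
    "use case": ["happy path", "variant A", "alternative B"],
    "platform": ["platform 1", "platform 2"],
    "environment": ["small/simulated", "full acceptance config"],
    "app config": ["observed", "projected"],
    "user data": ["supposed", "observed", "projected"],
    "user behaviour": ["observed", "projected"],
}

# Every combination is a *candidate* test idea - 3*2*2*2*3*2 = 144 here.
candidates = list(product(*dimensions.values()))
print(len(candidates), "candidate test ideas before any selection")

The point isn't to execute all 144 combinations - it's that evaluating, selecting and varying (the test design and strategy part) is what turns that raw space into a manageable set of test ideas.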

What More?
In the example I alluded to elements of choice/decision based on risk, priority, test design technique, strategy and availability. Sometimes this is about scoping the area for test, what to include (for now) and exclude (for now).

When I am telling the story of my testing I like to emphasise the elements that could be missed - areas I've skipped and the reasoning behind that, areas that maybe were discovered late that could have an impact on the product and for which it might be useful to recommend additional testing.

So, I want my story to be very explicit about what I didn't do. There is nothing new in this - traditional test reports included this information over 20 years ago. But I want to link it into my test framing in a more visible way. If I start talking about all the things I didn't do, it's easy for a stakeholder to lose focus on what I did do.

So, as well as the explicit chaining of test framing showing the roots and reasoning behind the test idea I also want to include (explicitly) the assumptions and decisions I made to exclude ideas (or seeds of new ideas). In the context of the test frame this would represent the a priori information in the test frame/chain (all of the framing (so far) is a priori information).

But, there might also be scope to include elements of ideas that would affect the test frame (or seeds for new ideas) with information discovered during the testing. Then from a test frame perspective it could be very useful to include this a posteriori information.

Examples of a posteriori information would be feature interactions that were discovered during testing (not visible before) but didn't get covered before testing stopped. There might be aspects where test ideas couldn't be 'finished' due to constraints on time, feature performance (buggy?), third-party software issues (including tools) that couldn't be resolved in time, or some other planned activity that didn't get finished (a feature element that needed simulation, for example).

I usually think of these aspects that are not 'included' in the testing story as the silent evidence of the test story. Making this information available and visible is important in the testing story.
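One way to keep that silent evidence visible - a sketch of my own, not a prescribed format - is to record it next to the test frame itself, with the a priori assumptions/exclusions and the a posteriori discoveries as first-class fields:

from dataclasses import dataclass, field

# Sketch only: a hypothetical record for one test frame, keeping the
# 'silent evidence' alongside the idea and its reasoning.
@dataclass
class TestFrame:
    idea: str
    reasoning: list                                    # the chain/provenance behind the idea
    a_priori: list = field(default_factory=list)       # assumptions and exclusions made up front
    a_posteriori: list = field(default_factory=list)   # discovered during testing, not covered

frame = TestFrame(
    idea="Happy path of the install function",
    reasoning=["most common use case", "agreed priority with stakeholder"],
    a_priori=["platform X excluded for now - no lab availability"],
    a_posteriori=["feature interaction with upgrade found late - not covered"],
)
print(frame)

Reporting from a structure like this makes it harder for the exclusions and late discoveries to silently disappear from the testing story.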

There are aspects of these assumptions and choices that are implicitly in the test framing article, but for me it's important to lift them forward. (Almost like performing a test framing activity on the test frame post.)

Maps
As a side-note: Test framing as a form of chaining/linking around test ideas fits well with mind maps, threads and test idea reporting. But that exploration is for another time...

So... Have you been thinking about test framing?

Risk Compensation and Assumptions

 #softwaretesting #thinking #WorkInProgress

This is very much a thought-in-progress post...

Cognitive bias, assumptions, workings of the subconscious. They're all linked for me in one way or another. Then the other day, in a Twitter chat with Bob Marshall and Darren McMillan, I was reminded of a potential link to Risk Homeostasis (sometimes called Risk Compensation).

Risk Homeostasis is the tendency for risk in a system to have some form of equilibrium, so that if you devote more attention to reducing risk in one part of the system you increase the risk in another part. It's a controversial theory, but there is an intuitive part that I like: devoting increased attention to certain risks inevitably implies devoting less attention to other risks. In economic terms, that attention has an opportunity cost - and with it the potential for increased risk elsewhere.

This also ties in with an intuitive feeling that if you are devoting less attention in one area the "risk" components there could increase without you noticing. This is almost a tautology...

I first encountered risk homeostasis in Gladwell's description of the investigation into the Space Shuttle disaster (in What the Dog Saw) and I see the similarities to testing.

Stakeholders and Risk Compensation?
A software testing project example. Taking the project/product/line manager's view about testing:-

A correction release is coming up and a group responsible for release testing is performing a release test activity. That, in some minds, could be seen as a "safety net" - could the PM be influenced in the level and amount of testing "needed" prior to the release test activity? I think so, and this is risk compensation in action - so-called safety measures subconsciously allow for increased risk-taking elsewhere.

It could apply to the new unit and function/feature testing needed for a "hot fix" - an "isolated" decision about what's needed for this hot fix. PM thoughts: ok, let's do some extra unit/component test as we can't fit in the functional test (which is dependent on a FW update that can't be done at such short notice), we'll do extra code reviews and an extra end-2-end test. Seems ok?

Then come the other deliveries in the "final build" - maybe one or more other fixes have had this "isolated decision making". Then, for whatever reason, the test analysis of the whole is missed or forgotten (PM thoughts: it's short notice and we're doing an end-2-end test) - i.e. nobody puts all these different parts into context with a risk assessment of their development and "the whole caboodle". From the PM's perspective there was a whole bunch of individual testing and retesting, and there will be other testing on the whole - that's enough, isn't it?

Assumptions
Where do assumptions come into this? Focussing on certain risks (displacing risk) results in some form of mitigation (usually). But the sheer act of risk mitigation (in conjunction with risk compensation) implies that focus is reduced elsewhere. The dangerous assumption here is that the mitigation activity is coping with risk, when in certain cases it's just 'not noticing' risk.

The act of taking action against risk (mitigation) opens up the possibility of trusting (and not questioning) our assumptions. But how do we get fooled by assumptions? Two forms spring to mind:

  • The Black Swan. Something so unusual or unlikely that we don't consider it.
  • A form of the availability heuristic. We've seen this 'work' in a previous case (or last time we made a risk mitigation assessment) and so that's my 'reference' - "that's just how it is" - "all is well".

An everyday example of Risk Compensation
Topical in the northern hemisphere just now: driving in the snow with winter tyres and traction control. I see it a lot these days (I even do it myself) - when moving off, don't worry too much about grip - just let the car (traction control) and tyres find the right balance. So the driver is trusting the technology. When I used to drive with winter tyres and no traction control, there was a lot more respect about accelerating and 'feeling' the road, i.e. using feedback.

Maybe that's where risk compensation plays a part. It reduces the perceived importance of feedback.

Coping with Risk Compensation and Assumptions
The most important aspect in coping or dealing with this is awareness. Any process that has a potential for cognitive bias is handled better with awareness of that process. Be alert to when feedback is not part of the process.

When we recognise situations where short-cuts are being taken, or where more emphasis is being put in one area, we should ask ourselves:

  • How does the whole problem look? 
  • Am I missing something? 
  • Am I aware of my assumptions?
  • Am I using feedback from testing or other information about the product in my testing?

Have you noticed any elements of risk compensation in your daily work?

Friday, 3 December 2010

Carnival of Testers #16

November was cold, much colder than normal - but there is no such thing as bad weather, just inadequate clothing (there are several test analogies there!), so whilst deciding on the right clothes I read and was entertained by several blog posts...

  • First up this month was Michael Kelly with a reminder that sometimes it's necessary to "just say no!"
  • Ever get the feeling that explaining any test-related thinking to a non-techie is tricky and full of traps? If so, then you'll recognise something in this cartoon from Andy Glover.
So, have you recognised anything so far?
  • On an aspect of recognition Albert Gareev wrote about a typical trap that testers can occasionally fall into, inattentional blindness. Recognising it, and understanding it helps your learning.
  • Related to traps, bias and fallacies then black swans have been known to surface. Have a look at this 'humble' story from Pradeep Soundararajan and his sources of learning.
  • Of course, sometimes nothing will disrupt you and your testing. Then, maybe you're in the zone, as Joel Montvelisky encourages us to recognise and learn about the contributing factors.
  • A short note from Peter Haworth-Langford on his first year of blogging. Happy blogging birthday!
  • The guys at the test eye produced a two-page sheet of aspects for consideration when testing a product. It's partly based on earlier work by others, but take a look and see what you recognise and if there's anything new to you. 
  • Communication, communication, communication. Take a look at Pete Walen's post on some communication aspects related to documentation.
  • A nice example of the availability heuristic in Gerald Weinberg's account of The Sauerkraut Syndrome. Recognise it?
  • A view on testing debt and some tips to counteract it came from Martin Jansson.
  • Weekend Testing landed in the Americas during November. Here are some thoughts from one of the organisers, Michael Larsen.
  • Bob Marshall raises some pertinent questions about the state of Agile - thinking way outside the tester's box. Recognise anything?
  • Some more interesting questions raised by Mark Crowther on burndown. You must recognise something here.

I'm sure there was something there that all would recognise, and maybe something new.

Until the next time...

Wednesday, 17 November 2010

Deliberated Exploration - A Walk in the Woods

There is a view of exploratory testing that believes it is spontaneous (as in combustion) - one just turns up at a keyboard (or other lab/test equipment) and "just does it"!

Nice, if you can get it.

But what if your system under test is not trivial - in terms of data set-up, configuration, simulated subsystems, user profiles, behaviour or network elements? Does this mean that you can't have an exploratory approach?

Well, of course you can.

Think of it as going on a walk/hike/trek in an area you haven't covered before. To actually get to the "start" of the hike you might use a series of public or scheduled transport (a pre-determined schedule) to get to the start point.

I emphasize the uncertainty around the "start" as you might move this depending on how you defined where the trip should start.

You might decide to start your walk when stepping off of the bus - the bus is a pre-determined script to get you to a point where you will start - but perhaps you notice something odd on the journey; maybe the bus takes a route that isn't signposted as your destination - is it a pre-determined route or a diversion, will the journey take longer ... The questions start.

Using a script to get to your starting point doesn't mean that you can't ask questions - remember, this is testing with your eyes wide open as opposed to eyes wide shut. You finally get to your start point - where the trek can begin.

Just like an exploratory approach - there might be a whole series of set-up and configuration to get the system into a state ready to start working with your test ideas. It might be that you need to construct a simulation of a network element to start testing in the desired area. All of this may take some time to achieve - a whole series of hoops and hurdles.

In some of the systems that I work with this is exactly what we do - it's about a certain design (simulation, config and test ideas) - getting to the starting point where we can start working with our test ideas.

Sometimes the test ideas yield more investigative tracks than others - but that is exactly where we "intend" the exploration/testing to start. But like any (good) tester we notice things on the way, we raise questions about our simulation assumptions and about the problem/test space in our target area - and we learn about the product (along the way and in our target areas), we learn about our test ideas, whether we're missing something vital in our simulation data or configuration, and how the whole might relate to a real end-user.

That was our walk in the woods, and the scripted element is very much a part of the trek/exploration. The scripted part is setting the conditions for the exploration - an enabler - whether I am questioning the set-up as we go is a matter of preference, time, priorities and feedback from previous testing.

Do you observe or get curious on the way to your walk in the woods?

Wednesday, 3 November 2010

Carnival of Testers #15

A lot of conferences this month - a real feast - almost an Oktoberfest. Or was it a case of watching repeats on TV?

If you're a festival fan, a watcher of repeat TV or on the look-out for something new there was something to be had this month.

Repeat Questions?
  • Pradeep Soundararajan wrote a piece about labels in testing, specifically the domain label. Read the post and the comments, here.
  • A question about measuring tester efficiency prompted Shmuel Gershon to not follow the numbers game and show just how difficult that is to answer.
  • Dave Whalen gave his view on the QA vs QC in testing topic, here.
  • Testers who code or not was a topic of a James Christie post and he even managed to get a football reference in!

Agile Testing Days
  • Markus Gärtner was doing some sort of blogging marathon producing reports from many of the talks he visited: Lisa Crispin's talk on agile defect management, Janet Gregory's talk on learning for testers and Rob Lambert's talk on structures killing creativity.
  • Zeger Van Hese made some write-ups, this post on the day 3 talks could make you wish you'd been there.
  • Rob Lambert was giving his alternative angle on conference with a series of video snippets, here, here and here. And to show he hadn't forgotten to blog he also did a write-up, here.
STARWest
STPCon
  • A fun presentation of some aspects of combinatorial testing was presented by Justin Hunter.
  • A sense of variety at STPCon was had from reading Matt Heusser's roundup, here.
SWET
The first Swedish Workshop on Exploratory Testing was held this month. It generated a certain amount of buzz and energy, with several people setting down their thoughts:
  • A view from Simon Morley, here; from James Bach, here; a semi-transcription from Rikard Edgren, Henrik Emilsson and Martin Jansson; Ann Flismark's view, here; Christin Wiedemann's view, here; Oscar Cosmo's reflections (in Swedish), and here (in English).
Challenges
Challenges are popular and this month saw a couple of interesting ones.
Miscellany
  • A review of Naomi Karten's book on presentation skills was the subject of a Lisa Crispin post. I'm tempted to purchase...
  • A good reminder on the importance of interpretation was given by Zeger Van Hese, here.
  • If you're wondering how to think about test estimation in your project, you'll be set on the road to clearer thinking by reading Michael Bolton's post on test estimation in projects, here. It's the fifth instalment in a series well worth reading.
  • An angle on pro-active (or is it pre-emptive?) testing was given by Darren McMillan.
  • An intriguing idea of Test Proposals was presented by Martin Jansson, here.
  • It's been fascinating reading about James Bach's journey across Europe - this one from a stop in Romania. I just can't get the theme tune for Rawhide outta my head now...

Until the next time...

Sunday, 17 October 2010

Swedish Workshop on Exploratory Testing - #SWET1

Place: Högberga Gård
Date: 16-17 October 2010
Twitter hashtag: #SWET1 (a couple of early tweets with #SWET)


Attendees: Michael Albrecht; Henrik Andersson; James Bach; Anders Claesson; Oscar Cosmo; Rikard Edgren; Henrik Emilsson; Ann Flismark; Johan Hoberg; Martin Jansson; Johan Jonasson; Petter Mattsson; Simon Morley; Torbjörn Ryber; Christin Wiedemann


These are just some very quick reflections on this weekend's activity (more to come after further reflection) - SWET - for which I've seen two meanings (SWedish Exploratory Testers and Swedish Workshop on Exploratory Testing).

This was an inaugural peer conference on Exploratory Testing in Sweden.

Peer Conference
A first time attending a peer conference for me - I'd seen the rules beforehand, but didn't really know what to expect - either in terms of process or result. The great idea with a peer conference is that someone presents an experience report and then there is a moderated question and answer session.

Open Season
James explained that part of the rules was designed to stop disruption from fellow peers (maybe himself being a prime culprit in the past). The idea of having open season was both challenging and interesting - not just for the presenters but also their peers (James mentioned that reputations could be made or broken depending on what went on there - no pressure there then ;-) )

The open season was much longer than any of the presentations - I think the first session had an experience report of 20-30 minutes followed by 2-3 hours of questioning! Cool. Or? Think about some of the talks you may have attended at a conference - would the presenters stand up to 2-3 hours of questioning?

This was not just banal questioning either - this was serious searching and reflection from interested and passionate testers!

Learning, Buzz and New Faces
The peer conference is a great model for learning about and exploring people's experiences, opinions and views. This didn't just happen during the presentation and open season - it continued in the coffee breaks, meal breaks, during the after-session activities - I was still talking testing past 02:30. Everywhere you looked people were discussing testing and sharing opinions. A great vibe and buzz from the event!

I was able to put faces to names I recognised from blogs and online forums and got to know a lot of new testing colleagues - for all of them I say: respect!

There was some offline discussion about blogs and I think the blogosphere will see at least two new blogs (hopefully soon) -  and that's something we can all look forward to - there are some great stories, thoughts, ideas and questions just waiting to get out there!

So, as you can maybe guess, I loved it. Special thanks to Michael for a lot of hard work with the arrangements and to Michael, Tobbe & Henrik for moderating duties.

Until SWET2! I can't wait.


View from the window on Sunday morning

BTW, Henrik E & Martin will be running the test lab at Eurostar this year. After meeting them I think the test lab is going to be pretty cool!

Friday, 1 October 2010

Carnival of Testers #14

September: apples falling off of trees, picking apples, apple pie, lots of new apple varieties in the local shops - sometimes you make an interesting, tasty discovery. Autumn's here (for me), leaves on trees changing colour - lots of different shades and nuances - just like some of the blog posts this month...

Old Regulars
Taste Sensation
  • If you've missed Elisabeth Hendrikson's piece on Agile wakeup call (backlash vs wakeup call), then go read it now.
  • Zeger Van Hese made my day, no, week, with his cautionary message about delivering the message. It wasn't Zeger but the content that was delivered AT FULL VOLUME.
  • And talking about things being broken (the delivery style in the previous case), Michele Smith highlighted this talk about things being broken - yes, I could see plenty of relations to my daily work. Can you?
  • Marlena Compton came out of the testing closet this month. Have you?
Different Tastes?
  • Test Framing was a new "term" to appear this month, with this post from Michael Bolton. BTW, Michael wrote a piece urging people to hire Ben Simo - but it doesn't need any plugs now :)
  • A dip into the subconscious and conscious distinction was observed in Blink Comparator Testing by Simon Morley.
  • Joe Strazzere made some good points about willingness to learn in testers, here.
  • A fish baking story was the way Shrini Kulkarni asked a good question.
Colour Explosion
  • A colourful post from a colourful tester. Was Dave Whalen testing with gumbo or gusto, or both?
  • Gerald Weinberg pondered questions around SW projects hitting the wall.
  • uTest posted an interview with James Bach which in itself made interesting reading, but if you haven't seen it then I'd encourage you to go and read the comments too!
Seasonal Favourites
Yes, there seems to be several conferences or meet-ups happening right now.
New Discoveries
Nice to see some new apples (faces) this month.
  • Darren McMillan used Jing to record an issue with Jing. I wasn't familiar with Jing but I am now, thanks.
  • Off to a blogging bang with a very readable self-analytical piece from Lena Houser.
Did you notice I avoided bad, rotten and half-eaten apple metaphors? Oh, the temptation...

Until next time.

Thursday, 30 September 2010

Iqnite Nordic 2010 - Day 1

The first day of the Nordic Iqnite conference was a varied affair.

The morning kicked off with a keynote from Mary Poppendieck looking at how "requirements" usually get it wrong - i.e. they are commonly not what the customer wants. I think I'd seen this message before but it was presented in quite a different way - from the perspective of a new start-up, where there often aren't many (if any) customers - and so the businesses that succeed are the ones solving a problem in a better way than the available competition. Examples used were the Google search engine and eBay, amongst others.

The message that I took away was that the requirement handling is often too formalised and kept as too much of an internal activity - and that loses contact and feedback with the customers. An alternative is to develop a requirement capture partnership between customer and supplier.

The talk was finished with an illustration of the push vs pull production method that I've read about in a previous Poppendieck book, Implementing Lean Software Development.

My talk, Test Reporting to Non-Testers, was next up in that room. I had an interesting intro from the track chair (Pål Hauge) who used a Norwegian metaphor - as I was coming on after Mary (and so using some of her momentum?) I was doing a "Bjørn Einar Romøren" (a famous Norwegian ski jumper) - however, I realised afterwards that a Brit being associated with ski jumping (Eddie the Eagle?) is maybe not the best association :-) I was glad that my cold-ridden voice lasted (it packed in by the end of the day). My message about not relying only on the numbers, with some example test reports, was understood, I think!

I attended other talks about requirement capture and team member make-up. The team member make-up alluded to a concept of "social capital" and emphasized the value of trust. I agree with the trust angle, whether in a team or as an individual and have pointed this out before in my encouragement for testers to build their brand.

I enjoyed meeting some interesting people and chatting about problems in daily work, some points about agile, scrum and kanban which I'll explore another time.

The speaking part of the day was rounded off with a keynote from Gunnar "Gurra" Krantz, a Swede who has competed four times in the round-the-world yacht race - his talk on team and process dynamics was very interesting. Not having any walls around the toilet (or even a curtain) in order to save on weight! Plus, it was placed next to the galley so they could always lend a hand!

Yes, software is a different world!

(I won't make the second day - customer meetings call :-(  )

Monday, 20 September 2010

Lessons in Clarity: "Is this OK with you?"

Sources of confusion, mis-communication and unclear expectations lie all around - more than you suspect if you look carefully. Communication and clarity are things I'm very interested in - although I like to hope all testers are. Well, a recent email exchange made me laugh at the lack of clarity - to me - or maybe the joke was on me!(?)

I (person A) sent an email to person B with a CC to persons C and D. The email contained two distinct pieces of information, item 1 and item 2.

There was a fairly quick reply - the reply went from person B to person C, with CC to persons A (me) and D. The content of the reply was very simple: "Is this OK?"

Mmm, did this mean all the information in the first email or only one piece of it (item 1 or 2)? Who knows? I didn't, but I was keen to see the reply (if any). Was it only unclear to me, all, or just some of us?

The next morning I saw the awaited reply from C: "Item 1 is OK with me." These emails were breaking some record in brevity - simplistic - reducing the information content without necessarily reducing the confusion-potential content. This reply left several possible readings regarding item 2 (for me):

  1. This is not OK.
  2. No opinion on it.
  3. Oblivious - the question from B was misunderstood.
  4. This is OK - then the question from B was known/understood (somehow) to apply to item 1.

And?
Who knows what this last reply meant - who cares (I hear you say) - but for me this was another example in potential sources of confusion.

One way I try to avoid this:
If I can't speak to the person and clarify the situation then I state my assumptions up-front, then they can reply with a clarification or correction - either way we reduce the scope for confusion and misunderstanding.

Note on brevity: It's OK if those involved are on the same wavelength. A great example of this is when Victor Hugo sent a telegram to his publisher after a new publication:
VH to Publisher: ? 
Publisher to VH: !
But, if in doubt, spell it out...

Sunday, 19 September 2010

New "What's it all about?"

Since I started this blog in April 2009 I have had the word "quality" placed somewhere near the top of the page - specifically in the "What's it all about" box. I've just made a change from:

The Tester's Headache is sometimes about issues connected to the balancing act of executing a test project on Time, on Budget and with the right Quality. Other times it's a reflection on general testing issues of the day.
to:
The Tester's Headache is sometimes about issues connected to the balancing act of executing a test project on Time, on Budget and trying to meet the right expectations. Other times it's a reflection on general testing issues of the day.
I've been doing a bit of sporadic writing/thinking about this recently - the occasional tweet and the odd comment on selected blog posts. There will be more to come (I hope - all to be revealed soon) on my thoughts around the word 'quality' in the testing-specific domain.

I'll still occasionally use the #qa hashtag on twitter - hey, I can't start a revolution/re-think from the outside can I? Or can I?

Thoughts, comments? Provocative? Thought-provoking or sleep-invoking?

Friday, 17 September 2010

Blink Comparator Testing vs Blink Testing

I saw an article on the STC the other day about a competition to go to the ExpoQA in Madrid.

I was immediately interested as I backpacked around Spain years ago and Madrid is a great place to spend time, but I digress...

I scanned the programme (top-down), got to the bottom and started to scan bottom-up and immediately saw an inconsistency. Then I started wondering why/how I'd found it. Was it a particular type of observation, test even?


I realised that when I started scanning upwards I had some extra information and that was why it popped out. I twittered a challenge to see if anyone else spotted anything - at the time I added the hashtag #BlinkTest. After further reflection I wasn't so sure that was correct - I started thinking about the blink comparator technique that astronomers used once upon a time.


It seemed like a blink test (that I've seen described and demonstrated) - but after some thought it seemed closer to a blink comparator test. Was there a distinction, what was lying behind why I'd spotted the inconsistency? Was one a form or subset of the other?


Could Blink Testing be a general form of information intake (and subsequent decoding), and Blink Comparator Testing a special form where the pattern, legend or syntax is specified and the scanning is aimed at spotting an inconsistency? Maybe. Hmm...




Blink vs Blink Comparator?

I'm going to think of a Blink Comparator Test as one that takes a particular type of priming (conscious) - here's a pattern (or patterns) and this is used to compare against the observation - maybe for the absence or presence of the pattern/s.

I'll think of a Blink Test as an observation without priming (subconscious) - although there will always be some priming (from your experience), but that's a different philosophical discussion - and it's the subconscious that waves the flag to investigate an anomaly.

Of course, both can be used at the same time.




Why distinguish?

It's about examining why I spot an inconsistency and trying to understand and explain (at least to myself) what process is going on. So, why the interest in understanding the difference? For me, that's what helps get to the bottom of the usefulness and potential application, and indeed recognising the technique 'out in the wild'.

This started out as looking at an anomaly (maybe even a typo) in an article, and now I have an addition to my toolkit - I probably already had it, but now I'm aware of it - and that means I have a better chance of putting it to good use. I can see uses in documentation, test observations and script design (to aid observation/result analysis). Cool!

Oh, the inconsistency I spotted was the use of (Sp)* in the time schedule, which wasn't in the legend. Simple stuff really, producing all that thinking...
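For what it's worth, here's a small sketch (my own illustration, not something from the article) of that same kind of comparison done mechanically - checking that every marker used in a schedule also appears in its legend. The schedule and legend contents are made up:

import re

# Invented example data: markers appear in brackets in the schedule text.
schedule = "09:00 Keynote (K) | 10:30 Track talk (T) | 13:00 Workshop (Sp)*"
legend = {"K": "Keynote", "T": "Track talk", "W": "Workshop"}

used = set(re.findall(r"\(([^)]+)\)", schedule))
print("used but not defined in legend:", used - legend.keys())    # {'Sp'}
print("defined in legend but never used:", legend.keys() - used)  # {'W'}

The priming here is explicit (the legend is the pattern), which is what makes it feel closer to a blink comparator check than a pure blink test.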

Thursday, 2 September 2010

Carnival of Testers #13

Number 13 edition of the carnival. Lucky, unlucky, is it all going to fall apart in a dis-jointed mess? Let's see...

No Breakages?

  • The August selection started off with a bang. Trish Khoo reflected on a mindset of breaking stuff as just being one of many tester mindsets.

(Look!) No Hands?

  • Anyone else out there that has helium hands? If so, or wondering what that is, then read Michael Larsen's view, here.

No Nonsense!

  • CAST got some reporting from Anne-Marie Charrett guest posting on the STC. Read the report on the first day, here.

Not Scary enough?

  • Do you have scary test plans, test ideas or are just a bit scary yourself? Catherine Powell ponders the value to the customer of a scary test plan.

Balls?

  • It's a good while since I juggled balls successfully. Juggling daily work is also filled with problems, as highlighted by Joe Strazzere.

No Testing?

  • Ajay Balamurugadas wondered about how or if he was testing when he wasn't testing. Confused, intrigued? Then go read the story, here.

No Competition?

  • Competition time again. Pradeep Soundararajan wrote about some of the competitions that have been occurring, with some plus points and some room for improvement.

Interviews!


No Comparison?
  • Tim Western described his thoughts about testing being an inter-disciplinary skill.
  • A comparison between BBST and RST was made, here, by Geir Gulbrandsen.
  • Thread-Based Test Management was coined by Jon Bach, here, and James Bach, here. Go read them, think about it, question and think some more.
No Punctuation? No Problem?
  • Well, made it to the end. This month's carnival was brought to you by the punctuation marks ! and ? Finally, a reminder of how punctuation can trip up testers is here.

Until next time ...

Monday, 23 August 2010

Strung up by the Syntax and Semantics

Ouch!! Sounds unpleasant doesn't it?

I've just re-read Eats, Shoots & Leaves; I originally read it a few years ago, then forgot about it after leaving my newly finished copy on an airport coach in some foreign land.

So, it was a joy to re-discover this book and read again over the summer.

What's all the fuss about? It's just punctuation!
That's the thing! A lot of testing communication - whether the requirement specification of the product or the reporting of the results and findings - usually happens in written format, and it's the written format that has potential pitfalls for misunderstanding and confusion.

So, when you receive a document/description/mail, is it always unambiguous? Is it ambiguous on purpose (i.e. unknown behaviour not documented) or by accident? Is your own writing similarly unambiguous?

True, it's possible for the spoken word to be ambiguous, but then we usually have the luxury of putting in a question to clarify our understanding.

So, the moral is: ambiguous? Start digging - is it accidentally unclear or is there a source of potential problems in your testing - the 'swampy area' on your testing map!

Superfluous hair remover
I'm getting into that age range where more hair is developing in "strange" places - ears - and less on the top of my head. I'd love to hear an evolutionary explanation for hairy ears - that were not needed until now... (Oh, I'm not being exclusive either: any creationist can chip in with the design explanation also. Agnostics: press the "don't know" button now.)

So when I read the phrase (superfluous hair remover) it immediately struck me on two levels - the age-related one that I just alluded to and the tester in me that wonders "what does that mean?" or "how do I interpret that?".
Do you see what I mean?
Is it talking about a hair remover device too many (a device that is superfluous) or is it talking about a remover of hair when you have more hair than is wanted (superfluous hair)?


What's the solution to this written problem? Well, it's a hyphen. A "superfluous hair-remover" might be something I'd see advertised in an "Unwanted items" newspaper column. A "superfluous hair remover" or "superfluous-hair remover" might be something seen in a "For Sale" column or store. Note, there's still room for ambiguity though!

Lesson?
Don't always assume that the writer of a product description, specification or requirement outline is writing exactly what they mean.

What, I hear you say - why wouldn't they write what they mean? Well, that comes down to ambiguity in the way someone (maybe not a technical writing specialist) describes the product - both the wording and the punctuation.

If it looks fishy/suspect - there's a good chance there is a need for testing investigation (and/or a bunch of questions).

Also, if you're writing reports - whether test progress or bug reports - make sure that the written word is saying what you think it is (or at least your interpretation - true, your interpretation may be skewed...).

My sympathies to anyone who is writing or speaking in their non-native language. I make the odd slip in Swedish - although not as bad as a colleague who whilst giving a presentation got the emphasis wrong, so what should have been "you can see from the following six pictures" came across as "you can see from the following sex pictures" - that's one way to catch the attention of the audience!

Watch those hyphens and pauses!

Wednesday, 18 August 2010

Good Testing & Sapient Reflections

It was a good while since I read James Bach's posts about sapient testing, here, plus some of the previous ones (here and here). I even contributed some observations on it in the testing vs checking discussion.

I have an understanding (my interpretation) of what is meant by sapient testing and it's something I can tune in with completely. In my book there is no room for a human tester to be doing non-sapient testing. Agree?

So, do I call myself a sapient tester?

No.

I work with a lot of testers and non-testers. Sapient testing wouldn't be any big problem for the testers I work with to understand and accept. The non-testers might be a different story - for the past 3 years in my current role I've been working on introducing intelligent test selection (initially applied to regression testing) within a part of the organisation.

I've made a big deal about thinking - about the testing we're doing, what we're doing and when, what we're not doing - and trying to get a grasp on what this means. For me, I've been contrasting this with the absence of these aspects. I haven't called it non-intelligent testing, I've just made a point of not calling it intelligent testing.

At the same time I've also started introducing the phrase "good testing" and implying that if we're not thinking about what we're testing (and why, what it means, what we're not covering, what the results say/don't say etc, etc) then we're not doing "good testing".
Of course, there's scope for people to say "I think about my testing" when they might not - I observe, question and discuss, and together we come to a consensus of whether they're thinking about their testing: what assumptions are involved, what are the observations saying, what are the observations not saying, should the test approach be changed, what else did they learn...
By using the phrase "good testing" I'm also priming the audience - especially the non-testing part - they want the testing to be good and now they're learning that this implies certain aspects to be considered. Eventually I hope they automatically connect it with the opposite of "just" testing.

So, changing an organisation (or even a part of it) takes time - just like changing direction on an oil tanker - there's a long lead-time. That's one of the reasons why I don't use sapient or sapience in my daily vocabulary - I'm using the "keep it simple" approach towards non-testers.

Pavlov or Machiavelli?
If that means using "good testing", "intelligent testing" or "thinking tester" to produce a type of Pavlov reaction from non-testers then I'm happy with that. Does that make my approach Machiavellian?

So, although I might be sapient in my approach and identify with a lot of the attributes of being a sapient tester, I do not call myself sapient. Does this make me a closet sapient tester?

Are you a closet sapient tester? Have you come out? Are you Machiavellian or otherwise nicely-manipulatively inclined?

Are you a 'good tester' or do you just do 'good testing'? Do you know or care?

Problems and Context Driving

I got involved in a small twitter exchange about problems, bugs and perception of the problems the other day with Andy Glover and Markus Gärtner.

Problem Perception
There was a view expressed that a user's perception of a problem is a problem.

I'd agree with this in most "normal" cases. But then there was a knock at the door and Mr. D. Advocate lent me his hat.

So putting the devil's advocate hat on I thought about how this view might not be enough. Or borrowing De Bono's phrase, "good, but not enough".

Cars
My way of looking at this for a counter-example was to think of driving a car. If I press a button, or depress a lever, expecting a certain response or action and something totally different occurs is this a bug? It might be, depending on what the button/lever was and the response.

If I'd pressed the button marked AC and the radio came on - I might think, "there's a problem here".

If I'd pressed a lever for the windscreen wipers and the indicator blinkers started then I might double-check the markings on the lever.

Blink Testing
BTW, Anne-Marie Charrett labelled this lever mix-up as an alternative form of "blink" testing!

Further Ramblings
I then started having flashbacks to my own encounters with cars in different countries and how I'd understood the issues:
  • Being stuck in an underground carpark at SFO, engine running, facing a wall, unable to reverse as I couldn't release the parking brakes - there were two and one was hidden. Manual to the rescue. Perception issue about where I'd thought the parking brake release would be.
  • Driving to San Jose (same car) and a heavy rain shower starting - how the heck do I switch the wipers on? Pull over and get the manual out. Perception problem about where the lever normally is.
  • Opening a taxi car door in Japan - taboo. Soon followed by the next of trying to tip the driver. Applying customs of one culture to another. My problem.
  • Driving someone round a roundabout (left hand side of road) when they'd only ever experienced driving on the right hand side. Were the muffled screams a "problem" or just a perception issue?
  • Trying to park in Greece - not on the pavement, unlike everyone else. Problem with local customs using my perception of a norm.
  • Sitting in the back of a taxi on the way to the office outside Athens going round a mountain pass whilst the driver is reading a broadsheet. Is my anxiousness a cultural thing?
  • Being surprised by overtaking customs in Greece. Flashing lights before the manoeuvre starts. Irritation and confusion. My problem, not being familiar with the local customs.
  • Driving round the motorways of France and Italy - minimal gaps between cars - my problem, perception problem?
  • Driving in Sweden - coming face-to-face with a moose at speed. My problem. Obviously the moose has right of way!
Moose Encounter - image by Steffe via Flickr


Problem vs Perception?
Problems can have fixed interpretations (this is an agreed issue) and areas of vagueness. Is this my perception or interpretation?

As testers I think we try to root out and understand what our perceptions are and then understand whether they are reasonable, on track or in need of further investigation.

Some of this might be working against a well documented expectation or at other times not knowing what to expect - I deal with both extremes.

One way I handle this is to keep an open (and skeptical) mind and then work out what my hypothesis (interpretation according to the observation) is and compare that with the product owner's.

Sometimes there's no clear-cut right or wrong interpretation.

But it helps to keep an open mind with perception (and borrow Mr. D.A.'s hat now and then).

Have you had any problems with perception lately?


Tuesday, 17 August 2010

The Testing Planet has a site!

I read a post from Rob Lambert the other day about The Testing Planet, about the next submission deadline, a short Q&A and how it's open for all types of contributions.

What I didn't really notice was that The Testing Planet has its own site now. I didn't really see this announced - although I've been on holiday, so it's very possible that I missed it. I think I saw a pointer to it in a tweet from the @testingclub.

If you haven't seen it yet go and check out the new site for The Testing Planet. There you can look through both editions (the first being the software testing club mag from February), click through the contents list (for the 2nd edition) and even click through the different tags in both editions.

Very nice looking site and layout!

Have you seen it yet? If not I'd recommend a look.

Creative Thinking Challenge Follow-Up

During the summer I read a fairly interesting book, Think! Before It's Too Late, by Edward De Bono. Brief review of the book below.

In one section on creative thinking in 'Knowledge and Information' - just after being quite negative about multiple choice exams, something I've reflected on before, (here) and (here) - he poses a couple of challenges. One of these challenges was the basis for the Creative Thinking Challenge I posed.

Go read the comments to the above challenge to see the different approaches - I give a brief view on the approaches later in this post.


My Approach
My approach was to adapt the "Gauss method" and apply it to even numbers. After experimenting a little I was able to see the pattern for even numbers. So, my approach to the problem...


First listing the two rows, one in ascending and the other in descending order:-

  2   4   6 ... 298 300
300 298 296 ...   4   2

Then finding the pair sums (300 + 2, 298 + 4, etc.), all summing to 302, and noting that there are 150 such pairs and that their total will be double the required amount, so I must divide by 2. Giving:

(302/2) * 150 = 151 * 150

As for the mental arithmetic, well I took 15*15 to equal 225, then added a couple of zeros (one for each one I'd removed from 150*150 - ie divided by 100 before, so I must multiply by 100 afterwards), giving 22500, then I must add the final 150 (to get to 151*150) giving 22650.
(we all know our 15-times table, right?
I, for some reason, know it's 225, but doing it long-hand in your head (?) would be 15*15 = 15*10 + 10*5 + 5*5 = 150 + 50 + 25 = 225)
This method is the so-called Gauss 'trick' or method. 
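As a quick cross-check of the arithmetic (my own addition), a couple of lines of code give the same result:

# Sum of the even numbers from 2 to 300 - should match 151 * 150 = 22650.
total = sum(range(2, 301, 2))
print(total)  # 22650
assert total == (302 // 2) * 150 == 151 * 150 == 22650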



The Gauss Method?
I saw a suggestion (on twitter I think) that this was the famous Gauss problem, a problem that he is supposed to have solved as a school child.

I found a reference that apparently shows the first known solution to this type of problem was presented by Alcuin of York; a translation from the Latin of the problem and solution is (here) - it's the one about the doves on ladders!


The Challenge Answers
Firstly, many thanks to those that took up the challenge and were willing to submit their answers! It was really interesting to see the different trains of thoughts. There were some different and inventive solutions - and that's what it's all about.



The first effort came from Trevor Wolter. I think he took the approach that I would've taken if I hadn't known the 'trick'. He wrote out a few observations, looked for a pattern, came up with a hypothesis and presented it. Right answer!

The next entry, from anonymous, either knew the formula for arithmetic progressions or looked it up. He/she then presented the modified version. I assume they tried it out. There is potential for mis-interpretation in the formula though - the 'n' value must be the highest even value in the series. If I'd said add up the even numbers from 1 to 301, there would be some pre-filtering needed.

Next up was Timothy Western, who presented a simple logic case and then did the actual calculation with the aid of Excel. Short and sweet - right answer!

Abe Heward gave a close rendition of the "Alcuin of York" method. Creative thinking and right answer!

It looked like the arithmetic progression angle was the basis for Stefan Kläner's answer. Right answer!

The "Alcuin" approach was in use by Ruud Cox - well demonstrated steps and the right answer!

The arithmetic progression approach was used by utahkay. Right answer!

Finally, Parimala Shankaraiah gave a detailed walk-through of the initial observations, spotting patterns, following a hunch about arithmetic progressions, looking that up, modifying the formula, trying that out and matching with the observation to present the final hypothesis. Good analysis, research and explanation. Right answer!


Summary
What's interesting about these answers is that there are different methods of approaching and solving the problem - there are no right or wrong approaches. I found this problem intriguing purely from the "creative thinking" angle - this is something that can be practised, to add as another tool in the testing toolbox. Something I'm going to explore more.

Another good point about showing your thinking is that it's a natural step for writing bug reports. So I reckon a lot of the answerers can write bug reports with enough detail.


Book Review
The book is part complaint on the current state of thinking, part re-hash of previous work and part self-promotion. A fairly ok read but I do have quite a few niggles with it. The book has had mixed reviews - although I read it quite avidly, which is slightly unusual for me!

There is a repeating statement through the work: Good but not enough. After a while it grates a bit. I understand his point and perhaps the continued repetition is the "drilling it home" method, but I'm not always receptive to those types of approaches.

The book gives short summaries of his previous work on Lateral Thinking, Six Thinking Hats and Six Value Medals. The summaries are fairly superficial - in theory not enough for someone to pick up and use in depth, although not impossible to grasp and use at some basic level.

Another niggle I have with the book is the lack of references or bibliography. De Bono states that this is because all his ideas are his own - however he uses Gödel's theorem to state a point about perception without referencing the theorem. Some references and bibliography would be nice, otherwise it just comes across as opinions without the back-up.

There are some reasonable points made, but when they're not backed up by references to the source material the reader has reached a dead-end in this book (if he wants to dig deeper into the source material).

I haven't read any previous De Bono work - I'll probably delve into the Lateral Thinking work at some point, but for now I have some pointers.

More on some creative thinking reflections connected to this book in another post.


Getting inspired about creative thinking for testing?

Sunday, 1 August 2010

Carnival of Testers #12

In the northern hemisphere it's been a hot July. That's both weather and blog output!

The month started out with introspection (or was it just a talking to yourself fad?)

Rikard Edgren was asking himself questions in the corridor whilst Jon Bach was having a team meeting with himself. Two very insightful posts worth reading!

In this time of humid weather and lightning strikes it was good to see some advice on lightning talks from Selena Delesie.

Michael Larsen wrote about the "Law of Raspberry Jam" and "Army of One" and what they mean to him.

Abe Heward was asking good questions related to survivorship bias of bugs and quiet evidence.

Most bloggers hit a writer's block once in a while. Rob Lambert describes how he regained his mojo, here.

Stephen Hill drew some interesting parallels to software testing after visiting an English courthouse.

In case you missed it the Software Testing Club announced the launch of the Testing Planet.

Making the release call for software, the problems and parallels with mysticism, were pondered by Curtis Stuehrenberg.

If you'd tried Ben Kelly's puzzle then you can read his reflections and what he learnt here.

Trish Khoo drew an interesting analogy between blog writing and shipping software, here.

Test case counting was a popular subject this month. James Christie wrote about his objections, several others pitched in and John Stevenson did a worthy round-up and addition to the subject.

More on the metrics angle was explored by James Bach, here.

Last, but not least this month was Markus Gärtner. He's been exploring aspects related to testing and quality with his latest post here.

Until next time...

Tuesday, 27 July 2010

Yet more Monty Python & Software Testing

I drift in and out of Monty Python analogies for Software Testing now and then. Here's a previous reference <link>.

Whilst falling asleep the other night I remembered a Phil Kirkham post about Monty Python and my comment on it, so I thought I'd indulge myself and do a little bit of Monty Python - Software Testing analogising(?) - just for therapy :)


Here's the "Life of Brian" based comment I made:

Coordinator: Crucifixion? 
Prisoner: Yes. 
Coordinator: Good. Out of the door, line on the left, one cross each. 
[Next prisoner] 
Coordinator: Crucifixion? 
Mr. Cheeky: Er, no, freedom actually. 
Coordinator: What? 
Mr. Cheeky: Yeah, they said I hadn't done anything and I could go and live on an island somewhere. 
Coordinator: Oh I say, that's very nice. Well, off you go then. 
Mr. Cheeky: No, I'm just pulling your leg, it's crucifixion really. 
Coordinator: [laughing] Oh yes, very good. Well... 
Mr. Cheeky: Yes I know, out of the door, one cross each, line on the left. 

Or, as applied to testing...

Coordinator: Scripted test?
Tester: Yes
Coord: Good. Over there on the shelf, one scripted test case each.
[Next Tester]
Coordinator: Scripted test?
Tester: Er, no, an exploratory approach please.
Coord: What?
Tester: Yes, they said I could come and do some testing with my eyes open.
Coord: Oh, I say, that sounds very nice. Well, off you go then.
Tester: No, I'm just joking my PM gets scared if we don't follow the script.
Coord: Oh, well in that case...
Tester: Yes I know, over there, one scripted test case each.


Life of Brian is a rich source:

Spectator I: I think it was "Blessed are the cheesemakers". 
Mrs. Gregory: Aha, what's so special about the cheesemakers? 
Gregory: Well, obviously it's not meant to be taken literally; it refers to any manufacturers of dairy products.

Or, as applied to testing...

Tester I: I think it was "Blessed are the certified".
Tester II: What's so special about the certified?
Tester III: Well, obviously it's not meant to be taken literally; it refers to any arbitrary label or categorisation.


There's lots of potential in the Meaning of Life too:

Three Project Managers (PM) and a management consultant (MC) discuss the state of affairs:

PM#1: Ah! Morning Perkins.
PM#2: Morning.
PM#1: What's all the trouble then?
PM#2: Test reports in disarray. During the night.
PM#1: Hm. Not nice numbers eh?
PM#2: Yes.
PM#1: How's it feel?
PM#2: Stings a bit.
PM#1: Mmm. Well it would, wouldn't it. That's quite a lot of extra information you've got there you know.
PM#2: Yes, real beauty isn't it?
All: Yes.
PM#1: Any idea how it happened?
PM#2: None at all. Complete mystery to me. Woke up just now... one piece of detailed analysis too many.
:
PM#1: Hallo Doc.
MC: Morning. I came as fast as I could. Is something up?
PM#1: Yes, during the night old Perkins (PM#2) had his test progress reports disrupted.
:
MC: Any headache, bowels all right? Well, let's have a look at this test report of yours then. [Looks at sheet] Yes... yes... yes... yes... yes... yes... well, this is nothing to worry about.
PM#2: Oh good.
MC: There's a lot of it about, probably a virus, keep warm, plenty of rest, and if  you're reporting progress remember to stick to statistics.
PM#2: Oh right ho.
MC: Be as right as rain in a couple of days.
PM#2: Thanks for the reassurance, doc.
:
MC: Jolly good. Well, must be off.
PM#2: So it'll just sort itself out then, will it?
MC: Er... I think I'd better come clean with you about this... it's... um, it's not a virus, I'm afraid. You see, a virus is what we doctors call very disruptive. So it could not possibly have made a positive impact on the quality of these reports. What we're looking for here is, I think, and this is no more than an educated guess, I'd like to make that clear, some multi-cellular life form of the genus *bonus extertus*. What we management consultants, in fact, call a good tester.
All: A good tester...!!
:
PM#3: A good tester - on this project?
PM#1: Hm...
PM#3: A good tester on this project...?
PM#1: Ah... well he's probably escaped from a zoo.


And remember, don't be complacent in testing:

A tester has been asked for his certification in testing:

Tester: "I didn't expect a kind of Spanish Inquisition." 

Certification advocate: "Nobody expects the Spanish Inquisition! Our chief weapon is surprise...surprise and fear...fear and surprise.... Our two weapons are fear and surprise...and ruthless efficiency.... Our *three* weapons are fear, surprise, and ruthless efficiency...and an almost fanatical devotion to the certification syllabus.... Our *four*...no... *Amongst* our weapons.... Amongst our weaponry...are such elements as fear, surprise.... I'll come in again."


I was working on the dead parrot and test tool vendors, but it got messy - so I'll stop there!


Thursday, 22 July 2010

Test Case Counting Reflections

James Christie wrote an interesting piece on some issues with test case counting, here. I started writing some comments but realised it was turning into a post in itself. So, here it is...

I recognise the type of scenario that James writes about - of a project manager, stakeholder or even fellow tester being obsessed with test case statistics.

I'm even writing about this from the other side - why the non-tester (manager / stakeholder) might be interested in tc counting... But I'll leave that to another post.

I think it's a case of the test case being the least common denominator - it's an easy measure to ask for - to get a handle on "what's happening". The key for the tester, though, is to convey the value (lack of, or limited) of such a number/measure (although the first step for the tester is to understand the problems and limitations with tc counting...)

What are the figures saying, but also what pieces of the picture are they not showing?

I purposely tell my teams not to talk to me in terms of tc numbers - this is quite challenging at first - but once they understand the information I'm interested in (the feature, aspects of coverage, risk areas, limitations with the model/environment, built-in assumptions and other aspects of 'silent evidence') then it actually generates a lot more creative and searching thinking (I believe).

How might a conversation with a stakeholder on test case statistics go? Let's run through a thought experiment and see how it might go, and what problems it may hide or show.

Stakeholder: "How many TC's have been executed, and how many successfully?"
Tester: "Why?"
SH: "So I can gauge progress..."
T: "What if I said all but 1 TC had been executed successfully?"
SH: "Sounds good. What about the 1 remaining one?"
T: "That /could/ be deemed a show-stopper - a fault in installation that could corrupt data"
SH: "Ok, have we got anyone looking at it?"
T: "Yep, 2 of the best guys are working on it now."
SH: "Good"
T: "But..."
T: "It might cast a question mark over a bunch of the 'successful' TC's that were executed with this potential configuration fault"
SH: "Mmmm, what's the likelihood of that?"
T: "That's something we're investigating now. We've identified some key use cases we'd like to re-test, but it really depends on the fix and the extent of the code change."
SH: "Ok, thanks."
T: "During our testing we also noticed some anomalies or strange behaviour that we think should be investigated further. This would mean some extra testing."
SH: "Ok, that we can discuss with the other technical leads in the project."

The stakeholder has now got a (more) rounded picture of the problem/s with the product - he's also getting feedback that it's (potentially) not just as simple as fixing a bug so that the last remaining TC will work. Concerns have been raised about the tests already executed as well as the need for more investigation (read: maybe more TC's) - all this without focussing on test case numbers.

Not all examples will work like this, of course - but maybe it's a case of not talking about test cases, or talking about test cases and saying this is only a fraction of the story.

There is a whole different angle about getting the stakeholders to understand the limitations about units that are called test cases. One of James Bach's presentations comes to mind, here.

Got any strategies for telling the whole story rather than just the numbers?

Creative Thinking Challenge

I've been reading about a form of alternative education (here) recently, and been thinking about the need for creativity (research) in my daily work. I've also been reading about approaches to thinking, and creative thinking.

I saw an example of creative thinking recently - how to quickly add the numbers 1 to 10, or 1 to 100. The method was quick but not necessarily intuitive - which is part of the point.

I started thinking about this to see if I could adapt the method to add the even numbers between say 1 and 300. I did the mental arithmetic fairly quickly and then checked my working with pen and paper afterwards.
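
(As an aside: if you want to double-check your own working afterwards, a quick brute-force check is one option. Here's a minimal sketch in Python - deliberately the "calculator" route, so it doesn't give away the quicker method hinted at above:)

  # Brute-force check: sum the even numbers from 2 to 300 inclusive.
  # This only verifies an answer - it says nothing about the quicker,
  # more creative shortcut that's the real point of the challenge.
  total = sum(range(2, 301, 2))   # 2, 4, 6, ..., 300
  print(total)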

There have been some interesting tester challenges doing the rounds recently, and I thought that a thinking challenge was in order.

So, the challenge is to add the even numbers (2, 4, 6,...) from 1 to 300.

Bonus points if you can do it in less than 30 seconds! Less than 10 seconds if using a calculator. (I'm not counting alternative thinking time here - that I'll leave to you to work on.)

If you want to post a comment with the answer here then you'll need to show your thinking!
(I'll delay comment publication a few days in case there are any latecomers who want to try it - without being tempted by the comments.)

There will be a follow-up post - where I identify the book and some other things I've learnt from it.

Friday, 16 July 2010

Presenting at Iqnite Nordic 2010

The Iqnite Nordic 2010 programme was just released, here.

I'm happy to be presenting "Test Reporting to Non-Testers".

It'll be primarily an experience report, with some real data and the problems that its presentation and interpretation have caused.

The presentation will build on themes around assumptions (in the report author and receiver), statistics (their use and misuse) and reading too much into the reported information.

By looking at problems that the examples have caused I'll be able to suggest some alternative approaches for testers when communicating with non-testers.

Communication with non-testers is a big interest of mine - not only do testers have to deal with modelling and assumptions of the product and stakeholders, they also have to handle the communication channels towards non-testers. This leaves lots of room for pitfalls (a whole different type of modelling and assumption awareness!)

Unfortunately, the mind-map I made when drafting this is huge - it needs to be printed on A3 to be readable. The result is that I'll only be touching on some areas in the presentation - but I think they're some of the key ones and worthwhile.

So there's room for a part II of the presentation!

Thursday, 15 July 2010

Testing Refresher from Sanderson of Oundle

Today I was "philosophising" with my 5 yr old daughter - wondering about why she thought the sun was so big if I could make a circle of it with my hand (she had a good answer) - and then we started discussing which of the moon and sun was bigger and reasons why one might "appear" bigger than the other. This generated a great number of questions and hypotheses.

This activity reminded me of "Sanderson of Oundle" - I'd recently read an article and a booklet on the subject. Just the title sounds intriguing, doesn't it?

Who?
F.W. Sanderson was headmaster of Oundle public school from 1892 to 1922 - and the story of his approach to teaching sounded both revolutionary and visionary, generating interest from pupils and parents alike.

After I read the article I searched and managed to track down a copy of a booklet that the school produced on the seventieth anniversary of Sanderson's death. It's only 24 pages but gives a great insight into a great enabler.
Sanderson said:
"We shall see what changes should come over schools. They must be built in a large and spacious manner, the classrooms being replaced by halls or galleries, in which the children can move in the midst of abundance, and do and make research: not confined to a classroom."
"The methods will change from learning in classrooms to researching in the galleries; from learning things of the past to searching into the future; competition giving place to co-operative work."

He had two main methods:

  • Let there be no work which is not in some sense creative.

To him, classrooms were just tool-sharpening rooms and the real work was done in the laboratories, library, museum, art room or power station. The creative work took place through research outside the classroom.

  • Let all work be co-operative rather than competitive.

He held creative work (through research) to be higher than examined work and he believed the best way to achieve this was by co-operation. His opinion was that everyone had areas in which they could excel, and it's for the teacher to help the pupil find those areas - not to constrain them to a norm. BTW, he didn't believe in bad students - just students for whom the teacher hadn't yet found the right angle or area of interest.

Why the interest to me?
I've recently "regressed" into the scientific approach - or maybe I'm a born-again scientist (mathematician actually)? I'm on a little bit of an evidence-based refresher... So I found the work that he was doing (and the way he was doing it) to be so enlightening - and with a lot of common sense.

Just take his two main methods:

Creative work:
This is so true for me as a software tester - I don't do things just because they're there or because that's the way it's always been done. I try to evaluate and understand what I'm doing, the selections I'm making, the assumptions that are there (both obvious and less obvious) to be able to give a full account of my actions. That gives me the peace of mind that I'm trying to do a good job.

Take an automated test case (or test suite) - I wouldn't run it without considering the reasons for running it, what information I'm getting out of it, what it's not telling me and even considering if this selection is still relevant. If I can cover those bases - and be able to discuss them with colleagues - then I'm making a creative input into the decision around that test.

Co-operative vs Competition:
Co-operation in project environments revolves very much around communication. That's not just writing a report on time or emailing someone to say that all plans have changed. It starts before that - framing the expectations around how the communication is going to work - a sort of trust- and confidence-building exercise. If X knows I'll treat all information in a professional way and be a good sounding board for him then I'm much more likely to know about any potential changes/problems sooner.

Research and being creative is very much key to most work connected with software development and testing - and co-operation is a natural part of this.

Competition in project environments can have several connotations. Most of the "competition", whether it be one-upmanship or the CYA syndrome, is usually connected with non-cooperation. Non-cooperation, in the sense of a non-functioning or closed two-way dialogue, does not help the project.


Takeaways
I loved reading about Sanderson of Oundle - I wish I'd gone to a school like that and with a teacher like that.

It was also a reminder for me about the importance and relevance of creativity and cooperation in my daily work. Without creativity there is no thinking (and vice-versa) and if I'm not being creative (or thinking) about my testing then I'm not doing good testing.

Discussing testing with my colleagues (cooperation) is a basic need for good testing.


Got any Oundle-like opinions?

Wednesday, 14 July 2010

EuroSTAR BlogSTAR Candidate Roll-Call(?)

Is the popularity/talent contest format crossing over to the testing world now? Maybe, and maybe that's a good thing. It has the potential to generate interest and involvement, publicity for the conference and a way-in and impetus for testers whose company budgets can't stretch to the conference.

EuroSTAR's search for a BlogStar sounds interesting - and worth taking a look at.

Is this the next X-factor or "The Testing Blogger's got Talent"? It's got the popular vote factor - and that's a really interesting angle.

There is the potential for a great showcase! There are a lot of great bloggers - some who didn't make it to EuroSTAR via the VideoStar competition - that I'd be really interested in seeing blogging. Note, I haven't refreshed myself on the video entries (and maybe my memory is a bit hazy) - but I voted for the 3 mentioned below.

So here's my short wishlist for entries.

Possible Line-Up?

Rob Lambert - always got something topical to say. His video entry was slick - something about roadsigns? :)

Pradeep Soundararajan - his "joker" video gave me the willies! His writing has many learning observations.

There are other Indian bloggers that could easily enter and generate a lot of interest - Parimala Shankaraiah and Ajay Balamurugadas to name but two!

Then there's the group that are regularly involved in WTANZ - Marlena Compton, Trish Khoo, Keis & Oliver Erlewein - all generating good work - with opinions aplenty!

Anne-Marie Charrett - her "don't vote for me" video was fun (or was it "yes but, no but"?) There's always a lot of intelligence in her posts.

I saw a tweet that Adam Goucher might think about entering - cool!

Peter H-L always has an interesting take on topics - I liked his suggestion about a group entry - but not sure the prize budget would stretch to that!

Markus Gärtner - he's already going but it could be fun to see an entry from him - although his blog output would probably be more than anyone else's ;-) But always worth reading!

This isn't an exhaustive list of interesting test bloggers - just the ones not in my current inattentional blindness blind-spot!


And From Leftfield
The search has been promoted as looking for: "Not for the faint hearted", "with an opinion", "not being afraid if everyone doesn't like what you've got to say". Well, prompted by the national origins of the EuroSTAR team, I'd say that calls for a "Father Jack" blog! Oooh, tempting...

Or maybe we should follow Peter's train of thought with a team entry - who wants to be Dougal, Ted and Mrs Doyle????

(In case folks are getting confused - these are Father Ted references.)

There's actually been many an absurd observation in "FT" that you could draw a testing angle on - "near, far away..."


Final Thoughts & Question
Interesting competition and a clever way to get people involved. Some folks might wonder about the opportunity cost involved in generating posts on another site - but just think of it as another sounding board for all those posts you have in prep! I know, it can be an effort though...

I do have a question about the voting though - I wonder how that will be done - and do hope it's not a simplistic poll. I mean, if I get my friends and family (who are not testers) to vote for me, does that really show my blogging as being more interesting to testers?

I'd be interested in seeing other factors in the voting: the most-read blog post, most-read blogger, most comments, most retweets, etc. Just thinking...


I'm tempted to enter - but have other things to sort out first. So, for me, the jury's out just now.


If I enter and somehow reach the conference I promise not to come to the gala in the persona of Father Jack! Maybe ;-)