Thursday, 30 September 2010
The first day of the Nordic Iqnite conference was a varied affair.
The morning kicked off with a keynote from Mary Poppendieck looking at how "requirements" usually get it wrong - i.e. they are commonly not what the customer wants. I think I'd seen this message before, but it was presented in quite a different way - from the perspective of a new start-up, where there often aren't many (if any) customers, and so the businesses that succeed are the ones solving a problem in a better way than the available competition. Examples used were the Google search engine and eBay, amongst others.
The message I took away was that requirement handling is often too formalised and kept too much as an internal activity - and that loses contact and feedback with the customers. An alternative is to develop a requirement-capture partnership between customer and supplier.
The talk finished with an illustration of the push vs pull production method that I've read about in a previous Poppendieck book, Implementing Lean Software Development.
My talk, Test Reporting to Non-Testers, was next up in that room. I had an interesting intro from the track chair (Pål Hauge), who used a Norwegian metaphor: as I was coming on after Mary (and so using some of her momentum?), I was doing a "Bjørn Einar Romøren" (a famous ski jumper). However, I realised afterwards that a Brit being associated with ski jumping (Eddie the Eagle?) is maybe not the best association :-) I was glad that my cold-ridden voice lasted (it packed in by the end of the day). My message about not relying only on the numbers, with some example test reports, was understood, I think!
I attended other talks about requirement capture and team member make-up. The team make-up talk alluded to a concept of "social capital" and emphasised the value of trust. I agree with the trust angle, whether in a team or as an individual, and have pointed this out before in my encouragement for testers to build their brand.
I enjoyed meeting some interesting people and chatting about problems in daily work, plus some points about agile, scrum and kanban, which I'll explore another time.
The speaking part of the day was rounded off with a keynote from Gunnar "Gurra" Krantz, a Swede who has competed four times in the round-the-world yacht race - his talk on team and process dynamics was very interesting. Not having any walls around the toilet (or even a curtain), in order to save on weight! Plus, it was placed next to the galley so they could always lend a hand!
Yes, software is a different world!
(I won't make the second day - customer meetings call :-( )
Monday, 20 September 2010
Lessons in Clarity: "Is this OK with you?"
Sources of confusion, mis-communication and unclear expectations lie all around - more than you'd suspect, if you look carefully. Communication and clarity are things I'm very interested in - although I like to hope all testers are. Well, a recent email exchange made me laugh at the lack of clarity - to me - or maybe the joke was on me!(?)
I (person A) sent an email to person B with a CC to persons C and D. The email contained two distinct pieces of information, item 1 and item 2.
There was a fairly quick reply - the reply went from person B to person C, with CC to persons A (me) and D. The content of the reply was very simple: "Is this OK?"
Mmm, did this mean all the information in the first email or only one piece of it (item 1 or 2)? Who knows? I didn't, but I was keen to see the reply (if any). Was it only unclear to me, all, or just some of us?
The next morning I saw the awaited reply from C: "Item 1 is OK with me." These emails were breaking some record in brevity - simplistic - and also reducing the information content, without necessarily reducing the confusion-potential. This reply raised other questions about item 2 (for me):
- Item 2 is not OK.
- No opinion on item 2.
- Oblivious - the question from B was misunderstood.
- Item 2 is OK - and the question from B was known/understood (somehow) to apply to item 1.
And?
Who knows what this last reply meant - who cares (I hear you say) - but for me this was another example in potential sources of confusion.
One way I try to avoid this:
If I can't speak to the person and clarify the situation then I state my assumptions up-front, then they can reply with a clarification or correction - either way we reduce the scope for confusion and misunderstanding.
Note on brevity: It's OK if those involved are on the same wavelength. A great example of this was when Victor Hugo sent a telegram to his publisher shortly after the publication of Les Misérables:
VH to Publisher: ?
Publisher to VH: !
But, if in doubt, spell it out...
Sunday, 19 September 2010
New "What's it all about?"
Since I started this blog in April 2009 I had the word "quality" placed somewhere near the top of the page - specifically in the "What's it all about" box. I've just made a change from:
The Tester's Headache is sometimes about issues connected to the balancing act of executing a test project on Time, on Budget and with the right Quality. Other times it's a reflection on general testing issues of the day.
to:
The Tester's Headache is sometimes about issues connected to the balancing act of executing a test project on Time, on Budget and trying to meet the right expectations. Other times it's a reflection on general testing issues of the day.
I've been doing a bit of sporadic writing and thinking about this recently - the occasional tweet and the odd comment on selected blog posts. There will be more to come (I hope - all to be revealed soon) on my thoughts around the word 'quality' in the testing-specific domain.
I'll still occasionally use the #qa hashtag on twitter - hey, I can't start a revolution/re-think from the outside can I? Or can I?
Thoughts, comments? Provocative? Thought-provoking or sleep-invoking?
Friday, 17 September 2010
Blink Comparator Testing vs Blink Testing
I saw an article on the STC the other day about a competition to go to the ExpoQA in Madrid.
I was immediately interested as I backpacked around Spain years ago and Madrid is a great place to spend time, but I digress...
I scanned the programme (top-down), got to the bottom and started to scan bottom-up and immediately saw an inconsistency. Then I started wondering why/how I'd found it. Was it a particular type of observation, test even?
I realised that when I started scanning upwards I had some extra information and that was why it popped out. I twittered a challenge to see if anyone else spotted anything - at the time I added the hashtag #BlinkTest. After further reflection I wasn't so sure that was correct - I started thinking about the blink comparator technique that astronomers used once upon a time.
It seemed like a blink test (of the kind I've seen described and demonstrated) - but after some thought it seemed closer to a blink comparator test. Was there a distinction? What lay behind why I'd spotted the inconsistency? Was one a form or subset of the other?
Could Blink Testing be a general form of information intake (and subsequent decoding), with Blink Comparator Testing being a special form where the pattern, legend or syntax is specified and the scanning is aimed at spotting an inconsistency? Maybe. Hmm...
Blink vs Blink Comparator?
I'm going to think of a Blink Comparator Test as one that takes a particular type of priming (conscious) - here's a pattern (or patterns), and this is used to compare against the observation - maybe for the absence or presence of the pattern/s.
I'll think of a Blink Test as an observation without priming (subconscious) - although there will always be some priming (from your experience), but that's a different philosophical discussion - and it's the subconscious that waves the flag to investigate an anomaly.
Of course, both can be used at the same time.
Why distinguish?
It's about examining why I spot an inconsistency and trying to understand and explain (at least to myself) what process is going on. Why the interest in understanding the difference? For me, that's what helps get to the bottom of the usefulness and potential application - and, indeed, recognising the technique 'out in the wild'.
This started out as looking at an anomaly (maybe even a typo) in an article, and now I have an addition to my toolkit - I probably already had it, but now I'm aware of it - and that means I have a better chance of putting it to good use. I can see uses in documentation, test observations and script design (to aid observation/result analysis). Cool!
Oh, the inconsistency I spotted was the use of (Sp)* in the time schedule, which wasn't in the legend. Simple stuff, really, producing all that thinking...
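As an aside, this kind of legend-versus-schedule check lends itself to a simple automated form of the blink comparator idea: treat the legend as the priming pattern and flag anything used in the observation that the pattern doesn't define. Here's a minimal sketch in Python - the schedule text, legend text and marker format are all invented for illustration:

```python
# A minimal "blink comparator" style check (illustrative only): the legend
# supplies the known pattern, the schedule is the observation, and we flag
# anything present in the observation but absent from the pattern.
import re

def undefined_markers(schedule, legend):
    """Return markers used in the schedule but not explained in the legend."""
    marker = re.compile(r"\(\w+\)\*?")       # matches markers like "(En)" or "(Sp)*"
    used = set(marker.findall(schedule))     # markers observed in the schedule
    defined = set(marker.findall(legend))    # markers the legend defines
    return used - defined

# Hypothetical example: "(Sp)*" appears in the schedule but not in the legend.
schedule = "10:00 Keynote (En)  11:00 Test workshop (Sp)*"
legend = "(En) = session in English"
print(undefined_markers(schedule, legend))   # -> {'(Sp)*'}
```

A Blink Test, by contrast, wouldn't be scripted against a known pattern at all - which is partly why the distinction matters.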
Thursday, 2 September 2010
Carnival of Testers #13
Edition number 13 of the carnival. Lucky, unlucky, is it all going to fall apart in a disjointed mess? Let's see...
No Breakages?
- The August selection started off with a bang. Trish Khoo reflected on a mindset of breaking stuff as just being one of many tester mindsets.
(Look!) No Hands?
- Anyone else out there that has helium hands? If so, or wondering what that is, then read Michael Larsen's view, here.
No Nonsense!
- CAST got some reporting from Anne-Marie Charrett guest posting on the STC. Read the report on the first day, here.
Not Scary enough?
- Do you have scary test plans, test ideas or are just a bit scary yourself? Catherine Powell ponders the value to the customer of a scary test plan.
Balls?
- It's a good while since I juggled balls successfully. Juggling daily work is also filled with problems, as highlighted by Joe Strazzere.
No Testing?
- Ajay Balamurugadas wondered about how or if he was testing when he wasn't testing. Confused, intrigued? Then go read the story, here.
No Competition?
- Competition time again. Pradeep Soundararajan wrote about some of the competitions that have been occurring, with some plus points and some room for improvement.
Interviews!
- uTest published a three-part interview with Ben Simo. Worth reading!
- The Software Testing Club started publishing a series of short interviews, featuring Lisa Crispin, Anne-Marie Charrett, Parimala Shankaraiah and Simon Morley.
No Comparison?
- Tim Western described his thoughts about testing being an inter-disciplinary skill.
- Geir Gulbrandsen made a comparison between BBST and RST, here.
- Thread-Based Test Management was coined by Jon Bach, here, and James Bach, here. Go read them, think about it, question and think some more.
- Well, made it to the end. This month's carnival was brought to you by the punctuation marks ! and ? Finally, a reminder of how punctuation can trip up testers is here.
Until next time ...