I recognise the type of scenario that James writes about - of a project manager, stakeholder or even fellow tester being obsessed with test case statistics.
I'm even writing about this from the other side - why the non-tester (manager / stakeholder) might be interested in test case counting... But I'll leave that for another post.
I think it's a case of the test case being the lowest common denominator - it's an easy measure to ask for, a way to get a handle on "what's happening". The key for the tester, though, is to convey the limited value (or lack of value) of such a number/measure (although the first step for the tester is to understand the problems and limitations of test case counting).
What are the figures saying, and, just as importantly, what pieces of the picture are they not showing?
I purposely tell my teams not to talk to me in terms of test case numbers - this is quite challenging at first - but once they understand the information I'm interested in (the feature, aspects of coverage, risk areas, limitations with the model/environment, built-in assumptions and other aspects of 'silent evidence') then it actually generates a lot more creative and searching thinking (I believe).
How might a conversation with a stakeholder about test case statistics go? Let's walk through a thought example and see how it might unfold, and what problems it may hide or show.
Stakeholder: "How many TCs have been executed, and how many successfully?"
Tester: "Why?"
SH: "So I can gauge progress..."
T: "What if I said all but 1 TC had been executed successfully?"
SH: "Sounds good. What about the 1 remaining one?"
T: "That /could/ be deemed a show-stopper - a fault in installation that could corrupt data"
SH: "Ok, have we got anyone looking at it?"
T: "Yep, 2 of the best guys are working on it now."
SH: "Good"
T: "But..."
T: "It might cast a question mark over a bunch of the 'successful' TC's that were executed with this potential configuration fault"
SH: "Mmmm, what's the likelihood of that?"
T: "That's something we're investigating now. We've identified some key use cases we'd like to re-test, but it really depends on the fix and the extent of the code change."
SH: "Ok, thanks."
T: "During our testing we also noticed some anomalies or strange behaviour that we think should be investigated further. This would mean some extra testing."
SH: "Ok, that we can discuss with the other technical leads in the project."
The stakeholder has now got a (more) rounded picture of the problem(s) with the product - he's also getting feedback that it's (potentially) not just as simple as fixing a bug so that the last remaining TC will work. Concerns have been raised about the tests already executed, as well as the need for more investigation (read: maybe more TCs) - all this without focussing on test case numbers.
Not all examples will work like this, of course - but maybe it's a case of not talking about test cases, or talking about test cases and saying this is only a fraction of the story.
There is a whole different angle about getting stakeholders to understand the limitations of the units we call test cases. One of James Bach's presentations comes to mind here.
Got any strategies for telling the whole story rather than just the numbers?
That was a sensible conversation between adults who hadn't got too hung up on "test cases".
Sadly, that sort of rational exchange can be difficult because some people understand the arguments against test cases, but don't really care. As I hinted in my piece, they really are more focussed on the metrics than the reality; in fact the metrics /are/ the reality.
This is closely related to the phenomenon that James Bach has observed, i.e. "faking the testing". These guys are faking the project. It's one of the unfortunate side effects of project management becoming a specialist discipline, separate from the management of the work.
When I started out, the route to project management was serving your time as a developer, and/or business analyst. Project managers knew about development. Sadly, they sometimes didn't know much about project management, or even management, but at least they understood what their teams were doing.
Don't get me wrong. I don't think most project managers are faking the project, but enough to be a worry.
What can we do? I think it's all just damage limitation unless you can get involved early enough to shape the test strategy. You can then ensure everyone understands the need to shape the testing, and hence the measurements, in a way that is relevant, and that is helpful to the project, rather than a distraction and imposition.
It's easier if you've got good diplomatic skills, so you can persuade people without leaving them feeling you regard them as stupid. It's tough if you're just a contractor, or supplier test manager, parachuted in for one assignment. The sort of discussions that challenge the culture are easier to handle if both sides have a mutual history of constructive collaboration and respect.
First of all, I like your take on that. I see only one problem in your discussion there: I don't think the underlying question has actually been answered.
To me, the question is not how many test cases have been written/executed. It's not what progress has been made. The question, from a PM point of view, is:
When are you finished testing?
We know that this is an impossible question to answer. Counting TCs seems a simple way to answer it, though, because numbers suggest a linearity that doesn't exist in reality.
At the point where the SH says "Ok, thanks", I'd be expecting a best-case / worst-case prediction. For example, if you have to re-test x test cases then, barring any new problems, that will take you at least a certain number of hours/days. Adding some time for the likelihood of new bugs occurring will give the SH at least a ballpark figure as to when the tests will have been run.
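As a rough illustration of that kind of ballpark arithmetic, here's a minimal sketch in Python - the function name, rates and figures are all hypothetical assumptions, not anything from the post:

# A minimal sketch of a best-case / worst-case ballpark estimate for a
# re-test cycle. All names, rates and figures here are hypothetical.

def retest_estimate(num_tests, hours_per_test, new_bug_rate, hours_per_new_bug):
    """Return (best_case_hours, worst_case_hours) for a re-test cycle."""
    base = num_tests * hours_per_test                 # time just to re-run the tests
    best_case = base                                  # barring any new problems
    expected_new_bugs = num_tests * new_bug_rate      # allowance for new bugs turning up
    worst_case = base + expected_new_bugs * hours_per_new_bug
    return best_case, worst_case

# e.g. 40 tests to re-run, 0.5h each, roughly a 10% chance each one uncovers a
# new bug, and roughly 8h to fix and re-test each new bug found.
best, worst = retest_estimate(40, 0.5, 0.10, 8)
print(f"Ballpark: best case {best:.0f}h, worst case {worst:.0f}h")

The point isn't the precision of the numbers - it's that the estimate is expressed in time and risk rather than in raw test case counts.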
They might then assume that testing is finished, and you might educate them that there are additional risks you identified (like the one about the need to re-run some tests) that might create the need for more tests.
I don't think the number of test cases is the important bit, but rather the time and risk attached to them. That still doesn't give the whole picture, but I think it's a lot nearer than the pure numbers.
@James
Yes, I think there's a certain amount of getting "hung up" on the wrong thing. Not so much deliberate faking - but getting attached to the figures without understanding them.
Moving away from that takes time - nobody wants to hear that the way they've been working for years is wrong/inaccurate - so it's a case of leading people by the hand (sometimes indirectly) rather than pushing or pulling.
Thanks for the comments!
@Thomas
Yes, the view of the PM is usually "when are you finished?" - you're right, this discussion is usually going to accompany any "test case" discussion. "How much additional time is needed?" is a question I recognise :)
Thanks!
This post gave me a sense of deja vu as I recently spent a good fifteen minutes cornered by a rather famous person I won't name drop who was very concerned over this very topic. He'd read my comments via Drucker about communicating information rather than data and wanted to bounce some ideas off me.
We arrived at nearly the same point as you, although we went a little more theoretical. My theory about why people ask for data rather than information is that it's the same as people calling solutions "requirements." We're all engineers and mostly males who are tool users and problem solvers. Combine that with the tendency to rush ahead inherent in most type A personalities and voila, you have a mistrust of conclusions mixed with an over-reliance on stats.
@Curtis
Interesting theory, thanks!
A lot of the time there's a tendency to rush to judgement - statistics can be seen to facilitate this in some people's eyes - it's the easy answer (whether it's right or wrong doesn't get considered when it's a simple answer to the problem).
It's for the testers to show why some of these simple "answers" are "not fit for purpose".
Management, stakeholders and project managers will always ask for numbers and, like you said, it's up to us to manage it - I think it's a cultural and language change that testers have to accept. If we choose to oblige we have to make sure we give the complete picture and do the right storytelling instead of just stating "10 tests failed, 20 tests passed". It's all about presenting the data and supporting information.
@shilpa
Thanks for the comments.
I try and use the test case stats as back-up evidence - a footnote to the story rather than the main part of the story.
If I'm consistent and communicate in this way then the stakeholders soon learn (some sooner than others) that the story about the test effort is the focus and the stats are just something in the background - but I try and keep them in the background (not always easy...)
Getting the picture across without discussing the stats is hard work - but doable - then hopefully the receivers learn that they don't need the stats to get the picture either.
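Purely as an illustration of what "stats as a footnote" might look like in a written report, here's a minimal sketch - the structure, field names and figures are hypothetical, not taken from any real project:

from dataclasses import dataclass, field

# A sketch of a status report where the qualitative story leads and the raw
# counts sit in the background as back-up evidence. All names and figures
# are hypothetical.

@dataclass
class TestStatusReport:
    feature: str
    coverage_notes: list            # what has and hasn't been exercised
    risk_areas: list                # concerns, anomalies, open questions
    assumptions: list               # model/environment limitations, 'silent evidence'
    stats_footnote: dict = field(default_factory=dict)  # counts kept in the background

report = TestStatusReport(
    feature="Installer",
    coverage_notes=["Upgrade path from the previous release exercised",
                    "Clean install tried on the default configuration only"],
    risk_areas=["Possible data corruption during install (under investigation)"],
    assumptions=["Test environment uses a smaller dataset than production"],
    stats_footnote={"executed": 57, "passed": 56},
)
print(report.risk_areas, report.stats_footnote)

The design point is simply that the counts are one field among several, rather than the headline.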