I recognise the type of scenario that James writes about: a project manager, stakeholder or even a fellow tester being obsessed with test case statistics.
I'm even writing about this from the other side, about why the non-tester (manager/stakeholder) might be interested in test case (TC) counting... But I'll leave that to another post.
I think it's a case of the test case being the lowest common denominator: it's an easy measure to ask for, a way to get a handle on "what's happening". The key for the tester, though, is to convey the limited value of such a number or measure (although the first step for the tester is to understand the problems and limitations of TC counting).
What are the figures saying, and, just as importantly, what pieces of the picture are they not showing?
I purposely tell my teams not to talk to me in terms of TC numbers. This is quite challenging at first, but once they understand the information I'm interested in (the feature, aspects of coverage, risk areas, limitations of the model or environment, built-in assumptions and other aspects of 'silent evidence'), it actually generates much more creative and searching thinking, I believe.
How might a conversation with a stakeholder about test case statistics go? Let's try a thought experiment and see what problems such a conversation might hide or show.
Stakeholder (SH): "How many TCs have been executed, and how many successfully?"
Tester (T): "Why do you want to know?"
SH: "So I can gauge progress..."
T: "What if I said all but 1 TC had been executed successfully?"
SH: "Sounds good. What about the 1 remaining one?"
T: "That /could/ be deemed a show-stopper - a fault in installation that could corrupt data"
SH: "Ok, have we got anyone looking at it?"
T: "Yep, 2 of the best guys are working on it now."
T: "It might cast a question mark over a bunch of the 'successful' TC's that were executed with this potential configuration fault"
SH: "Mmmm, what's the likelihood of that?"
T: "That's something we're investigating now. We've identified some key use cases we'd like to re-test, but it really depends on the fix and the extent of the code change."
SH: "Ok, thanks."
T: "During our testing we also noticed some anomalies, strange behaviour that we think should be investigated further. This would mean some extra testing."
SH: "Ok, that's something we can discuss with the other technical leads on the project."
The stakeholder now has a (more) rounded picture of the problems with the product. He's also getting feedback that it's (potentially) not as simple as fixing one bug so that the last remaining TC passes. Concerns have been raised about the tests already executed, as well as the need for more investigation (read: maybe more TCs), and all of this without focussing on test case numbers.
Not all conversations will go like this, of course. But maybe it's a case of not talking about test cases at all, or of talking about them while making it clear that they're only a fraction of the story.
There is a whole different angle about getting stakeholders to understand the limitations of the units we call test cases. One of James Bach's presentations comes to mind here.
Got any strategies for telling the whole story rather than just the numbers?