Monday, 9 January 2012

Model Based Testing: Some Assumptions and Traps

" #softwaretesting #testing #modelling "

Subtitle: Model Based Testing: What Aren't You Telling Me?

Full Disclosure
I worked on a project in 1996 that used 'formal' model based design and 'formal' model based testing (I led the test team). It was a very successful project: we found lots and lots of issues earlier in the design chain than we'd ever managed previously, and the resulting software did the job intended with very few downstream faults.

So, I've seen some examples of formal design and test models up close, and understand some of the benefits and issues they raise. I have been able to use them in the past to explore early design models, and find ambiguities early.

Now
In December I had a very interesting discussion about model based testing - I was there as a 'customer' and model based testing was being 'sold'. I made my skepticism known before the meeting by asserting that all testing is based on models - and that some people are better at observing, constructing, expressing and examining them than others. I also stated that I'd seen the power of modeling techniques that allow earlier testing, but that they are far from the complete picture.

Traps
Modeling and Model Based Testing have a certain lure, a siren call if you will. But it's not all plain sailing. So, one of my questions was, "what aren't you telling me?" This was quite naturally met with, "what?" "Ok, what assumptions have you built into the story I'm hearing?" "What?"

Trap #1: Are the assumptions about the model visible?

Completeness and Correctness
"Formal methods, and formal specifications" - these produce models that are "correct". Not correct in the sense that they are fault free, but usually syntactically correct - that the grammar holds together. However, "correct" is a sticky word. Say it in the presence of a manager and it's hard to undo the preconceptions that get stirred up.

Trap #2: There is a short distance from 'formal models are correct (grammatically)' to 'the model is correct'.
 
I probed this idea of completeness and found that, because the modelling separates business logic from implementation/platform logic, a whole group of implementation-type activities inevitably could not be modeled (installation, start-up/shutdown/restart, upgrade/update procedures - messy, but crucial for customers in operation).

The test model implementation I was looking at could be used in two main ways: (1) to test against a design model, and (2) to generate scripts for execution on a target system.
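To make those two uses concrete, below is a minimal sketch in Python. The state machine, states and actions are entirely hypothetical - they are not the models from that meeting - and the point is only the shape of the two activities: comparing a test model against a design model, and walking the test model to generate scripts.

# Hypothetical test model: a tiny state machine (state -> action -> next state).
TEST_MODEL = {
    "stopped": {"start": "running"},
    "running": {"stop": "stopped", "restart": "running"},
}

def compare_with_design(design_model):
    # Use 1: check the test model against a design model, reporting any
    # transition on which the two models disagree.
    differences = []
    for state, actions in TEST_MODEL.items():
        for action, expected in actions.items():
            actual = design_model.get(state, {}).get(action)
            if actual != expected:
                differences.append((state, action, expected, actual))
    return differences

def generate_scripts(start="stopped", max_steps=3):
    # Use 2: walk the test model to generate scripts (action sequences) for
    # execution on a target system. Any fault in the model is inherited here.
    scripts = []

    def walk(state, path):
        if len(path) == max_steps:
            scripts.append(path)
            return
        for action, next_state in TEST_MODEL[state].items():
            walk(next_state, path + [action])

    walk(start, [])
    return scripts

if __name__ == "__main__":
    design = {"stopped": {"start": "running"}, "running": {"stop": "stopped"}}
    print("Disagreements with the design model:", compare_with_design(design))
    for script in generate_scripts():
        print("script:", " -> ".join(script))

Note that the comparison can flag disagreements early (the kind of ambiguity-finding I'd seen in 1996), whereas the generated scripts simply reproduce whatever the model says - correct or not - which is exactly the assumption probed next.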

I questioned the need for the second step - "tell me why that is useful..." - which exposed an assumption that the test model is correct. My assertion was that it's useful but not necessarily correct. At this point my "salesman" asked whether I trusted my own devised "test cases" (i.e. are they "correct") more than those generated from the model. My answer was that I didn't believe they were "correct" either, just useful.

I stated that framing the discussion in terms of being "correct" - whether for a model (a legacy of a formally stated model being grammatically "correct") or for manually devised test cases (ideas) - was misleading. Both can be useful, but both are ultimately fallible.

Trap #3: Models can gain a 'halo effect', ref [1] - meaning that you exclude other sources of test ideas.

Bias
Assumptions sometimes abound. Beware of assumptions!

A not uncommon assumption: the design model is more complex (detailed) than the test model (and, by implication, more complicated and demanding of more expertise).

This assumption was presented in the meeting, and I was able to demonstrate that the way the modeling had been implemented inevitably meant the test model would need to include more information than any corresponding design model. Ooops!
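To illustrate why that can happen - and this is a general, assumed example, not the specific implementation from the meeting - a design model can often get away with describing only the transitions, while a test model for the same behaviour also has to carry the setup data and expected observations (oracles) needed to actually exercise and judge each step:

# Hypothetical design model: transitions only, (state, action) -> next state.
DESIGN_MODEL = {
    ("stopped", "start"): "running",
    ("running", "stop"): "stopped",
}

# Test model for the same (hypothetical) behaviour: the same transitions,
# plus the extra information needed to execute and check each step.
TEST_MODEL = {
    ("stopped", "start"): {
        "next_state": "running",
        "setup": {"config_file": "default.cfg"},          # illustrative only
        "oracle": "process is listed and answers a status query",
    },
    ("running", "stop"): {
        "next_state": "stopped",
        "setup": {},
        "oracle": "process has gone and its port is released",
    },
}

Even in this toy case the test model carries strictly more information per transition than the design model - the opposite of the assumption above.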

Trap #4: Assuming the model simplifies the modeling of the test space.

Useful?
For all my skepticism and testing of the assumptions and ideas presented to me, I did actually find some useful elements.

One was the visualization of a complex test space, to aid exploration of it.

Another was as an aid to separating the business and implementation logic of scripted and automated test frameworks (a number of these being needed as code checkers for legacy systems) - a sketch of this separation follows below.

Note, these are context specific to my working domain.
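For that second point, here is a rough sketch of the kind of separation I mean (the class and method names are hypothetical, not an actual framework): the business-level steps say what a test means, and everything about how to drive a particular legacy system lives behind an adapter that can be swapped out.

class PlatformAdapter:
    # Implementation/platform logic: how to drive one particular target
    # system. Supporting another legacy system means replacing only this class.
    def execute(self, command):
        print(f"[adapter] sending: {command}")  # stand-in for a real call
        return "OK"

class BusinessSteps:
    # Business logic: what the test means, with no platform detail in it.
    def __init__(self, adapter):
        self.adapter = adapter

    def place_order(self, item):
        return self.adapter.execute(f"ORDER {item}")

    def order_is_confirmed(self, item):
        return self.adapter.execute(f"QUERY {item}") == "OK"

if __name__ == "__main__":
    steps = BusinessSteps(PlatformAdapter())
    steps.place_order("widget")
    assert steps.order_is_confirmed("widget")
    print("business-level check passed")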

Recap
Be careful about people assuming the model contains everything (trap #2) and about it becoming the sole source of test ideas (trap #3). If the model doesn't make clear what assumptions it is built on, then those assumptions need to be made clear and understood somewhere (trap #1). And finally, beware of simple models (traps #3 and #4) being advertised as needing less 'thought' - they may be ok, but not always (as I showed).

I had an interesting discussion - and was thanked for my skepticism and the way I'd highlighted problems with the business case. To me it was just another session of testing...

Reference

[1] http://en.wikipedia.org/wiki/Halo_effect