I read Rikard Edgren's post the other day, on some good ISTQB definitions, here, and it reminded me of a common trap that people sometimes fall into with documented claims or requirements. (I hasten to add that Rikard didn't appear to fall into this trap!)
The most common form of this trap in testing circles (where documentation is involved) usually crops up as a question, "Why didn't you/someone spot this problem earlier (as it's specified in document X)?", or a statement, "This requirement should have been tested because it's specified in document X."
The problem with these types of statements and questions is the important assumption they hide: they take documented specifications or claims in their current form and assume that this was the same form that served as an "input" into some tester's or team's mind.
In my experience this assumption occurs more often when:
- The one making the analysis doesn't understand the problems (challenges) that testers face with making documented and undocumented requirements visible.
- The one making the analysis assumes that testers have the same information as everyone else.
- The one making the analysis assumes that test problems are "owned" solely by testers.
Ask yourself at least one question (but preferably more) about documented claims / specifications:
- Was this information available to the tester at the time of testing?
- Silent evidence? Was this information available but not used/tested for a particular reason?
For the first question the onus is on the one making the analysis. For the second the onus is on good test reporting (story-telling) - highlighting areas not covered (for whatever reason); more on silent evidence in the references below.
The potential for these problems exists where code design and test are separated into different teams. But, more generally, it exists wherever communication problems (or a lack of transparency) exist - I see these most often where parts of teams receive external requests that aren't "hand-shaken" within the team.
One way to help make these problems visible (whether from an RCA or team retrospective viewpoint) is to consider the frames that different roles have with respect to the problem: how does the developer look at the documented claim, how does the tester, and is there any external influence that changes their priorities? More in the references below.
So, being knowledgeable after the fact (with hindsight) usually misses potentially significant parts of the story. And remember, a statement is very often not the whole story - good story-telling takes effort!
Hindsight trap - sound familiar?
References:
- Mind the Information Gap
- Framing: Some Decision and Analysis Frames in Testing