Sunday 14 August 2011

Taylorism and Testing

#softwaretesting #testing

I've had this in draft for a while, but was triggered to "finish" it by reading Martin Jansson's recent posts on working with test technical debt and nemesis of testers.

Taylorism
When I think about Taylorism I'm referring to the application of "scientific management" principles: division of labour, "process over person", and generally anything that is the result of a time and motion study (however that might be dressed up).

The result of this is categorising "work" into divisible units, estimating the time for each unit and the skills required to do it. Once you have these, plugging them into a Gantt chart is the logical next management step.
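The Taylorist arithmetic described above can be sketched in a couple of lines. (The numbers here are entirely made up, purely to show the mechanics.)

```python
# Taylorist scheduling: work is divided into uniform units, each unit
# is priced with a fixed time "constant", and the products are summed
# straight into a plan.
test_cases = 120
hours_per_case = 1.5   # the dangerous "constant"
total = test_cases * hours_per_case
print(f"Plan says testing takes exactly {total:.0f} hours")
```

The multiplication is trivial on purpose: the plan looks precise because every input was assumed constant, not because the work is actually that predictable.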

Estimating
Estimating work using an estimation guide is not the problem here. The problem is when that guide becomes the truth: it turns into some type of test-related "constant", used as such and, more importantly, interpreted as such.
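One way to keep an estimate from hardening into a constant is to carry the uncertainty along with it. Here's a minimal sketch (my own illustration, not something from the post) using the well-known three-point (PERT) technique, where each activity gets an optimistic, most-likely and pessimistic figure instead of a single number:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Three-point (PERT) estimate: a weighted mean plus a spread,
    rather than a single per-activity 'constant'."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6  # common PERT approximation
    return mean, spread

# "Time to execute a test case", in hours (hypothetical figures)
mean, spread = pert_estimate(0.5, 1.0, 4.0)
print(f"{mean:.2f}h +/- {spread:.2f}h")
```

The point isn't the formula itself; it's that the output is a range whose width invites the context questions below, rather than a constant that shuts them down.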

Problems with constants might occur if you discuss with your stakeholder items such as:
  • Time to write a test plan
  • Time to analyse a new feature
  • Time to review a requirement
  • Time to write/develop a test case
  • Time to execute a test case
  • Time to re-test a test case
  • Time to write a test report
Traps?
Stakeholders don't usually have time to go into all the details of the testing activity, so it's important that testers don't let the items feeding an estimate be treated as constants. That means highlighting to the stakeholder that the answer depends on the specific details of the project, organisation and problem at hand.

So, re-examining the items above suggests some additional questions, below. (This is just a quick brain-dump and could be expanded a lot.)
  • First questions:
    • How will the answers be used?
    • How much flexibility or rigidity is involved in their usage?
  • Time to write a test plan
    • Do we need to estimate this time?
    • What's the purpose of the plan?
    • Who is it for?
    • What level of detail needs to be there now?
    • What level of detail needs to be there in total?
    • Am I able to do this? What do I need to learn?
  • Time to analyse a new feature
    • Do we need to estimate this time?
    • How much do we know about this feature?
      • Can we test it in our current lab?
      • New equipment needed?
      • New test harnesses needed?
    • Am I able to do this? What do I need to learn?
  • Time to review a requirement
    • Do we need to estimate this time?
    • Are the requirements of some constant level of detail?
    • How high-level is the requirement?
    • Are the requirements an absolute or a guide of the customer's wishes?
    • How often will/can the requirements be modified?
    • What approach is the project taking - waterfall or something else?
  • Time to write/develop a test case
    • Do we need to estimate this time?
    • Do we all have the same idea about what a test case means?
    • Do we mean test ideas in preparation for both scripted and ET aspects of the testing?
    • Do we need to define everything upfront?
    • Do we have an ET element in the project?
    • Even if the project is 'scripted' can I add new tests later?
    • What new technology do we have to learn?
  • Time to execute a test case
    • Do we need to estimate this time?
    • In what circumstances will the testing be done?
      • Which tests will be done in early and later stages of development?
      • Test ideas for first mock-up? Keep or use as a base for later development?
    • What is the test environment and set-up like?
      • New aspects for this project / feature?
      • Interactions between new features?
      • Do we have a way of iterating through test environment evolution to avoid a big-bang problem?
    • What are the expectations on the test "case"?
    • Do we have support for test debugging and localisation in the product?
    • Can I work with the developers easily (support, pairing)?
    • What new ideas, terms, activities and skills do we have to learn?
  • Time to re-test a test case
    • Do we need to estimate this time?
    • See test execution questions above.
    • What has changed in the feature?
    • What assessment has been done on changes in the system?
  • Time to write a test report
    • Do we need to estimate this time?
    • What level of detail is needed?
    • Who are the receivers?
  • Are they statistics-oriented? I.e., will there be problems with number counters?
    • Verbal, formal report, email, other? Combination of them all?
Not all of these questions would be directed at the stakeholder.

The answers to these questions will raise further questions and take the approach down different routes. So, constants can be dangerous.

Stakeholders and Constants
When stakeholders think in terms of constants it's very easy for them to think in Taylorist terms: to see testing as an intellectually undemanding activity and, ultimately, as a problem rather than an opportunity for the project.

Some questions that might arise from Taylorism:
  • Why is testing taking so long?
  • Why did that fault not get found in testing?
  • Why can't it be fully tested?
Working against Taylorism in Stakeholders

The tester needs to think and ask questions, both about the product and about what they're doing. Passive testers contribute to misunderstanding in stakeholders; the tester is there to help improve the understanding of the product.

The relationship of a tester to a stakeholder changes as the tester adds value to the project, product and organisation. So ask yourself whether, and how, you're adding value. It's partly about building your brand, but it's also about understanding the stakeholder's problems.

The stakeholder frames a problem and presents it to you in a way that might differ from what the customer intended. A good starting point for questioning is to think in terms of context-free questions (see Michael Bolton's transcript from Gause and Weinberg, here).

Build your brand, add value to the organisation and product and ultimately the problem with Taylorism will recede.


References
  1. Wikipedia intro on Scientific Management 
  2. Wikipedia intro on Time and Motion Studies
  3. Building your Test Brand
  4. Transcription of Gause and Weinberg's Context-Free Questions

2 comments:

  1. ET? I am not sure what this means

    ReplyDelete
  2. I mean Exploratory Testing. Good catch! For an introductory explanation see Wikipedia: https://en.m.wikipedia.org/wiki/Exploratory_testing

    ReplyDelete