Beyond 140 characters

Posted by Albert Gareev on Mar 04, 2015 | Categories: Discussions, Notes

Even though I wish I could do it more often, I very rarely blog on real-time topics. But I’m taking some time tonight.

I tweeted a few thoughts on Tuesday, 2015/03/03. Here I expand on them to address the responses I received.

Test scripts are exclusive. Testing charters are inclusive.

As Cem Kaner has observed, early in a project we know the least about how it will unfold and about how exactly the software product will be implemented.

This makes the scripted testing approach inherently exclusive: the business goal is to “execute” all pre-planned test cases within the estimated time. That’s it. Mission accomplished.

Really? But where do we include test ideas based on new information that we don’t know of yet, that is still to be discovered? Do we estimate for the number of bugs to be discovered and investigated? For retesting of fixes, and for the unplanned code changes they cause? Do we estimate for a new device or browser that is yet to be released? For vulnerabilities that are yet to be discovered?
Yes, there’s this thing called “buffer time”. But it’s not an estimate that accounts for all the unknowns. It’s just another time box.

Testing should not end when we run out of pre-planned test cases – good testing never runs out of test ideas. Testing must, however, be able to stop at any point, with the most important tests done within the time allotted.

Test charters incorporate test ideas while remaining inclusive testing missions. The priority of test charter execution changes dynamically, based on real-time (or nearly real-time) information about the product and project. Such testing is both responsive and proactive.
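To make the idea of dynamic prioritization concrete, here is a minimal sketch – not any actual tool or process, just an illustration with made-up mission names – of a charter backlog kept as a priority queue, where fresh information (say, a new crash report) can push a new mission ahead of everything planned earlier:

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch: a charter backlog that re-orders as information arrives.
@dataclass(order=True)
class Charter:
    priority: int                       # lower value = run sooner
    mission: str = field(compare=False)  # the mission text doesn't affect ordering

backlog = [
    Charter(3, "Explore checkout flow for rounding errors"),
    Charter(2, "Probe session handling across browsers"),
    Charter(1, "Investigate login under slow network"),
]
heapq.heapify(backlog)

# New information (e.g. a fresh bug report) adds an urgent mission:
heapq.heappush(backlog, Charter(0, "Follow up on payment-gateway crash"))

next_up = heapq.heappop(backlog)
print(next_up.mission)  # the most urgent mission at this moment
```

The point of the sketch is only that the *order* of work is decided at pull time, from current knowledge, rather than frozen in a plan.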

Test scripts are infected with confirmation bias and focalism. Testing charters seek information, not confirmation.

Now, there’s a metaphor here – “infected”.

Here’s how it works. When executing her own test scripts, the tester no longer refers to the sources of information she used to derive mental models and design test cases – that job is “done”, and questioning a “good” job doesn’t make sense. And there is no time for that. It’s similar, but worse, with someone else’s test cases – the testers aren’t even aware of the original sources of information. “Execution steps” and “actual results” in scripts drive focus: following them, fulfilling them, completing them is what makes test cases done. The count of test cases executed is a productivity measurement. Therefore, stepping aside and exploring around are counter-productive delays, not encouraged by management looking at the metrics.
Questioning the completeness of test cases offends both testers’ and managers’ confirmation bias. “What? The job was reported as done to me. I reported that this job was done. Are you saying that..?”

Test cases inflict this confirmation bias infection that spreads and dominates.

On the other hand, test charters are aimed at the discovery of information about the product – mission by mission, piece by piece. They have an internal structure (typically a checklist) to guide – but not to limit or enforce – testing focus. They require digging into the original sources of information, cross-referencing, and discussion within the team. This facilitates informed and accountable decision-making, as opposed to narrow, binary “pass/fail” confirmation.
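As a rough illustration of that internal structure – again a hypothetical sketch with invented field names, not a prescribed format – a charter can be thought of as a mission plus a checklist that guides focus and a findings log that grows during the session. Crucially, a discovery extends the charter instead of closing it:

```python
# Hypothetical charter: mission, guiding checklist, and a findings log.
charter = {
    "mission": "Explore the import feature for data-loss risks",
    "sources": ["requirements wiki", "support tickets", "dev discussion notes"],
    "checklist": [          # guides focus; testers add items as they learn
        "large files",
        "interrupted uploads",
        "non-ASCII content",
    ],
    "findings": [],         # information discovered, not pass/fail verdicts
}

# A discovery during the session extends the charter rather than closing it:
charter["findings"].append("Silently truncates rows past a certain size")
charter["checklist"].append("row-count limits")
```

Contrast this with a test case, where the “expected result” field closes the question before testing even starts.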


Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported
This work by Albert Gareev is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported.