Automated Testing, so called
Here is yet another discussion about automated testing, and whether it’s purely checking or something more, that Arjan Kranenburg published on his website. (You may also want to read Matt Heusser’s “How are you going to *test* that?” Update: the link is no longer valid; the STP website closed public access to the blogs hosted there.)
I posted my comments there, and I want to expand on some of them in my blog.
[ My comment, as cited:
I’m not an “automate everything” enthusiast, nor am I an exploratory testing crusader. I stand for a systems thinking approach.
The first thing to keep in mind while reading the “Testing vs. Checking” series is the kind of automation being talked about: unit testing, mainly with regard to the TDD approach. Sure, an executable process that essentially calls a function with a bunch of inputs and checks the results it returns perfectly fits the definition of automated checking. However, it’s a shame to downgrade unit testing activities to unit checking only. Code review is as much a sapient activity as exploratory testing.
I tried to emphasize that aspect in one of the questions I asked.
The main question, which wasn’t answered, is this: why compare code testing with product testing at all? While sapient testing skills can (and should) be applied at any level, call-checking of code components pursues a different goal than testing of an application’s functionality. One does not substitute for the other.
Opposition towards using automated tools is called Luddism. Yes, machinery threatens scripted types of jobs. For others, it helps increase productivity or reach new levels of exploration.
Tools help in different areas. Sometimes they offer a different way to perform a manual activity: you can cut a body open to see what’s inside, or you can X-ray it. Just keep in mind that a tool doesn’t actually see (i.e., perceive); it brings back a picture for a human being to perceive.
You can manually dig an archaeological site, or you can use electromagnetic scanning and build a 3D picture with a computer. Once again, keep in mind that the program won’t care what is on the display.
In software testing, examples of activities where engaging automation tools helps reach new levels are simulated load testing and certain kinds of security testing.
In automation of GUI activities, record-and-playback checking tools occupy the thin niche of the regression testing approach, based on the heuristic “if the functionality is not broken, then a test that passed last time will pass again.” But this heuristic is not really reliable; quite often it doesn’t hold. ]
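To make that last point concrete, below is a minimal sketch of the kind of scripted check a GUI record-and-playback tool produces, written with Selenium in Python; the URL and element IDs are hypothetical placeholders, not taken from any real application. The script replays one pre-recorded path and applies one pre-scripted rule; anything else that changed on the page goes unnoticed.

```python
# A minimal scripted GUI check (Selenium, Python).
# The URL and element IDs are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # replay the recorded path
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # The check itself: one observation, one pre-scripted decision rule.
    greeting = driver.find_element(By.ID, "greeting").text
    assert greeting == "Welcome, demo!", f"unexpected greeting: {greeting!r}"
finally:
    driver.quit()
```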
And here is what I want to add.
A check, as James Bach and Michael Bolton defined it, is characterized by three attributes:
1) It requires an observation.
2) The observation is linked to a decision rule.
3) The observation and the rule can be applied without sapience.
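In code, those three attributes can fit in a few lines. Here is a minimal sketch (the function under test is invented for illustration):

```python
def discount(price: float, percent: float) -> float:
    """Function under test (invented for this illustration)."""
    return price * (1 - percent / 100)

def check_discount() -> bool:
    observation = discount(200.0, 10.0)  # 1) an observation is made
    verdict = observation == 180.0       # 2) it is linked to a decision rule
    return verdict                       # 3) both applied without sapience

assert check_discount()
```

Everything around such a snippet (choosing the inputs, deciding that 180.0 is the right expectation, reacting when the assert fails) is where the sapience lives.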
Looking at this definition in motion, we can see the following:
- Deciding when and what observation to make requires sapience.
- Deciding how to interpret an observation requires sapience.
- Deciding what rule to apply requires sapience.
- Designing a rule requires sapience.
- Evaluating results requires sapience.
Also, the same attributes of checking can be disconnected chronologically, as the sketch after this list illustrates.
- The same observation could be (and sometimes should be) interpreted differently if new factors become known.
- Decision rules can be designed ahead of time (before the testing session).
- Evaluation of results can be done afterwards. Moreover, knowing all the results gives the advantage of a big-picture view.
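A hedged sketch of that chronological separation, assuming a simple JSON log (the observation names and rules are invented for illustration): observations are captured during the session with no decisions made, and rules designed ahead of time are applied afterwards, against the whole batch at once.

```python
import json
from pathlib import Path

LOG = Path("observations.json")

def record_observation(name: str, value: object) -> None:
    """During the session: capture what was observed, decide nothing."""
    entries = json.loads(LOG.read_text()) if LOG.exists() else []
    entries.append({"name": name, "value": value})
    LOG.write_text(json.dumps(entries))

def evaluate_afterwards(rules: dict) -> list:
    """After the session: apply pre-designed rules to all results at once."""
    entries = json.loads(LOG.read_text())
    return [e for e in entries
            if e["name"] in rules and not rules[e["name"]](e["value"])]

# During testing: observe and record.
LOG.unlink(missing_ok=True)  # start the demo from a clean log
record_observation("response_time_ms", 140)
record_observation("status_code", 500)

# Afterwards: interpret, possibly in light of newly learned factors.
failures = evaluate_afterwards({
    "response_time_ms": lambda v: v < 200,  # rule designed before the session
    "status_code": lambda v: v == 200,
})
print(failures)  # -> [{'name': 'status_code', 'value': 500}]
```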
So checking can be as complex as testing; or, better said, checking and exploration together make good testing, like Yin and Yang.