
7 questions on "Testing vs. Checking"

Posted by Albert Gareev on Nov 25, 2009 | Categories: Reviews

While I was enjoying the series of articles and discussions on the subject, some questions and points of concern were crystallizing in my mind, and now I feel ready to join the discussion by asking them.

Part I – Overview

What’s it about?

In a nutshell (in my humble opinion), the dire need to separate the terms was inspired by the highly analytical nature of the authors on one side, and a serious misunderstanding of the subject (Software Testing) by business (and I mean Sr. Management and all kinds of recruiters here) on the other.

Long time ago

Historically, testing derived from debugging. Since programs in those days didn’t have much of a “User Interface”, testing largely involved looking at the source code and tracing through it. Once program functionality was wrapped in a user interface, functional testing (“Black Box”) arose. With years, bad coding practices were identified, good coding practices were proven, and code testing (“White Box”) separated from debugging. In the meantime, functional testing was maturing on its own, no longer requiring programming knowledge and skills, but indistinctly separating to functionality-oriented testing and defect-oriented testing.

Added 11/25/2009

“With years, bad coding practices were identified, good coding practices were proven”

Coding, i.e. the creation of program code, can be done in a variety of ways, utilizing different logic and following different patterns.

The programming language (compiler or interpreter) looks after the syntax but not the logic. Some logic may seem to work, but not for all user scenarios. Other logic works perfectly but is hardly maintainable, or may impact other areas, security for example.

 
Creation of code that is defect-prone, hardly maintainable, or may cause other issues is a bad coding practice.
Following coding standards and using the right programming patterns is a good coding practice.

Please refer to the article “Making Wrong Code Look Wrong” by Joel Spolsky for detailed examples.
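For a minimal illustration (a hypothetical Python example of my own, not taken from Spolsky’s article), consider two ways of building a database query. Both are accepted by the interpreter and may appear to work, but the first is defect-prone and a security risk, while the second follows the safer, established pattern:

import sqlite3

def find_user_unsafe(conn, name):
    # Bad practice: string concatenation appears to work for simple names,
    # but breaks on quotes and opens the door to SQL injection.
    query = "SELECT id, name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # Good practice: a parameterized query behaves the same for normal input,
    # but stays correct and safe for all user scenarios.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'admin')")
print(find_user_safe(conn, "admin"))   # [(1, 'admin')]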


Added 11/25/2009

“indistinctly separating to functionality-oriented testing and defect-oriented testing”

Functionality-oriented testing is not a new definition or type of testing; it refers to verification and validation testing, like the User Acceptance Testing and Business Acceptance Testing processes used in the Waterfall software development model.

On the other hand, User Story Testing in Agile methodology is also a process of verifying implemented functionality, confirming that it works as expected.

Defect-oriented testing is an exploratory process targeting any unwanted (defective, inconsistent, unsafe, etc.) functionality, side effects, or any other behavior of an application. It involves “improper” interaction with the application under test (Negative Testing, Stress Testing, etc.), putting the application under test into “improper” conditions (Disk Failure, Low Memory, Network Timeout, etc.), or “hacking” the application (Security Testing, DB Attacks, etc.).
“Improper” here means that it is not a regular way of interacting or a regular environment state, but one that may well happen accidentally, or be created on purpose with harmful intent, and thus has to be tried.
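As a small hypothetical sketch of the “improper interaction” idea (the parse_age function and its inputs are my own invention, purely for illustration), a negative test deliberately feeds in invalid input and expects the code under test to fail safely rather than silently accept it:

import unittest

def parse_age(text):
    # Hypothetical function under test: converts user input into an age.
    value = int(text)               # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

class NegativeTests(unittest.TestCase):
    def test_rejects_non_numeric_input(self):
        # "Improper" input: exactly the kind of thing a real user may type.
        with self.assertRaises(ValueError):
            parse_age("twenty")

    def test_rejects_out_of_range_input(self):
        with self.assertRaises(ValueError):
            parse_age("-5")

if __name__ == "__main__":
    unittest.main()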

 

New branches on a tree

Certain types of tests were impossible to conduct purely manually (sapiently), and they were called “non-functional” (load/performance testing, security testing, etc.). However, those tests are generally NOT conducted on the development team’s side.
Certain testing activities (e.g. GUI and non-GUI interaction, data entry, verification, reporting, etc.) became possible to conduct with the help of other programs, and this is how computer-aided testing appeared. In turn, it can be separated into sapient testing assisted by a tool, and automatic test case execution by a tool.
Certain managers found out that when requirements are clearly documented, and all the possible “needed” test cases are created, test execution tasks do not require much in the way of testing skill. Testing becomes a simple data entry task which can be done by virtually anyone.

Automation of “Black Box” testing activities

Creation of automatically executable test cases requires programming skills; the more comprehensive the tests, the more powerful the test automation framework must be, and the more skilled and experienced the developer required to create it. Note that this is still automation of testing activities, with test results as an output, and the final judgment still rests with a human. Anyway, this is how we got “automated testers” (obviously an oxymoron, but look how many positions are named that way), and automation developers (ironically, hands-on testing skills are very often not considered mandatory, while they should be critical in an automation skillset).
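As a minimal, hypothetical sketch of what such automation does at its core (the login_screen_accepts stand-in and the test cases are my own illustration, not any real framework or product): test cases are expressed as data, a driver executes them against the application through its external interface, and the output is a results report that a human still has to read and judge.

def login_screen_accepts(username, password):
    # Stand-in for driving the real application through its UI or API.
    return username == "admin" and password == "secret"

test_cases = [
    # (description,                    username, password, expected)
    ("valid credentials are accepted", "admin",  "secret",  True),
    ("wrong password is rejected",     "admin",  "guess",   False),
    ("empty username is rejected",     "",       "secret",  False),
]

def run_suite(cases):
    report = []
    for description, user, password, expected in cases:
        actual = login_screen_accepts(user, password)
        verdict = "PASS" if actual == expected else "FAIL"
        report.append(f"{verdict}: {description} (expected={expected}, actual={actual})")
    return report

for line in run_suite(test_cases):
    print(line)   # the report is the tool's output; the judgment stays with a person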

Automation of “White Box” testing activities

Apart from code reviews conducted by a human being, isolated pieces of code (functions, procedures) can be verified by calling and executing them. The core idea here is that for a call with particular arguments a function is expected to return a specified value. If the value is wrong, the test fails. This is how automatic unit testing appeared. Once the test rules are created (either manually by a programmer or by using a code generator), the tests can be run by a person without programming skills. Note that even if the “right” result is returned by a function under test, that does not 100% guarantee that the functionality is always correct, or even that the function will work the same way in a production environment.
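A minimal sketch of that idea, using Python’s standard unittest module (the discount_price function is a hypothetical unit under test): the check passes when a call with particular arguments returns the specified value, and fails otherwise. As noted above, a passing check still says nothing about behaviour in scenarios the check does not cover.

import unittest

def discount_price(price, percent):
    # Hypothetical unit under test: apply a percentage discount.
    return round(price * (100 - percent) / 100, 2)

class DiscountPriceChecks(unittest.TestCase):
    def test_ten_percent_off(self):
        # A call with particular arguments is expected to return a specified value.
        self.assertEqual(discount_price(200.00, 10), 180.00)

    def test_zero_discount(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()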

“Data entry testing”

Degradation of testing into data entry opens wide cost-saving opportunities for business. Surprisingly, some managers also find it beneficial because they get more [junior] people to manage. All kinds of outsourcing fit perfectly here too, from summer students to off-shore companies. However, down this road a company will face two types of critical issues. First of all, “data entry testing” is purely verification-oriented; except for trivial ones, defects won’t be revealed. Second, since automatic test execution requires final human judgment, “data entry testers” are incapable of the qualified analysis and investigation of the defects they may encounter. Outsourced teams require heavy coaching and support. As a result, either somebody has to do re-testing, or the software product’s quality degrades.

Why separate?

From a hiring perspective, job requirements for QA/testing positions are a total mess. Irrelevant subjects are often thrown in, and mandatory skills are overlooked. Separation and, more importantly, a clear description of each role in the testing world might help in getting higher quality candidates. That in turn will benefit teams with higher quality resources, and companies with higher quality testing.
A clear distinction will benefit professionals too. In the end, 10 years of “data entry testing” are not nearly equal to 1 year of sapient testing, and such experiences must be treated differently.

Conclusion

I strongly support the initiative of distinction and clarification. However, looking at how it has evolved so far, I see that it is becoming unclear itself. Certain subjects and concepts that are distinct by nature are now getting mixed up.

I hope my questions will be considered by the authors. (I don’t impose any obligation to reply, of course.)

Part II – Questions

1. Code Testing vs. Product Testing – why mix them up?

Any program code becomes a software product after the build. Before that happens, code modules and atomic functions can also (and should) be tested. This phase of testing does not substitute for Functional Testing in any manner. Code testing is not meant to be only function checks. Primarily, it is code review, which is a purely sapient activity.
The original article, however, fully disregards the sapient part of code testing, and also sets code testing as opposite to functional testing. Why?

Added 11/25/2009

Testing vs. Checking, “Testing Is Not Quality Assurance, But Checking Might Be”

“Checking, when done by a programmer, is mostly a quality assurance practice. When a programmer writes code, he checks his work. He might do this by running it directly and observing the results, or observing the behaviour of the code under the debugger, but often he writes a set of routines that exercise the code and perform some assertions on it. We call these unit “tests”, but they’re really checks, since the idea is to confirm existing knowledge. In this context, finding new information would be considered a surprise, and typically an unpleasant one. A failing check prompts the programmer to change the code to make it work the way he expects. That’s the quality assurance angle: a programmer helps to assure the quality of his work by checking it.”

The whole chapter and the quoted block put the label “checking” on the programmer’s part of testing, the code testing. Since “checking” is posed as non-sapient, and code testing is checking only, does it mean programmers don’t do any sapient testing, as opposed to software testers?

In fact, when a programmer writes code, he reviews every block he creates. Before code is checked in to the code base, it has to be reviewed.

In the article I see that “compliant” examples were elaborated (e.g. Automated Unit Testing), but “non-compliant” ones (what about Pair Programming?) were omitted.

2. Why is checking a confirmation?

As per the suggested definition, checking is rule-based, while the rule itself is comparison-based. It is also assumed that the comparison rule returns either “TRUE” or “FALSE”. But that’s not the end! Any verification (or checking) also needs to be validated. Validation is a context-specific, outside-of-the-box rule, which is applied with sapience.
Example: “Check if the door is open.” Both TRUE and FALSE could be VALID, depending on the context. Without validation, checking results are useless.
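A tiny hypothetical sketch of the point (the door and the contexts are made up for illustration): the check itself only returns TRUE or FALSE; whether that outcome is acceptable has to be validated against the context.

def check_door_open(door):
    # The check: returns TRUE or FALSE, nothing more.
    return door["open"]

def validate(check_result, context):
    # The validation: the same check result may be valid or invalid,
    # depending on the context in which it is applied.
    if context == "fire alarm active":
        return check_result is True       # the door must be open for evacuation
    if context == "vault outside business hours":
        return check_result is False      # the door must stay closed
    return None                           # unknown context: a human has to decide

door = {"open": True}
result = check_door_open(door)
print(validate(result, "fire alarm active"))              # True  -> valid
print(validate(result, "vault outside business hours"))   # False -> invalid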

3. Why must testing be done only through exploration and investigation?

“A person who does nothing but to compare a program against some reference is a checker, not a tester.”

A Tester may not know how a transaction is expected to be calculated, but a Business Analyst does. Does asking the BA for the information, versus manually investigating the App, mean the Tester is not a Tester anymore but only a Checker?
If a Tester knows an application very well and can predict an expected result, could he/she test those functionalities without becoming a Checker?
 
4. If testing is about asking questions, isn’t checking about answering them?
 
Any defect report contains at its core the reproduction steps, the actual result, the expected result, and the comparison rule.

Any sapient investigation, broken down into atomic steps, involves obtaining actual results, defining or retrieving expected results, defining or retrieving a comparison rule, applying the rule, and finally validating the check performed, based on the context.
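Expressed as a hypothetical Python sketch (the names, steps, and numbers are mine, purely for illustration), those atomic steps map directly onto code, and a failed check feeds straight into the core of a defect report:

from dataclasses import dataclass

@dataclass
class DefectReport:
    # The core of any defect report, as listed above.
    reproduction_steps: list
    actual_result: object
    expected_result: object
    comparison_rule: str

def atomic_check(steps, obtain_actual, expected, rule=lambda a, e: a == e):
    # One atomic step of an investigation: obtain the actual result,
    # apply the comparison rule against the expected result, then report.
    actual = obtain_actual()
    if rule(actual, expected):
        return None                       # the check passed; nothing to report yet
    return DefectReport(steps, actual, expected, "actual == expected")

report = atomic_check(
    steps=["open the order form", "enter quantity 3 at $19.99", "read the total"],
    obtain_actual=lambda: 59.96,          # what the application showed
    expected=59.97,                       # what the tester worked out it should be
)
print(report)   # the final, contextual validation of this result is still the tester's call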

5. Testing programs do not create new rules. Testers do. Why didn’t you clearly state that? 

Added 11/25/2009

Regular computer programs may strictly follow predefined rules, may come up with one of the predefined rules, and may even build up a new statement from predefined blocks, but they do not learn and do not create.

The whole idea of “Testing vs. Checking” lies in “Testing is a sapient activity”. A large part of the article is dedicated to proving that with examples and logical chains. Did it have to be so complicated?

Testing programs do not create new checking rules. Testers do.

What could be more sapient than the act of creating something new?

6. What is the value of testing if it doesn’t help improve the quality?

“Testing Is Not Quality Assurance, But Checking Might Be” is stated in another paragraph. If the purpose of sapient testing is concern, not confirmation, why is the ultimate goal of testing not assurance (or at least improvement) of software quality?
Added 11/25/2009
  
If a tester finds a lot of defects and throws reports over via email or into a ticketing system, is that the end of the tester’s job? Developers may reject them [the defect reports]; sales people may push for the release; the PM may not realize the severity of the issues…
  
Bug fixing improves quality. Bug finding, without hunting the bugs down until they’re fixed, has zero business value. That’s useless gathering of information (no profit, no savings, and minus a tester’s paycheck).
 
Testers should not, and don’t have to, force bug fixing through managerial or business power. They have other means to do it. Communication, first of all.
 
Not having power is not an excuse. It’s just stepping back from quality.
I don’t know what business would hire people interested only in “gathering of information”, and careless about the product’s quality.
 
7. “Checkers Require Specifications; Testers Do Not”. Or maybe it’s exactly the opposite? Would you consider that?
 
Checkers require execution steps. They don’t care about the specification. If a clear and detailed specification is presented, but not covered with execution steps, checkers won’t bother.
 
Testers need a specification so much that if it’s not presented, or is unclear, they will make it up and clarify it, through communication and from documentation, and they will practically prove it on the product. (“There are ALWAYS requirements”, by Joe Strazzere)
Added 11/26/2009
  
There is a good old game of playing semantics. It allows one to dispute everything, and simply ignore any argument.
Here I can’t help but put in links to online dictionaries to give an idea of why “specification” and “requirements” can be used interchangeably, and why “execution steps” are not the same as “specification”.
 


One response to "7 questions on 'Testing vs. Checking'"

    Michael Bolton
    25th November 2009 at 11:56

    Hi, Albert…

    I’m not sure if I agree with your history (“With years, bad coding practices were identified, good coding practices were proven…”; “separating to functionality-oriented testing and defect-oriented testing.”) and the like. However, I don’t think that’s terribly important for the answering of your questions.

    The original article, however, fully disregards the sapient part of code testing, and also sets code testing as opposite to functional testing. Why?

    I don’t see how the original article does that. I don’t set code testing as opposite to functional testing (if you see a passage that suggests that, I could speak to it). But a lot of people seem to read that sort of thing into the work. In any case, to address that kind of concern, I wrote this:

    http://www.developsense.com/2009/11/merely-checking-or-merely-testing.html

    Any verification (or checking) also needs to be validated. Validation is a context-specific, outside-of-the-box rule, which is applied with sapience.

    Yes. The above-referenced post talks about that too.

    A Tester may not know how a transaction is expected to be calculated, but a Business Analyst does. Does asking the BA for the information, versus manually investigating the App, mean the Tester is not a Tester anymore but only a Checker?

    No. Asking the BA for information is an investigative activity, isn’t it?

    If a Tester knows an application very well and can predict an expected result, could he/she test those functionalities without becoming a Checker?

    Absolutely. The issue here is that for any given test, we have more than one expectation in play. That expectation might be predicted. It might also be generated during the test (“Hey… that’s weird. I didn’t expect to see that. Now that I see it, I realize I expected to see something else.”) The expectation might even be developed long after the test. (“Now that I know this new piece of information, I realize what I saw the other day was a bug.”)

    If testing is about asking questions, isn’t checking about answering them?

    Checking is about answering questions for which we already have an answer, in the form of a decision rule. Testing is about asking and answering questions for which we might not yet have a decision rule. The cool thing about a human is that we can ask and answer many of those questions instantly, and often subconsciously.

    5. Testing programs do not create new rules. Testers do. Why didn’t you clearly state that?

    I don’t know. I don’t understand what you mean, for one thing.

    What is the value of testing if it doesn’t help improve the quality?

    This may be a little hard for some people to take, but it’s important: Testing is not about improving quality per se. Testing is about understanding what is there, and what is not there. Testing is the gathering of information. Gathering information, in and of itself, does not improve quality. It’s what people do with the information gathered that improves quality. One could test a product and provide results to management such that management’s conclusion would be “We’re sufficiently happy with the product as it is.” Indeed, that’s what generally happens just before the product ships. In that case, testing has done nothing to improve the quality of the product, since no changes have been made.

    Testing does not assure quality either. If you have the power to change the code, the staffing, the schedule, the budget, contractual obligations, responses to market forces, etc., then you can make decisions that help to assure the quality. Testers don’t do that. We’re reporters, not businesspeople or lawmakers.

    “Checkers Require Specifications; Testers Do Not”. Or maybe it’s exactly the opposite? Would you consider that?

    I don’t understand the question.

    Checkers require execution steps. They don’t care about the specification.

    Execution steps are specification. That is, execution steps are specific actions to take or ideas to follow.

    Testers don’t need requirements documents or specifications to discover or reveal or infer requirements. Testers can then compare their discoveries or inferences with the client’s actual requirements – the advantage being that testers can alert clients to requirements of which they were not previously aware.

    —Michael B.

Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported
This work by Albert Gareev is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.