Testers listen to Skeptoid

Posted by Albert Gareev on Oct 14, 2015 | Categories: Heuristics, Notes, Reviews

“How can I learn about critical thinking? What are the examples? How to apply it in testing?”

If you’re asking these questions for yourself, or to coach your team, I can point you to an excellent source.

Skeptoid is a podcast, a set of public research projects, quality learning material, and a source of excellent examples of critical analysis.

Skeptoid: Critical Analysis of Pop Phenomena is an award-winning weekly science podcast. Since 2006, Skeptoid has been fighting the good fight against the overwhelming majority of noise in the media supporting useless alternative medicine systems, psychics preying upon the vulnerable, the erosion of science education in the classroom, xenophobia of advanced energy and food production methods, and generally anything that distracts attention and public funding from scientific advancement.

Sometimes a Skeptoid episode reminds me of a requirements (User Story) review – or, at least, of how one should be done:

  • Review statements or claims
  • Critically analyze the semantics (what is actually said?)
  • See if they are backed up with references, facts, and research
  • Do fact-checking
  • Do plausibility assessment

Sometimes it’s a valuable lesson. Remember how, the other day, I referred to the Fallibility of Memory? Now put it together with the concept of “Agile retrospectives”. Oh, no..

And sometimes we get heuristics adaptable for testing. Let’s use examples from the How To Spot Pseudoscience episode.

Does the claim meet the qualifications of a theory?
Very few claims that aren’t true actually qualify as theories. Let’s review the four main requirements that a theory must fulfill.

  • A theory must originate from, and be well supported by, experimental evidence. Anecdotal or unsubstantiated reports don’t qualify. It must be supported by many strands of evidence, and not just a single foundation.
  • A theory must be specific enough to be falsifiable by testing. If it cannot be tested or refuted, it can’t qualify as a theory. And if something is truly testable, others must be able to repeat the tests and get the same results.
  • A theory must make specific, testable predictions about things not yet observed.
  • A theory must allow for changes based on the discovery of new evidence. It must be dynamic, tentative, and correctable.

In Testing: widely applicable, from evaluating marketing claims about software to our very own bug reports.

Is the claim based on the existence of an unknown form of “energy” or other paranormal phenomenon?
Loose, meaningless usage of a scientific-sounding word like “energy” is one of the most common red flags you’ll see on popular pseudoscience.

In Testing: pay attention to red flags like: always, never, all, just, should, may, very, soon, up to, only. This phenomenon is called Lullaby Language.
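This heuristic is mechanical enough to sketch in code. The snippet below (a minimal illustration; the function name and the example requirement are mine, and the word list is the one from this post) flags lullaby words in a requirement statement:

```python
import re

# "Lullaby" words listed in the post; extend the list as needed.
LULLABY_WORDS = ["always", "never", "all", "just", "should", "may",
                 "very", "soon", "up to", "only"]

def find_red_flags(requirement):
    """Return the lullaby words that appear in a requirement statement."""
    found = []
    for word in LULLABY_WORDS:
        # Match whole words/phrases only, case-insensitively,
        # so "all" doesn't fire on "install" or "allow".
        if re.search(r"\b" + re.escape(word) + r"\b", requirement, re.IGNORECASE):
            found.append(word)
    return found

print(find_red_flags("The report should load very fast for up to 100 users."))
# → ['should', 'very', 'up to']
```

A hit doesn’t mean the requirement is wrong – it means a human should ask “always? under what conditions?” before signing off.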

Does the claim sound far-fetched, or too good to be true?
When something sounds too good to be true, it usually is. Extraordinary claims require extraordinary evidence. Does the claim truly fit in with what we know of the way the world works?

In Testing: evaluate software in a much broader sense than just documented requirements. Use consistency heuristics as a guide.

Is the claim supported by hokey marketing?
Be wary of marketing gimmicks, and keep in mind that marketing gimmicks are, by themselves, completely worthless. Examples of hokey marketing that should always raise a red flag are pictures of people wearing white lab coats, celebrity endorsements, anecdotes and testimonials from any source, and mentions of certifications, colleges, academies, and institutes.

In Testing: there are institutes, boards, and committees claiming the right to certify professional skills and defining pseudo-standards (criticized by practitioners). They invest massively in marketing gimmicks to sell certifications and exams. Be skeptical: do these organizations care about the craft, or about gaining exclusive rights? Which organizations actually do something for the professional community and offer educational and mentoring opportunities?

Are the claimants up front about their testing?
Any good research will outline the testing that was done, and will present all evidence that did not support the conclusion.

In Testing: provide well-structured, concise reports; outline the risks you considered and identified, what was tested, and what was not tested.

Does the claim have support that is political, ideological, or cultural?
Some claimants suggest that it’s moral, ethical, or politically correct to accept their claims, to divert your attention from the fact that they may not be scientifically sound.

In Testing: sometimes we get pressure to “tweak” our testing, “cut corners”, or “adjust results” for reasons like schedule, the team’s performance, client relationships, and so on. Sometimes the pressure is intense, and it is hard to stand up for our professional values and ethics. But please do. Discuss, explain, negotiate. Bend, but don’t break.
Just remember: by agreeing to cheat you lose your credibility and become a hostage of that person.
I may write a dedicated blog post about this. Or, maybe, someone can share their experiences?


Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported
This work by Albert Gareev is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported.