
Automation that sucks

Posted by Albert Gareev on Dec 10, 2008 | Categories: Notes, Problems

A company hired another “automated tester”…

Process description

In a nutshell, the actual job description is:

  • Bring up the application-under-test
  • Start the “testing” script
  • Babysit the “testing” script, i.e. manually click/type on the GUI whenever the script gets stuck, then resume the script
  • After the execution is done, review the “test logs”

 

Reviewing the “test logs” includes:

  • Go through the logs to see whether a mismatch was reported
  • For each mismatch, review the comparison between the previously captured GUI data and the data captured in the last run
  • Figure out (without going to the actual GUI!) whether the mismatch is a bug or not

 

To reflect changes in the GUI and test data, at the beginning of a release cycle the scripts are run in “capture” mode. The captured GUI data is then used as the “expected result” later on.
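A minimal sketch of those mechanics, in Python rather than the tool’s own scripting language (the run() function, the field names, and the baseline file are illustrative assumptions, not the actual scripts):

    import json

    def run(gui_fields, mode, baseline_path="baseline.json"):
        """Record or verify a dict of GUI control names -> displayed values.

        'gui_fields' stands in for whatever the tool scrapes off the screen;
        the names and the file format here are purely illustrative.
        """
        if mode == "capture":
            # Whatever the GUI happens to show right now becomes the "expected result".
            with open(baseline_path, "w") as f:
                json.dump(gui_fields, f, indent=2)
            return []
        # Replay mode: a static field-by-field comparison against the baseline.
        with open(baseline_path) as f:
            expected = json.load(f)
        mismatches = [(name, value, gui_fields.get(name))
                      for name, value in expected.items()
                      if gui_fields.get(name) != value]
        for name, exp, act in mismatches:
            print(f"MISMATCH: {name}: expected {exp!r}, got {act!r}")
        return mismatches

Note what this implies: everything the comparison will ever “know” about correctness is whatever happened to be on the screen at capture time.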

Such a process sucks

First of all, test design sucks

  • Test design becomes a hard-coded sequence of steps
  • “Testing” scripts are coded by remote people, maybe even offshore, who have never spoken with the testers or the developers of the application
  • Script coders suck at testing
  • “Testing” scripts follow only the happy path of business scenarios
  • “Testing” scripts suck at general observation. At best, they capture what they were explicitly coded to capture

 

Second, the working process with the application-under-test sucks

  • It lacks live observation
  • It is bound to a limited set of test data, which is kept unchanged for as long as possible to avoid recapturing the GUI
  • Automated testers are focused on getting the scripts executed, not on finding bugs

 

Third, verification sucks

  • Only what was pre-scripted gets verified
  • A static one-to-one comparison rule is used
  • Internal inconsistencies are overlooked as long as they satisfy the comparison rule (see the sketch after this list)
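A hypothetical illustration of the last two points (the field names and values are made up, not taken from any real test log):

    # Both the baseline and the current run show a total that does not equal
    # the sum of the line items -- an internal inconsistency on the screen.
    expected = {"item_1": "10.00", "item_2": "5.00", "total": "20.00"}
    actual   = {"item_1": "10.00", "item_2": "5.00", "total": "20.00"}

    # The static 1-1 rule only asks: is each field the same as last time?
    mismatches = [k for k in expected if expected[k] != actual.get(k)]
    print("PASS" if not mismatches else f"FAIL: {mismatches}")
    # Prints "PASS": the broken total is never questioned, because nothing
    # checks whether the values on the screen make sense together.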

 

Fourth, investigation sucks

  • Automated testers suck at investigation and problem-solving
  • Mismatches that were reported are not investigated in the live GUI

 

Fifth, the capture/replay method sucks

  • An application at the beginning of a test cycle always has more bugs, and yet its output is recorded as the “expected result” (illustrated by the sketch after this list)
  • Automated testers capturing expected results do not verify the application thoroughly…
  • …because they suck at testing
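To make the first point concrete, a hypothetical worked example (the screen, the values, and the defect are invented for illustration):

    import json

    # Early in the cycle the application computes the tax incorrectly
    # (13% of 100.00 should be 13.00, but the screen shows 13.13).
    buggy_screen = {"subtotal": "100.00", "tax": "13.13", "total": "113.13"}

    # Capture mode records the buggy output as the "expected result".
    with open("invoice_baseline.json", "w") as f:
        json.dump(buggy_screen, f)

    # Weeks later the defect is still there, and the replay "passes":
    # the only question ever asked is whether anything changed since capture.
    with open("invoice_baseline.json") as f:
        expected = json.load(f)
    print("PASS" if expected == buggy_screen else "FAIL")  # prints "PASS"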

This work by Albert Gareev is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.