
Test Automation Problems (4) – Implementation Approach – Verification

Posted by Albert Gareev on Aug 12, 2009 | Categories: Problems


Verification

Manual verification is complicated and informal. Making it logical and specific for test automation purposes takes the time and effort of a seasoned professional. Workarounds like “picture-based” verification do not require formal logic, but in the end they save no man-hours because of high maintenance costs and the manual validation and retesting they entail.

Having a detailed and reproducible test execution and verification report, generated by the testing tool, can be critical.
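To make the idea concrete, here is a minimal sketch of the kind of structured record a testing tool could emit for each verification. All field and function names here are assumptions for illustration, not any real tool's API.

```python
# Hypothetical structured report entry: one record per verification,
# detailed enough to review or reproduce the run without re-executing it.
import json
from datetime import datetime, timezone

def report_entry(window, obj, rule, expected, actual):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "window": window,    # window where the verification was performed
        "object": obj,       # control (object) name
        "rule": rule,        # verification rule applied
        "expected": expected,
        "actual": actual,
        "status": "pass" if expected == actual else "fail",
    }

entry = report_entry("Login", "lblTitle", "equals", "Welcome", "Welcome")
print(json.dumps(entry, indent=2))
```

A report built from such entries answers the investigator's first questions (where, what, against which rule) directly, instead of just "something went wrong."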

Problems and impact 

1)     “Picture-based” GUI regression testing (GUI Checkpoints) 

  • Taking a “picture” of a screen visited by the script (as checkpoint data) and using it as the expected result in subsequent runs seems like a perfect solution: verification without implementing verification. However, such verification is applicable only to static windows (otherwise a new picture must be taken at every change of the GUI), requires static data (otherwise it consumes a huge amount of time on checkpoint maintenance), and is still incomplete: a reported mismatch on a window requires a full manual investigation 
  • Such verification is based only on an “equal / not equal” rule, while business testing may require rules like “less than / greater than”, “within the range / outside the range”, “belongs to a set”, etc. 
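The richer rule set mentioned above can be sketched as a small table of named checks. This is an illustrative assumption about how such rules might be organized, not an existing tool's feature:

```python
# Rule-based verification going beyond the "equal / not equal" check
# that picture-based GUI checkpoints are limited to.

def verify(actual, rule, expected):
    """Apply a named verification rule; return True on pass."""
    rules = {
        "equals":       lambda a, e: a == e,
        "less_than":    lambda a, e: a < e,
        "greater_than": lambda a, e: a > e,
        "within_range": lambda a, e: e[0] <= a <= e[1],  # e = (low, high)
        "belongs_to":   lambda a, e: a in e,             # e = set of allowed values
    }
    return rules[rule](actual, expected)

# Business-level checks that a screenshot comparison cannot express:
print(verify(42, "within_range", (0, 100)))                # True
print(verify("CAD", "belongs_to", {"USD", "CAD", "EUR"}))  # True
```

Because each rule is named, the report can state which rule was applied, which feeds directly into the investigation problem discussed next.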

2)     Investigation and validation 

  • The final result of an encountered failure is a defect report. So if the script merely reports “something wrong” in the test case, a full manual investigation is still required. The following basic automatic steps help manual investigation, increase reproducibility, and save time: recording the window and object name where the operation failed, the actual data and the test data used, the verification rule applied, etc. 
  • Different failures have different impact on the test flow. Failing to find a window or to use a mandatory input control causes test case execution to stop, while failing to use an optional input control, or encountering a mismatch in displayed text, still allows test case execution to continue. The ability of the script to programmatically distinguish these failures and make a decision based on severity brings automated testing to a smarter level of validation
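The severity-based decision described above can be sketched as follows. All class and field names are hypothetical assumptions; the point is only that the script itself classifies failures and decides whether to abort or continue:

```python
# Hypothetical sketch of severity-aware failure handling in a test script.
from enum import Enum

class Severity(Enum):
    FATAL = "fatal"          # e.g. window not found, mandatory control unusable
    NON_FATAL = "non_fatal"  # e.g. optional control missing, displayed-text mismatch

class StepFailure(Exception):
    def __init__(self, severity, window, obj, message):
        super().__init__(message)
        self.severity = severity
        self.window = window  # window where the operation failed
        self.obj = obj        # object (control) name

def run_step(step, log):
    """Run one step; log any failure with context; return False to abort."""
    try:
        step()
    except StepFailure as f:
        # Record the context needed for manual investigation.
        log.append({"window": f.window, "object": f.obj,
                    "severity": f.severity.value, "detail": str(f)})
        return f.severity is not Severity.FATAL
    return True

def step_ok():
    pass  # e.g. a successful click or data entry

def step_text_mismatch():
    raise StepFailure(Severity.NON_FATAL, "Login", "lblTitle",
                      "displayed text mismatch")

log = []
for step in (step_ok, step_text_mismatch):
    if not run_step(step, log):
        break  # fatal failure: stop the test case

print(log)
```

Here the non-fatal text mismatch is logged with its window and object name, but execution continues; a fatal failure would stop the test case at that step.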


This work by Albert Gareev is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.