
Business Value – Purpose

Posted by Albert Gareev on Jun 27, 2014 | Categories: Documents, Requirements

By industry-standard definitions, test automation is created to extend coverage, save test execution time, allow for more test iterations, detect defects earlier, or all of these together. Therefore, only automation that has been created and is successfully used on a regular basis should be considered as adding value.

Coverage assessment must be done with great caution. Regardless of the purpose, converting from manual to automated testing may result in a significant overall loss of coverage: humans observe and catch all kinds of issues while executing test cases, whereas automation does exactly and only what was scripted, which is fundamentally limited.

 

Requirement: Smoke Testing
Coverage description: Broad but shallow functional coverage. A few test cases per application area. Coverage of main business rules only.
Assessment: Valuable asset in a project with frequent (daily) code deployments; otherwise, very limited value.

Requirement: Basic Regression Testing (sometimes also called Sanity Testing)
Coverage description: Area-focused functional coverage aiming to check all business rules.
Assessment: Valuable asset if it provides complete coverage without a need to go back to the same area and complete this level of coverage manually.

Requirement: Full Regression Testing
Coverage description: Area-focused functional coverage aiming to check all business rules and important system specification rules.
Assessment: Valuable asset if it provides complete coverage without a need to go back to the same area and complete this level of coverage manually.

Requirement: GUI Testing
Coverage description: Screen-oriented coverage, verifying UI elements. It does not, however, include “Look and Feel” coverage.
Assessment: Valuable asset in a project where UI appearance changes based on business rules, or in a project with a high chance of unwanted UI changes.

Requirement: Data Entry
Coverage description: Data creation (a non-testing task).
Assessment: Valuable asset if it saves time in preparing the test environment.

 

Failure criteria

  • Unfulfilled purpose – the created automation can’t be used for the purpose it was created for.
  • Cost of purpose – using the automation costs more than the purpose itself is worth.
  • False negatives – automation consistently misses defects it was specifically targeted to find.
  • Too many* false positives – automation reports failures that are not failures.

* A certain, context-defined amount of false positives is anticipated: cases when automation detects a change, but the change is not a defect.
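
A minimal sketch of that distinction, with illustrative names (the result categories and comparison helpers below are assumptions, not a specific tool): a hard check against an explicit expectation reports a failure, while a check against a recorded baseline only reports a change to be reviewed by a human.

```python
# Illustrative sketch only: names and categories are assumptions.
from enum import Enum


class CheckResult(Enum):
    PASS = "pass"        # matches the explicit expectation or baseline
    FAIL = "fail"        # violates an explicit expectation (candidate defect)
    CHANGED = "changed"  # differs from the recorded baseline; needs human review


def compare_to_expectation(actual, expected):
    """Hard check: a mismatch here is reported as a failure."""
    return CheckResult.PASS if actual == expected else CheckResult.FAIL


def compare_to_baseline(actual, baseline):
    """Soft check: a mismatch is a detected change, not automatically a defect."""
    return CheckResult.PASS if actual == baseline else CheckResult.CHANGED


if __name__ == "__main__":
    # The page title is an explicit business rule; the footer text is only a baseline.
    print(compare_to_expectation(actual="Order Confirmation", expected="Order Confirmation"))
    print(compare_to_baseline(actual="(c) 2014 Acme Corp", baseline="(c) 2013 Acme Corp"))
```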



  • One response to "Business Value – Purpose"

  • Jeff Lucas
    2nd July 2014 at 11:08

    Albert – Thanks for posting this. My particular context for automation has been a single tester supporting teams of 4 – 12 people, so I am wondering if there are differences between large teams and small teams. Could you address some of these?

    [ Albert’s reply –
    Hi Jeff,
    You’re nailing many critical points that can turn automation either into a waste or into an asset.
    My post was written from the perspective of an Automation Lead whose team supports multiple projects in a large organization.
    See my answers below.
    Thanks! ]

    1) I found it better to treat automation as a tool to help manual testing go faster after a build is deployed. If that is the purpose, then does the cost consideration still apply? I spent a lot of resources in preparing for deployments, including a well-maintained script set.

    – Everything still has a cost on the project. You might be sheltered from this aspect by your lead or manager, but they still have to justify the jobs and the costs.
    I treat different kinds of automation differently; business purpose is the main category. It might be a cheap, “one time use” script for data creation that simply saves manual effort, which in turn allows more testing to be done as a trade-off. Or it might be a large-scale test suite performing 10,000 verifications in unattended fashion while automatically documenting all coverage – a great asset in another context, where the cost of its creation and maintenance is justified.
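
    For illustration, a minimal sketch of such a “one time use” data-creation script (the field names and output file are placeholders, not from a real project): it only prepares records for the test environment and verifies nothing.

    ```python
    # Illustrative sketch only: field names and output file are placeholders.
    import csv
    import random

    FIRST_NAMES = ["Alice", "Bob", "Carol", "David", "Eve"]
    CITIES = ["Toronto", "Montreal", "Ottawa", "Vancouver"]


    def make_customers(count):
        """Generate simple customer records for seeding a test environment."""
        for i in range(1, count + 1):
            yield {
                "customer_id": f"TEST-{i:05d}",
                "name": random.choice(FIRST_NAMES),
                "city": random.choice(CITIES),
                "credit_limit": random.choice([500, 1000, 5000]),
            }


    if __name__ == "__main__":
        with open("test_customers.csv", "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["customer_id", "name", "city", "credit_limit"]
            )
            writer.writeheader()
            writer.writerows(make_customers(200))
        print("Wrote 200 test customer records to test_customers.csv")
    ```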

    2) All of my automation was created using a tree structure, where leaf and twig tests were dependent on the underlying branch and trunk tests. If that is the structure, then wouldn’t association with smoke, functional, etc. become more of a matter of selection than form?

    – Not sure I understand here.
    You described *form* of coverage, I wrote about *purpose* of coverage.
    For example, the purpose of a “Smoke Test” – or as I call this type, “Build Acceptance Check” – is quick feedback on the health of the build: if it’s too broken, we’d require redeployment instead of further testing. The feedback is needed as soon as possible, typically in an hour or less. This dictates the level of functional coverage: broad but very shallow.
    At the same time, if you’re in a context where instead of daily deployments you get the build monthly, the value of a Build Acceptance Check becomes minimal, and it might not be worth implementing and maintaining.
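
    For illustration, a minimal sketch of such a check (the URLs are placeholders for an assumed test environment): each probe asks only whether a key area responds at all, so the whole run finishes well within the feedback window.

    ```python
    # Illustrative sketch only: the URLs below are placeholders.
    import sys
    import urllib.request

    SMOKE_URLS = [
        "http://test-env.example.com/login",
        "http://test-env.example.com/orders",
        "http://test-env.example.com/reports",
    ]


    def is_alive(url, timeout=10):
        """Return True if the page answers with HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except OSError:  # covers URLError, HTTPError, and timeouts
            return False


    if __name__ == "__main__":
        broken = [url for url in SMOKE_URLS if not is_alive(url)]
        if broken:
            print("Build rejected; unreachable areas:", *broken, sep="\n  ")
            sys.exit(1)  # ask for redeployment instead of further testing
        print("Build accepted for further testing.")
    ```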

    3) I found that it was best to avoid associating results with “pass/fail” or “false positives”. If a problem is encountered, would it be better to report “Expected xxx but got yyy”? In that paradigm, every result is verified before submitting defects.

    – Every result must be investigated in any case.
    Automation reporting is a little art in itself. I have blogged about it quite extensively; here’s an example of the requirements and a coded example.
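
    For illustration, a minimal sketch of “Expected xxx but got yyy” style reporting (the class and function names are placeholders, not the framework from those posts): every verification records both values, and nothing is auto-labeled a defect.

    ```python
    # Illustrative sketch only: names are placeholders.
    from dataclasses import dataclass


    @dataclass
    class Verification:
        description: str
        expected: object
        actual: object

        def line(self):
            """Render one reporting line; a DIFF is left for a human to investigate."""
            if self.expected == self.actual:
                return f"OK   {self.description}: got {self.actual!r} as expected"
            return f"DIFF {self.description}: expected {self.expected!r} but got {self.actual!r}"


    def report(checks):
        """Print every verification, matched or not."""
        for check in checks:
            print(check.line())


    if __name__ == "__main__":
        report([
            Verification("order total", expected="105.00", actual="105.00"),
            Verification("status label", expected="Shipped", actual="Pending"),
        ])
    ```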

    4) With a “computer-assisted manual test” approach, if a large number of false positives were encountered, I would tend to delete the automated test and simply schedule manual verification until it could be investigated. Is your post more directed toward teams trying to implement fully automated tests?

    Thanks in advance for your responses.

    – I rather see the whole of testing as exploration and discovery of new information PLUS confirmation that known, existing behavior hasn’t changed. If you delegate the latter to automation to any extent, you had better make sure you can rely on it. I prefer NOT to create unreliable automation, though it takes an effort to educate stakeholders about the distinctions between skilled human testing and mechanical verification.

    Hope this helps – feel free to follow up.
    Thanks,
    Albert

This work by Albert Gareev is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.