QA Test Automation Requirements (4) – Scalability

Posted by Albert Gareev on Jun 09, 2009 | Categories: Documents, Requirements

Original date: 6 May 2009, 12:30pm

Quality Assurance Functional/Regression Automated Testing and Test Automation Requirements: Usability, Maintainability, Scalability, and Robustness.

Scalability Requirements Matrix

– What applications should the Framework be able to handle?

The business need for automating QA/testing activities arises when testing volume, turnover, and coverage requirements make automation worthwhile. Volume means a large number of test cases, which implies a large set of functionalities. High turnover and wide coverage requirements mean a demand for test results within a tight timeframe and to a specified (high) level of depth.
Taken together, this points to enterprise-level software products, tested from both the front end and the back end, and often having several different front ends: Windows desktop, standard web-based, rich web application (Web 2.0 and higher), and UNIX-based applications accessed through a terminal emulator (text-based screen interface).

– When requirements are extended or revisited, what effort is required, and how is the upgrade of testing scripts supposed to be performed?

Upgrade requires a rebuild from scratch. Testing is unavailable until it is done.
Individually developed (or recorded) test cases can be upgraded only by rebuilding them.

Upgrade requires replacement of a module. Testing is unavailable only during the replacement.
Structured (modular) development keeps common functionalities in reusable function libraries. A change in the login procedure, for example, requires changing only the common function that implements the procedure, without impacting all the scripts where it is used. New functionality, such as additional verification requirements specific to a screen, is implemented as a new reusable function called from the scripts that cover that screen.
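The login example above can be sketched as a shared function library. This is a minimal illustration, not the author's actual framework: the driver object, field names, and login flow are all assumptions made for the example.

```python
# Sketch of a reusable function library (all names are illustrative).
# --- common_lib: shared functions reused by every test script ---

def login(driver, username, password):
    """Perform the login procedure in one place.

    If the login screen changes, only this function is updated;
    the test scripts that call it remain untouched.
    """
    driver.type("username_field", username)
    driver.type("password_field", password)
    driver.click("login_button")

# --- a test script simply calls the shared function ---

class FakeDriver:
    """Stand-in driver that records UI actions, for demonstration only."""
    def __init__(self):
        self.actions = []
    def type(self, field, value):
        self.actions.append(("type", field, value))
    def click(self, control):
        self.actions.append(("click", control))

driver = FakeDriver()
login(driver, "qa_user", "secret")
print(driver.actions[-1])  # ("click", "login_button")
```

Because every script imports `login` instead of repeating the steps, a changed login screen means one edit in one place.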

Plug-in / Plug-out
Upgrading, replacing, or adding a new test component DOES NOT bring testing down. No code change is performed at all.
Further evolution of automation frameworks allows automating the automation itself. Hybrid (keyword/data-driven) and business-model-based frameworks raise test case automation to a level of abstraction where no scripting is required (and record/playback is not required either!). Operating with high-level instructions (keywords), a business SME can design or modify a test case (see the Visual Prototyping/Swiftness/Usability requirements matrix).
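The keyword-driven idea can be sketched in a few lines. This is a hedged, minimal interpreter, assuming made-up keywords and screens; real frameworks of this kind add argument validation, reporting, and spreadsheet or table input.

```python
# Minimal sketch of keyword-driven execution (all names are illustrative).
# A test case is data, not code: rows of (keyword, arguments) that a
# business SME could edit in a table, with no scripting required.

results = []

def open_screen(name):
    results.append(f"opened {name}")

def enter_value(field, value):
    results.append(f"entered {value} into {field}")

def verify_text(field, expected):
    results.append(f"verified {field} == {expected}")

# The dispatcher maps each keyword to its implementing function.
KEYWORDS = {
    "OpenScreen": open_screen,
    "EnterValue": enter_value,
    "VerifyText": verify_text,
}

# The "test case" itself: pure data, editable without programming.
test_case = [
    ("OpenScreen", ["AccountDetails"]),
    ("EnterValue", ["account_number", "12345"]),
    ("VerifyText", ["status", "Active"]),
]

def run(case):
    for keyword, args in case:
        KEYWORDS[keyword](*args)

run(test_case)
print(results)
```

Adding a new test component here means registering one more keyword in the dispatcher; existing test cases keep running unchanged, which is the "plug-in / plug-out" property described above.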

– When the time comes to migrate to a new platform (a new Testing Tool), can you keep your assets, or do you have to rebuild everything from scratch?


If all the logic is contained in the code, migration to a new Testing Tool will require a rebuild from scratch.
Proper documentation of the implemented automation (test flows, coverage documents, maintenance guides) reduces the risks and expenses of the rebuild-from-scratch project.


Externalized Business/Test Logic can be supported by any Testing Tool. Only the Framework implementation needs to be ported.
Frameworks that support externally customizable business/test logic (keyword-driven and higher on the evolution curve) preserve the business test assets already gained. This saves not only the automation development expense but also the substantial analysis and design time that was invested in creating the automation.
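The migration scenario can be sketched as follows. The tool names and the adapter interface are hypothetical; the point is only that the business/test logic lives in plain data, so during a tool migration only the thin adapter layer is rewritten.

```python
# Sketch: externalized test logic surviving a tool migration
# (tool names and the adapter interface are hypothetical assumptions).

# Tool-agnostic asset, e.g. loaded from an external spreadsheet or XML file.
test_logic = [
    ("click", "new_order_button"),
    ("type", "quantity", "10"),
]

class ToolAAdapter:
    """Adapter for the old testing tool."""
    def execute(self, step):
        return f"ToolA -> {' '.join(step)}"

class ToolBAdapter:
    """Adapter for the new testing tool; test_logic itself is untouched."""
    def execute(self, step):
        return f"ToolB -> {' '.join(step)}"

def run(adapter, logic):
    return [adapter.execute(step) for step in logic]

old_log = run(ToolAAdapter(), test_logic)   # before migration
new_log = run(ToolBAdapter(), test_logic)   # after migration, same asset
print(new_log)
```

Only the adapter class changed between the two runs; every test case, and the analysis behind it, carried over as-is.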

– When the volume of the dataset can change significantly, will it require rebuilding the scripts, or will some external customization suffice?

Hard-coded volume

Test scripts are built to handle only a certain volume of data.
The most typical examples are a script that verifies only one row in a report, or a script that submits only one record through a multiple-request form.

Unlimited volume

External customization allows scripts to be used with any volume of data required. The user can specify the desired data records as a comma-separated list or as a rule (e.g. “Find and verify all report rows matching the account number”).
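A minimal sketch of such external volume customization, assuming a made-up specification format and report structure: the same script can verify one row, a listed set of rows, or every row matching a rule, depending only on an externally supplied parameter.

```python
# Sketch of external data-volume customization (spec format is assumed).

report_rows = [
    {"id": 1, "account": "A-100"},
    {"id": 2, "account": "B-200"},
    {"id": 3, "account": "A-100"},
]

def select_rows(rows, spec):
    """Interpret the externally supplied specification.

    "1,3"           -> a comma-separated list of row ids
    "account=A-100" -> a rule: all rows matching the account number
    """
    if spec.startswith("account="):
        account = spec.split("=", 1)[1]
        return [r for r in rows if r["account"] == account]
    wanted = {int(i) for i in spec.split(",")}
    return [r for r in rows if r["id"] in wanted]

print(len(select_rows(report_rows, "1,3")))            # explicit id list
print(len(select_rows(report_rows, "account=A-100")))  # rule-based match
```

Changing the data volume now means changing the spec string in an external file; the script itself never needs a rebuild.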

– Does the Framework support functionality below, matching, or exceeding current needs?

Very often an automation project starts as a pilot. After a certain phase, the “Proof of Concept” gives management the visibility to decide whether continuing is worthwhile or the automation is not worth the funds invested.
Alternatively, the initial scope of automation (“just 10 Test Cases“) does not presume any capability to extend the framework.
As a result, a quickly baked framework cannot support growing business and automation requirements. This leads either to downsizing the requirements or to unstable, unreliable test scripts. In either case, the automation is considered “barely useful”.

Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported
This work by Albert Gareev is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.