This article was published on StickyMinds – The TERMS for Test Automation Risk or Success, March, 2019. People have always been keen to use tools. In fact, the use of tools helped make us sapient beings, as we evolved our utensils from primitive objects to complex mechanisms. All that history has led to our current […] ...
This article was published on StickyMinds – Lessons Learned Testing Angular Applications, December, 2017. Web applications have evolved from simplistic forms to highly interactive screens. Implementation of all these interactions requires a lot of JavaScript code on the front end—that is, code that is run by the browsers on users’ devices. When there’s a lot […] ...
This article was published on StickyMinds – Methods and Tools for Data-Driven API Testing, September, 2017. Software testing has many forms and breeds, but one major distinction has always been based on the approach—either working with the code or interacting with the product. The former was typically a prerogative of programmers while testers have concerned […] ...
This article was published on StickyMinds – What Testers Need in Their Accessibility Testing Toolkits, July, 2017. The concept that software should be usable by the widest possible audience has been around for more than twenty years, yet for quite a while it remained out of the mainstream of testing and development efforts. This has […] ...
Another day, another good question on Quora. For years, I’ve been answering “what is performance testing?” in a variety of ways. In technical terms, I talk about process, tools, scripts, measurements, and analysis. More often, though, I need to convey the concept to a non-technical, or at least not very technical, person. Finding a […] ...
Despite all the critique and challenges, automation is a valuable aspect of a testing strategy. Every new automator needs to answer questions like this: “How do you choose which test cases to automate?” While everything in testing is highly context-specific, we may try to give an answer through a general idea supplemented with concrete examples. Cross-posting my […] ...
This article was published on StickyMinds – “Hidden Parts of the Performance Equation”, April, 2016. The Performance Equation Many teams decide to put together a “test bed” of servers and network infrastructure, develop some scripts simulating user requests, run the whole thing against the application, and see if they can satisfy the business requirements. And […] ...
Well, it’s November 2015, and I learned that my site made it to the Top 5 “Test Automation” blogs in 2014 as per TEST BUFFET. What I find funny is that my site isn’t included under Software Testing at all. We certainly need to do a better job educating people that Automation is a sub-service within Testing.
This writing cross-references the post Manage Focus Of Your Attention with regard to the concept of Shadow Work. A quick reminder: in economics, shadow work refers to unpaid labor in the form of self-service. Shadow work has one or more of the following attributes. Transferring part of the service from company to […] ...
This is a technical entry with my research-and-experiment notes. Feel free to add or argue. Try at your own risk. I continue my study journal figuring out HP ALM configuration. This is the second entry about customization with VBScript. General Idea A typical issue/defect management workflow is based on rules specific to the Role and […] ...
This is a technical entry with my research-and-experiment notes. Feel free to add or argue. Try at your own risk. I continue my study journal figuring out HP ALM configuration. This entry is about customization with VBScript. General Idea How many times have we had to reopen failed fixes? How many times have regression defects ruined testing […] ...
This is a technical entry with my research-and-experiment notes. Feel free to add or argue. Try at your own risk. I continue my study journal figuring out HP ALM configuration. This entry is about issue-specific calls. Overview While creating or editing issue entries, users trigger internal ‘events’ in the ALM engine. Each event has a corresponding […] ...
This is a technical entry with my research-and-experiment notes. Feel free to add or argue. Try at your own risk. I continue my study journal figuring out HP ALM configuration. This entry begins a series of Workflow customization notes. Overview The Workflow Customization view (1) lists scripting options for List Dependencies (2), the New Issue dialog (3), the Edit […] ...
This is a technical entry with my research-and-experiment notes. Feel free to add or argue. Try at your own risk. I continue my study journal figuring out HP ALM configuration. This entry is dedicated to customization of Group Permissions. Groups The Groups and Permissions (1) view lists all defined groups (2) and has tabs for group […] ...
This is a technical entry with my research-and-experiment notes. Feel free to add or argue. Try at your own risk. I continue my study journal figuring out HP ALM configuration. This entry is dedicated to customization of “Project Lists” mentioned yesterday – these are enumeration data objects you need to create when declaring a List data […] ...
This is a technical entry with my research-and-experiment notes. Feel free to add or argue. Try at your own risk. This entry is dedicated to customization of “Project Entities” - i.e., data objects in QC, with examples from the “Defects” module (or “Issues”, as I renamed it). Overview The Project Customization view (1) lists components on […] ...
This is a technical entry with my research-and-experiment notes. Feel free to add or argue. Try at your own risk. ..And here it is - HP ALM version 12. Test Director (TD) became Quality Center (QC) which is now rebranded as Application Lifecycle Management. The main positive thing I notice immediately - it’s not painfully […] ...
For any project, it’s typical to conduct an assessment and evaluation of tools before acquiring licenses and putting them into use. Below I’m sharing an evaluation matrix composed from my review of accessibility requirements. Requirement Description Checking By Tool Review by person 1.1 Text Alternatives Provide text alternatives for any non-text content 1.1.1 (A) - Alternative […] ...
Class Description Content Parser-Checker. Examples: WebAIM’s WAVE, AChecker, SortSite. Tools that process inner elements of the document (tags) and check whether their presence/absence and structure comply with a predefined set of rules. Mostly helpful for testing of requirements: Perceivable, Robust. Well-developed tools are quite useful for quick and cheap catching of obvious bugs. Testers can use […] ...
Preface If you have just installed the WAVE toolbar, the variety of available commands surely looks overwhelming. And, frankly, professional Accessibility testing requires specialized knowledge and exploratory testing skills. However, there are a few quick tests that are relatively simple to perform for a basic assessment and identification of major accessibility barriers. Quick Tests Page Scan Accessibility Purpose […] ...
To perform testing, GUI automation scripts need to encapsulate the following components: Test data, used for input and verification. Service functionalities, like reporting, data retrieval, etc. GUI mapping – a set of logical names of GUI controls mapped to their physical properties. GUI operation, i.e. recognition of controls, sending commands, retrieving property values. Test instructions (test […] ...
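The separation between GUI mapping and test instructions can be sketched in a few lines. This is a minimal, tool-agnostic Python illustration; the page name, control names, and locators are hypothetical examples, not taken from any specific framework:

```python
# Minimal sketch of separating GUI mapping from test instructions.
# All names below (login_page, txt_username, etc.) are hypothetical.

# GUI mapping: logical control names -> physical locator properties
GUI_MAP = {
    "login_page": {
        "txt_username": {"id": "user-field"},
        "txt_password": {"id": "pass-field"},
        "btn_submit":   {"xpath": "//button[@type='submit']"},
    }
}

def locator(page, control):
    """Resolve a logical control name to its physical locator."""
    return GUI_MAP[page][control]

def login_steps(username, password):
    """Test instructions reference only logical names, so a change
    in the GUI means updating GUI_MAP in one place, not every script."""
    return [
        ("type",  locator("login_page", "txt_username"), username),
        ("type",  locator("login_page", "txt_password"), password),
        ("click", locator("login_page", "btn_submit"),   None),
    ]
```

The point of the indirection is maintainability: the test data, the mapping, and the instructions each live in their own layer and can change independently.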
Scalability is an integral characteristic comprising the following: How well the volume of test cases can be extended without linear (or even geometric) growth of the effort for creation and maintenance. How well the same testing coverage can be applied to different environments. How well the framework of the solution supports creation and integration of tests by reusing […] ...
Maintainability is an integral characteristic comprising the following: Updates in test logic and/or GUI mapping caused by changes in the application under test. Design or refactoring of automation for the purpose of expanding coverage. Expansion or update of the data set. Execution, testing, and debugging of test scripts. New environment setup. Data changes […] ...
The performance of automated tests must be optimized by design and implementation to achieve the best coverage in the shortest time possible within the constraints of the context. This should be achieved through configurable synchronization parameters and proper design of automated test cases and test scenarios. Failure criteria: Execution of test cases known to fail, i.e. execution of […] ...
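One common way to implement "configurable synchronization parameters" is a polling wait whose timeout and interval come from configuration rather than being hard-coded into each script. The sketch below is a generic Python illustration under that assumption; the `SYNC` dictionary and function names are hypothetical:

```python
import time

# Synchronization parameters kept configurable (e.g., loaded from a
# config file per environment) instead of hard-coded in test logic.
SYNC = {"timeout": 10.0, "poll_interval": 0.5}

def wait_until(condition, timeout=None, poll_interval=None):
    """Poll `condition` until it returns True or the timeout expires.

    Returns True on success and False on timeout, so the caller can
    decide whether to fail the step or skip dependent checks.
    """
    timeout = SYNC["timeout"] if timeout is None else timeout
    poll_interval = SYNC["poll_interval"] if poll_interval is None else poll_interval
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return condition()  # one last check at the deadline
```

Compared to fixed sleeps, a polling wait returns as soon as the condition holds, which is what keeps total execution time close to the minimum the application allows.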
Attendance Automated tests must be designed for unattended execution. Failure criteria: A test script may stop or break execution at any moment. Manual action is required to continue the execution. A full restart is required if execution was stopped. Upon error or failure, execution skips to the next script, leaving gaps in coverage. Scripts don’t log errors […] ...
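The unattended-execution requirement can be sketched as a runner that logs every failure and continues with the next test case instead of halting the run. This is a minimal Python illustration of the principle, not any particular tool's runner; the function and test names are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("runner")

def run_unattended(test_cases):
    """Run each (name, test_fn) pair without manual intervention.

    A failure in one case is logged and recorded in the results,
    and execution continues with the next case, so a single error
    neither stops the run nor forces a full restart.
    """
    results = {}
    for name, test_fn in test_cases:
        try:
            test_fn()
            results[name] = "passed"
        except Exception as exc:  # log the error, never halt the run
            log.error("Test %s failed: %s", name, exc)
            results[name] = "failed"
    return results
```

The returned results map can then feed an execution report, so the team sees exactly which cases failed rather than discovering a half-finished run the next morning.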
Automated tests must be delivered as a software product which can be used (operated) by testers and other team members without special technical skills in programming or automation tools. Operation requirements for all setup and preparation procedures: Must require only general computer skills; Must follow a single standard; Must use centralized and unified configuration file(s). […] ...
Automated tests, as a product, should be available for use by any tester (and other team members) with minimal technical skills required. This requirement heavily applies to the following procedures: Automation setup before execution. Launching execution of scripts. Reviewing test execution reports. This requirement somewhat applies to the following procedures: Performing project-specific environment […] ...
Transparency of coverage should include written information about the following: Implemented Test Coverage – coverage of each and every automated test case, including test purpose and main execution steps. Executed Test Coverage – automatically generated test execution report, with each and every automated test case that was executed, including test purpose, all execution steps, and […] ...
Creation effort maps directly to costs - one can compare creation expenses with potential savings. Creation effort is an integral characteristic comprising the following: Test design or re-design of manual test cases to convert from human judgment to mechanistic verification. Automation design and development, including creation of scripts and any reusable components. Swiftness and efficiency […] ...
Automation is fundamentally limited to mechanical verification (aka Checking) – comparison of values only. However, with proper design, automation may simulate many usage scenarios, triggering possible problems that are either detectable through comparison or directly impact the flow of test execution. A combination of a cheap, high volume of automated tests with human attention and […] ...