Over the past years, software and information technology have become increasingly pervasive on a global scale, reaching even developing and least-developed nations through a variety of industries such as banking, healthcare, and education. The variables and parameters under which software is used expand along with its adoption, which means software should be tested against these variables and parameters before release. Although the growing number of variables demands more testing, the time available for testing has not grown proportionately. Interestingly, market competition and the abundance of alternatives compel businesses to release even more frequently. How can test management handle this paradoxical situation while still taking ownership of overall product quality and guaranteeing product acceptance?
The answer to this challenge is effective Test Optimization. Test Optimization has long existed in the testing realm, but as discussed above, it has become more complex in recent years. A decade ago, options were restricted to supporting your product on just the Operating Systems and Browsers (OSBs) from commercial players; there were hardly 8 to 10 combinations to support. Now, even within the space of OSBs, there is much more to test for, including products from the Open Source world. Backward-compatibility scenarios add still more combinations on top of the latest OSBs.
Expanding Scope of Test Efforts in a Diverse Computing Landscape
Besides Operating Systems and Browsers on personal computers and laptops, many more computing devices and service offerings now have to be taken into account: smartphones, electronic readers, hosted solutions, and on-premise solutions, to name a few. If your product has several SKUs, those need to be added to your optimization list as well. With all of these in play, it is not surprising that test organizations are building artifacts just to optimize test efforts as part of their core test plan. A few things to keep in mind when optimizing your test efforts:
- It is crucial to create a list of tested, supported configurations and to periodically verify that the list is still accurate.
- Once the list has been created, the test organization should also prioritize it based on the users it supports; this is crucial to achieving the right test coverage in the constrained time available.
- It is also helpful to have a dedicated QA team specifically for testing all supported combinations, as this can be a cumbersome task if not planned for. Such an outsourced software team may not only have the necessary testing expertise but can also provide the essential infrastructure: images, virtual connections, devices, and more. This is also an area where a company can use on-demand testing solutions, such as Testing as a Service (TaaS), without having to invest significant internal resources.
- Besides the major commercial cloud players such as Amazon, Microsoft, and Google, there are several smaller players who can help you set up customized on-demand offerings to simulate your test bed. Don't dismiss them outright; they can become the basis for your test environment, and leveraging such on-demand solutions can save significant time in test environment setup.
- Re-usability is key to saving time when you are working with a plethora of combinations. Re-use test artifacts, cases, scenarios, frameworks, etc. from previous iterations, but check whether they still apply to the current iteration; past scenarios (especially the platforms tested on) can become obsolete quickly.
- Every tester should be encouraged to build their own testing strategy to make the test optimization effort successful. For example, if a test case were to be run on 5 different combinations, some testers may prefer running the same test case on all 5 platforms before moving to the next case, while others may prefer to finish testing on one platform fully before moving to the next. Each approach has its own pros and cons, so empower your test team to choose the individual strategy that gives them the best testing results.
- When working on an optimized test effort that spans multiple platforms, defect management is another area that needs special attention. The test management team should define the general best practices and protocols for defect management and, within that pre-defined framework, allow testers to exercise creativity in identifying and reporting defects, particularly when defects must be reproduced across multiple platforms.
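The first two points above, building the supported-configuration list and ranking it by the users each combination reaches, can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: the option names and usage shares below are hypothetical placeholders for figures you would pull from your own test plan and product analytics.

```python
from itertools import product

# Hypothetical supported options; real lists come from your test plan.
operating_systems = ["Windows 11", "Ubuntu 22.04", "macOS 14"]
browsers = ["Chrome", "Firefox", "Safari"]

# Hypothetical usage share per option (e.g., from product analytics),
# used to rank combinations by the users they reach.
usage = {
    "Windows 11": 0.55, "Ubuntu 22.04": 0.15, "macOS 14": 0.30,
    "Chrome": 0.60, "Firefox": 0.25, "Safari": 0.15,
}

def prioritized_matrix(*dimensions):
    """Full cartesian matrix of combinations, highest estimated reach first."""
    combos = list(product(*dimensions))
    # Rank by the summed share of each option in the combination.
    return sorted(combos, key=lambda combo: -sum(usage.get(o, 0.0) for o in combo))

matrix = prioritized_matrix(operating_systems, browsers)
print(matrix[0])  # the combination reaching the most users comes first
```

With a ranked matrix like this, a time-boxed test pass simply works down the list as far as the schedule allows, so the combinations covering the most users are always tested first.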
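The two execution strategies mentioned above (running one test case across all platforms before moving on, versus finishing one platform completely first) differ only in iteration order, which a short sketch makes concrete. The case and platform names are hypothetical; both orderings cover exactly the same case/platform pairs.

```python
# Hypothetical test cases and platform combinations.
test_cases = ["login", "checkout", "search"]
platforms = ["Windows/Chrome", "macOS/Safari", "Ubuntu/Firefox"]

def by_case(cases, plats):
    """Run each test case on every platform before moving to the next case."""
    return [(c, p) for c in cases for p in plats]

def by_platform(cases, plats):
    """Finish all test cases on one platform before moving to the next."""
    return [(c, p) for p in plats for c in cases]

# Same coverage, different order: by_case keeps one scenario in the
# tester's head at a time; by_platform avoids repeated environment switches.
print(by_case(test_cases, platforms)[:3])
print(by_platform(test_cases, platforms)[:3])
```

Which order wins depends on what is more expensive in your setup: switching between environments, or re-establishing context for each test case.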
With the increasing set of computing options available to the end user, the complexities around test optimization will only grow. This is where a seasoned test management effort becomes imperative: chalking out the right matrix to test within the bounds of available time, resources, and cost without compromising overall coverage and product quality.
In summary, the points above highlight the importance of building a list of supported combinations and prioritizing it based on the users it supports, so that the right test coverage is achieved in limited time. They also suggest building a specialized team for combination testing and leveraging on-demand testing solutions. Re-using test artifacts and frameworks saves time, and testers should be encouraged to build their own testing strategies to optimize the test effort. Finally, pay special attention to defect management when there are several platforms to test.