When do I stop testing?
Bigcommerce is a highly customer-facing product. With the number of merchants we support and onboard every day, it can be a little nerve-racking to release particularly sensitive functionality such as payments, shipping, or tax... the list is endless.
As a Quality Engineer, I try to think of everything that could go wrong and pay close attention to maintaining the quality of the product. At the same time, it is just as important to maintain the team's velocity and ship these features on time.
A key balancing act in my job is being able to answer one simple question - “When do I stop testing?”, or in other words, “Have I tested enough?”
To answer this make-or-break question, I generally follow a set of steps for every project-
Create & Share Test Plan-
It is common practice to create test plan documents, right? I suggest sharing them as well. At BC, QEs share the test plan for a project at two levels: first with their fellow QEs, and second with their immediate project team. Sharing it with fellow QEs surfaces areas of testing that might have been missed the first time. Sharing it with the project team (which includes the Product Owner, Project Manager and developers) really helps filter out areas that will not be touched at all, and scopes the testing areas better. The test plan is also shared on the BC intranet for future reference.
Once development has begun, it's always good not only to write automation code but also to review developers' PRs - this gives insight into the precise code changes, making it easier to create test scenarios.
Automate important scenarios first-
When I get around to automating scenarios, I write the most critical ones first and share them with the project team. Then, as I build out more automation scenarios, I make sure the tests run for every PR that's raised - thus reducing redundant manual testing effort.
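The idea of flagging critical scenarios so they run on every PR can be sketched as follows. This is a hypothetical illustration, not BC's actual tooling: a tiny registry decorator marks the must-run tests, and CI would invoke only that subset per pull request.

```python
# Hypothetical sketch: a registry that lets the suite mark its most
# critical scenarios so CI can run just those on every PR.
CRITICAL = []

def critical(fn):
    """Decorator marking a test to run on every pull request."""
    CRITICAL.append(fn)
    return fn

@critical
def test_order_total_includes_tax():
    # Assumed example values for illustration only.
    subtotal, tax_rate = 100.00, 0.08
    assert round(subtotal * (1 + tax_rate), 2) == 108.00

def test_promo_banner_copy():
    # Not marked critical - runs only in the full suite.
    assert "sale" in "summer sale banner"

def run_pr_checks():
    """Run only the critical subset; return how many tests ran."""
    for test in CRITICAL:
        test()
    return len(CRITICAL)
```

In practice the same effect is usually achieved with test-framework tags (e.g. pytest markers) selected from the CI job, but the principle is the same: the critical subset is cheap enough to gate every PR.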
Often, test scenarios or bugs are not prioritised with the real number of users of that feature in mind. At BC, the data team is super helpful in pulling real usage data for a feature - which raises or lowers the priority of a test scenario or issue. For example, say a new modal window is not loading its CSS properly, but only on IE. It's always good to fetch some data on how many BC merchants have used the IE browser in the last quarter or so. If the numbers are really low, the issue automatically becomes lower priority, and vice versa.
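That usage-driven triage can be expressed as a small rule. This is an illustrative sketch - the function name, priority scale and thresholds are assumptions, not BC's actual process:

```python
# Illustrative sketch (names and thresholds are assumptions): demote
# or promote an issue's priority based on the affected browser's real
# usage share over the last quarter.
def adjust_priority(base_priority: str, usage_share: float) -> str:
    """base_priority is 'P1'..'P3' (P1 highest); usage_share is the
    fraction of merchants on the affected browser."""
    order = ["P1", "P2", "P3"]
    idx = order.index(base_priority)
    if usage_share < 0.01:            # under 1% of merchants: demote
        idx = min(idx + 1, len(order) - 1)
    elif usage_share > 0.25:          # widely used: promote
        idx = max(idx - 1, 0)
    return order[idx]

# e.g. a CSS glitch seen only on IE, used by 0.4% of merchants:
print(adjust_priority("P2", 0.004))   # "P3"
```

The exact thresholds matter less than the habit: attach a real usage number to the bug before arguing about its priority.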
Once all testing is complete from the QE's end, arranging a “Blitz session” is always a good idea. Folks from various teams are invited - QE, Dev, Sales, PM, PMO, etc. Everyone gets easy access to the new feature on a testing environment and tries to “break the feature”. Each new person brings a different testing perspective, and rewarding whoever finds the most critical bug can be a good add-on too! Post-blitz, all issues are triaged by the team and fixed accordingly.
Automation Test Suite-
Another way to test smartly is to filter the automation tests down to those covering the feature being changed - and to make sure those are automated during the sprint. Say the new feature is related to payments: if I run all payment-related automation tests once the code is complete, I immediately cut out a lot of manual effort. That said, once testing is complete, it's a good idea to run the entire test suite as a final sanity check.
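Filtering the suite by feature amounts to tagging tests and selecting on the tag. A minimal sketch, assuming a hypothetical tagging scheme (the test names and tags below are invented for illustration):

```python
# Minimal sketch (assumed tagging scheme): pick only the automated
# tests tagged with the feature under change, run those first, then
# the full suite as a final sanity pass.
def select_tests(all_tests: dict, feature: str) -> list:
    """all_tests maps test name -> set of feature tags."""
    return sorted(name for name, tags in all_tests.items()
                  if feature in tags)

suite = {
    "test_checkout_total": {"payments", "checkout"},
    "test_refund_flow": {"payments"},
    "test_shipping_rates": {"shipping"},
}
print(select_tests(suite, "payments"))
# ['test_checkout_total', 'test_refund_flow']
```

Most frameworks support this natively (e.g. pytest markers with `-m payments`), so the subset run and the full sanity run can be two CI jobs over the same suite.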
As I follow this list of ideas, by the end of the project the entire team gains confidence in the functionality and gets excited for the release rather than anxious. Following these steps has not only helped me test better but also test smarter - by involving fresh pairs of eyes, automating the redundant tests and, most importantly, knowing when to stop testing.