Automated Testing: Your Secret Weapon for Sustainable Software Quality
Automated testing is often discussed during implementation, but its significance is not always fully understood. Organizations usually say they want something automated, but when asked for specifics, they can rarely articulate what they actually need. There is more to automated testing than that, though.
I was once hired to build a quality department with a focus on automation. I asked the VP of Engineering what needed to be automated, and their reply was, “Everything.”
As I explored what they already had, I found there was no direction or guiding principle for the team to work with. There were only a few coded UI tests, and they did little more than log in to the application. Unit testing was practically non-existent, and nothing ran during deployments. There was little test documentation, so the team lacked manual tests to guide them on what needed to be automated.
I went ahead and built out an implementation plan. It spanned several months and covered creating the manual tests, cross-training folks in the automation technology (Cypress), and investigating tools that would let us capture and report test run metrics.
“But, wait,” you may ask, “Weren’t you told automation needed to be built? Why on earth did we need manual tests as well?”
First, we were going to have to do a SOC2 audit, and there would need to be proof of “what” we were testing in the first place, plus pass/fail metrics for each run.
Second, since most of the department had limited knowledge of the applications we were working on, we needed the tests as a guide to thoroughly test the application.
Building the manual cases allowed the group to train each other in application usage and created a training base for new hires. We were building a department after all.
Usually, you can’t just stop regular testing to do this, so we had to adopt an “as needed” approach. Tests were created for specific functional requirements, as well as for common regression cases, during regular development. Taking this route allowed major regression cases, and overall test coverage, to build out over several months. It also let the team identify the cases they ran most often, which became candidates for automation. The goal when building an automation suite is to focus on common regression cases.
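To make that concrete, here is a minimal sketch of what one of those common regression cases might look like as a Cypress test. The routes, data-cy selectors, and test data are hypothetical placeholders rather than anything from the actual application:

```typescript
// cypress/e2e/invoice-regression.cy.ts
// Hypothetical regression case: log in, create a record, verify it appears.
// All selectors, routes, and data below are placeholders for illustration.
describe('Invoice creation (common regression case)', () => {
  beforeEach(() => {
    // Log in through the UI before each scenario.
    cy.visit('/login');
    cy.get('[data-cy=username]').type('qa-user@example.com');
    cy.get('[data-cy=password]').type('not-a-real-password');
    cy.get('[data-cy=login-button]').click();
    cy.url().should('include', '/dashboard');
  });

  it('creates an invoice and shows it in the list', () => {
    cy.visit('/invoices/new');
    cy.get('[data-cy=customer]').type('Acme Corp');
    cy.get('[data-cy=amount]').type('125.00');
    cy.get('[data-cy=save]').click();

    // The new invoice should appear in the invoice list.
    cy.visit('/invoices');
    cy.contains('Acme Corp').should('be.visible');
  });
});
```

Because a test like this creates data, it belongs in a development or test environment, not in Production.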
We also put smoke tests in place: tests that did not change data but merely verified the application was functional. These could be run in any environment, since we didn’t want data creation in a Production setting.
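By contrast, a smoke test only reads. A rough sketch, again with placeholder selectors and endpoints:

```typescript
// cypress/e2e/smoke.cy.ts
// Hypothetical smoke test: read-only checks that the application is up.
// No data is created or modified, so it is safe to run in any environment.
describe('Smoke test', () => {
  it('serves the login page', () => {
    cy.visit('/login');
    cy.get('[data-cy=login-button]').should('be.visible');
  });

  it('responds on the health endpoint', () => {
    // Placeholder endpoint; substitute whatever health check the app exposes.
    cy.request('/api/health').its('status').should('eq', 200);
  });
});
```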
Other automated tests were then introduced, including Unit tests, API tests, and load testing. Load tests became more straightforward to implement once some automation was in place, as we could use the common regression flows as a baseline and simply scale up how often, and how concurrently, they ran.
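As one illustration of the API layer, a hypothetical endpoint check might look like the following; the routes, payload, and response shape are assumptions for the sake of the example:

```typescript
// cypress/e2e/api/invoices-api.cy.ts
// Hypothetical API test: exercises an endpoint directly, bypassing the UI.
// The endpoint, payload, and response fields are assumed for illustration.
describe('Invoices API', () => {
  it('creates an invoice and returns it by id', () => {
    cy.request('POST', '/api/invoices', { customer: 'Acme Corp', amount: 125.0 })
      .then((createResponse) => {
        expect(createResponse.status).to.eq(201);
        const id = createResponse.body.id;

        // Read the record back to confirm it was persisted.
        cy.request(`/api/invoices/${id}`).then((getResponse) => {
          expect(getResponse.status).to.eq(200);
          expect(getResponse.body.customer).to.eq('Acme Corp');
        });
      });
  });
});
```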
Once we had a few automated cases in place, it was time to wire them into the deployment platform. As I noted, different environments received different kinds of cases: a development or test environment received automated tests that change data and run entire scenarios to verify functionality. This allowed us to start pushing for a better Continuous Integration/Continuous Deployment (CI/CD) foundation, meaning the automated tests ran as part of the delivery process, and pass/fail metrics could be generated before a tester even saw the changes.
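One way to surface those pass/fail metrics, though not necessarily how we wired it up, is to have Cypress emit JUnit-style results that the deployment platform can ingest. A minimal configuration sketch, with a placeholder base URL and output path:

```typescript
// cypress.config.ts
// Minimal sketch: emit JUnit-style XML so the CI/CD platform can collect
// pass/fail metrics for each run. baseUrl and output path are placeholders.
import { defineConfig } from 'cypress';

export default defineConfig({
  e2e: {
    baseUrl: 'https://test.example.com',
  },
  reporter: 'junit',
  reporterOptions: {
    mochaFile: 'results/cypress-[hash].xml',
    toConsole: false,
  },
});
```

A pipeline step that runs npx cypress run then produces one XML report per spec file, which the deployment platform can surface before anyone looks at the build.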
As more testing was integrated into the deployment process, the quality of the software improved. Note that automation doesn’t replace the function of having someone test the application. Instead, it reduces the time spent on repetitive testing, allowing testers to focus on the functional changes being made and on any ad-hoc testing of areas flagged during design and implementation as likely trouble spots.
Believe it or not, it didn’t take as long as it sounds to get this off the ground. Once the initial tests are in place, they can be built on over time, steadily increasing the application’s reliability.
In the case of my example above, it took about six months to get application delivery to a noticeably higher level of quality. It reduced the number of incidents and hotfixes to virtually zero, which pleased the business. Plus, we had the metrics to show what we were testing and how often it was passing or failing. Failures defined our next areas of focus, whether that meant improving the application or the automated test code itself.