Automated testing: unit, functional, and visual UI testing
October 11, 2018 by Chris Carreck
Automated testing will never fully replace manual testing: the process of having someone who knows and understands the quirks of the software in question, and who can identify abnormal behaviour and the edge cases that cause it. But we can certainly do a lot to make that job easier and more streamlined, letting testers concentrate on the clever stuff while the repeatable tasks are automated.
At CLD, our visual designs are just as important as the quality of the software we produce, so we test at all stages of our software lifecycle. To do this we generally use three types of automated tests:
- Unit testing: code-level unit tests for individual functions.
- Automated functional tests: Selenium scripts run across browsers to replicate user actions on our front ends.
- Cross-browser visual testing and benchmarking: automated page rendering across browsers, plus versioning the visual differences in a page from one release to the next.
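As a sketch of the first layer, a code-level unit test for an individual function might look like the following. The `add_vat` helper is purely hypothetical, standing in for any small function worth testing in isolation; the test uses Python's built-in unittest module:

```python
import unittest

def add_vat(net_price, rate=0.20):
    """Return the gross price after applying VAT (hypothetical helper)."""
    return round(net_price * (1 + rate), 2)

class AddVatTests(unittest.TestCase):
    def test_standard_rate(self):
        self.assertEqual(add_vat(100.00), 120.00)

    def test_zero_rate(self):
        self.assertEqual(add_vat(100.00, rate=0), 100.00)

# Run with: python -m unittest this_module
```

Tests like these run on every build, long before a browser is ever involved.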
Automated functional tests:
We started by playing around with point-and-click record/replay functional tests across our suite of sites, but we soon found these too brittle: they were prone to false positives, and a lot of time went into maintaining them. At that point we moved to Selenium, together with a third-party service from crossbrowsertesting.com (CBT) that lets us run our Selenium tests in multiple browsers at the same time. We've integrated this into our deployment pipeline, so when we commit or deploy one of our sites we can run a suite of standard functional tests across multiple desktop and mobile browsers on CBT's own grid.
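The fan-out across browsers can be sketched as below. The hub URL, credentials, and capability values are placeholders rather than CBT's real API (check your provider's documentation for those); the point is the shape — one test function, a list of browser configurations, and parallel execution:

```python
# Sketch of fanning one functional test out across several browsers via a
# remote Selenium grid. HUB_URL and the capability dicts are placeholders.
from concurrent.futures import ThreadPoolExecutor

HUB_URL = "https://USERNAME:AUTHKEY@hub.example.com/wd/hub"  # placeholder

BROWSERS = [
    {"browserName": "chrome",  "platform": "Windows 10"},
    {"browserName": "firefox", "platform": "Windows 10"},
    {"browserName": "safari",  "platform": "iOS"},
]

def run_smoke_test(caps):
    # A real run would create selenium.webdriver.Remote against HUB_URL,
    # drive the site with driver.get(...), and assert on page elements.
    return f"{caps['browserName']} on {caps['platform']}: queued"

with ThreadPoolExecutor(max_workers=len(BROWSERS)) as pool:
    results = list(pool.map(run_smoke_test, BROWSERS))

for line in results:
    print(line)
```

Running the same script in parallel per browser is what makes the grid approach pay off: one commit triggers the whole matrix.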
Cross-browser visual testing:
This is an area that was costing us a lot of manual time, both in testing and, mainly, in regression-testing our sites for any kind of visual defect. Our developers already use tools such as browser-sync and Browserstack to check their builds cross-browser, but we also wanted to eliminate the time-consuming manual regression testing done after a deployment, while keeping an extra safety net to catch anything that slipped through. By utilising another of CBT's offerings, screen snapshots, we can take a full-length snapshot of a page across a number of desktop and mobile devices, have these rendered out, have any differences analysed, and have the results reported back to us once the test is complete.
This speeds up the cross-browser aspect, since we can identify any abnormalities on each page at a glance. But we also wanted to take this one step further and compare not just across browsers, but also across versions of a screen, to ensure we haven't introduced any unwanted changes into a new release. To do this we introduced another tool, Applitools.com, an "AI"-powered visual testing tool. We can plug it into the CBT pipeline, and it compares one version of a page, the baseline, against a second version. By always setting the current production version of a page as the baseline, on each release we compare the baseline (what we currently have) against the version being deployed. Again, Applitools alerts us to any visual changes between the two.
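The core idea behind a baseline comparison can be sketched without any service at all. This is only the naive pixel-diff version — tools like Applitools layer perceptual matching on top so that anti-aliasing and rendering noise don't trigger false alarms — and a "screenshot" here is just a flat list of RGB tuples:

```python
# Naive sketch of baseline-vs-candidate visual comparison. A screenshot is
# represented as a flat list of (R, G, B) tuples; real tools use perceptual
# matching rather than raw pixel equality.

def diff_screens(baseline, candidate, tolerance=0):
    """Return indices of pixels that differ beyond the tolerance."""
    changed = []
    for i, (a, b) in enumerate(zip(baseline, candidate)):
        if any(abs(x - y) > tolerance for x, y in zip(a, b)):
            changed.append(i)
    return changed

baseline  = [(255, 255, 255), (0, 0, 0), (10, 10, 10)]
candidate = [(255, 255, 255), (0, 0, 0), (40, 10, 10)]

print(diff_screens(baseline, candidate))                # -> [2]
print(diff_screens(baseline, candidate, tolerance=50))  # -> []
```

The tolerance parameter hints at why the "AI" part matters: deciding which differences a human would actually care about is the hard bit, not the diff itself.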
So now we have in our pipeline:
- Unit testing in code
- Functional automated testing on each build
- Visual UI cross browser checks on each release
- Visual UI cross browser regression comparisons on each release
Plus our manual testing effort, which can now concentrate on the human interactions that really can't be replicated by an automation tool: those obscure user patterns and edge cases.
So far we think this is a pretty sweet setup, and we plan to continue rolling out these automation techniques across more of our projects as we go!