End-to-end regression testing, in particular web and mobile testing, has historically been done in three ways:
- Manual testing. Someone writes descriptions of what to test and how, called "test cases", in a product like TestRail or, often, Excel. Then, when a release is ready, a group of humans called "testers" read these scripts and conduct the "test". This usually amounts to verifying that the functionality described in each test case works according to the specification in that exact test case.
- Automation testing. Similar to the above, but humans write code to "automate" those test cases, so there is (supposedly) no need for humans during the "test" phase because it now runs automatically. Prime examples of frameworks humans use to write these cases are Selenium, Appium, and Cucumber.
- Record and Play. Some companies went even further and created "recorders" which testers can use to record their actions and "kind of" produce a test. I say "kind of" because you would most probably need to write the verification part yourself.
Teams then calculate "test coverage" for their application by dividing the number of automated test cases by the total number of automated and manual test cases.
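To make the metric concrete, here is that calculation with made-up numbers (the case counts are purely illustrative):

```python
# Hypothetical numbers for illustration only.
automated_cases = 200
manual_cases = 600

# "Test coverage" as defined above: automated / (automated + manual).
coverage = automated_cases / (automated_cases + manual_cases)
print(f"{coverage:.0%}")  # 25%
```

Note that nothing in this formula says anything about how much of the application's actual behavior is exercised, which is part of why the number is so arbitrary.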
Well, this all sounds great. In reality, despite years of Automation and especially Record and Play, the average magic "test coverage" number has been stuck around 30%. And that is even though "test coverage" is such an arbitrary number to begin with.
There are some challenges with Automation Testing and Record and Play methodologies for regression testing, specifically:
- Challenge to create - for Automation you basically spend engineering time to create tests, and that's expensive. Record and Play is almost the same, since you still need to write the validations yourself and you need an expert who is familiar with the system. In both cases, the "test case" descriptions usually have to be written in advance.
- Support nightmare. A lot of changes to the system under test will break these tests, and the tests may also have bugs of their own. It becomes a law of diminishing returns: the more tests you have, the more time the QA team spends maintaining them instead of doing something productive, including writing more tests.
- Stability issues. These come in two types:
a. In a lot of cases, both automation and record-and-play tests rely on "XPath" - the precise location of an element in the page structure. Any change in that structure breaks them.
b. The infrastructure is inherently unstable by nature. Timeouts and browser/emulator crashes are normal. The tests usually don't account for this and die at random moments, causing spurious test "failures".
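The XPath brittleness in (a) is easy to demonstrate. Below is a minimal sketch using Python's standard-library `xml.etree.ElementTree` on a simplified, made-up page: a redesign wraps the button in one extra `<div>`, which breaks a position-based path while an attribute-based lookup survives.

```python
import xml.etree.ElementTree as ET

# Two versions of the same (simplified) page; markup and ids are invented.
# The "redesign" in v2 merely wraps the button in an extra <div>.
v1 = ET.fromstring(
    "<html><body><div id='main'><button id='submit'>Go</button></div></body></html>"
)
v2 = ET.fromstring(
    "<html><body><div id='main'><div class='wrap'>"
    "<button id='submit'>Go</button></div></div></body></html>"
)

brittle = "./body/div/button"        # position-based path, like a recorded XPath
robust = ".//button[@id='submit']"   # attribute-based lookup

print(v1.find(brittle) is not None)  # True  - works before the redesign
print(v2.find(brittle) is not None)  # False - the extra <div> breaks it
print(v2.find(robust) is not None)   # True  - id-based lookup still works
```

Real-world pages change far more than this, so even "robust" selectors only delay the breakage.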
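The infrastructure flakiness in (b) can be partially papered over in the harness itself, at the cost of yet more support code. A minimal retry sketch (the flaky step and the exception type are placeholders, not any framework's API):

```python
import time

def with_retries(action, attempts=3, delay=0.0, retry_on=(TimeoutError,)):
    """Re-run a flaky test step a few times before declaring a real failure."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except retry_on:
            if attempt == attempts:
                raise          # out of retries: report a genuine failure
            time.sleep(delay)  # give the browser/emulator time to recover

# Simulated flaky step: "times out" twice, then succeeds.
calls = {"n": 0}
def flaky_click():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("browser did not respond")
    return "clicked"

print(with_retries(flaky_click))  # clicked
```

The catch is that every retry wrapper like this is itself test code that someone has to maintain, which feeds directly back into the "support nightmare" above.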
But most importantly, it all pales in comparison with the amount of time it takes to manually test the software. Picture Richard Smith, CEO of Equifax, hearing the news. Nobody wants to be Richard Smith.
What can we do about it?
Imagine an intelligent system which:
- Can create tests automatically to cover the most important areas every time new functionality is added;
- Does not need to be maintained, other than being told, for some changes, whether the change is a bug or a feature;
- Does not have stability issues.
This is exactly what an "autonomous system" is.
Before we dive into how it works, here are the advantages of autonomous testing.
I'd be happy to discuss it over coffee or in the next article.