Too much test automation?

The other day, a QA engineer was telling me about his legacy at his last job and mentioned that he was most proud of transitioning the team's manual tests to automated ones. Wow, great achievement, I thought, until he started the numbers game. "We automated 57 tests in the first month," he said, "and were able to run 4 times as many test cases within the first quarter." At that point, he lost me, because tests are not about how many you have. Are 57 automated tests better than 10 manual ones, or even 1 well-written automated one? How many of you have seen test reports showing that 95% of the 2,000 tests passed? Does that number really mean anything? What about the 100 that didn't run or were failing? Were those critical functions?

That QA engineer would have impressed me if he had told me that he developed an automated test suite covering the functions most used by customers and the ones that are business critical. It doesn't matter whether that would have been 50 tests or 10 tests or even 5 tests, because the number is irrelevant. The important aspect is the coverage of the tests, not the number.

When I asked the same QA engineer what was most important in the tests he wrote, he beamed with pride as he said: 1) clear variable names and 2) lots of comments. Hmmm... is that really what's most important? What about how long the suite takes to run, or more importantly, how to avoid the pesticide paradox in the tests? For those of you who are not familiar with the term, the pesticide paradox means that a test can only find the bugs it was programmed or designed to find. Any variation on that will go undetected. Test automation can avoid this by building in variation on paths, data, or even permissions.
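To make that last point concrete, here is a minimal sketch in Python of what "building in variation on data" might look like. The function under test, `normalize_username`, is hypothetical, invented purely for illustration; the idea is simply that instead of asserting against one hard-coded input, the test generates many randomized inputs each run, so it can stumble onto cases nobody explicitly designed it to catch.

```python
import random
import string

def normalize_username(name: str) -> str:
    """Hypothetical function under test: strips whitespace, lowercases."""
    return name.strip().lower()

def test_normalize_username_with_variation(seed=None):
    # A fixed seed makes a failure reproducible; no seed gives fresh
    # variation on every run, which is the point of the exercise.
    rng = random.Random(seed)
    for _ in range(100):
        core = "".join(rng.choices(string.ascii_letters, k=rng.randint(1, 12)))
        padding = " " * rng.randint(0, 3)
        candidate = padding + core + padding
        result = normalize_username(candidate)
        # The expected value is computed independently of the code under
        # test, so varied inputs still have a known-correct answer.
        assert result == core.lower(), f"unexpected result for {candidate!r}"

test_normalize_username_with_variation()
```

The same principle extends beyond data: rotating through different navigation paths or running the suite under different user permission levels keeps one suite from silently "immunizing" the product against only a single set of checks.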

Is there such a thing as too much test automation? If you have read this far, I hope you realized that the question itself should have been phrased differently. There can be inefficiencies and redundancy in a test suite. There can also be too much automation coverage when the tests cover "dead code" or irrelevant features. The question should have been "How do you know what your automation test suite is covering?" When you answer this question, it shouldn't be with the number of tests you have. Instead, it should be with what functionality is covered.

A test automation suite is a living entity that needs to be constantly updated to remove redundancy and to ensure that the tests cover the most critical aspects of the system as features are added, modified, or removed. It should be our goal as test engineers to inform those who ask "how many test cases do you have?" that the answer is not in the number of tests but in the coverage and dynamics of the tests.

  • LKyte

    GREAT post!! I have found that sometimes people who don’t quite understand testing put WAY too much importance on irrelevant numbers.