Automated Testing and Test Coverage Limitations
Fundamentally, both manual and automated testing rely on a process that takes inputs (i.e., the code being tested) and yields outputs (i.e., whether the code has passed the test, according to set requirements). Automated testing consists of test scripts written against those requirements in order to validate an expected result. For example, a block of code for a smart thermostat might be tested by feeding examples of a room’s current temperature as the input and checking that the outputs (e.g., whether to turn on heat or air conditioning) are within an acceptable range. This is sometimes referred to not as automated testing but as automated checking: a computer automatically plugging a set of inputs into the code and checking the outputs against predefined acceptance criteria.
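To make the thermostat example concrete, the sketch below shows what such an automated check might look like in Python with pytest. The decide_hvac function, its temperature thresholds, and the acceptance criteria are all hypothetical, invented purely to illustrate the input-to-expected-output pattern described above.

```python
# A minimal sketch of "automated checking", assuming a hypothetical
# decide_hvac() function for a smart thermostat. The function, the
# thresholds, and the test cases are illustrative, not from any real product.

import pytest


def decide_hvac(current_temp_c: float, target_temp_c: float = 21.0) -> str:
    """Return which system to run based on the room's current temperature."""
    if current_temp_c < target_temp_c - 1.0:
        return "heat"
    if current_temp_c > target_temp_c + 1.0:
        return "cool"
    return "off"


# Each case plugs in an input and checks the output against a
# predefined acceptance criterion -- the essence of automated checking.
@pytest.mark.parametrize(
    "current_temp, expected",
    [
        (15.0, "heat"),   # well below target: heating expected
        (21.0, "off"),    # at target: neither system should run
        (28.0, "cool"),   # well above target: cooling expected
    ],
)
def test_decide_hvac(current_temp, expected):
    assert decide_hvac(current_temp) == expected
```

Run with `pytest`, each case either passes or fails against its predefined expected output; the computer is checking, not exploring.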
While testing code against set requirements and expected outcomes is essential, it does not paint a complete picture of a product’s readiness for release. Further, test coverage is an imperfect indicator of testing efficacy: coverage cannot be expressed as the number of tests run vs. the number of possible tests, because the number of possible tests is, in theory, infinite. Test coverage is therefore an expression of tests run vs. either the number of tests planned or the number of requirements that must be tested. Both are important metrics, but both have gaps. If the number and types of tests planned do not fully encompass the core functionality of the code, or if the requirements being tested are poorly expressed, it is possible to have 100% test coverage, with every test passing, while still having broken or suboptimal code, as the sketch below illustrates.
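The same hypothetical thermostat can illustrate that gap. In the sketch below, every planned test passes, so coverage measured against the plan is 100%, yet an input that no requirement anticipated (a faulty sensor reading) still produces the wrong behavior. The function, the bug, and the values are invented for illustration.

```python
# A hypothetical illustration of the coverage gap described above: the planned
# test suite passes in full (100% of planned tests), yet the code still
# misbehaves on an input no one planned a test for.


def decide_hvac(current_temp_c: float, target_temp_c: float = 21.0) -> str:
    """Return which system to run based on the room's current temperature."""
    # Gap: a faulty sensor reporting an impossible reading (e.g., -100 C)
    # is treated like any cold room, so heating runs indefinitely instead of
    # failing safe. No planned requirement covered sensor faults.
    if current_temp_c < target_temp_c - 1.0:
        return "heat"
    if current_temp_c > target_temp_c + 1.0:
        return "cool"
    return "off"


# The planned test suite: every planned case passes, so coverage
# reported against the plan is 100% -- but the sensor-fault gap remains.
PLANNED_CASES = [
    (15.0, "heat"),
    (21.0, "off"),
    (28.0, "cool"),
]

if __name__ == "__main__":
    for current_temp, expected in PLANNED_CASES:
        assert decide_hvac(current_temp) == expected
    print("All planned tests passed (100% coverage of the plan).")
    # An unplanned input that no requirement captured:
    print(decide_hvac(-100.0))  # prints "heat" -- heating on a broken sensor
```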