“Why didn't you find X defect?” you get asked. It’s a valid question. Support spend a lot of their time listening to customers who have fallen foul of these defects. In their eyes, they’re picking up the pieces, listening to customers regale them with stories about how much time X defect is costing them, or how much money Y defect will lose them.
“Actually we did find that defect” you reply, “but the decision was made not to fix it.”
“What about this defect? Or this one?”
“OK, we missed those.”
“Why didn't you find it?”
So why didn't we find it?
Simply because you can’t find everything. You can’t test everything and you can't test for every eventuality.
The challenge is how to communicate this.
In trying to explain this, I came up with a scenario which I call the Haystack Analogy.
At its simplest, the codebase is a haystack, and the tester is a person looking through it for needles. The needles represent all the defects within the codebase.
When you start out developing a piece of software, you will only have in your hand one or two pieces of straw. It’s pretty easy to look at those pieces and discover any needles lurking within.
As your codebase grows, the pieces of straw get stacked upon each other, shuffled around, and invariably needles get mixed in. Then you have external factors, which could come in the form of wind, blowing the haystack around. After many years, you’re presented with a giant haystack, blown about, full of needles, now the job of a tester becomes a lot more difficult.
System, regression and end-to-end testing all have the task of finding as many defects as possible, so here are a few questions you could ask about testing that haystack:
- With 1 person testing, how long would it take to sift through the entire haystack and find all the needles?
- If the whole haystack was searched, and no needles were found, could you be certain that there were no needles present in the first place?
- At what point during your search do you say that you've found as many needles as is possible?
The answer to the first question is not one that can reasonably be answered. It is simply too time-consuming to test the entirety of a system, so I would never expect any tester (or team of testers) to do this.
In answer to the second question, no. Searching the whole haystack can never prove the absence of needles, and no software can ever be shown to be 100% defect free, so that answer is pretty easy.
The last question is where a tester earns their worth. There are a few more questions you can ask off the back of that: should a tester stop after a set amount of time? Should a tester stop when all major features have been tested?
The challenge for a tester is to identify the risk associated with change. If you know a previously tested and working part of the system has not changed, then the risk of not extensively testing that area is low. If, however, a new change has been introduced, or an area of code has been refactored, then the risk that the change has introduced unwanted behaviour is now quite high.
Going back to the haystack, if you've identified which areas have had change, which areas are likely to have been affected by external factors, which areas are crucial to the business, then you can assign a timeframe to test based on estimations for how long each individual area would take to cover.
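That prioritisation can be made concrete. Here is a minimal sketch of the idea in Python; the areas, risk factors, weightings and budget are all hypothetical, made up purely to illustrate splitting a fixed amount of testing time in proportion to risk, not a real methodology from this post.

```python
# A toy sketch of risk-based test planning: score each area of the
# "haystack" by whether it changed, whether external factors touch it,
# and how critical it is to the business, then split a fixed testing
# budget in proportion to that score. All names and weights are invented.

def risk_score(changed, externally_affected, business_critical):
    """Combine the three risk factors into a single weight."""
    score = 1  # every area gets a minimal baseline of attention
    if changed:
        score += 3  # recent change is the biggest risk driver
    if externally_affected:
        score += 2  # e.g. depends on an upgraded library or OS
    if business_critical:
        score += 2  # failure here costs the business the most
    return score

def allocate_hours(areas, total_hours):
    """Split a fixed testing budget across areas by relative risk."""
    scores = {name: risk_score(*factors) for name, factors in areas.items()}
    total = sum(scores.values())
    return {name: round(total_hours * s / total, 1)
            for name, s in scores.items()}

# Hypothetical areas: (changed, externally_affected, business_critical)
areas = {
    "billing":   (True,  False, True),   # refactored last sprint
    "reporting": (False, True,  False),  # runs on a new OS image
    "archive":   (False, False, False),  # untouched for years
}

print(allocate_hours(areas, total_hours=40))
# The refactored, business-critical area gets the lion's share of the time.
```

The exact numbers don't matter; the point is that unchanged, low-risk areas still get a baseline pass, while recently changed or business-critical areas get the bulk of the budget.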
Remember, testing can only ever show that defects are present, never that they are absent. So once those areas have been searched for needles, you as a tester can say how confident you are in the running of the system.
This still doesn't mean that any of the defects found during testing have been fixed - severity, time constraints, cost/benefit considerations and more come into play when deciding whether or not to fix defects. But if a needle does slip through your fingers, at least you can say why it happened.