Been there. Change one tiny thing, and 20 tests fail all over the place. But hey, at least we had ~95% test coverage! /s
The longer a piece of code has survived in production, the more "trusted" it becomes, approaching but never reaching 100% "trust" (I can't think of a more precise word at the moment).
For tests it's similar: the longer they have remained unchanged while also proving useful (e.g. catching stuff before a merge), the more trusted they become.
So when any code changes, its "trust level" resets to zero at that point, whether it's runtime code or test code. The only exception might be if the test code reads from a list of inputs and expected outputs, and the only change is adding a new input/output to that list, without modifying the test code itself.
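For illustration, here's a minimal table-driven sketch in C (the reverse() helper and the cases are invented for the example): the checking logic stays frozen, and the only thing that ever changes is the table.

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy function under test, inlined so the example is self-contained. */
    static void reverse(const char *in, char *out) {
        size_t n = strlen(in);
        for (size_t i = 0; i < n; i++) out[i] = in[n - 1 - i];
        out[n] = '\0';
    }

    /* The "list of inputs and expected outputs": adding a row here
       doesn't touch the test code itself. */
    static const struct { const char *input, *expected; } cases[] = {
        {"abc",     "cba"},
        {"racecar", "racecar"},
        {"",        ""},
    };

    int main(void) {
        char buf[64];
        for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
            reverse(cases[i].input, buf);
            assert(strcmp(buf, cases[i].expected) == 0);
        }
        puts("all cases pass");
        return 0;
    }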
Tests that change too frequently can't be trusted, and chances are those tests are at the wrong level of abstraction. That's how I see it at least.
Just been bitten by a bug in a production system... it had been hiding silently in the code for more than 10 years!
It just means that for 10 years this code path was never taken (the conditions for this specific error case were never met) :-(
Actually, it would be useful monitoring information to know which paths are "hot" (taken almost constantly since the beginning), "warm" (taken from time to time) or "cold" (never executed). It could help build targeted trust. I guess it might be possible for VM languages (like those based on the JVM) because the VM could monitor this... but it might be harder for machine code.
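Even for machine code it could be hand-rolled with per-path counters. A rough sketch in C (the path names and the 10% "hot" threshold are arbitrary illustrations):

    #include <stdio.h>

    enum { PATH_OK, PATH_RETRY, PATH_FATAL, PATH_COUNT };

    static unsigned long hits[PATH_COUNT];
    static const char *names[PATH_COUNT] = { "ok", "retry", "fatal" };

    /* Place at the top of every branch worth tracking. */
    #define PATH_HIT(p) (hits[(p)]++)

    static void report(void) {
        unsigned long max = 0;
        for (int i = 0; i < PATH_COUNT; i++)
            if (hits[i] > max) max = hits[i];
        for (int i = 0; i < PATH_COUNT; i++) {
            const char *heat = hits[i] == 0        ? "cold"
                             : hits[i] * 10 >= max ? "hot"   /* >= 10% of busiest */
                                                   : "warm";
            printf("%-6s %8lu  %s\n", names[i], hits[i], heat);
        }
    }

    int main(void) {
        /* Simulated workload: the fatal path is never taken, like the
           branch that hid for 10 years. */
        for (int i = 0; i < 1000; i++) PATH_HIT(PATH_OK);
        for (int i = 0; i < 7; i++)    PATH_HIT(PATH_RETRY);
        report();
        return 0;
    }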
This could be interesting. Unfortunately it'd be a performance hog to do. Some things do work this way already (see profile-guided optimization in compilers).
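Profile-guided optimization is exactly this collect-then-reuse loop, just aimed at speed rather than trust. With GCC the workflow looks roughly like this (Clang is similar); the build commands are in the header comment:

    /* pgo_demo.c -- build instrumented, profile, then rebuild with the profile:
     *
     *   gcc -O2 -fprofile-generate pgo_demo.c -o pgo_demo
     *   ./pgo_demo              # representative run; writes *.gcda profile data
     *   gcc -O2 -fprofile-use pgo_demo.c -o pgo_demo
     */
    #include <stdio.h>

    int main(void) {
        int evens = 0;
        for (int i = 0; i < 1000000; i++)
            if (i % 2 == 0)     /* the profile records how often this is taken */
                evens++;
        printf("%d\n", evens);
        return 0;
    }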