So the question is, are you going to test it manually?
Case study 1: The help functionality in the UI was broken for months before anyone noticed, because people on the team don't need the help. Integration tests were written to exercise the help functionality. Problem solved.
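For concreteness, a minimal sketch of that kind of integration test, using Playwright's Python API with a made-up URL and selectors standing in for the real app (none of these names come from the actual case study):

```python
from playwright.sync_api import sync_playwright

def test_help_panel_opens():
    # Drive a real browser against the running app and exercise the help UI.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://app.example.com")   # hypothetical URL
        page.click("#help-button")             # hypothetical selector
        assert page.is_visible("#help-panel")  # hypothetical selector
        browser.close()
```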
Case study 2: 'We' insisted on having extensive unit tests for our login logic. Getting code coverage on that functionality ended up being easier said than done, and a bunch of refactoring was necessary to make the tests not suck balls. Said refactoring introduced a number of regressions in the login logic. Every time the login functionality broke, two engineers came to tell me or my teammate that login was broken before the CI build had even completed.
If the human is willing, faster than the automation, and the failures are unambiguous, what's ultimately the value of those tests? Goodhart's Law says it's less than or equal to zero: the coverage number became the target, and chasing it broke login code that already worked.
Great case in point. If nobody noticed, can you really tie it to revenue? Did sales fail because prospects couldn't use the help? Did customers churn because they didn't want to pay for something broken? No? Then it doesn't matter that it's "broken". Unless it affects the bottom line somehow, even in the most tenuous way, from a business/financial perspective arguably nothing is actually "broken".
> Every time the login functionality broke, two engineers came to tell me or my teammate that login was broken before the CI build had even completed.
So, technically, somebody manually tested it, right? Ideally, the executives should have been pressuring you to do a better job and not force other teams to catch your bad work, but it's fine if it's a rare-ish occurrence.
That depends on your delivery pipeline. At the time this product was only shipping between teams on a new initiative, so all of the users were experts. But things like this cost you status once louder, more influential people start noticing.
The login one I’ve seen on two separate projects, only one of which I was directly involved in. What was needed were negative tests to make sure authorization or authentication doesn’t succeed when it shouldn’t. I won’t begrudge those tests at all, and may insist on them personally. But some parts of the code are dogfooded hourly. If you can test it quickly and cheaply, by all means do. But if the test system is slower and less reliable than those humans, and chasing a metric makes you break things that already “worked”, you need to tap the brakes and think about your strategy. Not this way, or maybe not at all.
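A minimal sketch of the kind of negative test I mean, with a toy in-file authenticate() standing in for a real auth layer (the names and setup here are assumptions, not from either project):

```python
import pytest

# Toy stand-in so the sketch runs on its own; real tests would target your auth layer.
_USERS = {"alice": "correct-horse-battery-staple"}

class AuthError(Exception):
    pass

def authenticate(username: str, password: str) -> str:
    if _USERS.get(username) != password:
        raise AuthError("bad credentials")
    return f"token-for-{username}"

# Negative tests: assert that authentication *fails* when it should.
def test_wrong_password_is_rejected():
    with pytest.raises(AuthError):
        authenticate("alice", "wrong-password")

def test_unknown_user_is_rejected():
    with pytest.raises(AuthError):
        authenticate("mallory", "anything")

def test_empty_credentials_are_rejected():
    with pytest.raises(AuthError):
        authenticate("", "")
```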