It sounds to me like you're arguing from a position where tests are expensive and time consuming to write.
If you have the right habits and testing tools in place, this doesn't need to be the case.
I write tests for even my tiniest projects: I estimate I spend less than 20% of my coding time working on the tests - and they give me more than a 20% boost in productivity VERY quickly.
But... that's because every one of my projects starts with a pytest test suite in place. Adding new tests to an existing test suite is massively less work than adding tests if there's no test suite and you have to spin one up from scratch.
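To make "starts with a pytest test suite" concrete, here's a rough sketch - the module and function names are made up, the point is just that the very first commit already has a tests/ directory with one passing test:

    # tests/test_example.py - the trivial first test a new project starts with
    from my_project import add_numbers

    def test_add_numbers():
        # With this in place, every new feature or bug fix is just another
        # test function added to a file that already exists
        assert add_numbers(2, 3) == 5

Run it with pytest from the project root; the expensive part (the suite layout, the fixtures, the CI wiring) is already paid for before the project does anything interesting.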
I’ve seen it go off the rails though. I’ve seen people exhibit traumatic responses to it.
Pain is information, but there are less helpful ways to interpret it.
There are some architectural decisions I resist energetically because they create a lot of consequences for testing without adding much value over the alternatives. If you introduce something that makes writing code 30% harder, you’d better be getting a lot of profit margin out of the deal. Otherwise you’re just torturing people for no good reason.
I haven't seen a single case where it doesn't go off the rails.
One problem with tests is that benefits are immediate, but costs are delayed. When you're writing tests it feels like you're creating nothing but value.
By the time you realize that tests made your architecture stiff as a board, they have already spread everywhere.
I've seen numerous cases where teams were not able to refactor because too many tests were failing. Quite ironic, given that tests are supposed to help refactoring.
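To sketch the kind of test that causes this (all names here are invented): one that pins down internal structure rather than observable behaviour, so a pure refactor turns it red even though nothing the caller can see has changed.

    # Hypothetical example - Order is an imaginary class with add_item() and total()
    from shop.models import Order

    def test_order_total_brittle():
        order = Order()
        order.add_item(price=10, quantity=2)
        # Reaches into a private attribute: any refactor of the internal
        # line-item representation breaks this, even if the total is still right
        assert order._line_items[0]["subtotal"] == 20

    def test_order_total_behavioural():
        order = Order()
        order.add_item(price=10, quantity=2)
        # Asserts only on observable behaviour, so it survives the refactor
        assert order.total() == 20

Multiply the first kind across a codebase and every refactor starts with fixing a wall of red tests.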
I don’t think it’s an accident that I’ve had more testing mentors than anything else. Testing is hard. There’s a huge number of mistakes you can make, and I’ve only made 70% of them, so I deflect when people call me an expert. I’m not an expert, but I can look at tests and enumerate all of the rookie and sophomore mistakes, and let me tell you, that can keep some people busy for their entire tenure.
I don’t say it in front of policy people much, but I immediately suspect anything as hard as this of being done incorrectly.
> It sounds to me like you're arguing from a position where tests are expensive and time consuming to write.
Of course they are. Tests are code. As a rule of thumb, sufficient coverage requires writing three times as many lines of code for the test suites as for the code they're testing. You need "sufficient coverage" to get the benefit of "I completely forgot these parts of the codebase, but I can make changes without fear of breaking something, because wherever I would need to make changes, there are tests covering it". A 3:1 test-to-application ratio means writing four lines for every line of application code, so roughly four times as expensive as a completely untested codebase; subtract the manual testing you'd otherwise have to do, and it's perhaps three times as expensive.
If you're only writing tests for a small portion of the codebase - a complicated section of the business logic, say - see (a) in the original comment, where I do see value in writing tests up front. But that doesn't provide full coverage.
I gave a talk at DjangoCon today about how I maintain 185 projects as a solo developer... thanks to a habit of comprehensive tests and documentation for every one of them. Video online soon, slides and notes are here: https://github.com/simonw/djangocon-2022-productivity
I am confident I would not be able to productively maintain anything near this much software without those tests!