Using a throwaway because I don't want anyone to guess the product/company.
I'm CTO at a company whose product makes heavy use of stored procedures for business logic. It's constantly the cause of our biggest headaches. I inherited this architecture; I didn't design it myself. I believe the original rationale boils down to 'SQL is better at the kind of set-based operations this product does a lot of'. E.g. if you've got an array of 100k values and you want to apply an operation against a second array of 100k operands, you can do that much faster in SQL with a join than by loading them all into memory, looping over them to do the operations, and saving them again. This is kind of true in the simple cases, but once it grows to require lots of logic and conditionals in those operations, and then chained operations, you start to lose the benefit.
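To make the original rationale concrete, here's a minimal sketch (using SQLite in-memory and made-up table names, purely for illustration) of the two approaches: one set-based UPDATE with a correlated lookup versus pulling everything out, looping in code, and writing it all back.

```python
import sqlite3
import time

# Hypothetical example: apply a table of 100k operands to a table of 100k
# values, set-wise versus row-by-row. Table/column names are invented.
N = 100_000
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vals (id INTEGER PRIMARY KEY, v INTEGER)")
con.execute("CREATE TABLE ops  (id INTEGER PRIMARY KEY, o INTEGER)")
con.executemany("INSERT INTO vals VALUES (?, ?)", ((i, i) for i in range(N)))
con.executemany("INSERT INTO ops  VALUES (?, ?)", ((i, 2) for i in range(N)))
con.commit()

# Set-based: one statement, the engine does the matching.
t0 = time.perf_counter()
con.execute("UPDATE vals SET v = v * (SELECT o FROM ops WHERE ops.id = vals.id)")
set_based = time.perf_counter() - t0

# Row-by-row: load everything into code, loop, save it all again.
t0 = time.perf_counter()
rows = con.execute(
    "SELECT v.id, v.v, o.o FROM vals v JOIN ops o ON o.id = v.id"
).fetchall()
con.executemany("UPDATE vals SET v = ? WHERE id = ?",
                ((v * o, i) for i, v, o in rows))
looped = time.perf_counter() - t0
print(f"set-based: {set_based:.3f}s, row-by-row: {looped:.3f}s")
```

The gap only grows once a real network round-trip sits between the app and the DB, which is the simple case the original design was optimising for.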
Upgrades and versioning are generally a bit awkward. We've got it fairly smooth now, but it still causes pain reasonably frequently when something in the upgrade process breaks. It was worse when I started, when many upgrades were a bodged-together folder of scripts full of 'if exists' type checks. Now at least we use a mostly automated diff-based upgrade process for the database, though some types of changes still require manual upgrade scripts. The article's solution of a folder of numbered scripts doesn't really look viable if you need to manage upgrades from several different versions to the current latest. Re-creating customer databases doesn't really go down well either, since customers tend to like keeping their data.
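For what it's worth, the numbered-scripts idea can be made to cope with customers on arbitrary old versions if the applied version is recorded in the database itself. A minimal sketch of that pattern (invented migration contents, SQLite for brevity):

```python
import sqlite3

# Hypothetical sketch: ordered migrations keyed by version, with the current
# version recorded in the DB, so a customer database at any older version can
# be walked forward to the latest without being re-created.
MIGRATIONS = {
    1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customers ADD COLUMN email TEXT",
    3: "CREATE INDEX idx_customers_email ON customers (email)",
}

def upgrade(con: sqlite3.Connection) -> int:
    con.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = con.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version in sorted(v for v in MIGRATIONS if v > current):
        con.execute(MIGRATIONS[version])  # the actual schema change
        con.execute("INSERT INTO schema_version VALUES (?)", (version,))
        con.commit()
        current = version
    return current

con = sqlite3.connect(":memory:")
print(upgrade(con))  # fresh DB: applies 1..3 in order, returns 3
```

It still falls over on the changes that need hand-written data-migration logic, which is exactly where our diff-based tool needs manual scripts too.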
Debugging is awkward. There are tools, but none of them compare to code debuggers.
SQL isn't easily composable, so we have repeated SQL all over the place (or nearly repeated with small changes, which is arguably worse because you don't spot the differences). Finding a bug in one of these repeated blocks means spending the next few hours hunting down any similar SQL to check for the same bug.
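One mitigation, assuming the queries are issued from application code rather than buried in procs, is to keep the shared fragment in exactly one place and compose it in. A hypothetical sketch (table and filter are invented; the fragment is a trusted constant, not user input):

```python
import sqlite3

# Hypothetical sketch: the oft-repeated predicate lives in ONE constant and is
# composed into each query, instead of being pasted with slight variations.
ACTIVE_ORDER_FILTER = "status = 'active' AND deleted_at IS NULL"

def orders_for_customer(con, customer_id):
    return con.execute(
        f"SELECT id FROM orders WHERE customer_id = ? AND {ACTIVE_ORDER_FILTER}",
        (customer_id,),
    ).fetchall()

def active_order_count(con):
    return con.execute(
        f"SELECT COUNT(*) FROM orders WHERE {ACTIVE_ORDER_FILTER}"
    ).fetchone()[0]

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT,"
    " status TEXT, deleted_at TEXT)"
)
con.executemany("INSERT INTO orders VALUES (?,?,?,?)", [
    (1, 7, "active", None),
    (2, 7, "cancelled", None),
    (3, 8, "active", "2024-01-01"),
])
print(active_order_count(con))      # counts only order 1
print(orders_for_customer(con, 7))  # [(1,)]
```

Fix the predicate once and every caller picks it up, which is precisely what copy-pasted stored procedure bodies don't give you.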
Performance is unpredictable and all over the place. Stored procs that run fine in one release will suddenly start performing like a dog in the next. We often never discover the actual cause, the thing that changed that made it slow down; we just end up finding a way to make it fast again by adding a new index, changing the way a join is composed, or splitting something up. I've not yet got evidence, but I'm convinced we've made performance improvements in one release only to reverse those exact changes several releases later, also as a 'performance improvement'. It feels like playing whack-a-mole. Because of query plan caching, parameter sniffing and other optimizations in the DB, we have had scenarios where the performance of feature X depends on whether you used feature Y beforehand in the same session. We have exact duplicates of some stored procedures whose only purpose is to ensure that two different code paths don't share the same cached plan, because when they do we get problems. Performance characteristics are often very different between dev setups and customer setups, and different again for customers who host on cloud servers compared to customers on physical hardware. I don't just mean cloud is slower, I mean it's different: some things are faster on cloud servers, but it's never predictable which. It makes testing for performance very hard.
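The "one release fast, next release slow" effect is mostly the optimizer silently picking a different plan. A tiny sketch of the mechanism, using SQLite's EXPLAIN QUERY PLAN purely as an illustration (our real engine and its parameter sniffing are another layer on top of this):

```python
import sqlite3

# Illustration: the same query text gets a different access path the moment
# an index appears, with no change to the calling code at all.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the access path.
    return [row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM events WHERE kind = 'click'"
before = plan(query)  # full table scan
con.execute("CREATE INDEX idx_events_kind ON events (kind)")
after = plan(query)   # now an index search
print(before, after)
```

Scale that up to statistics drifting with data volume, cached plans compiled for unrepresentative parameters, and different hardware, and you get exactly the release-to-release and dev-versus-customer unpredictability described above.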
The article's statement that 'a database spends 96% of its time logging and locking' is totally irrelevant. So what if that's what it spends 96% of its time on? It's still spending that time. And as soon as your database has multiple users, all those locks are going to start getting in each other's way and causing delays or deadlocks.
It doesn't scale at all. Our DB servers are powerful and we can't realistically go much bigger (CPU- and RAM-wise), yet better performance is probably one of our customers' biggest requests.
Deadlocks are not uncommon, hard to defend against, hard to fix, and half the time the fix introduces other deadlocks elsewhere.
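The textbook defence is a single global lock-acquisition order, which is exactly what's hard to enforce across hundreds of stored procedures. A sketch of the idea using Python threading locks as stand-ins for row/table locks (entirely illustrative, names invented):

```python
import threading

# Hypothetical sketch: the classic deadlock is two transactions taking the
# same pair of locks in opposite orders. Acquiring in one global order
# (here: sorted by name) makes that interleaving impossible.
locks = {"accounts": threading.Lock(), "orders": threading.Lock()}

def transactional_work(needed):
    for name in sorted(needed):        # every caller uses the same order
        locks[name].acquire()
    try:
        pass  # ... the work that touches these tables ...
    finally:
        for name in sorted(needed, reverse=True):
            locks[name].release()

# The two callers *request* the locks in opposite orders, but the sort
# normalises the acquisition order, so they can never deadlock each other.
t1 = threading.Thread(target=transactional_work, args=(["orders", "accounts"],))
t2 = threading.Thread(target=transactional_work, args=(["accounts", "orders"],))
t1.start(); t2.start()
t1.join(); t2.join()
print("both finished, no deadlock")
```

Easy in a toy; much harder when the "callers" are SQL statements whose lock order is decided by the query plan rather than by you.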
Maybe it's good in some scenarios, but once you have a growing, evolving product being built by a team, it's far, far harder to manage if a large chunk of the logic lives in SQL.