Sounds like a deliberate attempt to avoid spinning up Redis, Kafka, or an outbox system early on... and then they underestimated how quickly their scale would make it blow up. Story as old as time.
I find the opposite story more common: additional complexity in the form of early caching, for a scale that never comes. I've worked on one too many sprawling distributed systems with too few users to justify them.
If there's one thing I took away from school, it's that distributed systems are hard: more failure points and many more communication hops, with serialization into deserialization and back again at every hop.
Not sure I get it... how would you replicate this functionality with Kafka? You'd still need something LISTENing to the database for changes and pushing them to Kafka, no?
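Pretty much, yes. A minimal sketch of what that relay could look like, assuming psycopg2 and kafka-python, with a hypothetical `orders` channel (fed by a trigger calling pg_notify) and hypothetical DSN/topic names:

```python
import select

import psycopg2
from kafka import KafkaProducer  # kafka-python

# Hypothetical DSN; autocommit so notifications are delivered
# outside of any explicit transaction.
conn = psycopg2.connect("dbname=app")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN orders;")

producer = KafkaProducer(bootstrap_servers="localhost:9092")

while True:
    # Wait until the socket is readable, then drain queued notifications.
    if select.select([conn], [], [], 5.0) == ([], [], []):
        continue  # timed out, poll again
    conn.poll()
    while conn.notifies:
        note = conn.notifies.pop(0)
        producer.send("orders", note.payload.encode("utf-8"))
```

Note the failure mode, though: if the relay crashes between the NOTIFY firing and the Kafka send, the event is gone. That gap is exactly what the outbox pattern mentioned upthread is meant to close.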
Isn't this one of the things partitioning is meant to ameliorate, either through more partitions or through an appropriate partitioning strategy?
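To illustrate what I mean on the Kafka side, here's a rough sketch of a key-based partitioning strategy (kafka-python, with hypothetical topic and key names):

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# The default partitioner hashes the key, so records for the same key
# land on the same partition (preserving per-key order) while records
# for different keys spread across partitions and write in parallel.
for user_id, event in [("42", b"login"), ("7", b"logout"), ("42", b"click")]:
    producer.send("events", key=user_id.encode("utf-8"), value=event)
producer.flush()
```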
Because the documentation doesn't warn that this well-loved feature effectively ruins the ability to perform parallel writes, and because everything else in Postgres scales well.
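To make the parallel-writes point concrete, here's a rough benchmark sketch (psycopg2, a hypothetical DSN, and an `events` table you'd have to create first): once each transaction issues a NOTIFY, commits serialize on a shared lock over the notification queue, so adding writer threads stops helping.

```python
import threading
import time

import psycopg2

def worker(use_notify: bool, n: int = 200) -> None:
    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    cur = conn.cursor()
    for _ in range(n):
        # assumes: CREATE TABLE events (payload text);
        cur.execute("INSERT INTO events (payload) VALUES ('x')")
        if use_notify:
            cur.execute("SELECT pg_notify('events', 'x')")
        conn.commit()  # with a NOTIFY pending, commit takes the queue lock
    conn.close()

def bench(use_notify: bool, threads: int = 8) -> float:
    ts = [threading.Thread(target=worker, args=(use_notify,))
          for _ in range(threads)]
    start = time.perf_counter()
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return time.perf_counter() - start

print(f"plain commits: {bench(False):.2f}s")
print(f"with NOTIFY:   {bench(True):.2f}s")
```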
I think it’s a reasonable assumption. Based on the second half of your comment, you clearly don’t think highly of “AI companies,” but I think that’s a separate issue.
It’s unsurprising to me that an AI company appears to have chosen exactly the wrong tool for the job.