For a project I currently have, I am partitioning not only the database but also the software for each "customer" (let's call them that for now). Basically, I have a single-threaded Python application that uses SQLite, I put it in a container with Litestream, and I run one of those containers per customer on one or more Kubernetes clusters or on anything else that runs a container (or you can run the software directly). Then you can take the data from one to N of those customers and aggregate and process it for monitoring, central configuration control, and backup, using another bit of software that does a distributed query over the S3 data that Litestream is saving. I can also control several of the systems in a coordinated way, if needed, by linking them together in a configuration file and letting them elect a leader. That leader becomes the central point of contact for a source of "work" (let's call it that for now); it notifies the other instances of the work to be done and helps dole that work out in a reasonable way based on the resources and capabilities of the group.
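To make the aggregation side concrete, here is a minimal sketch of that "other bit of software": restore each customer's database from the S3 data Litestream has been writing, then run one query across all of them in plain Python. The bucket layout, customer ids, and the `orders` table are made up for illustration, not my actual schema.

```python
# Sketch: pull each per-customer SQLite database back out of S3 via
# Litestream, then aggregate across them without any central database.
import os
import sqlite3
import subprocess
import tempfile

CUSTOMERS = ["acme", "globex", "initech"]                 # hypothetical ids
REPLICA_URL = "s3://my-litestream-bucket/{cust}/app.db"   # hypothetical layout

def restore_all(workdir):
    # `litestream restore -o <path> <replica-url>` rebuilds the live database
    # from the snapshots and WAL segments stored in S3.
    paths = {}
    for cust in CUSTOMERS:
        out = os.path.join(workdir, f"{cust}.db")
        subprocess.run(
            ["litestream", "restore", "-o", out, REPLICA_URL.format(cust=cust)],
            check=True,
        )
        paths[cust] = out
    return paths

def aggregate(paths):
    # Open each per-customer file and combine results in Python.
    totals = {}
    for cust, path in paths.items():
        con = sqlite3.connect(path)
        try:
            (count,) = con.execute("SELECT COUNT(*) FROM orders").fetchone()
        finally:
            con.close()
        totals[cust] = count
    return totals

with tempfile.TemporaryDirectory() as workdir:
    print(aggregate(restore_all(workdir)))
```

The same loop works whether you aggregate one customer or all of them, which is the point: the central software only ever reads replicas, it never sits in the write path.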
A few similar projects I worked on that were centralized were a huge mess in both the database and the codebase, because they tried to do the same thing for thousands of customers with several dozen horizontally scaling services that had to be coordinated. In my system, I write one single-threaded piece of software that does the whole thing, but only for one customer at a time, so the whole system is horizontally scalable instead of just each service within it. I can still aggregate data from the individual customer units, and as described above I can still add coordination features for groups of them, so there is no loss of features, but the whole system is simple to reason about, with none of the usual synchronization pitfalls of a typical distributed system.
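For the coordination feature, a rough sketch of what I mean by linking instances through a config file is below. The config format, the lowest-id-wins election rule, and the "slots" capacity field are all assumptions for illustration; a real version would need heartbeats, re-election, and smarter weighting.

```python
# Sketch: a group of customer instances picks a leader from a shared config,
# and the leader doles out work in proportion to each member's capacity.
import json

# Hypothetical group config; the post only says instances are "linked
# together in a configuration file", so the exact shape is an assumption.
GROUP_CONFIG = """
{"members": [
  {"id": "cust-a", "addr": "10.0.0.1:8000", "slots": 4},
  {"id": "cust-b", "addr": "10.0.0.2:8000", "slots": 2},
  {"id": "cust-c", "addr": "10.0.0.3:8000", "slots": 2}
]}
"""

def elect_leader(members, alive):
    # Deterministic stand-in for election: the reachable member with the
    # lowest id leads.
    candidates = [m for m in members if m["id"] in alive]
    return min(candidates, key=lambda m: m["id"]) if candidates else None

def assign_work(work_items, members):
    # Leader-side step: spread work across members weighted by "slots",
    # which stands in for resources and capabilities.
    assignments = {m["id"]: [] for m in members}
    expanded = [m["id"] for m in members for _ in range(m["slots"])]
    for i, item in enumerate(work_items):
        assignments[expanded[i % len(expanded)]].append(item)
    return assignments

members = json.loads(GROUP_CONFIG)["members"]
leader = elect_leader(members, alive={m["id"] for m in members})
print("leader:", leader["id"])
print(assign_work(list(range(10)), members))
```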