I see a lot of value in spinning up microservices where the database is global across all services (and not inside the service), but I struggle more to see the value of separate core transactional databases for separate services unless/until the point where two separate parts of the organization are almost two separate companies that cannot operate as a single org/single company. You lose data integrity, joining ability, one coherent state of the world, etc.
The main time I can see this making sense is when the data access patterns are so different in scale and frequency that they're optimizing for different things that cause resource contention, but even then, my question would become do you really need a separate instance of the same kind of DB inside the service, or do you need another global replica/a new instance of a new but different kind of DB (for example Clickhouse if you've been running Postgres and now need efficient OLAP on large columnar data).
Once you get to this scale, I can see the idea of cell-based architecture [1] making sense -- but even at this point, you're really looking at a multi-dimensionally sharded global persistence store where each cell is functionally isolated for a single slice of routing space. This makes me question the value of microservices with state bound to the service writ large and I can't really think of a good use case for it.
> I see a lot of value in spinning up microservices where the database is global across all services (and not inside the service)
The issue with this is schema evolution. As a very simple example, let's say you have a User table and many microservices accessing it. Now you want to add an "IsDeleted" column to implement soft deletion; how do you do that? First you add the actual column to the database, then you update every single service that queries the table to ensure it filters out IsDeleted=True, deploy all those services, and only then can you actually start using the column. If you must update services in lockstep like this, you've built a distributed monolith, which is all of the complexity of microservices with none of the benefits.
A proper service-oriented way to deal with this is to have a single service control the User table and expose a `GetUsers` API. This way, only one database and its associated service needs to be updated to support IsDeleted. Because of API stability guarantees--another important property of good SoA--other services will continue to get only non-deleted users from this API, without any updates on their end.
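To make the contrast concrete, here's a minimal sketch of that pattern (hypothetical table and function names, with SQLite standing in for the service's database): the user service is the only code that touches the table, so the soft-delete filter lives in exactly one place while the `get_users` contract stays stable for every caller.

```python
import sqlite3

# The user service's private database -- no other service queries it directly.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, is_deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")
conn.execute("UPDATE users SET is_deleted = 1 WHERE name = 'bob'")  # soft delete

def get_users():
    """Stable API: callers never see soft-deleted rows and never need to
    know the is_deleted column exists."""
    rows = conn.execute(
        "SELECT id, name FROM users WHERE is_deleted = 0"
    ).fetchall()
    return [{"id": r[0], "name": r[1]} for r in rows]

print(get_users())  # only 'alice' comes back; 'bob' is filtered in one place
```

When the schema evolves again, only this service's query changes; consumers of `get_users` deploy on their own schedule.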
> You lose data integrity, joining ability, one coherent state of the world, etc.
You do lose this! And it's one of the tradeoffs, and why understanding your domain is so important for doing SoA well. For subsets of the domain where data integrity is important, it should all be in one database, and controlled by one service. For most domains, though, a lot of features don't have strict integrity requirements. As a concrete though slightly simplified example, I work with IoT time-series data, and one feature of our platform is using some ML algorithms to predict future values based on historical trends. The prediction calculation and storage of its results is done in a separate service, with the results being linked back via a "foreign key" to the device ID in the primary database. Now, if that device is deleted from the primary database, what happens? You have a bunch of orphaned rows in the prediction service's database. But how big of a deal is this actually? We never "walk back" from any individual prediction record to the device via the ID in the row; queries are always some variant of "give me the predictions for device ID 123". So the only real consequence is a bit of database bloat, which can be resolved via regularly scheduled orphan checking processes if it's a concern.
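As a hedged sketch of the scheduled orphan check mentioned above (all names are hypothetical; in reality the two stores would be separate databases, represented here by in-memory stand-ins): a periodic job fetches the set of live device IDs from the primary store and drops prediction rows that no longer resolve.

```python
# Stand-in for the primary database's set of live device IDs.
primary_devices = {"dev-1", "dev-2"}

# Stand-in for the prediction service's rows, linked by a soft "foreign key".
predictions = [
    {"device_id": "dev-1", "value": 0.7},
    {"device_id": "dev-3", "value": 0.4},  # orphan: dev-3 was deleted upstream
]

def sweep_orphans(rows, live_ids):
    """Keep only prediction rows whose device still exists in the primary DB."""
    return [row for row in rows if row["device_id"] in live_ids]

predictions = sweep_orphans(predictions, primary_devices)
print(predictions)  # only the dev-1 row survives the sweep
```

Since queries never walk from a prediction back to its device, the sweep can run lazily on a schedule; eventual cleanup is good enough.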
It's definitely a mindshift if you're used to a "everything in one RDBMS linked by foreign keys" strategy, but I've seen this successfully deployed at many companies (AWS, among others).
I get your point around the soft deletion example but that sounds more like poor module separation + relational query abstraction/re-use rather than a shared database issue. Whether it's through a service, a module or a library, leaky abstraction boundaries will always cause issues. I see your point about separately versioning separate services which I think can make certain kinds of migrations more tractable -- but it comes at the expense of making it heavier and prolonging the duration of supporting two systems.
The difference I generally see with shared-state microservices is that now you introduce a network call (although you have a singular master for your OLTP state), and with isolated state microservices, now you are running into multiple store synchronization issues and conflict resolution. Those tradeoffs are very painful to make and borderline questionable to me without a really good reason to sacrifice them (reasons I rarely see but can't in good faith say never happen).
Pertaining to your IoT example -- that's definitely a spot where I see a reason to move out of the cozy RDBMS: an access pattern dominated by reads and writes of temporal data in a homogeneous row layout, with seemingly little to no updates -- a great use case for a columnar store such as Clickhouse. I've resisted moving onto it at $MYCORP because of the aforementioned paranoia about losing RDBMS niceties (and our data isn't really large enough for vertical scaling to not just work), but I could see that being different if our data got a lot larger a lot more quickly.
Putting it together, there are only a handful of reasons I've seen where microservices are genuinely the right tool for a specific job (and which create value even with shared state/distributed monolith):
1) [Shared state] Polyglot implementation -- this is the most obvious one that's given leverage for me at other orgs; being able to have what's functionally a distributed monolith allows you to use multiple ecosystems at little to no ongoing cost of maintenance. This need doesn't happen for me that often given I often work in the Python ecosystem (so being able to drop down into cython, numba, etc is always an option for speed and the ecosystem is massive on its own), but at previous orgs, spinning up a service to make use of the Java ecosystem was a huge win for the org over being stuck in the original ecosystem. Aside from that, being able to deploy frontend and backend separately is probably the simplest and most useful variant of this that I've used just about everywhere (given I've mostly worked at shops that ship SPAs).
2) [Shared state] SDLC velocity -- as a monolith grows, it just gets plain heavy to check out a large repository, set up the environment, run tests, and have that occur over and over again in CI. Knowing that only the build recipe and test suite for a subset of the codebase needs to run can create order-of-magnitude speedups in end-to-end CI time, which in my experience tends to be the speed limit for how quickly teams can ship code.
3) [Multi-store] Specialized access patterns at scale -- there really are certain workloads that don't play that well with RDBMS in a performant and simple way unless you take on significant ongoing maintenance burden -- two I can think of off the top of my head are large OLAP workloads and search/vector database workloads; no real way of getting around needing to use something like ElasticSearch when Postgres FTS won't cut it, and maybe no way around using something like Clickhouse for big temporal queries when it would be 10x more expensive and brittle to use postgres for it; even so, these still feel more like "multiple singleton stores" rather than "one store per service"
4) [Multi-store] Independent services aligned with separate lines of revenue -- this is probably the best case I can think of for microservices from a first-principles level. Does the service stand on its own as a separate product and line of revenue from the rest of the codebase, and is it actually sold and operated by the business independently? If so, it really is and should be its own "company" inside a company, and it makes sense for it to have the autonomy and independence to consume its upstream dependencies (and expose dependencies to its downstream) however it sees fit. When I was at AWS, this was a glaringly obvious justification, and one that made a lot of intuitive sense to me given that so much of the good stuff that Amazon builds to use internally is also built to be sold externally.
5) [Multi-store] Mechanism to enforce hygiene and accountability around organizational divisions of labor -- to me, this feels like the most questionable and yet most common variant that I often see. Microservices are still sexy and have the allure of creating high-visibility, career-advancing project track records for ambitious engineers, even if to the detriment of the good of the company they work for. Microservices can be used as a bureaucratic mechanism to enforce accountability and ownership of one part of the codebase by a specific team, to prevent the illegibility of a tragedy of the commons -- but ultimately, I've often found that the forces and challenges that led to the original tragedy of the commons are not actually solved any better in the move to microservices, and if anything the cost of solving them actually increases.
I observe this compulsion a lot and in my opinion, it's almost always coming from resentment driving ego in an attempt to compensate for insecurity and self-loathing, which ultimately ends up misdirected towards others. It's almost always accidentally entertaining, but sadly ends up diminishing rather than elevating discourse.
To refute GP's point more broadly -- there is a lot in /applied/ computer science (which is what I think the harder aspects of software engineering really are) that was and is done by individuals in software just building in a vacuum, with open source holding tons of examples.
And to answer GP's somewhat rhetorical question more directly -- none of those professions are paid to do open-ended knowledge work, so the analogy is extremely strained. You don't necessarily see them post on blogs (as opposed to LinkedIn/X, for example), but investors, management consultants, lawyers, traders, and corporate executives all write a ton of this kind of blog-post-flavored long-form content all the time. And I think it comes from the same place -- they're paid to do open-ended knowledge work of some kind, and that leads people to write to reflect on what seems to work and what doesn't.
Some of it is interesting, some of it is pretty banal (for what it's worth, I don't really disagree that this blog post is uninteresting), but I find it odd to throw out the entire category even if a lot of it is noise.
I'm not sure this particular departure matters that much to Sequoia. The COO, Balbale, was an operating partner who was CMO of Sequoia for 2 years and then COO for another 2, and to my understanding did not write checks. The real elephant-in-the-room question is whether Balbale was in a role earning carry or not. If not, then at an incentive level that's effectively no different from an associate with a higher salary (but no real skin in the game). If so, then carry was actually given up, and the departure probably means more.
In contrast, Maguire (like other partners at Sequoia who are actually writing checks) has skin in the game through the checks he writes and whether they pan out or they don't. In light of that, setting aside his views (which you may agree or disagree with politically), I view his controversy as being most likely a calculated marketing maneuver to improve or maximize the signal to noise in his deal flow. It's hard for me as an outsider to say whether that's working for him or not, but his track record suggests that he's not having problems with his deal flow as a result.
That said -- the material comment at the end of the article does make a lot of sense. While this departure may not affect Sequoia that much, Maguire's position may sour many of the Middle East sovereign wealth funds that form some of the largest parts of Sequoia's LP base. If their discontent with Maguire's rhetoric ends up being more important to them than Sequoia's returns, that may well pose a far more material issue to Sequoia and they will be forced to act.
Funding a seed-stage VC with a record of picking winners like Sequoia is probably very easy even without the few sovereign wealth funds from the Middle East. Sequoia's checks are mainly in the early rounds, where the sizes are relatively small compared to Series C+. American university endowments would oversubscribe their rounds multiple times over.
That's a very good point, and there is probably enough appetite in the endowments (nevermind the other large LP archetypes such as pension funds, hospital systems, family offices, etc) to make up for any pullout.
To go one step further, I recall that Sequoia recently moved to an evergreen RIA structure (as have several other large funds such as a16z, Lightspeed, Thrive, etc), so that's even less of a concern, and it makes it possible for these firms to capture exits in the evergreen structure and recycle them into earlier-stage funds.
Maybe I went overboard trying to soften how non-material this event really seems.
> I view his controversy as being most likely a calculated marketing maneuver to improve or maximize the signal to noise in his deal flow. It's hard for me as an outsider to say whether that's working for him or not
why is it that tech bros will bend over backwards to find "good-faith" interpretations of the most obviously stupid shit. like bro have you literally never heard the phrase "confirmation bias"? you know it's possible he could just be a lucky idiot right?
> In an interview with the Caltech Heritage Project, Maguire reported that he earned a 1.8 GPA in high school and failed his Algebra 2 course, and that his admission to Stanford University depended on letters of recommendation.
1. The fact that he cheated his way into Stanford is completely irrelevant when considering whether he cheated his way into Caltech
or
2. No dummies graduate PhD programs, even T10 PhD programs, just like no dummies are admitted to T10 BS programs
or both.
I hope you understand that one or both of these perspectives is either the height of naivety or more of that backbending work I was talking about before.
He has a PhD in /physics/ from Caltech, and that was before he became an extremely successful investor.
If I had the platform Maguire has, I'd likely take it in a different direction. But I'm not him, so who cares? He has achieved significant academic and professional outcomes not in spite of, but likely because of, the way he is.
Just because you don't like someone doesn't mean they're an unaccomplished idiot no matter how gratifying and simplifying that would be. And I think it's really unfortunate that people let their resentment of others outside their tribe (and often their inability to perceive their tribal filters) get in the way of accurately perceiving reality as it is and not the way they wish it worked. It stunts their intellectual development and maturation into an adult, and I think it's just such a waste.
You're getting answers in child responses that while accurate are not necessarily answering the spirit of your question. In my personal opinion, you'll find what you're looking for by searching for a high growth Series A - B startup (I would recommend Seed but that's almost a different animal in terms of risk) with a technical product and strong technical founders + eng leadership.
When you're at a company at that stage that's doing well and has a lot of commercial runway ahead of it, the reality can often end up being that the golden age will last long enough for a very pleasant 4-6 year tenure if you decide to stay at the company through its growth phase (which often takes it from a $50M-$100M valuation to $1B+). Some of these companies will also make the leap from $1B+ to $10B+ or beyond (which makes the golden era at least as long as 6-10 years), and although nothing lasts forever, it can last long enough for you to find what you're looking for, at least for a decently long period of time. This pertains to what other commenters have mentioned with regard to "making it golden" -- the golden era is what it is because everyone needs to make it golden, and the company is too small for anyone who would dilute that for their own gain to do so without anyone noticing.
The challenge to this approach is that it requires being able to assess a company's commercial prospects as well as the quality of the company's founders, leadership and early team well enough to assess whether the company merely looks like a golden era company or whether it is actually the real deal -- something which even professional investors who target these kinds of companies struggle with. It is possible, but in my experience, it definitely took a couple of rounds of trial and error and getting burned a few times before my radar worked.
Qwen3 Next and Qwen3-30b-a3b are pretty decent proxies for GPT-OSS 120B and 20B respectively (and in fact are both MoEs with 3B active rather than 8B active parameters), and they lap GPT-OSS pretty hard in this specific benchmark, getting to #17 and #33 respectively. That being said, it's hard to take benchmarks with more than a grain of salt, because the real-world tasks I try to use these models for always have a lot more variation than the benchmarks illustrate. I do view GPT-OSS as a pretty good alternative to the Qwen models in some cases, but there are tradeoffs -- while I sometimes see better reasoning from GPT-OSS, the prompt adherence and overall flexibility of the Qwen models makes them a lot better IMO as general-purpose local open-weight models.
Those models got released later than GPT-OSS. It’s like saying the Android phone released 6 months after the iPhone is faster. Maybe, but it also had 6 extra months of development.
I really want to like Mojo, but you nailed what gives me pause. Not to take the anecdotal example of Polars too far, but I get the sense the current gravity in Python for net-new stuff that needs to be written outside Python (a ton of highly performant numpy/scipy/pytorch ecosystem stuff aside, obviously) is for it to be written in Rust when necessary.
I'm not an expert, and though I wouldn't be surprised if Mojo ends up being a better language than Rust for the use case we're discussing, I'm not confident it will ever catch up to Rust in ecosystem and escape velocity as a sane general-purpose compiled systems language. It really does feel like Rust has replaced C++ for net-new buildouts that would've previously needed its power.
I know this is such a controversial live wire of a topic and borderline taboo, but the evidence is pretty substantial. That being said, the intra-group variation is also extremely substantial (i.e., the gap between genius and median within any particular group is simultaneously a) far larger than the gap between the medians of any two groups and b) far smaller than the gap between a genius in one group and a genius in another). All that being said, I think this contributes to rather than detracts from GP's comment. These "studies" (as with much of modern psychological "research") are so poorly designed as to be meaningless, hence the replication crisis. I think they're actually worse than meaningless, because they're misleading and create infohazards.
I disagree. It strongly detracts from the GP’s claim.
If we see huge variation in intelligence scores intra group, that strongly suggests that there are social/cultural/environmental factors in play driving a large part of this.
It may be true that some racial backgrounds offer an advantage; but there is no evidence to suggest that this advantage is materially large relative to many of the social structural drivers that are obvious.
The subtext of the claim is not that a statistically significant effect exists. It’s that there is a big important difference in intelligence across races intrinsically derived from genetics. And there’s no compelling evidence to support that.
>If we see huge variation in intelligence scores intra group, that strongly suggests that there are social/cultural/environmental factors in play driving a large part of this.
Correlation does not equal causation. Variation in genetics within a group can realistically be a factor as well. Three possibilities here: only environment, only genetics, or both genetics and environment. Common sense says it's both genetics and environment.
>It may be true that some racial backgrounds offer an advantage; but there is no evidence to suggest that this advantage is materially large relative to many of the social structural drivers that are obvious.
I never commented on how large this advantage was relative to the social driver. I agree with you... the social structure is likely the greater driver. But the genetic driver is not insignificant.
>The subtext of the claim is not that a statistically significant effect exists. It’s that there is a big important difference in intelligence across races intrinsically derived from genetics. And there’s no compelling evidence to support that.
There is evidence. But there is huge political debate and there are attacks around that evidence. There are many studies that examine IQ across races independent of environment, and many of them show a statistically significant difference. Those studies suffer from the replication crisis, but so do all conflicting studies within psychology.
Cite them. Let's see which ones you're talking about. We know there are studies that say what you say! But it's hard to engage when the studies themselves are abstractions.
Please don't dump ChatGPT stuff onto threads. It's specifically against the rules here. If your uncertainty was whether we could set up dueling ChatGPT sessions: we very definitely can.
I disagree. This is not a hard rule. I'm willing to bet if the moderators saw this they would be ok with this instance of it.
This would be the fault of the moderators for not directly putting it in the rules if that was the case.
Though I doubt they'd be ok with this entire thread as it's heated and the topic is flamewar-like even though I tried to direct it in a different direction initially. It already went off the rails with the other guy once he told me I was dumb and trying to establish race superiority among asians. Too late. I think I'll get warned or banned.
Anyway, I'm not disguising the content as human-generated. Additionally, all I'm doing is asking the LLM to cite and summarize the citations you asked for. Manually finding those citations is tedious.
My opinions and thoughts are still human written and not AI generated.
What I find incredible is that you have some problem with content that's AI-generated even if it's actually true and even if it's just flat summaries of citations. I told the LLM to find specific resources and summarize studies I have already seen so you don't have to go through the entire paper, and you have a huge problem with that? Fine. You can just refuse to engage. Commenting on it is also "against the rules" per your link: https://news.ycombinator.com/item?id=44808351
I mean, if you want, you can flag my post and vote it down. I think that's extreme and an asshole move, so I don't do that to people I have different opinions with. It's up to you.
"Please don't post AI-generated comments, or any generated comments" seems pretty clear to me. My point stands: I'm not interested in watching dueling ChatGPT contexts and I don't think anyone else is either. I can just write what I know about this issue into a GPT5 session and say "change my mind" and get all those 1970s cites myself.
I literally said to you that chatGPT is JUST citations. My comments are my own, you can respond to my comments can you not? What if I used AI to spell check and repair grammar? End of the world and you can't respond? Are people so against AI that they lose all common sense?
>"Please don't post AI-generated comments, or any generated comments" seems pretty clear to me.
Did he say please don't post AI-generated citations? Is he referring to the entire post or all comments? Why not make it an official rule on the rules page? What does Dang say about this? Seems unclear. I think you're just being deliberately obtuse.
Bro. You don't need to respond if you don't want to. My citations are still there. You asked for it, you got it. You don't like it? I'm not going to manually do what chatgpt ALREADY did so conversation is over if you don't want to continue.
On second thought, I'm noticing that we're the only people reading this, and that this thread is mostly pretty uncivil and gnarly, and I feel bad for contributing to it staying alive, so maybe we pick this up some other time.
The thread is already buried. Like I said, it's your call whether you want to engage. Either way, no loss. If I had actually manually written down those citations to serve your request, it would've been a massive waste of my time, and it would be rude for you not to respond.
Good thing I used AI to assist me. I knew you'd leave. Mainly because there's really no solid evidence so a lot of this will go in circles. Good day sir.
> Correlation does not equal causation. Variation in genetics in a group can realistically be a factor as well. Three probable possibilities here: Only environment, Only genetics, both genetics and environment. Common sense says it's both genetics and environment.
Common sense says nothing about the weight of these factors nor does it say anything about “genetics” being archetypally delineated by race. Genetics for sure plays a role in intelligence.
You are appealing to non-cognizance as a premise to support your biases. But that's... dumb.
You are welcome to point to specific studies if you wish but the general consensus is that there is no statistical evidence of what you’re claiming to be obvious.
Most studies that attempt to normalize against sociocultural features recognize that it's basically impossible to do. That's why the best available premise is that, since we broadly observe huge gains in population intelligence based on economic development within racial groups, it is most likely that economic and cultural differences occupy the lion's share of any observable difference between racial groups currently, as they're all in different places.
Don’t call me dumb just because I disagree with your point. Keep the conversation civil and stop acting like an immature child or find another place to voice your opinion without insulting other people.
Common sense says many things about genetics. In fact it’s the basis behind my entire premise which you didn’t even address. Genetics plays a role in the physicality and even temperament of a race (testosterone is measurably different across races). What black magic makes intelligence the only factor that is independent of race? Common sense says it’s a factor.
Common sense also says environment is the greater factor. If a person lacks practice or education vs. a person who practices math puzzles everyday. Obviously that is the bigger causal factor by common sense.
Both are factors by common sense. Environment is the bigger factor also by common sense but by that same reasoning genetics is not insignificant. The best way to put it is that environment influences IQ but genetics influences potential.
Why appeal to common sense? Because there’s lack of solid causal evidence. Evidence exists, but the replication crisis and the lack of causal experimentation makes all the tests not as solid as the correlative tests.
The stupidest thing here is that we are not in disagreement on what the evidence points to. It's just that I'm able to rely on induction and logic to predict conclusions where scientific evidence is lacking, while your entire model of the world is essentially "if the science doesn't exist then it must not be true".
If the science doesn’t exist, it means it’s unknown. I hope this was educational for you.
I’ll point to some resources when I have time. I’m currently not able to cite them atm.
Does genetics influence intelligence? Yes. Does genetics influence race? Yes.
Does that mean that race is a _material_ driver of differences in intelligence? No. That just doesn’t follow at all. Every difference between groups is statistically significant at some obscene sample size but the claim in question here is about whether it is _material_ and important. That is not at all clear. Nor is intelligence the only thing that this applies to. There’s a basically infinite list of human traits, competencies, and capabilities for which race-affiliated genetic advantages alone is pointlessly small in terms of effect.
The claim was originally made by me. Qualifiers like “important”, “material” were added by you so you’re the one who’s moving the goal posts with vague words like “important”.
The word I used is “significant”, which I will specify here as a different mean value.
It applies because among top countries of different races with extremely high wealth, gdp and education standards there are clear differences in IQ. You can still attribute this to environment but it starts to lean towards genetics once you match wealthy countries.
None of this is solid but neither is your conclusion that genetics doesn’t influence racial intelligence in any significant way. If your conclusion is “we don’t know” then my counter is common sense and evidence suggests otherwise.
> The claim was originally made by me. Qualifiers like “important”, “material” were added by you so you’re the one who’s moving the goal posts with vague words like “important”.
> The word I used is “significant”which I will specify here as a different mean value.
There are statistically significant differences between any two populations where randomness is included provided your sample size is big enough. Your thinking here is novice and misinformed. If an effect size is immaterial and unimportant then it definitionally does not matter. You win no points for saying HA! Technically there is an immaterial advantage for Asians! If it’s immaterial, it doesn’t matter.
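The sample-size point can be illustrated with made-up numbers (IQ-like scores with a standard deviation of 15 and a hypothetical true mean gap of just 0.1, chosen purely for illustration): at a million samples per group, a two-sample z-test calls the difference "significant" even though the effect size is negligible.

```python
import math

# Hypothetical populations: same sd, a trivially small true gap in means.
mean_gap, sd, n = 0.1, 15.0, 1_000_000  # gap of 0.1 IQ points, n per group

# Two-sample z statistic for a difference in means (equal n, equal sd).
z = mean_gap / (sd * math.sqrt(2 / n))

# Cohen's d: the effect size, independent of sample size.
cohens_d = mean_gap / sd

print(round(z, 1))         # ~4.7 -> p << 0.001, "statistically significant"
print(round(cohens_d, 4))  # ~0.0067 -> a negligible effect size
```

The z statistic grows with the square root of n while the effect size stays fixed, which is exactly why "significant" alone says nothing about whether a difference is material.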
> It applies because among top countries of different races with extremely high wealth, gdp and education standards there are clear differences in IQ. You can still attribute this to environment but it starts to lean towards genetics once you match wealthy countries.
Wealth is one of many things that matters. It’s not the only thing. As I have said before, culture is a huge one.
> None of this is solid but neither is your conclusion that genetics doesn’t influence racial intelligence in any significant way. If your conclusion is “we don’t know” then my counter is common sense and evidence suggests otherwise.
You need to learn how to interpret statistical effect sizes. The basic 101 conclusion of failure to reject null hypotheses is that you cannot conclude that population A is different from population B. But “different” doesn’t mean much. The important takeaway is much rather that there’s no evidence of a strong effect size showing that one race is materially intrinsically smarter than another. If there were a big gap, it would be visible in available statistics. It’s not, so we can largely conclude that there’s no material difference.
You’re talking a big talk about people being biased by trying to be equitable but ultimately you’re just saying “well I can’t provide it but my common sense biases say my race must be superior, even if it’s by a meaninglessly small margin”. Yeah, ok buddy. Take a lap.
This is not a great "study", if you can call it that. Let me be specific by pointing to a passage that's doing a lot of the heavy lifting:
> After controlling HDL and LDL cholesterol, uncontrolled high blood pressure, atherosclerotic cardiovascular disease, cocaine use, alcohol use and several other lifestyle risk factors, the researchers found that new cases of diabetes were significantly higher in the cannabis group (1,937; 2.2%) compared to the healthy group (518; 0.6%), with statistical analysis showing cannabis users at nearly four times the risk of developing diabetes compared to non-users.
Note "nearly four times the risk of developing diabetes" -- this feels like a dangerous exaggeration of "four times the correlation of having developed diabetes." No controls for diet, exercise, etc. In comparison to a gold standard clinical trial this is about as far as you can go on the other end.
That's not to say that I think that a prospective link doesn't merit deeper research -- far from it. In fact, Novo Nordisk has an anti-obesity drug in phase 2a trials, monlunabant [1], that serves as a CB1 (cannabinoid receptor 1) inverse agonist which has a mechanism of action inverse to THC. The clinical trials are showing that it creates modest weight loss, so it seems that there's likely something to how that receptor is activated that could cause weight gain. What's not clear to me is whether all the other receptors that THC activates create a compound effect at a population health level that leads to net weight gain and the development of diabetes, the inverse, or non-correlated outcomes, and whether those occur across the board or differentially based on genetic makeup.
This isn't a great answer to the overall issue (which I agree is a ridiculous dark pattern), but I've used Privacy.com cards for personal projects to set hard spend limits at the card level, so charges just decline past some threshold on a daily/weekly/monthly/lifetime basis. At work, I do the same thing with corporate cards to ensure the same controls are in place.
Now, as to why they're applying the dark pattern - cynically, I wonder if that's the dark side of usage/volume based pricing. Once revenue gets big enough, any hit to usage (even if it's usage that would be terminated if the user could figure out how) ends up being a metric that is optimized against at a corporate level.
I don't get how interest rates are given at best a cursory phrase when the end of the ZIRP regime is one of the biggest macro events of the past several decades. Seems like it would deserve more of a spotlight.
[1] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...