
Big instances are cheap relative to the cost of implementing horizontal scaling at the app layer, plus the engineering cost of Kubernetes (or whatever your preferred orchestrator is), plus the heavy cloud tax for going "cloud native."


Even in the old days we were horizontal at the app layer; the persistence layer is what is hard.

Unless you have some very specific need, the app layer is the easy part to scale horizontally; I was doing it in 1996, even with true monoliths.

K8s can be too much, but even when tooling and costs forced us to segment by technology layer, you never tried two-node failovers.

If you are trying to share state at the app level, you would most likely reduce availability, because of split brain, etc.

The persistence layer was bad enough, with shared quorum drives, heartbeat networks, etc.

It sounds like you are just spinning up two app servers. Or are you talking about active/passive or active/active two-node clusters?

That is vertical scaling in the way I understand the term, not just scaling out two instances.


I'm talking about true horizontal scaling where load is sharded across replicas. Active-passive is easy. Handling web traffic is also easy. Backend business logic is not so easy.

If you're message-driven you can maybe have a set of replicas consume off a message bus like Kafka or NATS, but now you need to maintain that cluster as well as build a messaging layer. What protocol do you use, etc.?
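
For a rough sketch of what that looks like with kafka-python (the topic name, group id, and broker address below are made up, and process() is a stand-in for real business logic): every replica joins the same consumer group, so Kafka shards the topic's partitions across whichever replicas happen to be running.

    from kafka import KafkaConsumer

    def process(payload: bytes) -> None:
        # Stand-in for the actual backend business logic.
        print("handled", payload)

    # Every replica runs this same process and joins the same consumer
    # group, so Kafka spreads the topic's partitions across whichever
    # replicas are alive at the moment.
    consumer = KafkaConsumer(
        "orders",                          # hypothetical topic name
        group_id="order-workers",          # same group id on every replica
        bootstrap_servers=["kafka:9092"],  # placeholder broker address
        enable_auto_commit=True,
    )

    for message in consumer:
        process(message.value)

The sharding comes for free from consumer-group rebalancing, but the tradeoff stands: someone still has to run and monitor the Kafka cluster itself.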

Then there's the question of what triggers scaling up or down. The simple way is to scale on CPU utilisation, but then you get flapping issues. Or, if load isn't evenly distributed, you get one replica starving for CPU whilst the rest have too much.
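
To illustrate the flapping problem and the usual mitigation, here's a rough sketch of a scaler loop with separate up/down thresholds and a cooldown window; the numbers are arbitrary placeholders, not recommendations.

    import time

    # Hypothetical scaler loop: scale up quickly, scale down reluctantly,
    # and keep a gap between the thresholds so small wobbles in average
    # CPU don't cause constant add/remove churn ("flapping").
    SCALE_UP_CPU = 0.75       # add a replica above 75% average CPU
    SCALE_DOWN_CPU = 0.40     # remove one only below 40%
    COOLDOWN_SECONDS = 300    # wait 5 minutes between scaling actions

    _last_action = 0.0

    def maybe_scale(avg_cpu: float, replicas: int) -> int:
        """Return the new replica count given the current average CPU."""
        global _last_action
        now = time.monotonic()
        if now - _last_action < COOLDOWN_SECONDS:
            return replicas
        if avg_cpu > SCALE_UP_CPU:
            _last_action = now
            return replicas + 1
        if avg_cpu < SCALE_DOWN_CPU and replicas > 1:
            _last_action = now
            return replicas - 1
        return replicas

The gap between the two thresholds plus the cooldown is what stops a brief CPU spike from adding a replica that gets removed again a minute later.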

All of these questions go away if you can just provision a big node and be comfortable that it will be able to handle anything within reason.



