I know we're mentioning the $100k number, but I'm genuinely curious: how do you spend $100k on staging? I currently spend $75k/year on production, $3k/year on staging, and around $2k/year for a full cloud environment per developer who wants one. For me, $100k for staging would mean >200k monthly active users on staging.
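For anyone sanity-checking that ">200k" claim, here is the rough arithmetic behind it; only the $75k production and $100k staging figures come from the comment above, the production user count is a guess on my part:

    # Back-of-the-envelope sketch (Python). Only the $75k and $100k figures
    # come from the comment; the production MAU is a hypothetical assumption.
    prod_cost_per_year = 75_000
    prod_mau = 150_000                                     # assumed production user count
    cost_per_mau = prod_cost_per_year / prod_mau           # ~$0.50 per MAU per year

    staging_budget = 100_000
    implied_staging_mau = staging_budget / cost_per_mau    # ~200,000 "users"
    print(f"${cost_per_mau:.2f}/MAU/yr -> ~{implied_staging_mau:,.0f} MAU to soak up a $100k staging bill")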
You want staging to be as close to production as possible, but obviously not exactly the same size.
If you spend $1M on production, you might aim for a staging environment at 1/10th of that size: say, 10 machines in a cluster instead of 100. But some technologies also impose a minimum footprint per tier of usage. For example, if you run multiple data centers in production, you need to test that topology in staging, and you can't do it with 1 machine per cluster; you need at least 3 per data center to form a quorum. So to mirror 5 data centers in staging you now need 15 machines, which is already more than the 1/10th-of-production baseline.
Issues like that can balloon staging costs if you try to implement staging properly.
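A quick sketch of that sizing math, treating the machine counts above as illustrative rather than a rule:

    # Sizing sketch (Python): the flat 1/10th-of-production target vs. the floor
    # imposed by quorum-based tech that needs at least 3 nodes per data center.
    PROD_MACHINES = 100
    PROD_DATA_CENTERS = 5
    QUORUM_MIN_PER_DC = 3      # smallest cluster that can still form a quorum

    naive_staging = PROD_MACHINES // 10                    # 10 machines
    quorum_floor = PROD_DATA_CENTERS * QUORUM_MIN_PER_DC   # 15 machines

    print(f"1/10th rule:  {naive_staging} machines")
    print(f"quorum floor: {quorum_floor} machines across {PROD_DATA_CENTERS} DCs")
    # The quorum floor exceeds the 1/10th target, which is how a "realistic"
    # staging environment ends up costing more than the simple ratio suggests.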
Honestly, with the right architecture you shouldn't even need 10 machines in a cluster; 2 should do. After all, the whole point of clusters/load balancing etc. is that 2 machines behind your LB should function the same as 1000 machines behind your LB; the latter just gives you a lot more throughput.
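To make the "same behaviour, more throughput" point concrete, a minimal round-robin sketch (backend names are hypothetical):

    # Minimal round-robin sketch: a request is handled by exactly one backend
    # whether the pool has 2 machines or 1000; the bigger pool only adds capacity.
    from itertools import cycle

    def make_pool(backends):
        ring = cycle(backends)
        def route(request):
            return next(ring), request
        return route

    staging = make_pool(["app-1", "app-2"])                    # 2 behind the LB
    production = make_pool([f"app-{i}" for i in range(1000)])  # 1000 behind the LB

    print(staging("GET /health"))     # ('app-1', 'GET /health')
    print(production("GET /health"))  # ('app-0', 'GET /health')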
It depends, however, on whether you want to test the code or the infrastructure.
My preproduction uses a three-machine setup so it can catch clustering problems, but the code itself runs just as well on a single machine talking to a local single-instance DB, and that's how devs run it.
Depends on the business you're in. Sometimes an "environment" is a rack (or 2) of servers, and that might be $100k-$400k/rack before space, power & bandwidth.