Competition only matters for new contracts. Once you have picked a provider and made a big enough infrastructure investment, there's no realistic path to switch to someone else.
Keeping everything, including mission-critical databases, "inside" Kubernetes is smart. That's what the smart people do. Everyone else can pay up.
Like everything else it's a tradeoff. If you are running _everything_ inside Kubernetes you can easily move away, but I think you're losing much of the benefit of being in the $big_cloud to begin with. If you have the staffing to provide your own databases, block storage, caches, IAM, logging/monitoring, container registries, etc. in a reliable way "as a service", the jump to some much cheaper bare-metal hosting is not that far.
For me the sweet spot is to keep all compute in Kubernetes and stick to open-source or "standard" (e.g. S3-compatible) services for your auxiliary needs, but outsource running them to the cloud provider. Fairly easy to move somewhere else, while still keeping the operational burden down.
But I agree that having, e.g., all your databases in a proprietary solution from the cloud vendor seems sketchy.
Yeah, not _everything_, but almost or "pretty much" everything. The main exception to "everything" ultimately being S3, plus some monitoring exceptions here and there that are purely cloud-side, like monitoring AWS's Service Control Policies with some cloud-side AWS tooling.
AWS has a very nice one-to-one mapping of K8s ServiceAccounts to IAM roles.
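For anyone unfamiliar: on EKS this is "IAM Roles for Service Accounts" (IRSA), where you annotate the ServiceAccount with a role ARN and pods using that ServiceAccount get credentials for the role. A minimal sketch (the role name, namespace, and account ID below are made-up placeholders):

```yaml
# Hypothetical example: a ServiceAccount annotated with an IAM role ARN.
# The eks.amazonaws.com/role-arn annotation is what ties the K8s identity
# to the IAM role; account ID and role name here are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-role
```

The IAM role's trust policy also has to allow the cluster's OIDC provider to assume it, scoped to that namespace/ServiceAccount, which is what makes the mapping effectively one-to-one.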
We used some cryptography-centric GitOps patterns to eliminate any human hands beyond a dev environment, which also helps keep IAM simpler (but no less granular or well-scoped).
> the jump to some much cheaper bare metal hosting is not that far.
Heh, at a consulting firm I was at not too long ago, all the K8s nodes were sole-tenant instances on their respective cloud providers. Intra-cluster Cilium-based pod-to-pod networking across cloud-hosted datacenter sites was super smooth, but I have to admit I'm probably biased by that team's uncommon access to talent.
Don't be ridiculous. To the nearest nine decimal places, nobody keeps their mission-critical database inside Kubernetes, and the K8s tax is excessive for essentially all cloud users anyway.
The last time I was involved in this kind of thing (a while back, but not too long ago), we ran Postgres and other databases strictly in K8s per a mandate from our CTO. Using reserved instances is great for avoiding a lot of cost.
I think for that particular firm, the "K8s tax" measured in fiat currency is negligible, and as for the human side, their people would respond to "K8s tax" with some ridicule along the lines of "someone doesn't know how to computer".
To be fair, most of the commenters on HackerNews should use something like Heroku.
This is absolutely not true. Customers regularly shift seven-figure-and-up spends between cloud providers. Yes, it takes planning, and it takes many engineer-months of work, but it definitely happens.
And also factor in (1) the claim that most cloud growth is still ahead of us, e.g. moving large customers from on-prem to cloud, and (2) that it would be terrible policy to try to charge existing customers more than new customers.
>> it would be terrible policy to try to charge existing customers more than new customers.
Companies do this very, very often. This is part of the reason why they have a "call the sales department for special pricing" option. They can give large contracts a nice discount to get them onboard, then slowly (or in some cases, if they really think they have you hooked, not so slowly) ratchet up the price. This is common in both B2C and B2B businesses.