> If you're using k8s as a service where you've outsourced all of the cluster maintenance (from any of the cloud vendors), which part do you see as super complex?
This point bears repeating. OP's opinion and baseless assertions contrast with reality. Even on COTS hardware, there are nowadays Kubernetes distributions that are trivial to set up and run: installing a single package gets a node running, and a single command registers the newly provisioned node with a cluster.
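For instance, a minimal sketch using k3s (one such distribution), following its quick-start; the server address and token below are placeholders:

```
# On the first node: install k3s, which starts a server (control plane)
curl -sfL https://get.k3s.io | sh -

# On the server: read the join token for registering new nodes
cat /var/lib/rancher/k3s/server/node-token

# On each additional node: install the agent and register it with the cluster
# (<server-ip> and <node-token> are placeholders for your values)
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -
```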
I wonder what OP's first-hand experience with Kubernetes actually is.
> When Kubernetes was in its infancy, it was quite difficult to do the PKI and etcd setup required to run a cluster - and managed services didn't exist.
I agree, but it's important to stress that components like etcd are only pivotal if you're putting together an ad-hoc cluster of unreliable nodes.
Let that sink in.
Kubernetes is a distributed system designed for high reliability, and it depends on consensus algorithms to self-heal. Nodes failing is a plausible scenario if you're running a cluster of cheap, unreliable COTS hardware.
The whole point is that none of the main cloud providers run containers directly on bare-metal servers, and even their own VPSes are resilient to their underlying hardware failures.
Then there's the whole debate on whether it's a good idea to put together a Kubernetes cluster running on, say, EC2 instances.
Yeah, it's as if someone said they can't run Linux because compiling takes forever, dealing with header files and dependencies is a pain, plus they use Bluetooth headphones.