The Future of Kubernetes (eficode.com)
147 points by kiyanwang on March 13, 2022 | 173 comments


I am worried that the supreme arrogance of abstraction-builders and -apologists, which this article's vernacular also exudes, will be the cause of a final collapse of the tech and IT sector. Or maybe I even wish for it to happen, and fast.

Everything gets SO FRICKIN' COMPLEX these days, even the very simple things. On the flip side, interesting things that used to be engineering problems are relegated to being a "cloud cost optimization" problem. You can just tell your HorizontalPodAutoscaler via some ungodly YAML incantation to deploy your clumsy server a thousand times in parallel across the rent-seekers' vast data centers. People write blogs on how you can host your blog using Serverless Edge-Cloud-Worker-Nodes with a WASM-transpiled SQLite-over-HTTP and whatnot. This article's author fantasizes and fetishizes interconnected lanscapes of OpenID-Connected workloads that are, and I QUOTE, "stateful applications that are stateless". Because "This means data should be handled externally to the application using other abstractions than filesystems, e.g., in databases or object stores." - yeah, fine and dandy, but which platforms of the future should these services be hosted on? And who will develop and shepherd and maintain and fix them?
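For the uninitiated, the incantation in question boils down to roughly this one-liner (a sketch; the deployment name and the numbers are made up, and kubectl will gladly expand it into YAML for you):

    kubectl autoscale deployment clumsy-server --cpu-percent=50 --min=1 --max=1000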


> across the rent-seekers' vast data centers

why not go further and call it "neo-feudalism"? The infrastructure providers are the territorial lords that control the arable land, the aristocracy of the digital world. Corporations that rent this infrastructure to build services on it are their vassals. They may claim to be freemen or nobility as they pay for their fief in money, not in services and allegiance, but most of them are ministeriales that can't just move elsewhere without significant risk to business continuity (some can, like an oil mega-corp that just rents some IT infra). Lastly we have the cloud engineers, the serfs at the bottom of the pyramid, who work the land they don't own and have to give part of the profits of their work to the upper class in exchange for the opportunity to make a living on their infrastructure.


agree with this; similar things can be said about the MSFT Windows strategy... for the system designers, it is a feature


Insightful.

Next level for you: look at society and realize it works the same.


I've appreciated much of the modern tooling, but as things have matured, I'm convinced the industry has lost the plot. As days wear on, I'm increasingly disliking the complexity these tools have brought and oftentimes see them solving problems that never existed.

With that said, I'll echo your sentiment "Everything gets SO FRICKIN' COMPLEX these days, even the very simple things."

Nomad is now perhaps my only favored tool for container deployments/some orchestration. I also long for the days of the original container ... the folder! Put everything you need in and zip it up. Instantly shareable and no spaghetti tooling needed to run it. The OS told you everything you needed to know about the process. I also oppose (my personal opinion) the trend of wanting to scale things to crazy numbers. I wholly believe in doing more with less. The company whose infrastructure stack I currently admire and put on a pedestal is Stack Overflow. Last I checked in 2021 it still ran on 4 MSSQL servers and 9 IIS servers with a few satellite services. Could easily be 4 MySQL servers and 9 HTTPD servers.


I've been thinking a lot about this complexity problem too. Tech people are starting to get really arrogant and demand crazy salaries across the board. Right now this is OK because tech is now seen as a competitive advantage. I wonder if one day some C level is going to look at their IT budget and start to question if technology is really that much of an advantage. For example, at what point is it cheaper to put 10 humans in a call center rather than build online ordering systems and deal with data regulations, app changes, outages, etc?


That (in house tool builders) is not the kind of IT that demands high salaries; it’s the engineers building the call center replacement software (analogy) that demand the high salary. And they make it up in scale. All of these high paid engineers make it up in scale. And you can see it in companies’ financial statements. It’s not some hand wavy wizardry.


You need about three engineers to support a ~$500mm/year ecommerce site using managed kubernetes. Really you can do it with two, you need one that understands it, and another guy/apprentice in training who sort of understands it, in case the first guy gets hit by a bus. The third guy is just so the first and second guy don't trade off on-call and get burnt out immediately.

Ten or fifteen years ago you'd be looking at a team of at least five, probably 7-10 engineers, minimum, to design, maintain and keep up to date a similar system. The company gets tremendous value out of their three engineers at $150-250k/year each.


Managed Kubernetes has really changed the economics of deployment. Kubernetes administration sucks about as much as vSphere or OpenStack. Which is to say: a lot. You can debate the details, but you are talking highly trained people, and potentially several of them, to run a large installation. Plus your management chain now needs to worry about HA, security, and upgrades being covered.

Managed Kubernetes services like EKS replace that with half a person or less + about $100/month overhead per cluster.
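For a rough sense of how little is left to do, standing up a managed cluster is close to a one-liner these days (a sketch using eksctl; the name, region and node count are placeholders):

    eksctl create cluster --name prod --region eu-west-1 --nodes 3

The HA control plane, its patching and its upgrades then become largely the provider's problem rather than yours.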


And for something as "simple" as S3 how much is it gonna cost to get your data back out when you need to leave?


The complexity of the systems (mostly!) reflects the reality of the needs. Thousands of services working in orchestration, in an environment that is, by definition, hostile, is non-trivial.

Little parts of a system being over-engineered may be the fault of an engineer’s ego somewhere, but the massive complexity of distributed systems isn’t some ego trip or frivolous fiddling with shiny new tech.

It’s just big and hard for mere mortals. It’s hard to say at what point k8s actually starts to decrease complexity, but I think most teams will get there before they reach ~100 software engineers.


My current record is one week with 1 infrastructure person trying to run a small containerized application on Azure. This is specific to Azure (I could get it running on GCP no problem; AWS would be a bit more complex), but it reflects, IMO, how k8s is simplifying operations, because I could do it easily with k8s on all of those platforms (and that includes setting up and managing the cluster).

I've lost track of how many hours (more like man-months, mythical or not) k8s has saved so far in my work, whether it was in a team of 2 or a team of 15 supporting multiple other teams.


The portability and the widespread adoption are a force multiplier, but I have yet to be on a (healthy) team where everyone levels everyone up as the k8s complexity grows, even for "simple" things (where you end up adding custom controllers or similar). One engineer will add it based on a Google help article or Stack Overflow post; HOPEFULLY the engineer reviewing the PR tries to understand it, and only later, when it breaks, does everyone find out it exists and that only Sally knew all the potential issues, etc etc.

On a highly functioning team with strong communication skills and no major egos it should go a lot smoother where everyone is leveling up on the knowledge at the same time.


>will be the cause of a final collapse of the tech and IT sector

It has a name - it is the Anti-Singularity, and it is coming for us all. As we blindly make ever more aspects of our lives completely dependent on increasingly complex and fragile systems, those very systems are becoming unmaintainable, despite ever-increasing numbers of engineer hours. One day, we will go too far, and will make a change that will break things. It won't be possible to go back, as there will be nothing to go back to. As the breaks ripple out, faster and faster, the Anti-Singularity will occur, freezing development in place permanently, and bringing productivity to a halt.


Thank you for writing the comment I didn't have the energy to make.


+1 for "LANscapes". LOL!


A difference between the Linux movement and the Kubernetes ecosystem is that most of the open-source tooling developed for K8s is built by venture-funded startups (e.g. Istio or Envoy). And the endgame of these companies is almost always an open-core product that is functional but in practice just broken enough to not be usable without buying premium support or the enterprise edition.

So I'm a bit skeptical about the long-term viability of the platform. Personally I think we'll see new paradigms emerge that will mostly obsolete containers and complex orchestration platforms, you can e.g. already see some pretty interesting ideas around WASM. Before Kubernetes OpenStack was the big thing, but now it's mostly used as a substrate to run Kubernetes on.


Istio was started by Google & IBM, while Envoy was created by Matt Klein at Lyft.

One of the reasons why we adopted Envoy at Ambassador for our API Gateway was precisely because there was no single _commercial_ entity who controlled Envoy.

As a note, technically Google & Lyft are VC-funded, so that’s true, although I wouldn’t put Google in the startup category, and Lyft’s business model is unrelated to Envoy.


> One of the reasons why we adopted Envoy at Ambassador for our API Gateway was precisely because there was no single _commercial_ entity who controlled Envoy.

I'm a little disappointed that you placed so much weight in that factor. I believe simplicity, reliability, and comprehensive documentation are far more important concerns than whether a commercial entity has control over a project. The flip side of community ownership is more politics, more complexity, bureaucratic decisionmaking, and slower evolution.

Connection/service proxies are a dime a dozen. I suspect you could have picked nginx, haproxy, or the Linkerd proxy and been just as well off.


> The flip side of community ownership is more politics, more complexity, bureaucratic decisionmaking, and slower evolution.

This of course may be true for some communities, but certainly not the case for L7 proxies. In fact, at the time, the opposite was true: NGINX & HAProxy were content in slowly supporting their (captive) customer bases, with little incentive to innovate. This is why Lyft went off to build Envoy Proxy. At GA, it supported features such as HTTP/2, native observability, hitless reloads — none of which were easy / possible to do in NGINX or HAProxy at the time.

> Connection/service proxies are a dime a dozen. I suspect you could have picked nginx, haproxy, or the Linkerd proxy and been just as well off.

Linkerd proxy didn’t exist then (there was an early version of it, written in Scala IIRC; the Rust version didn’t come along until quite some time later).

I agree that there are many options for service proxies. I’ve found, though, that the Envoy community has been pushing innovation in the space quite aggressively.


I was under the impression that Istio and Envoy are significantly driven by Tetrate (https://www.tetrate.io/), which in my understanding was started by the Istio creators and sells enterprise editions of the two solutions [1]. So yeah it's open source, I can fork it and that's nice, but let's be honest Tetrate aims to monetize these products. Not saying this can't work and be great for the community at the same time, but I'm at least a bit skeptical.

1: https://www.businesswire.com/news/home/20210310005295/en/Tet...


I've been running Istio for years and have never heard of Tetrate. Tetrate might be started by people who originally worked on Istio, but Istio itself is primarily driven by Google.

Istio development is primarily driven by its steering committee [0][1], which is led by Google. If you look at the contributors [2], 6 of the top 10 are Google employees (perhaps more - not everyone lists their employer on GitHub). They have an open design process, posting design documents and similar to their Google Drive [3]. The team is likewise active at community events, where they solicit input that drives what they work on.

[0]: https://istio.io/latest/blog/2020/steering-changes/

[1]: https://docs.google.com/spreadsheets/d/1Dt-h9s8G7Wyt4r16ZVqc...

[2]: https://github.com/istio/istio/graphs/contributors

[3]: https://istio.io/latest/get-involved/


Envoy was originally written by Matt Klein @ Lyft, and the IP is owned by the Cloud Native Computing Foundation.

Here's the initial video where Matt Klein introduces Envoy: https://www.microservices.com/talks/lyfts-envoy-monolith-ser....

If you want to buy commercial support for Envoy & Istio, Tetrate certainly is an option.

(Note: I am/was the CEO at Ambassador Labs, which organized the Microservices Practitioner Summit where this talk occurred.)


>pretty interesting ideas around WASM

I'm confused here: does WASM stand for something different besides WebAssembly? Because googling "kubernetes WASM" came up with nothing but WebAssembly. But I was under the impression that WebAssembly is just a virtual machine. I'm not much of a backend person, but I don't see the overlap between container management (kubernetes) and a virtual machine (WASM).


I meant more that technologies like WASM might obsolete the need for containers and thereby container orchestration platforms like Kubernetes. Of course Kubernetes might itself evolve to orchestrate WASM workloads.


I can already see it, a zip file containing application assets, a wasm binary and a couple of manifest files with configuration metadata.

Where have I seen this already?


Sure, superficially it sounds like Java. But hey, Java was really damn successful for a very long time.


It would help if that wasn't sold as something new never attempted before.


The WASM concept has been around for decades; it's effectively the JVM or .NET but in the browser. I don't get the relevance when discussing k8s or containers.


WASM is not just in the browser, which is the key point. But yeah, it is superficially similar to the Java application servers of the past. One big difference is that I can compile, e.g., Rust to WASM but not to JVM bytecode. WASM is language agnostic in a way that the JVM isn't. This makes it a good target for shipping apps in a variety of languages that just need to be scheduled "somewhere", which is the main purpose of container orchestration platforms like k8s. This is an evolution of serverless, where I won't have to maintain a "cluster" for my compute but just hand off a WASM binary to a cloud provider.
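As a rough sketch of what I mean (assuming the Rust toolchain plus a standalone runtime like wasmtime are installed, and "hello" is an ordinary cargo binary crate):

    rustup target add wasm32-wasi                    # add the WASI compilation target
    cargo build --release --target wasm32-wasi       # produces target/wasm32-wasi/release/hello.wasm
    wasmtime target/wasm32-wasi/release/hello.wasm   # the same .wasm runs wherever a WASM runtime exists

The .wasm artifact, rather than a container image, becomes the thing you hand off to whatever schedules your compute.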


Because its advocates now decided to reinvent JVM/.NET application containers.

https://krustlet.dev/


> Of course Kubernetes might itself evolve to orchestrate WASM workloads.

Already been done:

https://krustlet.dev/


I thought WASM was just super-optimized JavaScript that lets you play Unreal in a web browser. Like

Am I thinking of a different tech? Is “WebAssembly” == WASM?


You’re thinking of asm.js, a method of writing and optimizing JavaScript as a compile target for C. It involved a lot of |0 to force the VMs of the time to assume every number was an int32 instead of a float64. Firefox’s engine briefly used a special optimizer for asm.js, but it’s just standards-compliant JS.

WebAssembly is a new virtual machine that isn’t related to JavaScript. There’s no garbage collector or automatic memory management. It uses a binary compiled format for machine code instead of plain text. It’s kinda like Java if you replace the Java memory model with the C memory model, and the Java language with something that looks more like LLVM mashed up with Lisp.


I don't know why people keep talking about WASM; WASM is 1% of what Kubernetes does, it's an alternative to runc nothing more. It's like comparing Docker to Kubernetes.


I don't think they were comparing WASM to Kubernetes. In the context of the comment you replied to, I read it as WASM potentially obsoleting _containers_ as opposed to _complex orchestration platforms_.

> WASM is 1% of what Kubernetes does, it's an alternative to runc nothing more

It'd be interesting if you could expand on this. I don't think WASM does _any_ of what Kubernetes does.


Why would WASM obsolete containers?


Because new generation startups are on the verge of selling WASM Application Servers....


Of course containers will be around probably forever, just as we still have mainframes. Note that current serverless and container platforms start and manage long running instances and occasionally incur cold starts.

WASM has near instant startup, making a cold start for each request feasible. I think this could be a key property/advantage that makes it preferable and more economically efficient in the long run.

WASM I think has a stronger sandbox as well. Containers in a multi-tenant environment need to run inside a virtual machine. WASM can be run multi-tenant without the need to isolate each tenant in a virtual machine, again making it more efficient in the long run.

edit: also recalled a relevant tweet from creator of docker https://twitter.com/solomonstre/status/1111004913222324225


How can WASM start faster than actually native executables?


containers != native exe. WASM can start faster than containers, not exe.


Wait a minute... we've had virtual machines (in the programming language sense) in like forever. JVM, BEAM, CLR, etc. None of them made containers obsolete. People package Java inside containers even though Java has had non-Linux-container ways to package things in like forever — because people like the tooling surrounding Linux containers.

WASM just replaces JVM/BEAM/CLR/etc with its own virtual machine. It doesn't address all the stuff that container orchestration tools address such as lifecycle management, update management, artifact management, autoscaling, networking, management access control, logging & metrics, etc.

If you want to start faster than containers, you already can — by not using containers.

So given all this above, how is "starts faster than containers" an advantage?

And containers are not exactly slow to start. `time docker run -ti --rm ubuntu:20.04 true` => 0.5s. I suppose that's too slow for serverless cold start, but if you really care about that then the startup time of Ruby, Python, Node.js etc with all the packaged dependencies are also in that order.


> It doesn't address all the stuff that container orchestration tools address such as lifecycle management, update management, artifact management, autoscaling, networking, management access control, logging & metrics, etc.

Absolutely agree, it's just a core technology which enables a new model of operation; all of that still needs to be built on top of it in order to run things. Right now there is almost nothing.

> If you want to start faster than containers, you already can — by not using containers.

Only if you don't care about security/isolation. WASM is strongly sandboxed. It's appropriate for multi-tenant systems, where containers/JVM/etc. have to be run inside of virtual machines (e.g. Firecracker) to get acceptable isolation.

> I suppose that's too slow for serverless cold start

Yep, half a second is way too slow for a new instance per request. I think WASM startup time is around 35 μs for Lucet.


Hey, I didn't say I agreed with that statement - I was just pointing out what I thought they were trying to say.


You're right. By itself, WASM is nothing like K8s. This comparison only makes sense if you consider how you would define the syscall layer.

People are seeing WASM in the context of a distributed runtime. Obviously, this would require you to write your applications with something not POSIX-like, but it would allow you to scale your service on a cluster. Your unit of computation isn't just confined to a single machine.


Wasm capabilities for server side enable you to have alternatives to K8s like Cloudflare workers that are far superior in performance and much simpler from a developer perspective. They are suitable for niche use cases right now, but I can see a future where they can run more general workloads without the added complexity. That's why people keep talking about Wasm and part of what's exciting about it


WASM will never be faster because it has a runtime; the only thing that can be quicker is the startup of the application, which is less relevant for regular API servers etc...

WASM is interesting client side; server side, beside very simple use cases like resizing an image, it's not very good imo


1. WASM doesn't require a runtime. You can perform AOT compilation on it and still get the benefits of sandboxing.

2. Even under interpretation, WASM doesn't require the interpreter to jump through hoops or optimization to get it to be performant. In this sense, it's more of a portable assembly format than something like LLVM IR.
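On point 1, with wasmtime that looks roughly like this (a sketch; subcommand and flag names may differ between runtimes and versions):

    wasmtime compile app.wasm                      # AOT-compile to native code, emits app.cwasm
    wasmtime run --allow-precompiled app.cwasm     # run the precompiled module, still inside the sandbox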


Bytecode as executable format goes back to the early 60's.

Plenty of platforms would AOT compile it before execution or at installation time (like IBM/Unisys mainframes and microcomputers).


EAR files, just saying.


> already see some pretty interesting ideas around WASM

I'm excited about WASM, but I don't see how it's fundamentally different when it comes to being built by venture-funded startups. Pretty much all of the interesting stuff in server-side WASM is being driven by venture-backed startups.


Agree, but also there's a lot of core work being done by the industry (Intel, Google, etc.) that we all benefit from. It definitely 'feels' different than K8s and closer to the early days of the web


Any extensible platform will be driven into a complex slurry of options and configurations to support every niche vendor/use case. I think the successor of kubernetes will be a simpler version of Fleet: distributed systemd. The future is edge, and edge doesn't need to integrate with every series A startup, it needs a small footprint and stability.

I've been tinkering with a prototype that does this (and if anyone is interested in collaborating, my email is in my profile).


The strength of Kubernetes (vs Mesos, OpenStack, etc.) is that you can strip it down pretty well so that you only bring the niche bits that you need - whether it's edge, or something big that needs a lot of extras. The unified API mechanisms and moving drivers out of core helped a lot there.

Also, in my experience, it turns out that you can quickly find yourself needing all those extra bits if you're not Google (which implements a lot of the same features in libraries linked to programs directly and depends on the fact that they run customized everything) or Netflix (similar).


Istio came out of Google. And I'm not sure what Kubernetes has to do with OpenStack, other than early-on it was assumed you would need to run containers on top of cloud VMs because that's what the big public cloud providers did.


Given the failure of most pure FOSS businesses, and the love for non-copyleft licenses, it is quite clear that what most businesses care about is the model that ruled the industry before Linux, with open core being the new demo versions.

No wonder that as IoT took off plenty of POSIX like clones with non-copyleft licenses sprung off.

Even on Android, the Linux kernel is the only GPL piece of the puzzle left standing.


> broken enough to not be usable without buying premium support or the enterprise edition.

How does Kubernetes do this? I haven't hit that spot yet but I'm pretty sure I will.

I guess I'll also find an open source solution to avoid switching licenses.


tbf if a new Linux were starting in 2022, it would absolutely be VC-funded, since the dotcom era opened that Pandora's Box and it will never close again

i think the other key difference b/w nix and k8s is that a very large swath of k8s core contributors are employed by very large companies (mostly Google, Red Hat, Microsoft, VMware, and IBM) who are also deeply invested in k8s-derivatives sold in the marketplace. from what i've gathered, and this might not be true anymore, many contributions to Linux still come from academia (universities, etc)


Same as Linux: it only took off because many UNIX shops like IBM saw value in improving it while cutting their own costs.

> 1998: Many major companies such as IBM, Compaq and Oracle announce their support for Linux

> 2000: Dell announces that it is now the No. 2 provider of Linux-based systems worldwide and the first major manufacturer to offer Linux across its full product line

> 2006: Oracle releases its own distribution of Red Hat Enterprise Linux.

--- https://en.m.wikipedia.org/wiki/History_of_Linux


The dig about Envoy doesn’t read true to me. Envoy was open sourced by Lyft in 2017 and quickly replaced Haproxy in our stack because it was Good. Is there actually a venture backed startup steering it these days?


See my other reply.


The responses to which indicate you're factually incorrect and that neither Envoy nor Istio are owned or controlled by venture backed startups. You can however get products built on top of them or enterprise support from venture backed startups. Akin to companies like Red Hat in regards to Linux.


I have never used Envoy or Istio, and maybe I shouldn't believe the PR hyperbole about Tetrate which says "Today, Tetrate is one of the largest contributors to the Istio open source project and second largest to Envoy Proxy.". In the other thread people have pointed out that Google is instead the main driver behind Istio these days, which I do not find more reassuring as they mostly use Kubernetes as a marketing tool to sell cloud services (as, in my understanding, they have an entirely different internal platform that they deploy their services on). I would say the point still holds, but of course it's fine to have different opinions on this. It's just that seeing containerization startups like Docker starting to monetize their core ecosystem to the detriment of the open-source community (e.g. by limiting container pulls on Dockerhub) does not give me confidence that the same thing won't happen to the Kubernetes ecosystem in the long term.


K8s will be superseded by a simpler yet more powerful tool sooner rather than later. Just like it happened with:

- Corba, RMI -> http apis

- XML, Soap -> json

- Flash -> html5

- Perforce, svn, mercurial -> git

Etc. We can’t think of anything better than k8s right now, but that’s just how it works. The moment that new tech appears, we all will be asking ourselves “how could we possibly have used k8s?”


> Flash -> html5

HTML5 is definitely not simpler than Flash was. I'd argue, it's far more complex, if you include all the sub-technologies and frameworks required to reach feature parity.

There was also no "natural" progression from Flash to HTML5. Oracle and the browser vendors had to run an involved deprecation campaign to get developers off Flash and move them to HTML5. If you had left the web ecosystem on its own, my guess is, we'd still be writing Flash applications today.

> Perforce, svn, mercurial -> git

This progression was definitely natural, but I doubt again that git is less complex than svn or mercurial. However, one big advantage of git is that it is extremely easy to set up: You can just install it and use it locally without bothering any admins or setting up any servers. Once you are already using it, it's also easy to connect your repo to a server.

So, I'd argue, git has better UX and better availability than the alternatives, but it's not necessarily simpler.


> required to reach feature parity

I think most people don't need full feature parity. That's why a lot of these techs take off. They need something that is easier to understand.

Similar with kubernetes, there's like 5 permutations of software deployment people actually want to do. Deployments are one of those but there are still a few areas that you have to get hacky with StatefulSets to achieve. Once some PaaS-wrapper for kubernetes takes off we'll see the V2 of that idea as a stand alone cluster management thing. If someone nips some ideas from MaaS [0] we might even see on-prem bare metal take off again once people realize they're getting screwed on perf/bandwidth and many of the benefits of cloud providers are already provided to you by something like Kube.

[0] - https://maas.io/


I disagree completely :)

Git is very simple. There are basically just four concepts: blobs, trees, commits, and pointers to commits.

However, git is far from easy! The UX is haphazard and confusing and many people are scared of venturing outside of their couple of memorized commands.


This is the first time _ever_ that I've heard git UX described as simpler or easier than something else.

As far as I remember Mercurial was easier to start with but Github came along and won.


What does oracle have to do with flash? Are you thinking of Adobe or are you thinking of java applets?


Ah, I'm sorry. Yes, I meant Adobe, not Oracle.


There are so many devs around now that have never programmed or used Flash. It’s details are becoming myths.


Kubernetes is the new tech you are looking for. It's not simpler, because it _needs_ to be complex in order to provide portable, heterogeneous, application deployments.

How was it before? A bunch of unstandardized shell scripts and Ansible playbooks that you could not reliably automate, let alone port to a different topology? How could we possibly use that?
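For what it's worth, the standardized version of "run three replicas behind a load balancer and roll them out safely" is a handful of commands, or the equivalent YAML (a sketch; the image and names are placeholders):

    kubectl create deployment web --image=nginx --replicas=3
    kubectl expose deployment web --port=80
    kubectl rollout status deployment/web

And that same sketch works essentially unchanged on any conformant cluster, which is exactly what the shell-script era could not give you.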


> bunch of unstandardized shell scripts

Eh, nothing unstandardized about POSIX shell scripts, as opposed to a bunch of ever-changing YAML files with questionable semantics.

I think this generation of "cloud natives" have drunk too much of the Kool-Aid served by "cloud providers" and their clever faux grassroots marketing/blogs.

As compared to the Linux and BSD "movement" (no such thing), which was about liberating ourselves from having to run apps on commercial OS's, based on a well-understood OS baseline (POSIX, a term coined by RMS).


Eh. I like and use Kubernetes as a tool. But this is oversimplification at best.

Kubernetes is not a de facto and always-better replacement for Ansible and shell scripts. In most small-scale deployments, the value brought by Kubernetes is trumped by the costs of complexity. Bunch of scripts, you say? Now you have a bunch of YAML files lying around. At least before, you could find and fix problems manually, quickly. Now, first find someone who has been in the weeds of Kubernetes and its ecosystem to fix your problem.

In large-scale infrastructure, Kubernetes may bring the standardisation that is needed. Being able to hire people who know Kubernetes is easier than hiring people who know how you wired your infra with Ansible playbooks and scripts.

Portability, you say? How easy do you think it is to move your infra (not just the apps, but users, security rules, storage, etc.) to another Kubernetes provider? EKS to AKS? Do you think AWS IAM and whatever equivalent Azure uses would be straight portable? Your S3 buckets and their policies to Azure Blob Storage? Selling K8s with infra portability is a lie.

Applications - yes, but that is achievable by other means as well.


Portability is _THE_ killer feature of kubernetes. You may argue that it is not straightforward to move from EKS to AKS, but consider that any replacement solution will suffer from the same shortcomings.

However, moving from one cluster in one region to another cluster in another region is very simple. And that did not exist before Kubernetes.


> It's not simpler, because it _needs_ to be complex in order to provide portable, heterogeneous, application deployments.

No, the complexity of Kubernetes is mostly in its auto-tuning and auto-scaling features. That's the stuff that needs to be re-engineered from the ground up in an elegant way. If history is any guide, we'll probably end up with some variant of it in systemd sooner rather than later.


Autoscaling is honestly a minor thing (I have yet to use Autoscaling other than cluster autoscaling since 2016) and autotuning... what autotuning are you talking about? The binpacking support?

Also, I do wonder what "elegant way" would be, considering that k8s is a very elegant system (once you notice that YAML is just one of the serialization formats to make it easier for raw API access by humans).


No. Kubernetes solves imaginary problems, not real ones.

The reason k8s took off is because it provides the fief that devops engineers so desperately need to be motivated to stay on the job. (And that's mostly a good thing. Look at software testing to see what happens when you don't give developers career incentives.)


> Kubernetes solves imaginary problems, not real ones.

Portability between cloud providers is not imaginary. Same for portability of network configuration.


I find this amusing considering that the JSON community are slowly re-inventing XML with their versions of XSL, XSLT etc.


True but i do see a general shift to "types are good" so there is a shift to "schemas are good" by extension IMO.

The big thing is JSON has common basic building blocks (simple types), so you are unlikely to see the same class of interop problems that SOAP and friends had ... "oh look, this defines a new type my Python SOAP/XML lib doesn't natively support" (happened to me a few times; most memorable was the Akamai SOAP API using a custom list-of-string type nothing in Python would parse out of the box)


Types are good and XML/SOAP was a good idea. The vendor lock-in shenanigans were the problem.


Just like the WASM folks are now re-inventing application servers.


JSON is already 20 years old. I guess things can’t stay the way they are forever (and 20 years in the IT world is a lot).


I am not sure. There are already superior (IMHO) alternatives and JSON still sticks around unaffected. For instance, at work we use protobuffers everywhere, and now I am so used to them that I have started using them on my personal projects as well, since they are so convenient. In fact, I am not sure how many hours I have saved from not having to write serialisation/deserialisation code, and from getting full IDE support for my messages.

However, it seems protobuffers (or alternatives like flatbuffers, cap'n proto) are still niche compared to JSON.


Everyone I've worked with, save a few that experienced protobufs with extensions, immediately finds the benefit once they use them.

However I have yet to convince anyone at an org that isn't already using them that they will change things for the better.

Schema once, code everywhere. It is really nice.
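A minimal sketch of what that looks like (the schema is made up; assumes protoc and the Python protobuf runtime are installed):

    printf '%s\n' 'syntax = "proto3";' 'message User { string name = 1; int32 id = 2; }' > user.proto
    protoc --python_out=. user.proto     # emits user_pb2.py; other languages via their respective *_out flags/plugins
    python3 -c 'import user_pb2; print(user_pb2.User(name="ada", id=1).SerializeToString())'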

Have you seen Buf? https://buf.build/


Indeed a chicken and egg problem since investing in the tooling to make protobuf usage smooth requires some prior use.

No, thanks for sharing! I will have a look!


I'm not too far into protobuf, etc., but do you only prefer them for your internal services, or do you actually see them as a viable solution for public-facing APIs?


Great question! My rule of thumb would be for server to server apis use protobufs. Also probably for Android/iOS/Flutter clients (depending on how well the protobuf compiler is integrated in your toolchain). For web clients, last time I checked, grpc-web was the most complete implementation/tool, but couldn't make it work with the rest of our stack. Maybe spending more time I could've, but it got really frustrating and I gave up.


To extend my comment: notice how the tech on the left side was/is usually backed up by companies or corporations while the tech on the right side usually is not. I think this is just a natural flow of software: some piece of software emerges for the benefit of a big corp disguised as open source… and suddenly everyone uses it. It takes some time for the “internet community” to come up with something better and less tied to any company in particular. It’s a win for everyone: Google is having its piece of the cake with GKE now, the community appreciates the effort and will use k8s as an inspiration to make something better.


This is not always the case. Sometimes we use a technology for decades and just upgrade its version. Like HTML, HTTP, GCC, ... the list goes on. It's very possible that we still use k8s as major container orchestrator in a decade, just in a higher version.


I think a better way to think about it is in separating API and implementation or wire format and semantics.

In terms of the wire format and the low-level concepts, HTTP2 and 3 are completely different from HTTP1.x. It's not an extension like HTTP1.1 was; it's a complete reinvention.

However, what they did was to reuse the high-level semantics and the "API" of HTTP1, so as a web developer you can pretty much pretend you're still using HTTP1 and let the browser and web server do the rest. This makes transition significantly easier.

I could imagine something similar with K8s. Maybe some other product will come along that supports (a subset of) the k8s API but does something different under the hood. I believe k8s itself did so with Docker: you can still define k8s images using Dockerfiles, even though no piece of Docker is actually involved in the process.


I think all your examples are "simpler but less powerful" instead of "simpler yet more powerful".

25 years later and still, worse is better.


>Flash -> html5

>Perforce, svn, mercurial -> git

Not sure I agree with those two. Edit: Someone beats me to it.


I think K8s will become simpler and easier to use. There will be platforms (eg. Knative) on K8s, that will become the norm for starters.


nomad has been trying for years despite being "simpler" and open-source; it's still in third place :(


Agreed. My go-to example is: J2EE and EJBs in particular -> Spring.


- Java -> wasm


It’s funny to see this pitched as the “future” when we were doing most of these shenanigans during Airbnb’s Kubernetes rollout in 2018. The only thing that’s futuristic from the perspective of 2018 Airbnb infra is Knative. Knative sounds like a good idea — especially for control-plane services or devtools where scaling to zero is something that would actually happen.

The commentary around PersistentVolumes is true (don’t write user data as json pls) but simple. We had a few use-cases for big persistent volumes or even host volumes managed by a daemonset, where object store performance didn’t cut it. One was indexes for a search service. You can’t push these 64GB hunks of data into S3 without paying a ton; much better to use a file system. Same thing with translation string data: a PV with a local SQLite database or other indexed, read-optimized format is 100x faster and 100x less expensive than “object storage”. We did enforce that the PV was always read-only for the “main” container in a pod, so you needed to add a sidecar or daemonset if you wanted to sync data there.


I appreciated the article, but it read more like iterating on a few pain points in Kubernetes than envisioning a very different future.

Where I work, we use Kubernetes very very heavily and some of the article reflects some of my experience, in the sense that for any challenge we have, the community response is that we should add MORE complexity. Using secrets is complex? Rearchitecting your whole app to use OIDC tokens is the answer (to take an example from the article).

I think that Kubernetes was designed, released, and funded, to attack the AWS hegemony. As such, it was designed to provide a cloud abstraction layer for developers to target instead of the AWS API(s). As a result, it is designed for an organization to run a single cluster with many relatively trivial workloads (enter Helm for packaging "off the shelf" apps).

However, it turned out for some of us, the need is to run the same non-trivial application in multiple regions of multiple Cloud Providers. Here, Kubernetes really falls short, because Cloud Service Providers don't seem particularly motivated to make Kubernetes-based applications more portable, and Kubernetes remains a very leaky abstraction in terms of versions, storage, and networking.

To me, the future of Kubernetes looks like the community rallying around a single "distro" of Kubernetes that is reasonably consistent across all major cloud providers, but also makes basic assumptions about things such as the CI/CD pipeline, observability and SLOs, self-managed packaging, etc... Here, I think the Linux analogy holds, in the sense that different Linux distros emerged for different sets of users, and communities were able to rally around solution sets and provide support to each other.


We use Kubernetes for our cloud database platform, which we are extending to new environments. It's been quite successful. We were able to port from AWS to GCP in a matter of weeks. We don't try to share tenants within a single Kubernetes cluster.

With that experience in mind, the storage section of the article is a hand-wave. Kubernetes is a major platform for high performance databases themselves. Saying that Kubernetes developers should "use databases" is not a very plausible answer in this case.

Object storage is not a panacea either. It does not exist equally in all locations and building caches to make it work efficiently in low-latency apps is a non-trivial exercise. (Like years of implementation work by PhDs if you want something good.) Locally attached NVMe SSD is still the winner for a lot of applications. It needs to be first class in Kubernetes and it needs to be easy to allocate pods that offer it. Applications can deal with resilience through app-level replication. Databases have been doing it for years.


We use Ephemeral Inline Volumes backed by LVM logical volumes on a local SSD. TopoLVM can make this trick possible.


That's an excellent tip. I was not aware of that, though some of our storage guys probably were. Thank you!!


What do you think of mixing things like crossplane (and other "operators" wrapping provider-specific bits) together with KubeFedV2?


> Linux is still ubiquitous, and part of our stack, but few developers care much about it because we have since added a few abstractions on top. It's the same that will happen to the traditional Kubernetes we know today.

I think the tone of the article writes this as a positive, but I do not see this as a good thing. Looking at the ecosystem of k8s today, all that has happened is the same concepts that existed in OSes have been reinvented. This future abstraction sounds like yet another migration, but I wasn't able to tell what the net benefit was aside from addressing some intellectual or academic concerns (x should not y).

I'll definitely try to keep my mind open, but, to borrow a sentence pattern often used in the post, more abstractions should not be seen as a better thing.


Kubernetes has always had poor design decisions that have rarely ever been improved, because the designers care more about keeping people using their system than making it simpler. The way forward is not to keep adding abstractions to get around these design problems. The way forward is to fix the design.

We need something simpler than K8s. Something that does not need a million abstractions, yet accomplishes the basic goals that we want: efficiency, scalability, security, reliability, reproducibility, immutability, etc.

I think the future we should be driving for is a distributed operating system. A real operating system whose primitives run distributed applications. K8s already reproduces features that exist in the kernel, and the kernel already does a lot of the heavy lifting of networking and containerization. What's left is to add userland and kernel features so apps can make a syscall or library call to tell the operating system how they'd like to scale, or deploy, or be traced, etc. This would simplify without adding abstractions. It would also be controlled by the open source world, rather than by corporations, making an incentive to finally get rid of crappy design patterns and make the right design be top priority.

Those of you who have used Mosix clusters already have an idea how simple and effective this can be. SSI clusters didn't take off because they were focused on unmodified applications, and sharing memory and threads across systems is hard. But by requiring applications to design themselves for a distributed OS, we can instead allow apps to run independently yet communicate seamlessly across systems as if they were just on one.


You're probably looking for Nomad or Mesos honestly.

Kubernetes, as designed, is a framework and a rather leaky one in my not so humble and subjective opinion.

it accomplishes a pretty lofty goal (bin-packing, deployment, eventual reconciliation of services, and generalising the network/storage so that you don't care about underlying infra) but it does so at a rather heavy cost of complexity, and because you can swap out underlying pieces: if you're not on a cloud provider then you're probably going to spend the majority of your life maintaining it.

But, you probably want Nomad, if you're thinking about kubernetes seriously.


Nomad is neat, but I think it's the wrong paradigm.

Say you wanted to write an application that inspects the current state of jobs running in Nomad and modify them. You need to design something specifically to do that within nomad. Essentially you need to integrate their proprietary interface into your application.

In a distributed operating system you would just do ls /proc/host/*/* and then issue syscalls or write to files in /proc/ and /sys/. No proprietary interface, it's just operating system primitives.

I think it's so simple that it's hard for people to grasp how insanely easier this would be than any current system. Look up Mosix/OpenMosix (but that's an SSI; I'm not saying we should do SSI exactly, it's just an example of the simplicity).


> if you're not on a cloud provider...

and lord help you if you are but choose to try to use the same approach across environments and eschew the markup on something like EKS. I've been thumping my head against trying to have a rather trivial HA service working equally on both a Hyper-V and an EC2 cluster for weeks, and it seems near impossible for reasons no leading lights in the space quite articulate.


I believe your experience in this regard would be a good blog post in and of itself that I would love to read.


""" Generally, the Kubernetes we know today will disappear, and developers will not care """

I truly cannot wait. I love the premise but hate the complexity. Nomad stole my mindshare in 2016 and I would still choose it over k8s today for small applications. Even choosing it for managing VMs in some cases. (Disclaimer: worked at hashi but not on nomad nor did i use it while there).


I have been learning Kubernetes on the side for months. My biggest pet peeve is that it's riddled with problems that were solved by much older technologies or simply did not happen before. Debugging some CrashLoopBackOff error that could be happening for dozens of obscure reasons is not my idea of fun.


Basically, Kubernetes as Cloud OS regardless of what kind of hypervisor powers the whole stack.

The irony of UNIX having won, only to be replaced by System 360 roadmap.


Not sure I understand your second sentence. Are you equating Kubernetes’ roadmap to that of System 360? FWIW, I am not a big fan of Kubernetes. It’s an incredibly complex beast. I much prefer the simplicity of *BSD over Kubernetes any day.


Kind of, Kubernetes is just another way of doing mainframe workloads, whose main concepts all trace back to System 360.

20 years ago I was deploying EAR files into application servers without caring which OS they would run on.

Nowadays I package managed runtimes into containers, that I don't care if they run directly on top of a type 1 hypervisor, VM or whatever.

Basically what mainframe OS virtualization looks like nowadays and traces its roots back to System 360.


Good point. Kubernetes actually reminds me a lot of J2EE around version 2.0. Towering abstractions and complexity.


Do you see System 360 and Kubernetes as parallel technologies?


I see Kubernetes + type 1 hypervisors replacing UNIX, thus achieving the same kind of OS virtualization done on IBM mainframes that trace their roots back to System 360.


Fair enough. But one big feature of Kubernetes is that you can scale it to multiple (thousands or more) physical machines (that are physically or even geographically separated). Can you do the same with System 360?



Even before Cloud there was parallel sysplex, which could be distributed geographically afair https://en.m.wikipedia.org/wiki/IBM_Parallel_Sysplex


Hybrid cloud is a bad comparison.

The best comparison, IMHO, is that Kubernetes is a continuation of ideas espoused perhaps most visibly by IBM JES3 (and before that, ASP)


The problem is that these products are still way too much in development. You want to use knative, istio, kubeflow and OIDC auth? Get ready to open some pull requests and start following GitLab tickets you have no control over. I say this as someone who has rolled a full knative, istio, kubeflow stack in multiple clouds and on prem. Good luck making it bank-grade security compliant... The ecosystem just isn't there yet. There is so much pain involved still.


Saying that "few developers care much about Linux" is like saying "few people care about breathing". Dude, it's everywhere and you can't do anything without Linux nowadays (not even game development anymore, because it is more popular on the Desktop).


Linux is not popular enough on desktop to matter for game development or basically any other consumer software.


You're wrong, game developers in recent years have started caring a lot more about Linux: https://en.wikipedia.org/wiki/List_of_game_engines

Half the game engines on that list target Linux, not 5%, not 15%, half.


They care so much that Android games seldom land on GNU/Linux.


Hmm, I guess, but I never play Android games, I loathe a touch interface. Keyboard and mouse are the only logical user interfaces to a machine, ever since 1968 with Engelbart.


I work in gamedev, in 2014 when I started, the gameservers were windows.

Thanks to rent seeking by Microsoft (the cloud providers got rekt by license costs for Windows) and Google's strong push for Stadia (forcing game engines to support Linux, which usually shared some amount of code with the game server), things have changed.

Now, most things being made are on Linux. (The actual development is still mostly happening on Windows desktops though.)


It also helps that now you can natively use Direct3D 9 through 11 on Linux.

https://github.com/Joshua-Ashton/dxvk-native



I hope k8s and related needless complexities go away for the masses.

I would rather focus on building a product and clicking a couple buttons that allow millions to use it (managed solutions). Oh no it costs 10k, let me spend 1m+ on hiring people to play with configuration files.

Idk why companies are pushing for k8s at points when they could rather hire 10 engineers/a tech team to build better and more features than fiddle around with YAML files and add so many feedback loops through new “processes”.

Even when things work it takes a long time to A) get there organizationally and B) it still doesn’t work lol


That company that hires 10 engineers to create a bespoke orchestration system will inevitably end up re-inventing a buggy, poorly specified implementation of Kubernetes.

Kubernetes gives a standardized set of abstractions and concepts that means hiring and development is easier, not harder.


Not really, they will just end up using managed services in AWS. Kubernetes exists (in the eyes of the corp) only as a tool that avoids vendor lock-in.


Cloud “value added services”, i.e. all but elastic VMs, networking and storage, are typically very poor compared to whatever you can find in k8s just one helm install away.


That already exists and is called Heroku. It works until it doesn't and then you need a bunch of engineers to work around the limitations.


I've managed to avoid working with Kubernetes thus far in my career. I admit I don't have a great understanding of it. I can't think of a single technology that seems to unanimously stir up such bitter feelings in the developer community. Not even node.js, or webpack is as contentious as k8s.


Systemd stirs up similar anger because it was built for similar reasons to become this god-management-layer that’s useful for getting shit done but also hugely more complex (…in implementation at least) than the status quo it replaced.

I worked on Kubernetes/containers migration at Airbnb, and I wouldn’t recommend a Kubernetes investment for a company with <300 engineers, who can dedicate 10+ people to dealing with it. At Notion we are quite happy with Amazon ECS.


The comparison to systemd is often made, but imo very unfair.

systemd may have a complex implementation, but its usage is rather simple and consistent across distros, so in that way it definitely simplifies what came before it, which was an init system simple in its implementation but finicky and inconsistent in usage.

It's kind of like how I'd much rather have a programming language with a complex compiler that takes care of so many details that its usage (programming) is simple - because I am not the one who has to deal with the compiler complexity - than one that makes the compiler devs' jobs easier (why, if that's their primary job?) but makes my everyday job of programming in such a language harder.


I don't understand why people always recommend ECS over EKS for smaller shops. EKS is kind of creaky, but we haven't really had any major problems with it, and the tooling around the k8s ecosystem is sooooo much bigger and better in terms of developer experience. We felt like we were constantly reinventing the wheel with ECS, and while I fully acknowledge that there's both less surface area and blast radius for things to go wrong with ECS, I'm not sure it's THAT much more stable. ECS is pretty half-baked too. Almost all of my problems are with EKS specifically, and if we compare ECS to GKE, I think the argument is even less strong.


Maybe trauma? After a couple years herding cats using YAML and kubectl I have been happy for others to herd the cats using HCL and Terraform.


Perhaps 10 people might be required if you're doing everything on-prem including storage and networking. If you're using a managed service, then I find it hard to believe a company should drop 1+ million on labor just to operate a cluster.


You are both missing nothing and missing everything. Haha.

Try to hold out for the next generation! It will surely arrive soon?

Do you use any container orchestrators today?


> This means we are designing for a waste of 30%. Another reason for using conservative target utilizations is that the HPA often works with a response time of a minute or more.

This is one of the many things that's really wrong with Kubernetes. A big selling point of it is bin packing, i.e. not wasting resources, but it can't even achieve that. You are just overcomplicating your infrastructure without the big win of almost-perfect resource utilization.


I tripped over this early in the article and had to rush back to write a comment.

> Interestingly, Linux was the platform upon which we built everything a decade or more ago. Linux is still ubiquitous, and part of our stack, but few developers care much about it

You definitely care about it when something breaks in a container and you have to figure out what's going on. Abstractions are fine until something doesn't work.


That is when I open a cloud support ticket.


Our main product is essentially a CRUD desktop application assisted by a lot of background services, mainly various integrations. We're in the process of charting out a path to a cloud-based offering, driven by customer requests.

We're operating in a niche so we'll never have millions of concurrent users. We have ~1000 customers, and number of users per customer follows a power law. Our largest customer by far currently has a ~100GB database and ~100 concurrent users.

Since we're relatively small (currently ~25 employees) we don't have the resources to "constantly" be changing our stack, and so I fear vendor lock-in.

At least Kubernetes seems to be something that'll be around for a while, and it's available from several cloud providers. So while it seems overkill it would appear it has that going for it. However the complexity does scare me a bit. It's a giant leap from our current "dumb" setup of a database and core service on a plain server.

Reading recommendations or helpful suggestions would be appreciated. Are there better options out there for small fish like us?


I'd say you don't need k8s. You just need a good terraform module to set things up quickly on multiple providers.

K8s is worth all the work because then you can scale magically. You don't need to scale magically. You just need to be able to deploy easily.

In fact, if you think scale will never be an issue, you could just run it on hardware - it seems like everything would easily fit on a single server.


And with Terraform, you're going to be writing pretty much from scratch for every provider out there, since outside of the very basics of infrastructure the offerings differ wildly. Even if all you need is a VM with a dumb load balancer, you have to write ~5 x $CLOUD_PROVIDERS resources and abstract over them.

Personally, scaling is the least interesting part of k8s to me, because you could easily scale with other options if you had money to burn.


The same is true for Kubernetes in my experience. Only the very basics of it work cross-provider.


Thing is, nothing in Terraform works cross-provider - it's just an (often clunky, though it got better) DSL over JSON which makes for a nicer way to call vendor APIs. But I need to know all the details of those APIs anyway.

With K8s, the basics that "just work" are pretty wide, the bits that are specific to vendor that are necessary are usually small and contained, and for everything else I have extensible API that allows abstractions like crossplane.io


The ecosystem is changing rapidly. I actually couldn't say if this is true, as I haven't played with any of the hosted k8s environments outside of highly complex real-world usage.

It does seem completely plausible that within the last year or so hosted environments are stable enough and standardized enough that k8s can be a target directly even for small projects. Cool, I will try it for my next one.


The state I described above was somewhat solid by early 2017.

The biggest potential issues were around passing things like cloud credentials to your apps, but standard k8s resources worked pretty well, with at most some extra annotations - for example, to link a Service of type LoadBalancer with a specific pre-allocated IP.
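
For illustration, such a Service looked roughly like this (the IP and names are placeholders; the exact field vs. provider-specific annotation varies by vendor and Kubernetes version, so treat it as a sketch):

  apiVersion: v1
  kind: Service
  metadata:
    name: my-frontend                 # placeholder
    annotations: {}                   # provider-specific annotations would go here
  spec:
    type: LoadBalancer
    loadBalancerIP: 203.0.113.10      # pre-allocated static IP (documentation-range address)
    selector:
      app: my-frontend
    ports:
      - port: 443
        targetPort: 8443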


> You just need a good terraform module

Thank you, I've looked into Terraform before, but had half-way forgotten it. Thanks for the reminder, I'll have to check it out again.

> seems like everything would easily fit in a single server

We probably need a few. Our client application and integrations do quite a lot of crunching at times. The largest client mentioned has a 16-core server with 64GB RAM for DB and integrations alone, and that is not over-provisioned by any means. They have two or three Citrix servers for the client in addition.

Obviously things would be a bit different with something web-based, but more crunching would have to move to server-side due to DB latency.

For smaller customers we do fit several of them on a single beefy server currently, and could very likely move forward with that.


Right, 64GB is less than 5% of what you can get on a single server these days.


are you going to run on multiple clouds or are you targeting just one cloud?


That's the thing. We would probably be happy on one cloud, but I'm worried about not being able to move if the cloud-vendor conditions change.

That's why I'd prefer to avoid getting too married to the solutions from a single cloud vendor.


If you stick with Terraform, it is true that each implementation will be different for each provider. They will be similar for sure, but each will be written individually.

That said, changing cloud providers is a very, very small task and shouldn't deter you. Also, as you said, you can easily just start with one.


changing cloud providers is not a small task


Personally, I'd stick with Kubernetes - especially as you can even pack it into a VM for classic on-site deployment - avoid too many cloud-provider-specific bits, and optionally work on optimising away the time needed to handle it.

I assume that, for whatever reasons, going all-in with one PaaS from any of the cloud vendors is out (I am not a fan of them, but that's specific to me being in Poland and not Silicon Valley, and I can't pay Silicon Valley prices).

You have ~25 employees, so you need to reduce toil - you can't just throw a person at each custom thing full-time to massage and understand it.

1) Take care to run your components <https://12factor.net/> style. Even if you run them the 'classic way', started from a shell script, it will make things easier to move around if necessary.

2) Don't use raw manifests - this gets tiring over time and is a prime example of "toil". Your time will be better spent writing anything (fancy or not) that can manipulate some basic data structures and spit out Kubernetes resources in JSON format.

Once you have that, you can template your resources easily and enforce common standards and behaviours. This helps a lot, especially when you don't have many employees - I'm not advocating for "employees as cogs", but if all your applications follow a standard scheme for labelling, where credentials and data are stored, etc., each of those small improvements shaves off a bit of toil - and critical to maintaining that is removing the toil of manually keeping those standards implemented. In one job I might have spent a week tailoring a templated deployment for an application, but the next person (who was dealing with the setup for the first time ever!) configured another, similar application in one day.

It's a huge difference when all you need to start a new app is to fill in a few blanks like names, maybe a credential, and maybe declare "ok, it needs >25G of temporary storage and 5G of persistent shared storage, plus it should be available under this URL", and have the templates fill in the rest.
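
To make the "few blanks" idea concrete, the per-app input to such a home-grown templating layer can be as small as this (a completely hypothetical format - whatever your generator accepts):

  # hypothetical per-app values file for an in-house template, not a real tool
  name: invoice-sync
  image: registry.example.com/invoice-sync:1.4.2
  credentials: invoice-sync-erp-api     # name of an existing Secret
  scratchStorage: 25Gi                  # expands to an ephemeral volume
  sharedStorage: 5Gi                    # expands to a PersistentVolumeClaim
  url: invoices.example.com             # expands to an Ingress host + TLS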

3) Implement some standards on observability and administration. It doesn't really matter whether you set up your own EFK stack/Grafana/Prometheus/Loki/Victoria Metrics/whatever or go with Datadog (ow, my wallet) or another vendor. K8s generally makes it (comparatively) easy to hook together.

Apply templating and standardisation from (2) here. Whether you're a developer or sysadmin or both, having it "just happen automagically" leads to happier people, less burnout, and ultimately more productivity.
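
One concrete example of combining (2) and (3): if the templates always stamp the usual Prometheus scrape annotations onto every pod, metrics collection "just happens" for each new app. (These annotations are a widely used convention honoured by common scrape configs, not a Kubernetes builtin - the port and path below are placeholders.)

  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9090"        # wherever the app exposes metrics
      prometheus.io/path: "/metrics"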

4) Abuse the hell out of k8s' flexibility to get multiple deployments going. The more expensive the underlying gear is, the more important this is. If your whole deployment is parameterised, and with things like crossplane to integrate tasks like "allocate $VENDOR's postgresql offering", you can quickly set up an environment, test/PoC/do something fun/do a dog & pony show/etc., then tear it down. Can you do it with a more classical setup? Sure. Personally I found k8s both faster in wall-clock time and development time for this... and definitely less expensive.
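
To sketch the crossplane part: once you have defined your own composite resource type, spinning up "$VENDOR's postgresql offering" for a throwaway environment is just filing a small claim like the one below. (The group, kind and parameters here are made up - they come from whatever XRD/Composition you define.)

  apiVersion: database.example.org/v1alpha1   # hypothetical group from your own XRD
  kind: PostgreSQLInstance                    # hypothetical claim kind
  metadata:
    name: demo-env-db
  spec:
    parameters:
      storageGB: 20                           # whatever knobs your Composition exposes
    writeConnectionSecretToRef:
      name: demo-env-db-conn                  # Secret the app reads credentials from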


Thank you, this was very helpful.

Making deployment easy and generic is indeed a key point I have been thinking about. Support won't be able to handle many customer-specific bespoke solutions requiring special hand-holding.

Observability will also be key. Our customers typically call us first whenever something is broken, even if 99% of the time it's their own ERP or similar that's the issue. So support needs to be able to quickly identify that the relevant integration is running as expected on our side, and be able to tell the customer what's wrong.


A lot of those are really general, I guess :)

That said, the ability to go from a vague complaint to both the specific component and a bird's-eye view of the right cluster is a great force multiplier for people.

As for delivering in more bespoke on-site situations, it might be worthwhile to be able to go down to a minimal k3s or similar setup that can run inside a single VM, which you can provide as a deployable OVA file to be imported into VMware or similar environments - as well as for building small on-site test environments, or even making a Chick-Fil-A-style cluster out of three NUCs taped together and using it for a dog & pony road trip, excuse me, sales demo ;)


Great point about OVA and k3s. Some customers will want to run our software on their own VMs, so having a similar environment would be great.


You probably don't need k8s. VMs + LB + ASG may go a long way for you. You can always make things more complicated, but once you put k8s in, you're going to have to stick with it.


Mh... As far as I understand, this article is promoting coupling applications directly to the orchestrator (i.e. k8s) or the cloud platform. I think this won't happen. I think k8s was successful because it lets you run unix-like applications with little adaptation effort on the application's part.


k8s is very powerful though complex, especially if you manage the cluster yourself. Docker Swarm is a simpler alternative that provides all the features required by the majority of use cases.


Would be interesting to see what Facebook came up with in this space. Not a fan of their "service" but I love their libs.



A lot of theories disconnected from reality.


Istio is a big Kubernetes failure story. So complicated, even though it could have been integrated directly into Kubernetes.


I've actually written about why i'd personally look towards Docker Swarm for smaller projects and setups in my blog post "Docker Swarm over Kubernetes": https://blog.kronis.dev/articles/docker-swarm-over-kubernete...

(though it is a pretty brief stream of consciousness)

Either that or Hashicorp Nomad, because it is also a pretty nice solution and seems to overall have a brighter future than Swarm does, even if the HCL which it uses is a little bit weird and needs porting over from Docker Compose files.

Admittedly, in the last year, i've also seen Kubernetes be tolerable for on-prem deployments in the form of K3s, which is a lovely project and wastes way less memory and CPU for smaller clusters: https://k3s.io/

I know that many out there won't necessarily have to or won't want to run it on-prem and instead will just run whatever Kubernetes solution will be offered to them by their cloud vendor, which is also a valid point, but not one that i have to deal with.

Instead, i often find myself with a VM that has about 8 GB of RAM and needs to run everything from the actual cluster, to PostgreSQL, RabbitMQ, Redis, a few Java apps and who knows what else inside of a development environment, because clearly that was possible without containers so it should also be possible with them. There, running something like Rancher or even a K8s cluster that's made from scratch (or initialized with kubeadm or whatever) would be insanity. And having a shared cluster for the whole company might also necessitate having people just to manage it, which isn't in the cards.

Also, in my personal hybrid cloud (homelab + cloud VPSes), running Kubernetes would be comparatively expensive and wasteful, since Docker Swarm still has a lower overhead and Swarm provides me with most of the functionality that i might want, oftentimes in the form of regular containers, e.g. Caddy/Nginx/httpd web server as an ingress with some configuration files.
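
For contrast, the whole "regular containers as ingress" setup on Swarm is just a compose-format stack file, roughly like the sketch below (images, ports and paths are placeholders):

  # deployed with: docker stack deploy -c stack.yml web
  version: "3.8"
  services:
    caddy:
      image: caddy:2
      ports:
        - "80:80"
        - "443:443"
      volumes:
        - ./Caddyfile:/etc/caddy/Caddyfile:ro    # plain config file, no CRDs
      deploy:
        replicas: 1
    app:
      image: registry.example.com/my-app:latest  # placeholder image
      deploy:
        replicas: 2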

Actually, anyone who has ever tried to get the K3s Traefik ingress working with a custom wildcard certificate will probably understand why i might actually prefer to do that, since in K3s you'll need the following for such a setup to work:

  - a ConfigMap for Traefik, knowledge about the structure of the ConfigMap (tls.stores.default.defaultCertificate)
  - a TLSSecret for storing the actual certificate/key
  - a TLSStore (which i also needed to actually use the secret, spec.defaultCertificate.secretName)
  - a HelmChartConfig for Traefik to load the ConfigMap with the mounted secrets and config

All of those weren't sufficiently documented, because apparently the popular use case was to use Let's Encrypt with either cert-manager or another solution altogether. It all took digging through GitHub to find out, and trial and error to configure.
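
For anyone hitting the same wall, the TLSStore piece ends up looking roughly like this (the apiVersion differs across Traefik versions, the namespace is an assumption about where K3s' bundled Traefik lives, and the secret name is a placeholder):

  apiVersion: traefik.containo.us/v1alpha1   # newer Traefik releases use traefik.io/v1alpha1
  kind: TLSStore
  metadata:
    name: default                            # Traefik only honours the store named "default"
    namespace: kube-system                   # assumption: where K3s deploys its bundled Traefik
  spec:
    defaultCertificate:
      secretName: wildcard-example-com-tls   # a kubernetes.io/tls Secret with the cert + key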

Versus just reading a page for a web server that i want to run and dropping some values in a config file.


TL;DR:

- Use OpenID tokens instead of secrets

- Use Istio instead of ingress

- Use Knative


> Use Istio instead of ingress

Sow the wind, reap the whirlwind.



