The peer relay approach is interesting because it essentially turns every node in your tailnet into a potential relay for other nodes. This is a meaningful architectural shift from relying on Tailscale's centralized DERP servers.
For anyone worried about the "rug pull" concern raised in another comment — this actually makes me more optimistic, not less. By distributing relay infrastructure to the edges, Tailscale is reducing its own operational cost per user while improving performance. That's the kind of flywheel that makes a generous free tier more sustainable, not less. Each new node potentially helps the whole network.
The observation about donations growing linearly while requests for care grew exponentially is one of the most honest descriptions of nonprofit scaling I have seen. Most founders in that position either burn out silently or pivot to a for-profit model. Choosing the slow, steady, sustainable path instead — and then coming back 13 years later to share what you learned — says a lot about character. 33k surgeries is remarkable. Thanks for sharing this.
There's something to this. The 200-400MHz era was roughly where hardware capability and software ambition were in balance — the OS did what you asked, no more.
What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free. An alternate timeline where CPUs got efficient but RAM stayed expensive would be fascinating — you'd probably see something like Plan 9's philosophy win out, with tiny focused processes communicating over clean interfaces instead of monolithic apps loading entire browser engines to show a chat window.
The irony is that embedded and mobile development partially lives in that world. The best iOS and Android apps feel exactly like your description — refined, responsive, deliberate. The constraint forces good design.
> What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free.
I dunno if it was cheap RAM or just developer convenience. In one of my recent comments on HN (https://news.ycombinator.com/item?id=46986999) I pointed out the performance difference on my 2001 desktop between an `ls` program written in Java at the time and the one that came with the distro.
Had processor speeds not increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1]. There was simply no way that companies would have kept spending 6-8 times more on hardware for a specific workload.
C++ would have been the enterprise language solution (a new sort of hell!) and languages like Go (Native code with a GC) would have been created sooner.
In 1998-2005, computer speeds were increasing so fast there was no incentive to develop new languages. All you had to do was wait a few months for a program to run faster!
What we did was trade off efficiency for developer velocity, and it was a good trade at the time. Since around 2010 performance increases have been dropping, and when faced with stagnating hardware performance, new languages were created to address that (Rust, Zig, Go, Nim, etc.).
-------------------------------
[1] It took two decades of constant work for those high-dev-velocity languages to reach some sort of acceptable performance. Some of them are still orders of magnitude slower.
> Had processor speeds not increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1].
I'd go look at the start date for all these languages. Except for C#, which was a direct response to the Sun lawsuit, all these languages spawned in the early 90s.
Had processor speed and memory advanced more slowly, I don't think these languages would have gone away; I think they'd just have ended up being used for different things or in different ways.
JavaOS, in particular, probably would have had more success. An entire OS written in, and for, a language with a garbage collector making sure memory isn't wasted would have been much more appealing.
> I'd go look at the start date for all these languages. Except for C#, which was a direct response to the Sun lawsuit, all these languages spawned in the early 90s.
I don't understand your point here - I did not say those languages came only after 2000, I said they would have been relegated to history if they didn't become usable due to hardware increases.
Remember that Java was not designed as an enterprise/server language. Sun pivoted when it failed at its original task (set-top boxes). It was only able to pivot due to hardware performance increases.
> I said they would have been relegated to history if they didn't become usable due to hardware increases.
And I disagree with this assessment. These languages became popular before they were fast or the hardware support was mature. They may have taken different evolution routes, but they still found themselves useful.
Python, for example, entered a world where perl was being used for one-off scripts in the shell. Python replacing perl would still have happened, because its performance characteristics (and those of what perl replaced, bash scripts) are similar. We may not have used Python or Ruby as web backends because they were too slow for that purpose. That, however, doesn't mean we wouldn't have used them for all sorts of other tasks, including data processing.
> Remember that Java was not designed as a enterprise/server language. Sun pivoted when it failed at its original task (set top boxes). It was only able to pivot due to hardware performance increases.
Right, but the Java of old was extremely slow compared to today's Java. The JVM for Java 1 to 1.4 was dogshit. It wasn't hardware that made it fast.
Yet still, Java was pretty popular even without a fast JVM and JIT. HotSpot would likely still have happened, but maybe the GC would have evolved differently, since the current crop of GC algorithms trade memory for performance. In a constrained environment Java may never have adopted moving collectors and instead relied on Go-like collection strategies.
Java applets were a thing in the 90s even though hardware was slow and memory constrained. That's because the JVM was simply a different beast in that era, one better suited to the hardware of the time.
Even today, Java runs on hardware that is roughly 80s quality (see Java Card). It's deployed on very limited hardware.
What you're mistaking is the modern JVM's performance characteristics for Java's requirements to run at all. The JVM evolved with hardware and made tradeoffs appropriate for Java's usage and the hardware's capabilities.
I remember the early era of the internet. I ran Java applets in my netscape and IE browsers on a computer with 32MB of ram and a 233MHz processor. It was fine.
I remember running Java applets under Netscape 3.x and 4.x on System 7.5 on a 200MHz PPC 603ev with 16MB RAM. It was “fine” mostly, but loading was slow as mud (though that might’ve just been the 28k dialup), and they crashed Netscape or the whole system a lot more than the rest of the web did. Technically usable, but practicality was questionable.
Lots of good practices! I remember how aggressively iPhoneOS would kill your application when you got close to being out of physical memory, or how you had to quickly serialize state when the user switched apps (no background execution, after all!). And, for better or for worse, it was native code, because you couldn’t, and still can’t, get a “good enough” JITing language.
The islands pattern is underrated for maintainability. I've found the biggest win isn't even the state isolation — it's that each island can have a completely independent upgrade path. You can rewrite one island from React to vanilla JS (or whatever comes next) without touching anything else.
The global state SPA pattern fails for a more fundamental reason than just being painful to maintain: it creates an implicit contract between every component in the app. Change one reducer and you're debugging side effects three layers away. Islands make the contract explicit — each one owns its data, full stop.
The one gotcha I've hit is cross-island communication. `postMessage` works but gets messy. Custom events on a shared DOM ancestor end up being the cleanest pattern for the rare cases where islands genuinely need to coordinate.
SVG generation is a surprisingly good benchmark for spatial reasoning because it forces the model to work in a coordinate system with no visual feedback loop. You have to hold a mental model of what the output looks like while emitting raw path data and transforms. It's closer to how a blind sculptor works than how an image diffusion model works.
What I find interesting is that Deep Think's chain-of-thought approach helps here — you can actually watch it reason about where the pedals should be relative to the wheels, which is something that trips up models that try to emit the SVG in one shot. The deliberative process maps well to compositional visual tasks.
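To make the "no visual feedback" point concrete, here's a hypothetical sketch of the coordinate bookkeeping involved: every position is derived from other positions rather than checked against a rendered image. All of the geometry (wheel radius, hub positions, crank offset) is invented for illustration:

```typescript
// Hypothetical bicycle geometry. Each coordinate must be computed
// relative to the others, since there is no image to look at.
const wheelRadius = 40;
const leftWheel = { cx: 60, cy: 130 };
const rightWheel = { cx: 200, cy: 130 };

// The crank (pedal axis) hangs midway between the wheel hubs,
// slightly below the hub line.
const crank = {
  cx: (leftWheel.cx + rightWheel.cx) / 2, // 130
  cy: leftWheel.cy + 10,                  // 140
};

const svg = [
  `<svg xmlns="http://www.w3.org/2000/svg" width="260" height="190">`,
  `  <circle cx="${leftWheel.cx}" cy="${leftWheel.cy}" r="${wheelRadius}" fill="none" stroke="black"/>`,
  `  <circle cx="${rightWheel.cx}" cy="${rightWheel.cy}" r="${wheelRadius}" fill="none" stroke="black"/>`,
  // Frame: hub -> crank -> hub, as raw path data.
  `  <path d="M ${leftWheel.cx} ${leftWheel.cy} L ${crank.cx} ${crank.cy} L ${rightWheel.cx} ${rightWheel.cy}" fill="none" stroke="black"/>`,
  `</svg>`,
].join("\n");

console.log(svg);
```

Getting the crank between the wheels is one line of arithmetic here; a model emitting path data token by token has to carry that same constraint implicitly, which is exactly what one-shot generation tends to fumble.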
Yeah, spatial reasoning has been a weak spot for LLMs. I’m actually building a new code exercise for my company right now where the candidate is allowed to use any AI they want, but it involves spatial reasoning. I ran Opus 4.6 and Codex 5.3 (xhigh) on it and both came back with passable answers, but I was able to double the score doing it by hand.
It’ll be interesting to see what happens if a candidate ever shows up and wants to use Deep Think. Might blow right through my exercise.