It came from the top of Azure and applied to Azure only. The mandate covered all new code that cannot use a GC: such code must be written in Rust, i.e. no more new C or C++.
I think the CTO was very public about that at RustCon and other places where he spoke.
The examples he gave were contrived, though, mostly tiny bits of old GDI code rewritten in Rust as success stories to justify his mandate. Not convincing at all.
Azure node software can be written in Rust, C, or C++; it really does not matter.
What matters is who writes it: given its criticality it should be treated as "OS-level" code requiring the same rigor as actual OS code, and should therefore probably be written by the Core OS folks themselves.
I have followed it from the outside, including talks at Rust Nation.
However, the reality on the ground you describe is quite different from what e.g. the Rust Nation UK 2025 talks, or those given by Victor Ciura, portray.
It seems more in line with the rejections that met previous efforts such as Singularity, Midori, the Phoenix compiler toolchain, Longhorn, and so on, only for the ideas to be redone with WinRT and COM, in C++ naturally.
May I ask what kind of training the new joiners of the kernel team (or any team that effectively writes kernel-level code) get, especially if they haven't written kernel code professionally? Or do they ONLY hire people who have written a non-trivial amount of kernel code?
IMHO, with a couple of fixes that allow Linux+Wine to better simulate some specific low-level Windows behaviours (like this one recently in the news: https://www.xda-developers.com/wine-11-rewrites-linux-runs-w...), a Linux distro with a 'Windows personality' (e.g. running Windows Explorer as the desktop shell) should be pretty much indistinguishable from native Windows.
In the end it's all about driver diversity and quality though...
Would those "brightest minds" want to work for the current US government? Even if they did out of patriotism to the country, the Trump administration would have pushed them out by now and replaced them with yes-men.
The people I know leaving that sector have been steadily leaving for years due to the day-to-day bullshit, internal politics, and poor leadership they have to put up with, not the pay or the current administration.
Right but if you're a lifelong gov worker you are probably used to the pay, and it's hard to switch from gov work to startups or big tech (at least, I would see it as a thing to question). Whereas the GGP talks about people switching from the private sector (adtech, etc.) to public.
The first thing they are going to see is the salary and run a mile. That's partly why Palantir 'works'; they pay tech salaries and have a tech culture, but do gov work. Booz Allen et al were less advanced prototypes of that as well.
> ninja can be used via cmake too but most uses I see are from meson
How do you know, though, when the choice of CMake generator is entirely up to the user? E.g. you can't look at a CMake file and know which generator the user will select to build the project.
FWIW I usually prefer the Ninja generator over the Makefile generator since ninja 'auto-parallelises' better. With the Makefile generator the two 'simple' options are either to run the build single-threaded or to completely grind the machine to a halt, because the default setting for 'parallel build' seems to heavily overcommit hardware resources. Ninja generally just does the right thing: run a parallel build, but without enough parallelism to make the computer unusable.
ninja supports separate build groups ("pools") and a different maximum number of parallel jobs for each. CMake's Ninja generator puts compilation and linking steps in their own respective groups. The end result is, by default, `nproc` parallel jobs for compilation but one job for linking. This helps because linking can be far more memory intensive, and sometimes the linker itself has support for parallelism. Most projects only have a handful of linking steps to run anyway.
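For reference, these pools can also be tuned explicitly from CMake; a hypothetical CMakeLists.txt sketch (the pool names and sizes here are illustrative, and only the Ninja generators honor them):

```cmake
# Define named job pools: up to 8 concurrent compile jobs, 2 concurrent links.
set(CMAKE_JOB_POOLS "compile=8;link=2")
# Route compile and link steps into those pools (Ninja generators only).
set(CMAKE_JOB_POOL_COMPILE compile)
set(CMAKE_JOB_POOL_LINK link)
```

The same effect can be had in a hand-written build.ninja via a `pool` declaration with a `depth` setting, assigned to the link rule.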
...that's too big for a JS shim to talk to browser APIs... it looks more like a complete 3D engine - e.g. three.js or similar?
From that pov the 2.7 KB WASM figure is a bit misleading (or rather meaningless); it could be a single function call into that massive JS blob where all the work happens.
Fair point: globe.gl (Three.js) handles the 3D rendering client-side. The 2.7 KB WASM is the server-side scoring engine, Zig-compiled, which runs on every request at the Cloudflare edge. The globe visualizes where those executions happen. Two separate layers: WASM at the edge, JS in the browser.
This is a completely baffling website, but as far as I can tell the 2.7 KB WASM thing is the MCP runtime this is marketing? The globe thing is independent of that, just showing where the MCP calls are running.
The 2.7 KB Zig WASM binary is the scoring engine that runs on every request at Cloudflare's edge. The globe visualizes where those requests land. Two layers: compute at the edge, visualization in the browser.
Pathfinding middleware has traditionally been a thing, though; e.g. apart from the mentioned Recast (which is the popular free solution), the commercial counterpart was PathEngine (looks like they are even still around: https://pathengine.com/overview/).
If you need to do efficient pathfinding on arbitrary triangle geometry (as opposed to running A* on simple quad or hex grids), it quickly gets tricky.
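To illustrate the contrast: the "easy" grid case really is a few dozen lines. A minimal sketch of A* on a 4-connected quad grid (cell layout, heuristic, and the `blocked` set are all illustrative), versus the navmesh-on-triangle-soup case where the hard parts are funnel smoothing and generating the mesh in the first place:

```python
import heapq

def astar(start, goal, blocked, width, height):
    """A* over a 4-connected quad grid. Cells are (x, y) tuples;
    `blocked` is a set of impassable cells. Returns a list of cells
    from start to goal, or None if no path exists."""
    def h(c):
        # Manhattan distance: admissible on a 4-connected unit-cost grid.
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # (f = g + h, g, cell)
    came_from = {}
    g = {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g[cur]:
            continue  # stale heap entry, a cheaper route was found later
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if nxt in blocked:
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                came_from[nxt] = cur
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```

E.g. `astar((0, 0), (3, 3), {(1, 0), (1, 1), (1, 2)}, 4, 4)` routes around the partial wall at x=1. None of this survives contact with arbitrary triangle geometry, which is where Recast/PathEngine earn their keep.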
What has undeniably declined is the traditional "10k US-$ commercial middleware" tier. Today the options are either free and open source (which can be extremely high quality, like Jolt: https://github.com/jrouwe/JoltPhysics) or fully featured engines like Godot, Unity, or UE, but there is little in between those "extremes".
I did mention this was changing in particular with physics engines. That being said, proprietary dependencies still reign supreme in places like audio engines. Something like OpenAL isn't really a replacement for FMOD or Wwise. I know those off-the-shelf engines roll their own replacements, but then they also roll their own pathfinding and navmesh generation as far as I know.