afaik, you always have to break open abstractions for more performance.
If you ignore cache levels in your program you're gonna have a bad time - and the layout (and with it how you should use it) differs from system to system.
The same is true for how machines are interconnected.
Depending on the wiring you have different throughput-values when sharing data between nodes.
The whole area screams "not universal" to me.
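As a toy illustration of the layout sensitivity meant here (pure Python for brevity, so interpreter overhead dominates; the effect is far more dramatic in C/C++, where the loops compile down to raw memory accesses): the same matrix summed row-by-row walks memory contiguously, while column-by-column access strides across rows.

```python
# Sketch only: how traversal order interacts with memory layout.
N = 500
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    total = 0
    for row in m:            # each row is touched contiguously
        for x in row:
            total += x
    return total

def sum_col_major(m):
    total = 0
    for j in range(N):       # jumps to a different row on every access
        for i in range(N):
            total += m[i][j]
    return total

# Same answer either way; only the access pattern (and thus cache
# behavior, on a compiled implementation) differs.
assert sum_row_major(matrix) == sum_col_major(matrix) == N * N
```

On real hardware the "right" order depends on line size, associativity, and how many levels sit between you and DRAM, which is exactly the non-universal part.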
I think people working on these abstractions would claim that an appropriate abstraction does involve the exact features you mention. Cache hierarchies and vectorization are common to CPUs and GPUs - in some sense, the numbers parametrizing them are all that differs. With a good abstraction, this machine-tuning can be automated.
So what would be such an HPC language that you're so fond of?
A quick web search reveals only languages that use C++/CUDA code as a back end (Python), are new and experimental (Julia), or Fortran.
For what you're talking about, none seem all too good, so you've piqued my curiosity.
See https://arxiv.org/abs/2512.17101. I've used some of the tools in the stack they describe (and see Sec 2 for an overview of others). JAX/XLA/etc. are somewhat similar, though still without user control over transformations.
Perhaps part of the reason for the bad takes in this thread is due to taking "language" overly literally (perhaps also the fault of the linked blog post itself). I think one thesis of the above tooling is that, when tuning and generating code (CUDA, OpenCL, what have you) at runtime, the best "languages" for these abstractions are, amusingly, scripting languages like Python. Having CUDA/etc. as a back end without having to hand-write/-transform/-optimize it is indeed the point.
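The runtime-specialization pattern those stacks rely on can be sketched in a few lines. This is a hypothetical illustration only: the generated "kernel" here is plain Python rather than CUDA C, but the idea is the same - bake tuning parameters into source text at runtime, then compile it.

```python
# Hypothetical sketch of runtime code generation/specialization.
# Real stacks emit CUDA/OpenCL source the same way, then hand it to
# the vendor compiler; here we just exec() Python for illustration.

def make_saxpy(a):
    # Inline the constant `a` into the generated source, so the
    # "compiled" function is specialized for this one coefficient.
    src = (
        f"def saxpy(x, y):\n"
        f"    return [{a} * xi + yi for xi, yi in zip(x, y)]\n"
    )
    namespace = {}
    exec(src, namespace)
    return namespace["saxpy"]

saxpy2 = make_saxpy(2.0)
assert saxpy2([1.0, 2.0], [10.0, 20.0]) == [12.0, 24.0]
```

A scripting language is a comfortable host for this because string generation, compilation, and caching of the results are all trivial to script.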
C++ by comparison doesn't stand in your way too much either.
I feel like the biggest gripe Rust has is what happens when you do have to go unsafe.
That seems to be a strong point of contention for many folks.
Maybe all the reasons that lead people to use unsafe rust go away or the attitude about it shifts in some manner.
For me, Rust turned out to be less interesting after I saw the whole ceremony around typing.
The number of things I had to grasp just to get a glimpse of what a library does felt much more involved than anything I did with C++.
The whole annotation thing feels much less necessary there, and more like a proper opt-in.
Anything that increases the entry time by a second or more is a pretty good way to make me (and probably others) just not bother with opening the website.
Usually the Anubis anti-bot things only take a second. But I stared at one for more than 30 seconds the other day when I tried to access one of the Linux kernel websites. Literally just a progress bar with a hash counter. I was on a modern iPhone; I don’t know why it took so long. Maybe because my phone had low battery? But it’s infuriating that this is what the web has become.
The web is becoming more and more unusable every day. If your data is easy to access, it gets stolen and scraped, and your site effectively DDoSed. If your site is hard to access, nobody will visit.
I think his point is that "form follows function".
If you know what kind of semantics you're going to have, you can use that to construct a syntax that lends itself to using it properly.
I guess "back in the day" you had to be able to write an efficient parser, as no parser generators existed. If you couldn't implement whatever you wanted due to memory shortage at the parser level, then obviously it's gonna be a huge topic.
Even now I believe it is good to know about this - if only to avoid pitfalls in your own grammar.
I repeatedly skip parts that are not important to me when reading books like this.
I grabbed a book about embedded design and skipped about half of it, which was bus protocols, as I knew I wouldn't need it.
There is no need to read the dragon book from front to back.
> But there's a reason most modern resources skip over all of that and just make the reader write a recursive descent parser.
Unless the reason is explicitly stated there is no way to verify it's any good.
There's a reason people use AI to do their homework - it just doesn't mean it's a good one.
I can think of plenty of arguments for why you wouldn't look into the pros and cons of different parsing strategies in an introduction to compilers; "everyone is (or isn't) doing it" does not belong to them.
In the end, it has to be written down somewhere, and if no other book is doing it for whatever reason, then the dragon book it shall be. You can always recommend skipping that part if someone asks about what book to use.
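For reference, the recursive descent style those modern resources typically start with fits in a page. A minimal sketch for single-digit arithmetic with the usual precedence (one mutually recursive function per grammar rule):

```python
# Minimal recursive descent parser/evaluator, one function per rule:
#   expr   := term ('+' term)*
#   term   := factor ('*' factor)*
#   factor := digit | '(' expr ')'

def tokenize(s):
    return [c for c in s if not c.isspace()]

def parse(tokens):
    pos = 0

    def expr():
        nonlocal pos
        value = term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            value += term()
        return value

    def term():
        nonlocal pos
        value = factor()
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            value *= factor()
        return value

    def factor():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "(":
            value = expr()
            pos += 1  # consume ')'
            return value
        return int(tok)

    return expr()

assert parse(tokenize("1 + 2 * 3")) == 7
assert parse(tokenize("(1 + 2) * 3")) == 9
```

Precedence falls directly out of the call structure (term binds tighter than expr), which is part of why the technique is such a popular first parser - and also why knowing the grammar-design pitfalls (like left recursion) discussed in the older material still pays off.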
The question was not if it was possible within price boundary X, but if it was possible at all.
There is a difference; please don't conflate possibility with feasibility.
C# has had the reputation of not being viable for Linux for a long time.
Therefore, the people already on Linux didn't have a reason to use it or even try it.
If you're already doing stuff in other languages it's hardly worth it to switch to C#.
I personally use it quite a lot - but I came as a windows user writing all my utilities in C#.
Also, afaik C# is mostly used in corporate environments that don't open-source their projects. For this very reason, you're unlikely to hear of it unless you're working with it.
Mono was in a usable state on Linux for literal decades before becoming official and integrated into what is core today. That is, unless you needed Windows Forms, which, much like Microsoft's UI frameworks today, had multiple failed attempts spanning those same decades...
We attempted a move to mono for backend web services maybe 3 years before .NET Core released, and it was a complete no-go. 10x reduction in performance on the same hardware.
This wasn't specific to our workload either. I was big into the game Terraria at the time, and I saw similarly poor performance when running its game server under Mono vs .NET 4.x.
While some of the mono toolchain was integrated into .NET Core, CoreCLR was a rewrite and immediately solved this problem.
Great context, thanks! I knew it worked (I closely followed Mono development at the time) but I didn't have a Windows license/Windows machines while it was ongoing to compare against. 10x is pretty bad; any ideas what went wrong? Lack of a JIT maybe? I forget the exact architecture of Mono at that time, it being like 20 years ago...
It did have a JIT, it just wasn't a very performant one. I recall the GC implementation also being quite slow. GC is an area where .NET is still making strides; we got a new collector in .NET 10.
A lot of C#'s reputation for not being viable for Linux came from the other direction and a lot of FUD against Mono. There were a lot of great Linux apps that were Linux first and/or Linux only (often using Gtk# as UI framework of choice) like Banshee and Tomboy that also had brief bundling as out-of-the-box Gnome apps in a couple of Linux distros before anti-Mono backlash got them removed.
Also, yeah, today Linux support is officially maintained in modern .NET, and many corporate environments quietly use Linux servers and Linux Docker containers every day to run their (closed-source) projects. Linux support is one of the things that saves companies money running .NET, so there's a lot of quiet loyalty to it just from a cost-cutting standpoint. But you don't hear much about that, given the closed-source/proprietary nature of those projects. That's why it's sometimes called "dark matter development" by "dark matter developers": a lot of it is out there, quietly chugging along, but it doesn't get noticed in HN comments and places like that, and doesn't seem to impact the overall reputation of the platform.
Yes, though as the .NET team themselves have acknowledged in several podcast interviews, this is mostly Microsoft shops adopting Linux and saving on Windows licenses.
They still have a big problem gaining .NET adoption among those that were educated in UNIX/Linux/macOS first.
Mandy Mantiquila and David Fowler have had such remarks, I can provide the sources if you feel so inclined.