>Rust turns unknown failures in C and C++ into known failures and suddenly the C/C++ people start caring about the failures
I'm actually the one who promotes paranoid asserts everywhere. I do agree the original statement from the article is ambiguous; I probably should have written something like "memory safety in Rust does not increase reliability".
>The pacemaker argument is complete nonsense, because the pacemaker must keep working even if it crashes. You can forcibly induce crashes into the pacemaker during testing and engineer it to restart fast enough that it hits its timing deadline anyway.
I'm not sure a deadlock-free variant of Rust even exists — deadlock is not considered undefined behavior in Rust.
>Not a fact. Particularly in the embedded world, crashing is preferable to malfunctioning, as many embedded devices control things that might hurt people, directly or indirectly.
It really depends on how deeply "Turing" your mechanism is. By "Turing" I mean "the behavior is totally dependent on every single bit of previous information". For a reliable system, Turing-completeness is unacceptable for separate functions, i.e. a function should produce a correct result in a finite amount of time no matter what happened in the past. In particular, that's why modern real-time systems cannot be fit into a Turing machine: a Turing machine has no interrupts.
>If a pacemaker suddenly starts firing at 200Hz, telling a victim "but at least it didn't crash" is a weak consolation. A stopping pacemaker is almost always preferable to a malfunctioning one
You are almost making excuses for the general unreliability of programs. Mainstream C is unreliable, C++ is unreliable, Rust is unreliable. I can agree that Rust is no less reliable than C/C++, but it is definitely less reliable than some other languages, e.g. BEAM-based ones. I mean, some time ago in the Rust standard library I actually read something like "under these and these conditions the following code will deadlock. But deadlock is not an undefined behavior, so it's ok". The designers of Rust did not really try to support any kind of "recover and continue" mode of operation. Yes, you can catch the panic, but it will irreversibly poison some data (e.g. any mutex that was locked when the panic unwound).
>I would love to hear what he thinks a good programming language is, because I can easily pick more holes in any other language than he has.
I have a working theory that classic programming is basically dying out:
https://bykozy.me/blog/llm-s-are-not-smart-rather-programmin...
You just cannot write a fast and reliable program with either C++ or Rust, because the fundamental model is broken and nobody has really bothered fixing it.
>Compile speed. Why do people care so much? Use debug for correctness and iterating your code. You're hardly going to change much between runs, and you'll get an incremental compile.
What's the largest Rust-based public codebase so far? Rustc, with something like 1 million lines of code including all the dependencies in all languages? Zed IDE has 800k lines. Incremental compilation seems to work fine at this scale, but things become messy above it. Most people prefer not to exceed the limit.
>Mutable shared state: make bad designs hard to write. Mutable shared state is another one of these eternal bug sources. Use channels and pass messages.
People think BEAM (Erlang/Elixir) does not have shared mutable state because the programming model says so, but it actually has some mutable shared state transparently under the hood, i.e. in implementation details and the runtime. BEAM processes all run in a single OS address space; they are not even separate threads but async tasks spread across OS threads — and still, per the programming model of the language, those are "shared nothing". So in the end BEAM is a very good abstraction over partially shared mutable state.
And no, in BEAM remote messaging is not nearly the same as local messaging. For example, you don't get built-in supervision trees across nodes, and ETS is also local-node-only.
I do agree that the hardware model of shared mutable state is inherently problematic, but we don't have other commodity CPUs yet — you have to handle shared mutable state if you want to be close to the hardware.
>It reads like a lot of critiques of Rust: people haven't spent enough time with it, and are not over the learning curve yet. Of course everything is cumbersome before you're used to it.
Well, at least you confirm that the complexity is real. Don't take my criticism too close to heart though — I did exaggerate the problems a bit, because someone needs to counteract the fanboys.
>"D supports Ownership and Borrowing, just like Rust. DMD, D's reference compiler, can compile itself in less than 5 seconds, thanks to a fast frontend and fast backend. D is easy to metaprogram through traits and templates. You can make your own JIT in D with dynamicCompile."
Indeed, there are languages that have generics, compile blazingly fast, and still have good runtime performance. Go is another good example, although not perfect either. If only the Rust designers had done a single thing to make it compile fast instead of "eventually".
"Generics" is not why Rust compiles slowly. Rust had certain design decisions and implementation details that impact compile times significantly. Do not misrepresent them and pretend that the Rust developers are incompetent or malicious. By doing so, you invite discourse as ideological and uncritical as that of Rust fanatics, however many or few there are.
>Do not misrepresent them and pretend that the Rust developers are incompetent or malicious.
I'm sorry, but I sincerely think the Rust designers did sloppy work by reimplementing C++'s flaws with a new syntax. History repeats itself: the same pathological mechanism that once drove C++ development now drives Rust — I mean large enterprises wanting to change everything without changing anything, a new tool that would feel like the old tool. They did not really invent a new model; look at std::shared_ptr, std::mutex, move semantics — all of that was in C++ before the prototype of Rust. The "share immutables, lock mutables" model was the holy grail of C++ concurrency, driven by the way the STL worked — and the STL is not nearly the only container library in C++.
Okay, what's your take on why exactly Rust compiles slowly?
Rust certainly takes after C++, but it's not the same. It's more like C++ if it was made today. For instance, Rust's destructive move semantics are the better default. (However, Rust making everything moveable by default is a bit of a drawback.) I don't understand your point about std::shared_ptr, std::mutex, and the immutability/mutability model. These are not C++ constructs; they are general programming constructs that C++ has an implementation of. Rust does not claim to have some innovative take on reference counting or locking. However, Rust's borrowing is an improvement (by default) on managing shared mutable state. Rust is not Carbon, or whatever new C++ variant is out there; it is a backwards-incompatible language that offers better ways to do many of the things C++ is used for.
> Okay, what's your take on why exactly Rust compiles slowly?
I think one interesting thing that could be explored more is what aspects of Rust "inherently" result in slow compilation (i.e., if you changed said aspects to improve compilation speeds then you end up with a "different" language, assuming some hand-waviness with respect to equivalence here) and what aspects are practically and/or theoretically improvable.
For example, rustc's reliance on LLVM for optimization is a well-known source of sluggishness, but I wouldn't classify that as "inherent" since Rust could use a different backend that works faster (e.g., Cranelift) and/or be more efficient about the IR it emits. Something like monomorphization, on the other hand, is more of a grey area since it's (as far as I know) essential for Rust to meet its performance goals, but IIRC it's not out of the question for the compiler to implement generics using a different strategy instead, which might be viable as an opt-in tradeoff.
There are at least three separate things that rustc and cargo could do to significantly improve compilation speed, but they are complex engineering efforts that require time and engineers paid to work on them full-time. They are: 1) doing what Zig does and compiling only items that are reachable, which requires making rustc re-entrant so it can stop after name resolution, and making cargo more complex so it can pass used paths through subsequent invocations; 2) leveraging an incremental linker, having rustc codegen only the binary patch instead of the whole binary, and letting the linker compute the patch; and 3) caching proc-macros and treating them as idempotent (likely as opt-in). There are of course others, like pre-compiled distribution, better intra-rustc parallelism, or feature cfg handling, but those have more open questions and require additional design work before I could claim they're attainable.
Oh, I hadn't heard that you guys are looking at Zig-style "targeted" compilation (for lack of a better phrase). That sounds like it entails quite a bit of infrastructure work. Is there a GitHub issue I can follow to keep an eye on progress?
Out of curiosity, have there been any interesting pondered/proposed avenues for speedup that were rejected because they would result in too much breakage, violate a core tenet of Rust (e.g., require a runtime to be usable), or for some other reason that is "inherent" to Rust?
We discussed a lot of these last May in person. Haven't kept track of who's doing what on these fronts.
> have there been any interesting pondered/proposed avenues for speedup that were rejected because they would result in too much breakage, violate a core tenet of Rust (e.g., require a runtime to be usable), or for some other reason that is "inherent" to Rust?
Yes, but I haven't been part of all of them and can't elaborate much here. Opening a thread on internals.rust-lang.org with that question might get some good info sooner than I could.
>There are two conclusions: 1) If Cloudflare hadn't decided on a proper failure mode for this (i.e. a hardcoded fallback config), the end result would've been the same: a bunch of 500s, and 2) most programs wouldn't have behaved much differently in the case of a failed allocation.
So why do they need Rust then? What advantages does it provide? That was the main point of the article — we all wanted a better language, but got another crappy one instead.
> So why do they need Rust then? What advantages does it provide?
That Rust didn't prevent one error in one specific instance does not mean that Rust didn't prevent any errors across any instances. Nor does it mean that Cloudflare didn't benefit from Rust in some way(s) elsewhere.
For example, from one of Cloudflare's previous blogposts [0] (emphasis added):
> Oxy gives us a powerful combination of performance, safety, and flexibility. Built in Rust, it eliminates entire classes of bugs that plagued our Nginx/LuaJIT-based FL1, like memory safety issues and data races, while delivering C-level performance.
In Go, if one were to ignore the error, it could result in a panic or the program continuing in some unexpected state. Go projects work on an honour system and either return nils with errors or empty structs.
The move to Rust was partly motivated because it prevented that entire class of errors. No more out-of-bound reads, or data races. The compiler audits these missed spots.
Now, you could say a managed memory language would suffice as well. Perhaps it could. But we needed performance, and no memory-managed language met those performance needs then or today.
I get you're making the case that Rust isn't perfect for all use cases, but Cloudflare's scenario is the exact corner case your argument falls apart in: we needed fast and safe in an environment where not being fast or safe had real business consequences, and nothing else except Rust gave both.
That Rust produced a predictable and deterministic way of failing, while in C++ the equivalent code of accessing an uninitialized value without verifying it beforehand would have resulted in entirely unpredictable behavior whose reach is entirely unbounded.
Moreover, now that they realize this is an issue for them, they can just do "Ctrl+F unwrap" and fix each instance. Then they can put a hook on their commits that automatically flags any code with "unwrap". In some languages where you're allowed to just ignore errors, you could fix the proximal bug, but you'd never be sure you weren't causing or ignoring more of the same in the future -- how do you search for what isn't there?
>Yeah until that memory safety issue causes memory corruption in a completely different area of code and suddenly you're wasting time debugging difficult-to-diagnose crashes once they do start to surface.
Some very solid arguments here. However, as already implied in my article, you can get most of the guarantees without losing your sanity. Memory-safety-related problems are important, but they are not the sole source of bugs in applications — as the developers of Zed found out.
>Doing this the "correct" way in other languages has similar impact? So I'm not sure why Rust forcing you to do the correct thing which causes perf issues is uniquely a Rust issue. Doing this the "just get it done" way in other languages will likely come back to bite you eventually even if it does unblock you temporarily.
It might be counterintuitive, but garbage collectors in multithreaded code can be very efficient. I mean, you just spawn lots of objects with arbitrary references between them, and eventually the GC untangles the mess — it's zero overhead until the GC cycle. Escape analysis and semantic-reach containers can reduce GC work a lot (so you don't get the classical JVM GC problems). More specialized techniques like RCU and general quiescent-state reclamation can be totally pause-less.
> Some very solid arguments here. However, as already implied in my article, you can get most of the guarantees without losing your sanity.
I think this is part of why your article is causing a strong reaction from a lot of people; quite a lot of the justification for your point of view is left implied, and the concrete examples you do give about Rust (e.g. `Arc<Mutex<Box<T>>>` and `.unwrap`) are hard not to see as straw men when plenty of people write Rust all the time without needing to rely on those; it turns out it's also possible to get more reliability out of Rust for those people without losing their sanity.
Most of the opinions you give are hard to distinguish from you personally not finding Rust to provide a great experience, which is totally valid, but not really indicative of "core problems" in the language. If you instead were making the argument of why you don't personally want to write any code in Rust, I don't think I'd have much criticism for your point of view (even though I wouldn't find much of it to apply to me). Then again, the article starts off with a statement of previously having made an intentionally provocative statement purely to try to compensate for a perceived bias in others, so maybe I'm just falling for the same bit by trying to engage the article as if it's serious.
Also the argument with `Arc<Mutex<Box<T>>>` boils down to "Rust makes it hard to work with automatic reference-counted shared mutable heap-allocated state." In which case... mission accomplished? Rust just made explicit all the problems that you still have to deal with in any other language, except dealing correctly with all that complexity is such a pain that you will do anything you can to avoid it.
Again, mission f#*@ing accomplished. Maybe you DON'T need that state to be shared, reference-counted, or heap allocated. Maybe you can refactor your code to get rid of those annoyingly hard to deal with abstractions. And you end up with better, more reliable, likely faster code at the end of it.
So many times I've tried to do something in Rust the old fashioned way, the way I have always done things, and been stopped by the compiler. I then investigate why the compiler/language is being so anal about this trivial thing I want to do.. and yup, there's a concurrency bug I never would have thought of! I guess all that old code I wrote has bugs that I didn't know about at the time.
There are basically two reactions people have to this situation: (1) they're thankful that they learned something, their code is improved, and go about their day learning something new; or (2) feel frustrated and helpless that the old way of doing things doesn't work, and rage-quit to go write a "WHY RUST IS THE WORST THING SINCE MOSQUITOS" blog article.
Yeah, this pretty much summarizes my experience. In most situations, the actual information I need to share between different contexts is quite small, and using a channel ends up being pretty clean. There are still times I need an actual reference counted wrapper around a mutex, but they're the exception rather than the norm, and even then, there are often ways you can reduce the contention (e.g. if `T` is a type you can clone, and you only need a snapshot of the current state of the data rather than preventing changes while processing based on that snapshot, you can just clone it, drop the mutex guard, and then proceed without needing to worry about borrows).
It is hard to use because you can’t just access the shared state. You have to annoyingly lock it, handle the guard object, etc. Each one of those layers has its own protocol.
Of course you have to lock a mutex. You'd have to do that no matter the language, right?
And that's what I meant by verbose/ugly. Each of those steps is usually an if let / else error. None of those steps are hard, but you have to do them every time.
Heh. You're pedantically correct, the best kind. I meant "have to" in terms of "for correct operation you must" not "the compiler forces you to" since the latter isn't a thing in some languages.
> So many times I've tried to do something in Rust the old fashioned way, the way I have always done things, and been stopped by the compiler. I then investigate why the compiler/language is being so anal about this trivial thing I want to do.. and yup, there's a concurrency bug I never would have thought of! I guess all that old code I wrote has bugs that I didn't know about at the time.
This is also my experience. I've had dozens of times over the years that I've been confused about why Rust does something a certain way, but I don't think I can recall a single one where I didn't end up finding out that there was an extremely good reason for it. That's not to say that in every one of those cases, the design choice Rust made was the only reasonable one, but in the cases where it wasn't, the situation always ended up being more nuanced than it appeared to me at first glance, and what I would have originally expected would have implicitly made a significant tradeoff I hadn't considered.
I'm having trouble finding it now, but years ago someone had a quote that ended up getting published in the language's weekly newsletter that was something like "Rust moves the pain of understanding your program from the future to the present". It's extremely apt; most of the difficulty I've seen people run into when trying to program in Rust ends up being that they're forced to consider potential issues that in other languages they could have ignored until later, and then would have had to debug down the line.
There’s a handful of pain points that the Rust model does impart which are not fundamental. For example, getting mutable borrows to two different fields of a struct at the same time is painful when the access has to go through `&mut self` methods, even though the operation itself is perfectly safe.
But really that’s the only example I can think of offhandedly, and I expect there are not many in total. Nearly all of the pain is merely the trouble of thinking through these issue upfront instead of kicking the can down the road.
The only other one I can think of is kind of poor ergonomics around "self-borrows"; it's a lot less common than I've felt the need to mutably borrow two fields from the same struct independently, but there have very occasionally been times where I've realized the simplest way to structure something would be to have one field in a struct borrow another. This is somewhat hard to express given the need to initialize all of the values of a struct at once, and even if you could, there are some complications in how you could use such a struct (e.g. not being able to mutate field `a` safely if it's borrowed by field `b`). Overall though, the things that Rust prevents me from handling safely because of its own limitations rather than them being fundamentally unsafe are quite rare, and even with those, they tend to be things that I would be quite likely to mess up in practice if I tried to do them in a language without the same safety rails.
>the things that Rust prevents me from handling safely because of its own limitations rather than them being fundamentally unsafe are quite rare, and even with those, they tend to be things that I would be quite likely to mess up in practice if I tried to do them in a language without the same safety rails.
Any language with a GC can handle complex links between objects, including mutable objects — like Erlang/Elixir, JS, Go, etc. Your message implies a runtime-less language, but the majority of practically employed languages are runtime-full.
>Maybe you DON'T need that state to be shared, reference-counted, or heap allocated. Maybe you can refactor your code to get rid of those annoyingly hard to deal with abstractions. And you end up with better, more reliable, likely faster code at the end of it.
That's point 4 in my article — Rust is horrible for mutable shared state. However, in modern CPU-based programming, mutable shared state is like 70% of all the code, so you cannot just ignore it. It's not that CPUs have to be like that, it's that they happened to end up like that.
>there's a concurrency bug I never would have thought of! I guess all that old code I wrote has bugs that I didn't know about at the time.
Programming languages or libraries that excel at concurrency do not use the Arc<Mutex<T>> nuisance — at least they do not impose it as the main tool. Having shared mutable state does not mean you directly mutate a cell there, like you would in C/C++. I mean, if you have a cyclic graph of connected objects — how the hell are you going to employ Arc<Mutex<T>> to handle it? What Rust actually does is carve in stone the pathological C/C++ ways of "correct handling of shared mutable state" — whatever that is.
>However, as already implied in my article, you can get most of the guarantees without losing your sanity.
Yeah sure, but what comparable language gives you similar perf, safety, and language features to Rust? I'll use "safety" loosely to mean "you really infrequently encounter odd memory safety issues". Go, for example, still has occasional memory corruption issues, with maps in particular, although these don't show up too often and the race detector exists.
C# is probably the closest I can think of for AOT? I don't really know what the deployment story for a .NET application looks like these days though.
Go has some language design things that turn people off.
>but they are not the sole source of bugs in applications — as developers of Zed found out.
You called out Zed in the blog post as well but I've not seen the Zed devs expressing regret of using Rust. Is this just an assumption on your part? As someone who's written many UI applications with egui and one with GPUI, I've felt some minor pain points but nothing show-stopping.
I used to write a lot of C#. I used to write a lot of Go. I now write neither, and basically exclusively write Rust these days plus a bit of C/C++ at work. The only time I've really felt the pain of `Rc<RefCell<...>>` gross types was recently, when trying to port a game engine's data loader to Rust. It makes heavy use of OOP patterns, and trying to emulate that in Rust is asking for a bad time, but I did it just out of trying to one-shot the migration and keep the logic similar. Not fun, but I knew what I was getting myself into.
A lot of things that people wanted in Golang (or in a C alternative) are in the newer language, Vlang[1][2]: sum types, option/result types, enums, no variable shadowing, flexible memory management (GC can be turned off, and the stdlib does not rely on it), generics (way before Golang's maintainers relented to the masses), more safety features, goodies like filter, map, etc. As Vlang is 75% similar, it's also easy to learn and retains the high readability.
>Yeah sure, but what comparable language gives you similar perf, safety, and language features to Rust?
I've already answered this in the original article — Rust is already here, and a better language is not. Still, that will not make me say "it's the best option we have by now" — because it's not nearly the best option.
Performance? Lots of code is cold and doesn't impact performance much. You just don't need everything written in Rust or C++.
>You called out Zed in the blog post as well but I've not seen the Zed devs expressing regret of using Rust. Is this just an assumption on your part?
I'm kind of saying that if I were a Zed dev, I would have my pillow wet with tears at night. I know this because I participated in IDE development in C a long time ago, and I was drowning in all this low-level stuff the whole time; I just could not ship a single feature, because I had to handle hundreds of those other small things before the feature could work.
>As someone who's written many UI applications with egui and one with GPUI, I've felt some minor pain points but nothing show-stopping.
I have no idea what those applications were and how complex they were, so I cannot comment on it.
> It might be counterintuitive, but garbage collectors in multithreaded code can be very efficient.
What does a garbage collector have to do with multithreaded code? Once you have two or more threads which need to share data, they need to sync, and you'll end up using some kind of lock, which will affect performance. GC doesn't make anything more or less efficient here. It might make the code simpler, as you don't have to worry about allocation/deallocation, but I don't see how it's magically going to remove the lock.
> It might be counterintuitive, but garbage collectors in multithreaded code can be very efficient.
It is indeed. But on the flip side, no other programming language is going to give you a compile error if you forget to wrap your data in a mutex before sharing it between threads, and you'll either end up with a ConcurrentModificationException at runtime (Java) or with undefined behavior.
But otherwise yes, there are plenty of situations where a GC is a totally valid solution. Just not for most of Rust's niche.
It's 200% LLM used in production — like 10 hours of dialogs right before writing the article. I had many more hours of conversations with Rust fanboys, and it was mostly a waste of time; they just would not negotiate on Rust's weak points. I would definitely not have been able to write the article with only human support — it's really sad to conclude that LLMs are much better assistants because they are neutral and objective.
Why would I do this, and why do you think I did? Or do you feel my English is so bad that I'm probably assembling a ransom note out of pieces of LLM output? I can assure you I'm good enough at typing and talking in English not to require copy-pasting. Do you have any quotes from the article that you feel were copy-pasted from an LLM?
>I've done significant C++ and Rust and Rust compiles WAY faster than C++ for day-to-day incremental compilation. C++ suffers from the header inclusion problem and modules may as well not exist because you can't practically use them.
It really depends on what you are compiling. I did lots of C/C++ that compiled 50k lines of code in 10 seconds FROM SCRATCH — I doubt you can do that in Rust. To be fair, headers in that project were somewhat optimized for compilation speed (not "hardcore", but "somewhat"). People forget how fast C/C++ compilation can be without 10 Boost includes in each module.
>This is intentionally over complicated, there is no reason for the Box to be there.
Arc<RwLock<Option<T>>> — sounds good now? Don't get me wrong — C++ can be just as horrible, but Rust made it a rule, you can only write your program like this.
>What does this even mean?
I've already answered above, but I can repeat: there are runtime models that allow crash and recover, there are models that crash and limp. In Rust there is only one model of crash: you just crash.
> It really depends on what you are compiling. I did lots of C/C++ that compiled 50k lines of code in 10 seconds FROM SCRATCH — I doubt you can do that in Rust. To be fair, headers in that project were somewhat optimized for compilation speed (not "hardcore", but "somewhat"). People forget how fast C/C++ compilation can be without 10 Boost includes in each module.
Incremental compiles in Rust are very fast because it has an actual module system instead of textual include. I don't care much how long from scratch compiles take, but even there my experience is Rust is faster than C++.
> Arc<RwLock<Option<T>>> — sounds good now? Don't get me wrong — C++ can be just as horrible, but Rust made it a rule, you can only write your program like this.
I'm not sure what non-garbage-collected language would be better here. C++ would be about the same. C would be far far worse, as it has no templates. Garbage collection would allow you to omit the Arc, and a language like Java where nearly _everything_ is nullable would allow you to omit the Option, but I don't think many people would make this trade.
> I've already answered above, but I can repeat: there are runtime models that allow crash and recover, there are models that crash and limp. In Rust there is only one model of crash: you just crash.
You haven't defined what "crash" means. Rust uses Result types for error flow, and you have just as much control over recovery as in any other language. If you are talking about panics, well yeah, that's like calling abort() in C, except it allows more fine-grained error recovery with catch_unwind instead of a global SIGABRT handler or whatever your OS provides.
You absolutely can. You'll need to pay attention to how things are structured and not all codebases will be equally amenable to such techniques, but those are shared characteristics with C++.
As you said, "It really depends on what you are compiling."
>Eventually I rewrote my btree on top of Vecs. My node & leaf pointers are now array indices. The result? There is no longer any unsafe code. The code has become significantly simpler and it now runs ~10% faster than it did before, which is shocking to me. I guess bounds checks are cheaper than memory fragmentation on modern computers.
Optimizations are very complex and potentially fragile in Rust; LLVM has to sort through tons of generated IR, so it might just be that native Rust structures are optimized better during compilation. In particular, Rust is able to optimize out some bounds checks.
Do note that binary trees are mostly an obsolete legacy today — they are way too cache-unfriendly. I mean, you could have written similar code in C++ using std::vector or std::deque and gotten the bounds checking too.
>Everyone talks about memory safety but IMO rust’s best features are enums, traits, cargo, match expressions and so on
C++20 with concepts mostly reproduce the traits. C++17 with std::variant emulates enums/tagged unions. Match is unmatched by C++, that's true.
Cargo is good for as long as there are few packages in there. Large projects already suffer from five versions of serde in one build, and from dependencies on FFI-connected libs that cargo itself cannot build. I mean, look at the NPM nightmare — and they've mostly dodged FFIs.
> Do note that binary trees are mostly an obsolete legacy today — they are way too cache-unfriendly. I mean, you could have written similar code in C++ using std::vector or std::deque and gotten the bounds checking too.
As a sibling comment said, it's a b-tree, not a binary tree. B-trees are - as far as I know - the fastest data structure on modern computers for the class of problems they solve.
And yes, I think if I ever go back to C/C++ I'll try this approach out. It might also work great in GC languages like JS/TS/C#/Go, because there are fewer pointers to keep track of.
> Cargo is good for as long as there are few packages in there. Large projects already suffer from five versions of serde in one build and dependencies on FFI-connected libs that cargo itself cannot build. I mean look at the NPM nightmare — and they've mostly dodged FFI-s.
I haven't run into the "five versions of serde" problem, but I can easily imagine it. I've lived NPM nightmares more times than I can count. But I'd still prefer that to all the problems you get from CMake, autotools and Makefiles. At this rate we're going to get self driving cars before we have a single sane build system for C.
> Do note that binary trees are mostly an obsolete legacy today — they are way too cache-unfriendly
BTree is not a binary tree. It's a B-tree, and it is cache-friendly.
> C++20 with concepts mostly reproduce the traits.
C++20 concepts are not the same as traits. Concepts are structural and awkward to use compared to Traits which are nominal. There are other important differences, too.
> In particular, Rust is able to optimize out some bounds checks.
It doesn’t optimise out the bounds checks. I checked the assembly.
Bounds checks in Rust always look so ugly in the assembly output because of all the panic string building and whatnot. But in practice, I think it's just that modern CPUs have excellent branch prediction. A branch that follows the same path 100% of the time is cheap, especially when it has UNLIKELY() annotation hints in the code. This is the case for bounds checks; they just don't really show up in benchmarks as much as you'd think.
>Do note that binary trees are mostly an obsolete legacy today — they are way too cache-unfriendly. I mean, you could have written similar code in C++ using std::vector or std::deque and gotten the bounds checking too.
This is simply not true; it's an ongoing thorn of mine that the Rust community seems to decide that anything which cannot be programmed in Rust is obsolete legacy. This is a silly stance, and it is harming adoption.
I for one have coded dozens or even hundreds of tree structures. My most recent tree was a hierarchical context management system for an LLM agent!
> it's an ongoing thorn of mine that the Rust community seems to decide that anything which cannot be programmed in Rust is obsolete legacy.
Nobody is arguing that you can't program a binary tree or b-tree in rust.
I think it's true that binary trees almost always perform worse than b-trees in programming in general. But that isn't a comment about Rust. It's a comment about modern CPU memory and cache behavior.