Many comments assume that Filho's complaints are about the kernel developers' rejection of Rust, but in the referenced video he is asking the kernel developers to explain what the changes are: the semantics, not the actual code.
There is a real problem if kernel developers cannot take the time to explain the semantics. Trying to explain something to others is how you can be sure you understand it yourself.
If the developers of these filesystems cannot explain the semantics, how can they be certain they know what they are really doing?
So it raises the question - how much of the stuff going in the kernel is properly understood by its developers?
I wonder how much of it is unwillingness to explain rather than inability to do so? The vibe I got was that the audience was not interested in changing their ways; after all, they've been doing things their way for decades with great success. I also have to credit the presenters: they stayed on topic and remained pretty calm despite the heckling.
If that presentation encapsulates the state of things, the Rust for Linux team definitely has their work cut out for them. The borrow checker seems like a small matter compared to the political issues.
Exactly. And as far as I understood the talk, nobody even asked the audience to change their ways.
Now, as mentioned by the presenter himself, refactors that change the API semantics will break any consuming code, no matter if written in Rust or C.
So, one option is that the whole issue around breaking Rust bindings is just a strawman and the vocal part of the audience just wants Rust devs to gtfo.
In the other, more charitable setting I can conceive, we are looking at an API that isn't extensively documented in the first place (obviously, otherwise the Rust for Linux team wouldn't have to ask for clarification) and a small group of kernel devs who are trying to fix everything by themselves rather than documenting changes in a way that lets other people fix their own API-consuming code.
In this setting, fresh would-be contributors using C will also have a hard time getting to grips with the codebase. Now, if I remember correctly, one of the purported reasons for Linux to open up to Rust was to attract fresh developers, but in the outlined scenario there is a clear reason why people might bounce off, and it is unrelated to the language (at least if this case is representative of other parts of the kernel codebase).
None of the options look good to me. But perhaps I am just missing something.
> I think if the amount of effort being put into Rust-for-Linux were applied to a new Linux-compatible OS we could have something production-ready for some use cases within a few years.
I presume @ddevault knows about Redox, so I'm surprised he didn't mention it in this context. In any case I thought it was an insightful remark.
The more I learn about the politics of big projects, the more I believe in flowing around obstacles rather than trying to overcome them directly. If there's significant resistance to Rust in the kernel, then regardless of the technical or other merits, it may just not be worth it to try and win over the objectors.
To me Drew's article is a "bless your heart" answer that tells Rust users to go away, and keep themselves busy with something that nobody else will care about.
He actually did mention it if you look at the blog post his quote comes from. He's specifically calling for a Linux-ABI-compatible rust kernel that just re-implements things as they were rather than trying to delve into new researchy OS concepts like what Redox does.
Writing a Linux-ABI-compatible kernel in Rust will lead to a kernel no one uses. What would be the advantage of such a kernel? It would be much, much slower for many workloads and only be interesting to Rust hackers. In "Linux the OS", we are stuck with the "Linux the kernel", and Drew knows this very well. You will never catch up with the years of development that went into it. So if you want Rust in the kernel of "Linux the OS", you need to get Rust into "Linux the kernel" proper. This approach can work, see for instance the history of PREEMPT-RT.
> This approach can work, see for instance the history of PREEMPT-RT.
Good point, I note that realtime preemption has been in the works for more than twenty years, but (surprisingly) still isn't in mainline. On the other hand it's widely used and available and has caused lots of changes upstream already.
PREEMPT-RT is almost completely in mainline. As your linked article says, pretty much the last remaining issue is printk. You can download the last patch here; it's not much anymore:
Because if you write a kernel from scratch, things like the scheduler or the networking stack will be much, much less performant than the one from the Linux kernel. As I've written: you'll never catch up with the years and years of development that have poured into these systems.
Now, you could take these stacks from the Linux kernel and rewrite them in Rust, but that's not what Drew has suggested. Besides, that would also take ages: it's a lot of very complicated code, and a lot of it cannot just be transferred to Rust without it becoming unsafe, making the whole point of having a kernel in Rust moot.
> Because if you write a kernel from scratch, things like the scheduler or the networking stack will be much, much less performant than the one from the Linux kernel.
Yeah, I don't think you know this.
As I understand it, there have been schedulers written in Rust for Linux already which are faster in specific use cases.
> Now, you could take these stacks from the Linux kernel and rewrite them in Rust, but that's not what Drew has suggested.
As I understand it, that's exactly what Drew suggested? He didn't say go write a new system. He said go write an ABI compatible system. It's hard for me to imagine he meant "Write an ABI compatible system with wildly different semantics."
> Besides, that would also take ages:
Agreed, and that's what makes his suggestion mostly ridiculous.
> it's a lot of very complicated code, and a lot of it cannot just be transferred to Rust without it becoming unsafe, making the whole point of having a kernel in Rust moot.
I think this passage confuses a few key concepts. First, I'm not really certain how much unsafe a scheduler or a networking stack would have to use. My guess is not that much. Do you have some reason to believe it would have to use an inordinate amount of unsafe? Next, simply using unsafe does not moot the safety argument for using Rust. I've said this about a billion times, but once more: unsafe constrains memory safety to a small enough area that one can reason about it.
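To make that last point concrete, here's a minimal sketch (the function names are my own invention, not from any kernel code) of how Rust confines the dangerous part: raw-pointer work lives inside one `unsafe` block whose precondition is checked by the surrounding safe function, so callers can never misuse it.

```rust
// Minimal sketch: the only unsafe code is inside `swap_ends`, guarded
// by the length check above it. Callers use an entirely safe API and
// cannot trigger the raw-pointer path with invalid indices.
fn swap_ends(slice: &mut [u32]) {
    if slice.len() >= 2 {
        let last = slice.len() - 1;
        // SAFETY: indices 0 and `last` are in bounds and distinct
        // (len >= 2), so these raw-pointer accesses cannot go out
        // of bounds or overlap incorrectly.
        unsafe {
            let ptr = slice.as_mut_ptr();
            std::ptr::swap(ptr, ptr.add(last));
        }
    }
}

fn main() {
    let mut v = [1u32, 2, 3, 4];
    swap_ends(&mut v);
    assert_eq!(v, [4, 2, 3, 1]);
}
```

Auditing memory safety here means auditing the one `unsafe` block and its stated precondition, not the whole program; that is the "small enough area" the comment above refers to.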
> [Filho said]: "Almost four years into this, I expected we would be past tantrums from respected members of the Linux kernel community. I just ran out of steam to deal with them, as I said in my email."
I'm not surprised Wedson feels this way.
It's irresponsible to leave devs out to dry like this. If the Linux Foundation hires Linus and others and pays them gobs and gobs of dough, they should lead. If you set the technical direction, then you need to make it clear, every day, that that's where we are going. Passive aggression is fine for bad marriages and failed companies, but this is a multi-billion dollar business. Someone needs to be the grown-up in the room.
Rust in the kernel is still an experiment, you can't lead in a direction that doesn't exist. The "nontechnical nonsense" is actually the more important and more difficult part of software development.
My feeling about Rust is that it's a playground language where many ideas are tried and insights can be formed, but it needs to be distilled down to something simpler with a stable ABI to be used for something like Linux. Moreover, why be "afraid some other kernel will do to it what it did to Unix"? Wouldn't it be great if Linux could be replaced by something better? Why not pursue that direction?
Rust has used research from 80s and 90s languages. The result may seem novel because the programming language theory and the languages that Rust copied are not well known among systems programmers. The ideas are old; Rust added the polish and packaging of the features with a focus on practical non-research use.
Rust isn't that new any more. 1.0 was released almost a decade ago.
What features do you rely on? This is effectively the opposite of my experience, which is why I am curious: even our embedded OS at work is almost free of nightly features, and everything higher in the stack has been on stable for a long time. Always interested in learning how experiences differ!
Yes, for that OS we’re using asm_const, which is almost stable, naked functions, and used_with_arg, which is linker related. We also use array_methods but that could be polyfilled, if the other three were stable.
crates.io sees about 13% of requests from nightly compilers, although that may be biased by CI tests that just check for future compatibility rather than needing nightly features.
For application development nightly hasn't been necessary for many years now. Places where people use nightly are increasingly just nice to haves and small optimizations.
> Rust in the kernel is still an experiment, you can't lead in a direction that doesn't exist.
"There go my people, I must find out where they are going so I can lead them."
> [B]ut it needs to be distilled down to something simpler with a stable ABI to be used for something like Linux.
Why? I really don't understand a practical reason why that would be necessary.
> Moreover, why be "afraid some other kernel will do to it what it did to Unix"? Wouldn't it be great if Linux could be replaced by something better? Why not pursue that direction?
It's mostly a ridiculous alternative in the here and now. People use Linux, right now. Building something new is a fine thing to do, but people want a memory safe language to write drivers and filesystems in the kernel, right now. It makes lots of sense to use Rust in the Linux kernel, right now.
So maybe? It still takes lots of work to make something like that happen.
I'd love it if the entire RfL team played hardball, relicensed their contributions as MIT/ISC/BSD, and moved to FreeBSD or illumos or began contributing to RedoxOS, but I wouldn't expect anything new for a while.
How many patches come into Linux a day? How hard would it be to do parallel development for a while? Just apply the fixes as you slowly evolve your codebase. Not a hard right, but more of a gentle peel away.
>That remark is a response to a comment on the video that, according to Filho, came from Linux kernel maintainer Ted Ts'o: "Here's the thing, you're not going to force all of us to learn Rust."
>The video depicts resistance to Filho's request to get information to statically encode file system interface semantics in Rust bindings, as a way to reduce errors. Ts'o's objection is that C code will continue to evolve, and those changes may break the Rust bindings – and he doesn't want the responsibility of fixing them if that happens.
>"If it isn't obvious, the gentleman yelling in the mic that I won't force them to learn Rust is Ted Ts'o. But there are others. This is just one example that is recorded and readily available."
A lot of the quotes, this one in particular, really come off as petty. Not wanting a maintenance burden on your hands is a perfectly sane response from Ts'o.
>But the rules put in place to guide open source communities tend not to constrain behavior as comprehensively as legal requirements. The result is often dissatisfaction when codes of conduct or other project policies deliver less definitive or explainable results than corporate HR intervention or adversarial litigation.
>Citing the Linux kernel community's reputation for undiplomatic behavior, The Reg asked whether kernel maintainers need to learn how to play well with others.
Lol. You need to set firm rules. No, not like that! You need to play nice with others! Upper management telling you off is bad only when The Register says so.
>As an alternative, [Drew DeVault] has proposed starting anew, without trying to wedge Rust into legacy C code. He wrote that "a motivated group of talented Rust OS developers could build a Linux-compatible kernel, from scratch, very quickly, with no need to engage in LKML [Linux kernel mailing list] politics. You would be astonished by how quickly you can make meaningful gains in this kind of environment; I think if the amount of effort being put into Rust-for-Linux were applied to a new Linux-compatible OS we could have something production-ready for some use cases within a few years."
Ah, yes. Drew DeVault. The expert in A) non-divisive/petty community politics, and B) developing 30M LOC OS kernels with billions of dollars on R&D investment in a few years with a small team in an experimental language. "Just make Linux 2", it's so simple, why didn't we think of this?!
> Ah, yes. Drew DeVault. The expert in ... developing 30M LOC OS kernels with billions of dollars on R&D investment in a few years with a small team in an experimental language. "Just make Linux 2", it's so simple, why didn't we think of this?!
This is an unusual take considering Drew DeVault actually does have experience developing new kernels [1] in experimental languages [2].
Drew's own post [3] (which the linked article references) doesn't downplay the effort involved in developing a kernel. But you're definitely overplaying it. 30M SLOC in the Linux kernel is largely stuff like device drivers, autogenerated headers, etc. While the Linux kernel has a substantial featureset, those features comprise a fraction of that LOC count.
Meanwhile, what Drew's suggesting is a kernel that aims for ABI compatibility. That's significantly less work than a full drop-in replacement, since it doesn't imply every feature is supported.
Not to mention, some effort could probably be put into developing mechanisms to assist in porting Linux device drivers and features over to such a replacement kernel using a C interface boundary that lets them run the original unsafe code as a stopgap.
> This is an unusual take considering Drew DeVault actually does have experience developing new kernels in experimental languages.
Not exactly, a new toy kernel is different than reimplementing Linux!
> Meanwhile, what Drew's suggesting is a kernel that aims for ABI compatibility.
Drew and you are missing the point, which is: we don't want to wait years for a new OS that may or may not come. "Hey, go fork it!" is a fine response to plenty of feature requests, but not this one. The point here explicitly was that in-tree support is/was better, so everyone got to watch the development of the thing.[0]
Moreover the actual problem is bad behavior by C kernel devs, to which a leader would say: "This is something we are doing. This project can fail for lots of reasons but it won't fail because kernel C devs are too toxic to work with. You'll apologize to Wedson. You said -- you're not learning Rust. Guess what? Your contributions are paused, while you learn Rust, and assist in building the ext2 reimplementation. Perhaps, as you do, you can make clearer your concerns to the Rust for Linux team, and learn what theirs are as well."
> 30M SLOC in the Linux kernel is largely stuff like device drivers, autogenerated headers, etc.
And? These drivers matter; otherwise you get OSs like Haiku, Redox, etc. that seem consigned to an "eternal" beta status because they have to reimplement these drivers.