> Threatening a social media campaign to out people is completely toxic behaviour. This is a long-term contributor who is perhaps a bit abrasive, not Jimmy fucking Saville.
I'm nowhere near in the loop on this, but that sounds incredibly toxic. Are there some good links that summarize this?
EDIT:
OK, these were enough for me to understand the issue.
There seems to be some issue with unaccountable maintainers, and that’s a problem for innovation. If you can’t even get to the point of “technical debate” (“we aren’t interested, sorry”), then what hope is there for innovation?
These are “people problems”, and there are no easy or good answers, but that doesn’t mean we should throw our hands up and consider things “unfixable” either.
"... I come at this from the perspective of having worked on Linux since around December of 1991. I first met you and Tove in 1995 at the Free Software Conference at MIT that Stallman sponsored. ...
Probably the last technical contribution of my career is leading an initiative to provide the Linux community a generic security modeling architecture. Not to supplant or replace anything currently being done, but to provide a flexible alternative to the development of alternate and/or customized workload models, particularly in this era of machine learning and modeling.
Four patch series over two years, as of yesterday, not a single line of code ever reviewed. ...
We were meticulous in our submissions to avoid wasting maintainers' time. We even waited two months without hearing a word before we sent an inquiry as to the status of one of the submissions. We were told, rather curtly, that anything we sent would likely be ignored if we ever inquired about them.
We tried to engage, perhaps to excess, in technical discussions attempting to explain why and how we chose to implement what we were proposing. ...
There were never any relevant technical exchanges. The discussion consisted of, we have decided to do things a certain way, no discussion, if you don't like that you should really consider doing something other than submitting to upstream Linux."
Okay, but how long is reasonable to take to tell someone that their patch is too big to be reviewed?
> We were meticulous in our submissions to avoid wasting maintainers' time. We even waited two months without hearing a word before we sent an inquiry as to the status of one of the submissions. We were told, rather curtly, that anything we sent would likely be ignored if we ever inquired about them.
It's reasonable to ask that people make smaller-sized patches to get reviewed, and it's reasonable to have to rule out some things due to not having the bandwidth for them compared to other priorities, but it's pretty ridiculous to expect people to know that those are the reasons their code isn't getting reviewed if they're not allowed to inquire even once after two months of radio silence.
Not everything can be broken down into 5-line patches. Some of the bigger features need bigger code. You can look at the impact radius and see whether the patch is behind a feature flag to make a decision. In this case, it seems the change is opt-in and doesn't touch anything beyond its own self-contained code.
I don’t follow the Linux kernel development process closely, but on most projects—open source or proprietary—that I’ve been involved in, dropping a big diff without having proposed or discussed anything about it beforehand would have been fairly unwelcome, even if it’s something that would be infeasible to break down smaller. I’d also argue that even quite substantial changes can be made with incremental patches, if you choose to do so.
There had probably been a lot of discussion in the background
The idea that "every change can be a small patch, or can be split into them" is by now part of Linux folklore, IMHO. As in, a belief people have kinda heard about, but that nobody has concrete proof of.
Also, do you know what would make it much easier to review long changes? Not using email patches.
GitHub issues with "load 450 hidden comments" on it? No threads, no comparisons between multiple submissions of the same patch, changes from rebasing impossible to separate from everything else? Having to drop anyway to the command line to merge a subset that has been reviewed? Encouraging squash merging makes it easier to work with long changes?
Email is complex and bare bones, but GitHub/GitLab reviews are a dumpster fire for anything that isn't trivial.
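For readers who haven't seen it, the email-based workflow being defended here can be sketched roughly like this, in a throwaway repo. Everything below (file names, commit subject, list address) is made up for illustration, not taken from any real submission:

```shell
# Rough sketch of the kernel-style email patch workflow.
# All names here (file, subject, list address) are placeholders.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=Dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "init"
echo "fix" > fix.c
git add fix.c
git -c user.name=Dev -c user.email=dev@example.com \
    commit -q -m "subsys: fix the thing"
# Each commit becomes one plain-text patch file, plus a cover letter
# where the series is explained as a whole.
git format-patch -1 --cover-letter -o outgoing/
ls outgoing/
# Actually mailing it would then be something like:
#   git send-email --to=some-list@example.org outgoing/*.patch
```

Review then happens as inline replies to those emails, which is exactly what makes threading, version comparison, and offline tooling possible (and what the forge-based camp finds archaic).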
I've used Gerrit myself for nearly 10 years. I recognize its strengths and like it for what it does. I'm also comfortable with mailing lists, GitHub, and GitLab. I've lived in "both worlds".
IIUC, the Kernel project doesn't want to use it because of the single-point-of-failure argument and other federation issues. Once the "main" Gerrit instance is down, suddenly it's become a massive liability. This[1] is good context from the kernel.org administrator, Konstantin Ryabitsev:
"From my perspective, there are several camps clashing when it comes to the kernel development model. One is people who are (rightfully) pointing out that using the mailing lists was fine 20 years ago, but the world of software development has vastly moved on to forges.
"The other camp is people who (also rightfully) point out that kernel development has always been decentralized and we should resist all attempts to get ourselves into a position where Linux is dependent on any single Benevolent Entity (Github, Gitlab, LF, kernel.org, etc), because this would give that entity too much political or commercial control or, at the very least, introduce SPoFs. [...]"
SPoF is not the only concern. Gerrit search is painful. You can't search the internet for a keyword that you wrote in v10 of some Gerrit patch, unlike on list-based workflows. Also, there's too much point-and-click for my taste, but I lived with it, as that's the tool chosen by that community. It's on me to adapt if I want to participate. (There are some ncurses-based tools, such as "gertty"[1]; but it was not reliable for me—the database crashed when I last tried it five years ago.)
In contrast, I use Mutt for dealing with high-volume email-based projects with thousands of emails. It's blazingly fast, and an absolute delight to use—particularly when you're dealing with lists that run like a fire-hose. It gives a good productivity boost.
I'm not saying, "don't drag me out of my email cave". I'm totally happy to adapt; I did live in "both worlds" :-)
That's a great post, and also the presentation linked there is very interesting and hints at some of the issues (though I disagree with some of the prescriptions there)
Yes, the model is not scaling. Yes the current model is turning people away from contributing
I don't think Linux itself is risking extinction, but someday the last person in an IRC channel will turn the lights off, and I wonder whether they will then think about what they could have done better, and why people don't care about finicky build systems with liberal use of M4. Probably not.
One of the first habits I learned at BigCo was to have a sense of the DAG in any work over 400 lines, so I could make patches arbitrarily large or small. Because whichever I chose, someone would complain eventually
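A toy illustration of that habit, with invented file names: if each logically separable piece is its own commit, the same history can go out as a patch series or as one combined diff, whichever the reviewer asks for:

```shell
# Toy example: three commits forming a small dependency chain
# (all names invented). From the same history you can produce either
# a patch series or one squashed diff.
set -e
cd "$(mktemp -d)"
git init -q
g() { git -c user.name=Dev -c user.email=dev@example.com "$@"; }
g commit -q --allow-empty -m "base"
echo core   > core.c; g add core.c; g commit -q -m "add core"
echo helper > util.c; g add util.c; g commit -q -m "add helper (needs core)"
echo ui     > ui.c;   g add ui.c;   g commit -q -m "add ui (needs helper)"
# Reviewer wants small patches? One patch per commit:
git format-patch -3 -o series/
# Reviewer wants the whole thing at once? One combined diff:
git diff --stat HEAD~3 HEAD
```

Keeping the dependency order in the commits is the actual work; the mechanics of emitting big or small patches afterwards are cheap.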
Indeed, and that's exactly what the Rust for Linux guys are doing. They have Binder, multiple changes with various degrees of dependency on newer compilers, support for different kinds of drivers, and they are extracting them and sending them to Linux in small bits.
I'd get mildly angry if someone sent a giant patch without having discussed with me first. (Not working with any kernel things) But a quicker "no, don't have time, probably never will" response would have been nice I guess?
no feedback whatsoever for 2 months - just because it was 17k lines - and you are blindly defending it? if you don't know what you are posting & talking about, then maybe you shouldn't.
if someone is willing to put in the efforts to submit such a huge patch, how about just show them a little bit respect, I mean the minimum amount of respect, for such efforts and send them a one line reply asking them to break the patch into smaller ones.
surely that won't take more than 45 seconds. still too hard to get the point?
> if you don't know what you are posting & talking about, then maybe you shouldn't
I hate to appeal to authority, but I have been working with large cathedral-style open source projects (mostly three: GCC, QEMU, Linux itself) for about 20 years at this time and have been a maintainer for a Linux subsystem for 12. So I do know what I am posting and talking about.
The very fact that I had never learned about this subsystem from LWN, Linux Plumbers Conference, Linux Security Summit, etc. means that the problem is not technical, and is not with the maintainers not replying to it.
A previous submission (7000 lines) did have some replies from the maintainers. Apparently instead of simplifying the submission it ballooned into something 150% larger. At some point I am not sure what the maintainers could do except ignoring it.
> At some point I am not sure what the maintainers could do except ignoring it.
Ignoring people is a child's approach to communication. If there is a problem, it's your responsibility to communicate it. If you have communicated the issue and the other party refuses to hear it, that is a different issue, but it is your responsibility to communicate.
Surely if there were insufficient established context for a reviewer to easily follow a large patch the correct response would be a swift rejection with a request that such context be established prior to submitting again.
It sounds like there was willingness to meet any requirements, but submitters end up in a position of not knowing what the requirements are and if they have met them or not.
It's very easy to develop tunnel vision when working remotely with people you've not even ever met. It's happened to me multiple times and there's a very high chance that this is happening here.
The short of his rant is that he wants a "Code Of Standards for maintainers" in addition to a CoC, in order to witch hunt people he feels "gets in the way."
Yikes, this is EXACTLY what Linus was talking about except an order of magnitude worse!
No. He just wants people to have some decency and standards in communication and not arbitrarily decide that their little kingdom can never be touched.
This does not read as a rant at all to me. Rather it seeks to highlight a problem using an example from his own work and proposes a possible solution (with a previous caveat of “I don’t know how to fix this”).
> "Code Of Standards for maintainers" in addition to a CoC
He wants maintainers to behave in some pre-determined understandable fashion. He wants some accountability, and that seems reasonable to me. This is not a “maintainers must do what I want”, this is “let’s set basic expectations” and ensure people follow them. Whatever those should be.
> in order to witch hunt people he feels "gets in the way."
B does not follow from A. You are simply straw-manning here, so I have nothing to say to it other than that it’s a fallacious point.
His complaint is, in essence, that people will block technical proposals for nontechnical reasons, and that attempts to resolve this result in people complaining that you're talking about nontechnical things at all.
Few people like dealing with interpersonal conflict. They want to think that technical meritocracy wins the arguments.
But in that discussion, a maintainer said "I am going to try to sabotage this because I hate it.", and there's no technical argument to resolve that. And there's not really any route other than escalation to some kind of authority if there's not a common negotiation baseline.
"You can't reason someone out of a position they weren't reasoned into."
>in order to witch hunt people he feels "gets in the way."
When has this ever happened with the Linux CoC? The only people who I've seen banging pots and pans together about the CoC are folks who like to sealion [1]
There are easy and good answers, just none that every single one would agree with, and that is the problem.
The current management model is based on consensus in some parts, and dictatorship in others. It's not clear which parts are which, because of sprawl and non-responsiveness.
This is a core communication people issue. The problems arise because in consensus models, people are expected to adopt archetypical roles based somewhat on preference, but mostly on the needs of the group, towards productivity.
If no one else is doing that type of role, someone will be forced into doing it, and will adopt characteristics of that role, regardless of their personal opinions about it.
When they get punished for fulfilling the role, you lose people.
> doesn't mean we should throw our hands up and consider things "unfixable" either.
The given structure is unfixable, because it sets the stage for a trauma loop where actions can't take place without people who are expected to fulfill a role to meet consensus, but in doing so they get punished for volunteering.
There is no way around this structure so long as you use a consensus model in whole or in part. In dysfunctional groups, the people involved either fail to communicate or, worse, some members impose coercive costs by assuming the unproductive roles, so no action happens while resources are wasted.
This really is basic stuff covered in your college level intro to communications coursework.
For those wanting to know more about this, you can read more about the roles at the link below.
I think "Code of Civility" is what Dr. G was describing. Conduct presupposes civility, but acceptable online discourse has changed amongst new internet natives raised in the era of moral totalitarianism, aggression as a service, and popular outrage incubators.
Looks like Dr Greg (not Greg KH) is just piling his own personal grievance on top of the drama. Saying "Jim are you listening?" makes me wonder who he's sniping at.
Speaking as an actual People Manager, his ideas sound pretty shallow.
> Jim Zemlin, who is the executive director of the Linux Foundation and has nothing to do with Linux.
That sounds... not good. Seems like this is exactly what's wrong with... well, not just this, but much of the "FOSS" world. (Maybe about since "Free Software" became "Open Source"?)
This is a recurring pattern I see with drama in some open source communities, where people measure others using yardsticks they themselves don't live up to. People can't post stuff like this while accusing everyone in the vicinity except themselves of bad behavior. Choose one.
Hm, good thing I'm not a Linux maintainer. 'Coz that first C in my username... (My given name is the only thing about me that has anything to do with "Christ".)
Basically, there is concern that even with a declaration that Rust-for-Linux devs will maintain this (and potentially every other) cross-language API translation layer file, the demarcation point between C and Rust isn't sufficient, and is causing C developers lag or headaches by having PRs rejected because of Rust-side failures. I don't see how that can be fixed without wholesale buy-in to Rust of enough kernel devs that they can fix whatever Rust problems are created by C API changes. Or Linus will have to accept PRs that break Rust-enabled builds. The R4L devs, by themselves, don't have the bandwidth to keep up. Even if they can rapidly fix problems, that adds potential friction to every C API change.
Hellwig may not be communicating in a good way, but he might be right, unfortunately. Linux may need to stay as a C-only codebase until an AI language-translation tool is good enough to do things like maintain the Rust API translation layer itself. Or until basically all maintainers learn and accept Rust.
> Then I think we need a clear statement from Linus how he will be working. If he is build testing rust or not.
> Without that I don't think the Rust team should be saying "any changes on the C side rests entirely on the Rust side's shoulders".
> It is clearly not the process if Linus is build testing rust and rejecting PRs that fail to build.
The question at the heart of the matter is still unsettled. The answer to whether Rust being in a broken state is a blocker for working, valid C code will hopefully be addressed by the end of this development cycle. Either the patches are accepted and Rust is truly allowed to be broken, or the patches are rejected for breaking the Rust builds. If it is the latter, as many of the C developers fear, that is exactly the burden they have been stressing, very loudly, that they have no interest in taking on. And once other maintainers see this, what is the inevitable response to this metastasization? Comments like those from Ted and Christoph will pale in comparison. The only bright side might be that this finally accelerates the inevitable demise of this failed Rust experiment so that all parties can move on with their business.
Let's say that we both agree that Linus should be making clear statements here, and that lack of clarity is causing lots of problems.
That one bug happened one time does not mean that the promise is broken. To be clear, it's a bad thing, but acting like this means the entire working rules are a lie is overreacting.
> That one bug happened one time does not mean that the promise is broken.
It's not been once. Don't you understand that is why things have gotten to this point? Are you aware of how the developers have been using Coccinelle in their workflows and how many subsystems support it? And are you aware that the Coccinelle for Rust implementation is constantly in a dire state? Have some empathy for the folks who have had their workflows broken and burdens increased because of it.
> Let's say that we both agree that Linus should be making clear statements here, and that lack of clarity is causing lots of problems.
Clarity will be communicated by the result of this patch set.
> Will you show empathy to the R4L folks constantly having sand poured into their fuel tank by kernel maintainers?
But the thing is, the kernel isn't "the R4L folks"' "fuel tank", is it? Doesn't the very name you use, "kernel maintainers", tell you that that's their "fuel tank"? Seems the people pouring sand into that are "the R4L folks".
Kernel maintainers aren't doing anything to R4L folks except where their activities intersect with the upstream linux kernel. Your analogy isn't right.
R4L folks thought that preliminary R4L infrastructure being accepted upstream meant that all the changes they needed in the future would be accepted easily as well, and now that there are concerns from subsystem maintainers, a few R4L folks are playing the dramatic victim card.
From what I understand, market pressures are causing R4L folks to panic. If they can't get more R4L stuff upstream, people can't ship Rust drivers, and R4L development falls apart.
That's not kernel maintainers' problem, though. They have a job to do, and it has nothing to do with Rust kernel drivers except as far as they're persuaded Rust drivers help the linux community overall and will be maintainable without unacceptably disrupting their tool-assisted workflows. Several of them have concluded, so far, that R4L PRs will unacceptably disrupt existing workflows.
That's where R4L has failed. They've failed to persuade subsystem maintainers that some added headaches/retooling/efficiency-loss is worth it to enable Rust drivers. Which drivers, and who wants to use them? Perhaps the potential downstream users of those drivers can talk with kernel devs, persuade them it's really valuable, and figure out a plan?
> a few R4L folks are playing the dramatic victim card.
Not just 'a few R4L folks', these are basically the spearhead of the effort. And it hasn't been a single occurrence, we now have seen two of those Rust developers resign from the effort out of pure frustration from the entrenched, heels-in-the-sand attitude from kernel maintainers.
The same thing has happened in the past. I will point to TuxOnIce, which got stonewalled by a stubborn maintainer. It would have moved most suspend and hibernation code to userspace, where it would have been easier to maintain and iterate. Or right now we have two divergent RAM compression paths (Zram and Zswap). There are patches for Zram to use the unified Zpool API, but the Zram maintainer refuses to integrate these for absolutely no solid technical reason.
It seems that the R4L people are fine with putting in whatever effort is required, as long as they know it is not in vain. So far, comments akin to "language barriers in the kernel are a cancer" and "I will do everything to prevent/sabotage this" do not exactly build trust in that regard.
As an aside, would it really be so horrible for kernel maintainers to learn (some) Rust? This is not a dire ask for someone in the software world, your job will often ask this of you. And I can't imagine the maintainers haven't ever dabbled in other languages and/or aren't capable enough to learn a second language.
I understand the fear that saying "ok, we can do more languages than one" opens up the door to a third and fourth language down the road, but that seems unlikely. There's a reason Rust has passed Linus' sniff test where no other language has so far.
> Which drivers, and who wants to use them?
Short term it seems mostly GPU drivers.
Long, looong term (multiple decades?) the plan is probably to rewrite more important parts of the kernel, to significantly increase memory safety.
Well they are. Second class citizens like this just cause problems.
> As an aside, would it really be so horrible for kernel maintainers to learn (some) Rust?
Are the Rust folks planning to pay the existing kernel developers for spending that time? Because otherwise it should be up to those existing maintainers to decide and not something anyone else gets to demand.
Overall this just sounds like people crying that their hostile takeover of an existing project is being met with resistance.
> > a few R4L folks are playing the dramatic victim card.
> Not just 'a few R4L folks', these are basically the spearhead of the effort.
A "spearhead" is by definition a few people.
> As an aside, would it really be so horrible for kernel maintainers to learn (some) Rust?
If they want to contribute to the Linux kernel, would it really be so horrible for these Rust programmers to learn the language the kernel is written in?
> There's a reason Rust has passed Linus' sniff test where no other language has so far.
Doesn't seem it has, really.
> the plan is probably to rewrite more important parts of the kernel
Calling memory safety an 'ideal' is like calling back-ups an 'ideal'. You can cast it off as a "nice to have", until you get bitten in the ass by a lack of it.
To quote myself[1]: I don't get why those Rustafaris do this in the first place. Like, if you want to contribute to a piece of software written in C, then write your contributions in C. If, on the other hand, you want a Linux kernel in Rust... Then just fork it and rewrite it in Rust. It's not as if that were a new idea, is it? Heck, most days it feels like about half the top headlines here are “<Software X> rewritten in Rust!” So why don't they just do that instead of this constant drama???
I'm not sure this is fundamentally different from e.g. a complex filesystem implementation relying on a specific MM or VFS or whatever API, and a "C developer" wanting to change that API. Just because the callers are in C doesn't necessarily make the global change easy.
> If shaming on social media does not work, then tell me what does, because I'm out of ideas.
That's just manipulative. Maybe it's just a moment of frustration and they'd take it back eventually, but blackmailing people with social drama is not cool. That's what X and Reddit are for.
> Rust folks: Please don't waste your time and mental cycles on drama like this [...] they know they're going to be on the losing side of history sooner or later.
Threatening with social media seems to me like a professional suicide, especially when it's followed by a doubling down and a complete lack of anything resembling an apology.
I'm amazed that Hector is doing this fully in public using his real name.
It's definitely your environment; Rust isn't nearly as popular for actual projects getting done as it is in (for example) StackOverflow's survey. C, C++, Odin and Zig are all languages where whatever C knowledge you have right now can trivially be put to use to do things, immediately. I think it makes perfect sense that unless someone is just really into doing Rust things, they'll take the trivially applicable ways of working and use the aforementioned languages (which all have much better support for batch memory management and custom allocators than Rust, it should be added).
There are a lot of C programmers who do not like or are not fond of Zig or Odin. Many highly proficient C and C++ programmers consider them not worth their time. Not to mention that both Zig and Odin are in beta. It's an investment to become as proficient in another language.
An example is Muratori of Handmade Hero, who publicly stated he dislikes Zig and pretty much sees it as a confused mess[1]. He thinks Odin is kind of OK because it has metaprogramming (which other languages do too), but that it is way below Jai. Between Jai and Odin, he thought Jai was the one with potential that could get him to switch. Odin's slight popularity, as a kind of Jai-lite, is often attributed to Jai being in closed beta. But it appears that might change soon (Jai going public or opening up beta access), along with an upcoming book on Jai.
Programming languages like Go and V (Vlang) have more of a use case for new programmers and general-purpose work, because they are easier to learn and use. That was the point of their creation.
I would suggest linking to a timestamped point in the video instead of the entire thing. With regards to Odin having metaprogramming, this is either a misunderstanding on your part of what was said, or a misunderstanding from Casey Muratori with regards to Odin; Zig has a much stronger claim to having metaprogramming, with `comptime` being as powerful as it is, whereas Odin can only scarcely be said to have a limited "static if" in the `when` keyword, plus your average parametric polymorphism via template parameters.
C programmers (and C++ programmers) who don't like Zig or Odin can simply use C (or C++), it's not really the case that they have to switch to anything, much less something as overwrought as Rust. My point is that it is natural for someone who is at any proficiency level above 0 in C to want to apply that proficiency in something that allows them to make use of those skills, and C, C++, Odin and Zig all allow them to do that, pretty much immediately.
You contend that "It's an investment to become as proficient in the other language" but I would argue that this is a very wide spectrum where Rust falls somewhere in the distance as opposed to all of the languages I mentioned instead being far closer and more immediately familiar.
True that Odin is admittedly quite limited in that area. Muratori made the case that Jai was far beyond both Zig and Odin, thus recommending its use and what made it attractive for him. The other issue, is investing in languages that are in beta.
The counter, in regards to "any proficiency above 0", is that there are other alternative languages (that are easier to learn and use) where knowledge of C can be put to immediate use too. Getting up to speed in those languages would likely be much faster, if not more productive, for less proficient or experienced programmers.
> The other issue, is investing in languages that are in beta.
I work full time with Odin and I wouldn't put too much stress on this point as I've found that whatever issues we have with Odin are simply issues you'd have with C for the most part, i.e. C knowledge will help solve it. It's not a complex language, it doesn't need to do much and language-specific issues are few and far between because of it. Tooling is a bit of a mixed bag but it's about 80% solved as far as we're concerned.
Odin day-to-day for working on 3D engines (which I do) is not beta quality, whatever that means. I would rate it a better net fit for that purpose than I would C++, and I wrote the kind of C++ you would write for engine work for about 10-11 years (at least what we knew to be that kind of C++ at around 2001). For server software I'd rate it worse, but this is mostly because there are a few missing FFI wrappers for things that usually come up, it's not really because of the language per se.
As for Jai, I can't claim to know much of anything about it because I've never used it, and I wouldn't take many peoples' word for much of anything about the language. As far as I'm concerned Jai will exist when it's publicly available and until then it's basically irrelevant. This is not to disparage the effort, but a language is really only relevant to talk about once anyone with motivation and an idea can sit down, download/procure it and use it to build something, which is not the case with Jai.
My comment was addressing the idea that Odin is a "beta language" in comparison to C and my assertion was that the issues we have with using Odin are the same ones you'd have with C. So assuming that you have accepted C-style issues, then Odin really does not present many or common additional roadblocks despite being relatively young.
Also, I know this may shock a lot of people who don't regularly use sharp tools: The possibility (and even occurrence) of memory safety issues and memory corruption doesn't tank a project, or make it more costly than if you'd used a GC, or Rust. Memory bugs range from very easily avoided and solved to surprising and very, very hard to solve.
> The possibility (and even occurrence) of memory safety issues and memory corruption doesn't tank a project, or make it more costly than if you'd used a GC, or Rust. Memory bugs range from very easily avoided and solved to surprising and very, very hard to solve.
I never said this. The Linux kernel and any other project will thrive with or without Rust. But it is a certainty that the higher the ratio of C-to-Rust in the world (or well, any memory-safe systems language), the higher the amount of CVEs we will see. It would seem reasonable to me to assume that we want as few CVEs as possible.
To be honest, I also don't understand the dogmatic stance a lot of C developers have against Rust. Yes, it does cause some friction to learn a new language. But many of the correct idioms Rust enforces are things you should be doing in C anyway; it's just that the compiler is a lot more strict instead of just nagging you, or not nagging at all.
I meant the whole “we are on the right side of history” and if you’re not with us you’re against us and we’ll wage a public drama campaign against you.
When I was the focus of the Rust community, and trending #1 on HN, I simply deleted my repos and disappeared.
Some people in that community reflexively see a conspiracy or invoke the CoC or use whatever other non technical tools they find to derail the discussion.
It's not even that they're always wrong or that I directly oppose the culture they want to have, but the shotgun blast of drama that this comes with is just so much effort to navigate that I decided to just never contribute to Rust again.
The dirty laundry of the entire kernel community not immediately knuckling under and implementing a rather long list of technical changes in the preferred way of one specific individual? That's an unreasonable take. It would be disastrous for all of us if the project ever went that way, considering the importance of Linux today. It's not like I disagree with some of the points that Marcan raised, but others have equally valid concerns as well. However, he wants to get discussion and compromise out the window and go straight to social media drama. That's never okay.
And please, stop using the concept of ad hominems to selectively justify blatant abuse. That's not what the term is about.
.. or it could just be that 'the public' has little to add to kernel development other than social landmines that the kernel teams have to navigate in order to achieve technical work..
Marcan: https://lore.kernel.org/rust-for-linux/208e1fc3-cfc3-4a26-98...
Linus: https://lore.kernel.org/rust-for-linux/CAHk-=wi=ZmP2=TmHsFSU...
I must say, I think I'm on Linus' side on this one.
EDIT2:
I was a $$ supporter of Marcan for over a year on GitHub. I was predisposed to believe him, I'd say.