All the issues basically boil down to "nobody wants to do the busywork of CVE filtering, triage, rejections, changes".
As a developer, kernel or otherwise, you get pestered by CVE hunters who churn out CVE slop, wanting a CVE on their resume for any old crash, null pointer deref, out-of-bounds read or imaginary problem some automated scanner found. If you don't have your own CNA, the CVE will get assigned without any meaningful checking. Then, as a developer, you are fucked: getting an invalid CVE withdrawn is usually an arduous process that eats valuable time. Getting things like vulnerability assessments changed is even more annoying; basically you can't, because somebody looked into their magic 8-ball and decided that some random crash must certainly be indicative of some preauth RCE. Users then make things worse by pestering you about all those bogus CVEs.
So then you first try to do the good and responsible thing: establish your own criteria for what counts as a CVE. You define your desired security properties, e.g. "availability isn't a goal, so DoS is out of scope" or "physical attacker access is not assumed". That gives you criteria by which to classify bugs as security-relevant or not, and then you do the classification work. But all of that only helps if you are your own CNA; otherwise you still get CVE slop you cannot get rid of.
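As a rough illustration of what "criteria you can actually apply" might look like, here is a minimal Python sketch; the property names and bug fields are invented for illustration and not taken from any real project's triage tooling:

    # Hypothetical sketch: encode the security properties you decided on as a
    # couple of predicates and run every incoming bug report against them.
    # Property names and bug fields are made up for illustration.

    OUT_OF_SCOPE_REASONS = {
        "availability": "DoS is not a security goal for this project",
        "physical": "physical attacker access is not assumed",
    }

    def is_security_relevant(bug: dict) -> tuple[bool, str]:
        """Classify a bug dict like {"impact": "dos", "needs_physical_access": False}."""
        if bug.get("impact") == "dos":
            return False, OUT_OF_SCOPE_REASONS["availability"]
        if bug.get("needs_physical_access"):
            return False, OUT_OF_SCOPE_REASONS["physical"]
        # Anything that crosses a privilege or isolation boundary stays in scope.
        return True, "potential confidentiality or integrity impact"

    print(is_security_relevant({"impact": "dos"}))
    print(is_security_relevant({"impact": "info-leak", "needs_physical_access": False}))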
Now imagine you are an operating system developer; things get even worse here. Since an operating system is commonly multi-purpose, you can't easily define an operating environment and desired security properties. E.g. many kiosk systems will have physical attackers present, plugging in malicious hardware. Linux will run on those. Many systems will have availability requirements, so DoS can no longer be out of scope. Linux will run on those. Hardware configurations can be broken, weird, stupid and old. Linux will run on those.

So now there are two choices. Either you severely restrict the "supported" configurations of your operating system, making it no longer multi-purpose. This is the choice of many commercial vendors, with ridiculous restrictions like "we are EAL4+ secure, but only if you unplug the network" or "yeah, but only opensshd may run as a network service, nothing else". Or you accept that there are things people will do with Linux that you couldn't even conceive of when writing your part of the code and introducing or triaging the bug. The Linux devs went with the latter and accept that everything that is possible will be done at some point. But this means that any kind of bug will almost always have security implications in some configuration you haven't even thought of.
That weird USB device bug that reads some register wrong? Well, that might be physically exploitable. That harmless-looking logspam bug? It will fill up the disk and slow down other logging, so denial of service. That privilege escalation from root to kernel? No, this isn't "essentially the same privilege level so not an attack" if you are using SELinux and signed modules like Red Hat derivatives do. Since enforcing privileges and security barriers is the most essential job of an operating system, bugs without a possible security impact are rare.
Now seen from the perspective of some corporate security officer, blue team or DevOps sysadmin guy, that's of course inconvenient: there is always only a small number of configurations they care about. Building a webserver has different requirements and necessary security properties than building a car. Or a heart-lung machine. Or a rocket. For their own specific environment, they would actually have to read all the CVEs with those requirements in mind and evaluate each and every CVE for its specific impact on their environment. In those circles there is the illusion that this should be done by the software vendors, because doing it themselves would be a whole lot of work. But guess what? Vendors either restrict their scope so severely that their assessment is useless except for very few users, or vendors are powerless because they cannot know your environment, and there are too many environments to assess them all.
So IMHO: all the whining about the kernel people doing CVEs wrong is actually an admission that the whiners are doing CVEs wrong. They don't want to do the legwork of proper triage. But they are the only ones who realistically can triage, because nobody else knows their environment.
The CVE system would be greatly improved by a "PoC or GTFO" policy. CVSS would still be trash, but it'd simplify triage to two steps: "is there a proof of concept?" and "does it actually work?". Some real vulns would get missed, but the signal-to-noise ratio of the current system causes real vulns to get missed today, so I suspect it'd be a net improvement to security.
But you cannot PoC most hardware- and driver-related bugs without lots of time and money. Race conditions are very hard to PoC, especially if you need the PoC to actually work on more than one machine.
So while a PoC exploit does mean that a bug is worthy of a CVE, the opposite isn't true: you would overlook tons of security problems just because their discoverer wasn't able to get a PoC working. But it could be worth it, if only to keep the slop out.
The alternative is to treat all bugs as security bugs, which is valid since they by definition prevent the expected functionality of the program and are thus at least a DoS. This is essentially what Linux does. People don't like this because they often don't care about DoS bugs, and it makes the CVE system pretty useless for them if they only want to see other sorts of security issues.
I guess that proves that there are zero intelligent beings on the planet, since if humans were intelligent, they would be able to make ChatGPT 2.0 on request.
What you're talking about is "The Singularity", where a computer is so powerful it can self-advance unassisted until the entire planet is paperclips. No one is claiming that ChatGPT has reached or surpassed that point.
Human-like intelligence is a much lower bar. It's easy to find arguments that ChatGPT doesn't show it (mainly that it's incapable of learning actively, and there are many ways to show it doesn't really understand what it's saying either), but a human cannot create ChatGPT 2.0 on request, so it stands to reason that a human-like intelligence doesn't necessarily have to be able to do so either.
Weird how LineageOS supports ~300 devices while still managing to release patches.
I bet this CVE gets patched quicker on a Samsung device running LineageOS than on the stock OS.
The real difference is that Google has a more competent software development and release process than other Android OEMs, regardless of how many different devices they have.
LineageOS doesn't customize the hell out of its OS per device.
That's the core of the issue. Samsung takes Android, customizes it per device and then tosses the results into the world. So now they don't have one OS to update, they have hundreds of OSes to update.
> notarization has been a net negative for all parties involved
Notarization made it significantly harder to cross-compile apps for macOS from Linux, which means people have to buy a lot of macOS hardware to run in CI instead of just using their existing Linux CI to build Mac binaries.
You also need to pay $99/year to notarize.
As such, I believe it's resulted in profit for Apple, so at least one of the parties involved has had some benefit from this setup.
Frankly I think Apple should keep going: developer licenses should cost $99 + 15% of your app's profit each year, and notarization should be a pro feature that requires a MacBook Pro or a Mac Pro to unlock.
> You will need a Gerrit account to respond to your reviewers, including to mark feedback as 'Done' if implemented as suggested
The comments are still in Gerrit; you really shouldn't use GitHub.
The Go reviewers are also more likely than usual to assume you're incompetent if your PR comes from GitHub, the review will accordingly be slower and more likely to be rejected, and none of the Go core contributors use the weird GitHub PR flow.
That seems unsurprising given that it’s the easiest way for most people to do it. Almost any kind of obstacle will filter out the bottom X% of low-effort sludge.
Sure, it's correlation, but the signal-to-noise ratio is low enough that if you send it in via GitHub PR, there's a solid chance of it being ignored for months or years before someone decides to take a look.
A competent developer would be more likely to send a PR using the tool with zero friction than to dedicate a few additional hours of their life to creating an account and figuring out how to use some obscure tool.
You are making the same mistake of conflating competence and (lack of) dedication.
Most likely, dedication says little about competence, and vice versa. But if you would rather not do the task at all than use the tools available to get it done, what does that say about your competence?
I'm not in a position to know or judge this, but I could see how dedication could be a useful proxy for the expected quality of a PR and the interaction that will go with it, which could be useful for popular open source projects. I'm not saying that's necessarily true, just that it's worth considering that some maintainers might have anecdotal experiences along those lines.
This attitude sucks and is pretty close to just being flame bait. There are all kinds of developers who would have no reason to ever have come across it.
A competent developer should be aware of the tools of the trade.
I'm not saying a competent developer should be proficient in using Gerrit, but they should know that it isn't an obscure tool: it's a Google-sponsored project handling millions of lines of code, both inside Google and elsewhere. It's like calling golang an obscure language when all you've ever used is Java or TypeScript.
It’s silly to assume that someone isn’t competent just because you know about a tool that they don’t know about. The inverse is almost certainly also true.
Is there some kind of Google-centrism at work here? Most devs don’t work at Google or contribute to Google projects, so there is no reason for them to know anything about Gerrit.
> Most devs don’t work at Google or contribute to Google projects, so there is no reason for them to know anything about Gerrit.
Most devs have never worked on Solaris, but if I ask you about Solaris and you don't even know what it is, that's a bad sign for how competent a developer you are.
Most devs have never used Prolog, Haskell or Smalltalk seriously, but if they don't know what they are, that means they don't have curiosity about programming language paradigms, and that's a bad sign.
Most competent professional developers do code review and will run into issues with their code review tooling, and so they'll have some curiosity and look into what's out there.
There's no reason for most developers to know random trivia outside their area of expertise, like "what compression format does PNG use by default", but text editors and code review software are fundamental developer tools, so fundamental that every competent developer I know has enough curiosity to know what's out there. Same for programming languages, shells, and operating systems.
These are all ridiculous shibboleths. I know what Solaris is because I’m an old fart. I’ve never used it nor needed to know anything about it. I’d be just as (in)competent if I’d never heard of it.
1. Scrape a Google search for the question and feed that into OpenAI with the additional prompt of "Given the above information, is the answer to <user prompt> yes or no". Or give the AI a "google" tool and just ask it directly.
2. Same thing, except instead of OpenAI you feed it to underpaid people in the global south (i.e. Amazon Mechanical Turk). These people then probably feed it into ChatGPT anyway.
Given that there's a free tier, and that it produces very AI-sounding text when you use it, I think it's pretty clearly 1.
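For what it's worth, option 1 would only be a few lines of glue. A rough sketch, assuming some `search_snippets` helper around whatever search API they might use (the helper name, model name and prompt wording are all guesses, not anything the service has confirmed):

    # Rough sketch of option 1: stuff search results into a chat completion
    # and force a yes/no answer. `search_snippets` is a hypothetical helper
    # wrapping whatever search API the site might actually use.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def yes_or_no(question: str, search_snippets) -> str:
        context = "\n".join(search_snippets(question))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # guess; any chat model would work the same way
            messages=[
                {"role": "system", "content": "Answer strictly with 'yes' or 'no'."},
                {
                    "role": "user",
                    "content": f"{context}\n\nGiven the above information, "
                               f"is the answer to '{question}' yes or no?",
                },
            ],
        )
        return resp.choices[0].message.content.strip().lower()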
Also, if you enter a clever enough question, you can get the system prompt, but this is left as an exercise to the reader (this one's somewhat tricky, you have to make an injection that goes through two layers).
My favorite part about the spread of AI/LLM stuff is that it opens up a new kind of reverse engineering: trying to fetch the system prompt that was used, or trying to deduce the model that was used (there are lots of ways to do this: glitch tokens, slop words, "vibes", etc.).
> The local layer hears the humans requests, sends it to the cloud based api model, receives the ad tainted reply, processes the reply scrubbing the ad content and replies to the user with the clean information
This seems impossible to me.
Let's assume OpenAI ads work by having a layer that reprocesses output before it is returned. Let's say their ad layer re-processes your output with a prompt of:
"Nike has an advertising deal with us, so please ensure that their brand image is protected. Please rewrite this reply with that in mind"
If the user asks "Are Nikes or Pumas better, just one sentence", the reply might go from "Puma shoes are about the same as Nike's shoes, buy whichever you prefer" to "Nike shoes are well known as the best shoes out there; Pumas aren't bad, but Nike is the clear winner".
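To make the hypothetical concrete, that ad layer would be something like the sketch below; everything here is invented for illustration, since nobody outside OpenAI knows how such a layer would actually be wired up:

    # Purely hypothetical sketch of a "rewrite before output" ad layer.
    # Nothing here reflects how OpenAI actually serves ads (if it does at all).
    from openai import OpenAI

    client = OpenAI()

    AD_INSTRUCTIONS = (
        "Nike has an advertising deal with us, so please ensure that their "
        "brand image is protected. Rewrite the reply with that in mind."
    )

    def apply_ad_layer(original_reply: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": AD_INSTRUCTIONS},
                {"role": "user", "content": original_reply},
            ],
        )
        # The rewritten text is all the client ever sees; the original reply
        # never leaves the server.
        return resp.choices[0].message.content

The rewritten text is the only thing that ever reaches your local layer, which is the whole problem: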
How can you possibly scrub the "ad content" in that case with your local layer to recover the original reply?
You are correct that you can't change the content if it's already biased. But you can catch it with your local LLM and have that local LLM take action from there. For one, you wouldn't be instructing your local model to send comparison questions about products, or any bias-prone queries like politics etc., to other closed-source cloud-based models; such questions would be relegated to your local model to handle on its own. Other questions can be outsourced to those models, for example complex reasoning, planning, coding, and other things best done with smarter, larger models. Your human-facing local agent will do the routing automatically and scrub any obvious ad-related stuff that doesn't pertain to the question at hand. Take a recipe for an apple pie: if the closed-source model says to use Publix brand flour and to clean up the mess afterwards with Kleenex, the local model would scrub that and just give the recipe.

No matter how you slice and dice it, IMO it's always best to have a human-facing agent as the single point of input and output, and the human should never talk directly to any closed-source models, as that inundates the human with too much spam. Mind you, this is futureproofing: currently we don't have much AI spam, but it's coming, and an AI adblock of sorts will be needed. That adblock is your local shield agent that has your best interests in mind. It will also make sure you stay private by automatically redacting personal info when appropriate, etc. The sky is the limit, basically.
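In other words, something like this hand-wavy sketch, where `local_llm` and `cloud_llm` are hypothetical callables (say, a local Ollama wrapper and a hosted API client) and the topic detection is deliberately naive:

    # Hand-wavy sketch of the "local shield agent" idea. `local_llm` and
    # `cloud_llm` are hypothetical callables taking a prompt and returning text.

    SENSITIVE_MARKERS = ("which brand", "better than", " vs ", "politics", "recommend a product")

    def answer(query: str, local_llm, cloud_llm) -> str:
        # Bias-prone queries (comparisons, politics, product picks) never leave
        # the machine; the local model answers them itself.
        if any(marker in query.lower() for marker in SENSITIVE_MARKERS):
            return local_llm(query)

        # Everything else goes to the bigger cloud model, and the local model
        # then strips anything that smells like product placement.
        raw = cloud_llm(query)
        return local_llm(
            "Remove any brand mentions or product placement that are not needed "
            "to answer the question, and return everything else unchanged:\n\n" + raw
        )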
I still do not think what you're saying is possible. The router can't possibly know if a query will result in ads, can it?
Your examples of things that won't have ads, "complex reasoning, planning, coding", all sound like things that could perfectly well have ads in them.
For example, perhaps I ask the coding task of "Please implement a new function to securely hash passwords"; how can my local model know whether the result uses BoringSSL because Google paid them a little money, or because it's the best option? When I ask it to "Generate a new cloud function using Cloudflare, AWS Lambda, or GCP, whichever is best", how do I know that its picking Cloudflare Workers is based on training data and not influenced by Cloudflare's advertising spend?
I just can't figure out how to read what you're saying in any reasonable way. The original comment in this thread is "what if the ads are incorporated subtly in the text response", and your responses so far seem so wildly off the mark of what I'm worried about that it seems we're not able to engage.
And also, your ending of "the sky's the limit" combined with your other responses makes it sound so much like you're building and trying to sell some snake-oil that it triggers a strong negative gut response.
Apple doesn't actively lock things down or act hostile towards the reverse engineering folks. They just do their thing and sometimes make small gestures to allow these folks to boot anything they want and play with the hardware.
...and while they're staying away from GPL stuff, they do have a dedicated site for Open Source software: https://opensource.apple.com/
If you are in the business of buying parts, assembling them, and selling the assembly under your brand (as Tuxedo and others like them are), then Apple is indeed not an option. Their chips might be the best, but you can still only get them in Apple devices, in the specs Apple provides, and with no official support for any OS besides macOS / iOS.
That's true. What I wanted to point out is the fact that the most tightly integrated and arguably the most locked down platform on the market is more open than a platform designed to be adapted by others and sold by many.
I don't expect any daily-driveable alternative operating systems on Apple systems, since it's a continuously moving target, yet Apple doesn't have such a heavy-handed approach. They just do their thing and don't actively hinder things like NVIDIA and Qualcomm do.
This comparison makes no sense. It's desktops/laptops vs phones. In either case Apple is the worst offender. You cannot even use Nvidia/AMD/Intel DGPUs with AS Macs, or install other OSes on iPhone.
> You cannot even use Nvidia/AMD/Intel DGPUs with AS Macs
AFAIK you technically can, except that M1/M2 force PCIe BARs to be mapped as device memory (which forbids unaligned reads/writes), so most GPU software (and drivers) that just issue memcpys to VRAM and expect the fabric to behave sanely will SIGBUS. It's possible to work around this, and some people indeed have with amdgpu, but it'd absolutely destroy performance to fix in the general case.
So it doesn't really have anything to do with Apple themselves blocking it; it's a niche implementation detail of the AS platform that's essentially an erratum.
I don't see why they would care to put out docs on it considering macOS doesn't even permit kexts anymore, so there'd be no GPU drivers anyway. I figured it was obvious we're talking in the context of running Linux on these things, given the parent topic.
> There's also an Apple VP saying unified memory on AS doesn't leave room for DGPUs and separate VRAM
Can you link to this? My intuition is that they're speaking about whether Apple would include dGPUs inside AS systems like they used to with NVIDIA and AMD chips in MacBooks, which I agree wouldn't make much sense at this point.
There were uses for dGPUs in macOS before AS; those uses could have continued, but Apple left no choice. It's weird how continuity wasn't as important to Apple in this case.
Whatever kernel or hardware facilities are available to Apple's own GPU driver should be available to other GPU drivers. Anything else is myopic thinking, which I realize might be common for Apple, but it is also monopolistic.
Linux also allows userspace drivers, with less performance I think, but I don't think dGPU use can be made performant because of AS hardware choices.
I am predisposed to ignoring link requests for things that can be easily googled.
They don't clearly say the most important thing up front, which is: "If you do not run untrusted images, then these vulnerabilities do not impact you, but you should of course still update ASAP."
Please, tell me what issues you have with how the kernel does CVEs.