
I would assume some good faith on their part. Verification would be valuable, but so would the timely release of information. If the reports are true, active harm is being done to those organizations, and it would be valuable for the public to know sooner rather than later. If verifying the information is taking more time and resources than you have to do the job quickly, releasing it with attribution to a reputable source is the least harmful option.

> but so would the timely release of information. If the reports are true, active harm is being done to those organizations, and it would be valuable for the public to know sooner rather than later.

I do not believe that that is The Guardian’s goal with this reporting. If it were, wouldn’t it make more sense to list the organizations (provide actionable information) rather than spend time telling a story?

I also have a hard time judging the harm, or the size thereof, without more context about the organizations: what they do, and how much they depend on Facebook to be effective.

If I were an organization whose Facebook account had been suspended unfairly or unjustly, I would simply find a different way to stay in touch with others. Meta does not owe me anything.


I wouldn't be surprised to see Zig in the kernel at some point

I would be. Mostly because, while Zig is better than C, it doesn't really provide all that much benefit if you already have Rust.

I personally feel that Zig is a much better fit for the kernel. Its C interoperability is far better than Rust's, it has a lower barrier to entry for existing C devs, and it doesn't have the constraints that Rust does. All whilst still bringing a lot of the advantages.

...to the extent I could see it pushing Rust out of the kernel in the long run. Rust feels like a sledgehammer to me where the kernel is concerned.

Its problem right now is that it's not stable enough. Language changes still happen, so it's the wrong time to try.


From a safety perspective there isn't a huge benefit to choosing Zig over C with the caveat, as others have pointed out, that you need to enable more tooling in C to get to a comparable level. You should be using -Wall and -fsanitize=address among others in your debug builds.

You do get some creature comforts like slices (fat pointers) and defer (a goto-cleanup replacement). But you're also forced to write a lot of explicit conversions (I personally think this is a good thing).
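
A minimal sketch of what those comforts look like (assuming a roughly 0.12-era Zig; `sum` and the buffer are just for illustration):

    const std = @import("std");

    // A slice carries its length, so iteration is bounds-checked in safe builds.
    fn sum(values: []const i32) i64 {
        var total: i64 = 0;
        for (values) |v| total += v;
        return total;
    }

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit();
        const allocator = gpa.allocator();

        const buf = try allocator.alloc(i32, 4);
        defer allocator.free(buf); // runs on every exit path, replacing goto-style cleanup

        for (buf, 0..) |*slot, i| {
            slot.* = @intCast(i); // usize -> i32 has to be spelled out explicitly
        }
        std.debug.print("sum = {d}\n", .{sum(buf)});
    }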

The C interop is good, but the compiler is doing a lot of work under the hood for you to make it happen. And if you export Zig code to C... well, you're restricted by the ABI, so you end up writing C-in-Zig, at which point you may as well be writing C.
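
For concreteness, the import side is roughly this (a sketch; `@cImport` is where the compiler quietly translates the whole header for you):

    const std = @import("std");
    const c = @cImport(@cInclude("math.h")); // header translated at compile time

    pub fn main() void {
        // Calls into libm as if it were Zig; build with: zig build-exe demo.zig -lc
        const x: f64 = c.cos(0.0);
        std.debug.print("cos(0) = {d}\n", .{x});
    }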

It might be an easier fit than Rust in terms of ergonomics for C developers, no doubt there.

But I think long-term things like the borrow checker could still prove useful for kernel code. Currently you have to specify invariants like that in a separate language from C, if at all, and it's difficult to verify. Bringing that into a language whose compiler can check it for you is very powerful. I wouldn't discount it.


I’m not so sure. The big selling point for Rust is making memory management safe without significant overhead.

Zig, for all its ergonomic benefits, doesn’t make memory management safe like Rust does.

I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.

And it seems unlikely they’d go through all the effort of porting safer Rust code into less safe Zig code just for ergonomics.


> Zig, for all its ergonomic benefits, doesn’t make memory management safe like Rust does.

Not like Rust does, no, but that's the point. It brings both non-nullable pointers and bounded pointers (slices). Those solve a lot of problems by themselves. Tracking allocations is still a manual process, but with `defer` semantics there are far fewer footguns.
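
A sketch of the non-nullable-pointer point: a plain `*T` can never be null, nullability is opted into with `?*T`, and the compiler forces an unwrap before use:

    const std = @import("std");

    const Node = struct {
        next: ?*const Node = null, // nullability is opt-in via `?`
    };

    fn listLen(head: ?*const Node) usize {
        var n: usize = 0;
        var cur = head;
        while (cur) |node| { // `node` is a guaranteed-non-null pointer here
            n += 1;
            cur = node.next;
        }
        return n;
    }

    test "counting a two-element list" {
        const b = Node{};
        const a = Node{ .next = &b };
        try std.testing.expectEqual(@as(usize, 2), listLen(&a));
    }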

> I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.

The jump from 2 to 3 is smaller than the jump from 1 to 2, but I generally agree.


> I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.

That was where my argument was supposed to go. Especially a third language whose benefits over C are close enough to Rust's benefits over C.

I can picture an alternate universe where we'd have C and Zig in the kernel, then it would be really hard to argue for Rust inclusion.

(However, to be fair, the Linux kernel has more than C and Rust; depending on how you count, there are quite a few more languages used in various roles.)


IMHO Zig doesn't bring enough value of its own to be worth bearing the cost of another language in the kernel.

Rust is different because it both:

- significantly improves the security of the kernel by removing the nastiest class of security vulnerabilities,

- and reduces cognitive burden for contributors by letting them encode in the type system the invariants that must be upheld.

That doesn't mean Zig is a bad language for a particular project, just that it's not worth adding to an already massive project like the Linux kernel. (Especially a project that already has two languages, C and now Rust.)


Pardon my ignorance, but I find the claim "removing the nastiest class of security vulnerabilities" to be a bold one. Is there ZERO use of "unsafe" Rust in kernel code?

Aside from the minimal unsafe code being heavily audited, and being the only entry point for those vulnerabilities, Rust allows kernel rules to be expressed explicitly and structurally, where at best there was previously a code comment somewhere on how to use the API correctly. We know this because there were discussions about precisely how to implement Rust wrappers for certain APIs, since it was ambiguous how those APIs were intended to work.

So aside from being like 1-5% unsafe code vs 100% unsafe for C, it’s also more difficult to misuse existing abstractions than it previously was in the kernel (not to mention that in addition to memory safety you also get all sorts of thread-safety protections).

In essence it’s about an order of magnitude fewer defects of the kind that are particularly exploitable (based on research in other projects like Android)


Not zero, but Rust-based kernels (see Redox, Hubris, Asterinas, or blog_os) have demonstrated that you only need a small fraction of unsafe code to make a kernel (3-10%), and that code sits in the places least likely to produce a memory-related error in a C-based kernel in the first place: you're more likely to make a memory-related error while implementing an otherwise challenging algorithm that has nothing to do with memory management than when you're explicitly focused on the memory-management part.

So while there could definitely be an exploitable memory bug in the unsafe part of the kernel, expect those to be at least two orders of magnitude less frequent than with C (as anecdotal evidence, the Android team found memory defects to be 3 to 4 orders of magnitude lower in practice over the past few years).


It removes a class of security vulnerabilities, modulo any unsound unsafe (in the compiler, std/core, and added dependencies).

In practice you see several orders of magnitude fewer segfaults (as in Google's Android CVE data). You can compare the Deno and Bun issue trackers for segfaults to see it in action.

As mentioned a billion times, seatbelts don't prevent death, but they do reduce the likelihood of dying in a traffic accident. Unsafe isn't a magic bullet, but it's a decent caliber round.


“by removing the nastiest class of security vulnerabilities” and “reduce the likelihood” don’t seem to be in the same neighborhood.

If you are reducing the likelihood of something by 99%, you are basically eliminating it. Not fully, but it’s still a huge improvement.

It reminds me of this fun question:

What’s the difference between a million dollars and a billion dollars? A billion dollars.

A million dollars is a lot of money to most people, but it’s effectively nothing compared to a billion dollars.


Dividing their number by 1000[1] is technically the latter, but in practice it's pretty much the former.

[1]: this is the order of magnitude presented in the recent Android blog post: https://security.googleblog.com/2025/11/rust-in-android-move...

> Our historical data for C and C++ shows a density of closer to 1,000 memory safety vulnerabilities per MLOC. Our Rust code is currently tracking at a density orders of magnitude lower: a more than 1000x reduction.


In theory they are the same statement; in practice there is a 0.01% chance someone wrote unsound code.

"Unsafe" rust still upholds more guarantees than C code. The rust compiler still enforces the borrow checker (including aliasing rules) and type system.

You can absolutely write drivers with zero unsafe Rust. The bridge from Rust to C is where unsafe code lies.

And hardware access. You absolutely can't write a hardware driver without unsafe.

There are devices that do not have access to memory, and you can write a safe description of such a device's registers. The only thing that is inherently unsafe is building DMA descriptors.

Zig as a language is not worth it, but as a build system it's amazing. I wouldn't be surprised if Zig got in just because of its build system, which is much better than anything C ever had (you can cross-compile not only across OSes, but also across architectures and C stdlib versions, including musl). And with that comes the testing system and seamless interop with C, which makes it really easy to start writing some auxiliary code in Zig... and eventually it may just be accepted for any development.
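
As a sketch of what that looks like (the `std.Build` API shifts between releases; this is roughly its 0.12-era shape, with names like "hello" being placeholders):

    // build.zig: cross-compile for aarch64 Linux against musl, from any host.
    const std = @import("std");

    pub fn build(b: *std.Build) void {
        const exe = b.addExecutable(.{
            .name = "hello",
            .root_source_file = b.path("src/main.zig"),
            .target = b.resolveTargetQuery(.{
                .cpu_arch = .aarch64,
                .os_tag = .linux,
                .abi = .musl, // or .gnu for glibc
            }),
            .optimize = .ReleaseSafe,
        });
        b.installArtifact(exe);
    }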

I agree with you that it's much more interesting than the language, but I don't think it matters for a project like the kernel that already has its build system sorted out. (Especially since, no matter how nice and convenient Zig makes cross-compilation when you start a project from scratch, even in Rust thanks to cargo-zigbuild, it would require a lot of effort to migrate the Linux build system to Zig, only to realize it doesn't support all the needs of the kernel at the start.)


Zig at least claims some level of memory safety in their marketing. How real that is I don't know.

About as real as claiming that C/C++ is memory safe because of sanitizers IMHO.

I mean, Zig does have non-null pointers. It prevents some UB. Just not all.

Which you can achieve in C and C++ with static analysis rules, breaking compilation if pointers aren't checked for nullptr/NULL before use.

Zig would have been a nice proposition in the 20th century, alongside languages like Modula-2 and Object Pascal.


I'm unaware of any such marketing.

Zig does claim that it

> ... has a debug allocator that maintains memory safety in the face of use-after-free and double-free

which is probably true (in that it's not possible to violate memory safety on the debug allocator, although it's still a strong claim). But beyond that there isn't really any current marketing for Zig claiming safety, beyond a heading in an overview of "Performance and Safety: Choose Two".
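
A sketch of what that claim means in practice (Debug build; the exact diagnostics vary by version):

    const std = @import("std");

    test "the debug allocator tracks freed memory" {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit(); // also reports leaks at this point
        const allocator = gpa.allocator();

        const p = try allocator.create(u32);
        p.* = 42;
        allocator.destroy(p);
        // allocator.destroy(p); // uncommented, this is caught and reported as a
        //                       // double free, with stack traces for both frees
    }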


Runtime checks can only validate code paths taken, though. Also, C sanitizers are quite good as well nowadays.

That's a library feature (not intended for release builds), not a language feature.

It is intended for release builds. The ReleaseSafe target will keep the checks. ReleaseFast and ReleaseSmall will remove the checks, but those aren't the recommended release modes for general software. Only for when performance or size are critical.

DebugAllocator essentially becomes a no-op wrapper when you use those targets.
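
Concretely (a sketch): the same out-of-bounds index panics under `-O ReleaseSafe` but is undefined behavior under `-O ReleaseFast`:

    const std = @import("std");

    pub fn main() void {
        const buf = [_]u8{ 1, 2, 3 };
        var i: usize = 0;
        i += buf.len; // a runtime value one past the end
        // Debug / ReleaseSafe: panic: index out of bounds
        // ReleaseFast / ReleaseSmall: the check is compiled out (UB)
        std.debug.print("{d}\n", .{buf[i]});
    }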

I have heard different arguments, such as https://zackoverflow.dev/writing/unsafe-rust-vs-zig/ .

We will see how that goes. I love GrapheneOS, I've used it for years, but the details matter. An OEM partnership might promise a lot at the start, but a lot can change between now and delivery.

Worst case scenario we still have Google Pixels.

At this point I'm convinced that ad spending has nothing to do with sales, revenue, or any real business principle. It's about power and influence. Advertisers control the culture, the news, and public discourse by always paying more than any self-funded model would pay. They don't care how much it costs; the control over society is always worth more.

This is pretty cool; I am literally working on a very similar project. IMO most of the current agent sandboxes are not great: they're either insufficient for the threat model, too platform-dependent, or SaaS-only. A microVM, I think, is the correct answer.


Absolutely this. Models achieve very high scores on kind problems: ones that you can master with sufficient practice. Which is just remarkable, but the world is a wicked learning environment, and repetition is not enough.


I will never understand why people are so obsessed with this. You don't like it? Don't engage with it. If you can't tell the difference and it's entertainment, stop worrying about it.

If veracity matters, use authoritative sources. Nothing has really changed about the skills needed for media literacy.


> If veracity matters, use authoritative sources

So having a good heuristic for identifying a broad category of non-authoritative sources would be useful, then?


But you have to engage with it before you can find out whether you like it.

> If you can't tell the difference and it's entertainment, stop worrying about it.

At the end of the day it's a philosophical/existential choice. Not everyone would step into the awesome-life-simulator where you can't tell the difference. On similar grounds one might decide on principle to consume only human-made media, be a part of the dynamical system that is real human culture.


We have always been in the life simulator philosophically speaking. Everything is a construct, and the universe is mostly an existential horror. You're only chasing misery by trying to be the information vegan.


Radical solipsism, the ultimate philosophical cop-out...

I would argue the largest CDN provider in the world is a critical path.


I would guess that at the individual team level they probably still behave like any other tech shop. When the end of the year comes, the higher-ups still expect fancy features and accomplishments, and saying "well, we spent months writing a page of TLA+ code" is not going to look as "flashy" as another team delivering 20 new features. It would take someone from above to push and ask that other team, who delivered 20 features, where their TLA+ code verifying correctness is. But how many people in the middle-management chain would do that?


I 100% agree with the performance-improvement goals, but I think the security claims are overblown and overly cautious. I honestly don't understand the point of trying to implement the security boundary in the display manager. It solves one class of security issues while breaking a lot of accessibility and automation. The display manager just shouldn't be enforcing rigid per-process security controls; that's better done further down the stack. Or, at a minimum, security controls should respect user freedom enough to let a user access normally restricted features without the all-or-nothing elevation to root. There's a middle ground here where we don't break the world, and they get their shiny security policies.


I think the fact that people e.g. run the ydotool service as root is an example of this. It's like making a safe that is so hard to open that people just drill a hole in the bottom; you end up with something less secure than a safe that was easier to open.


Where in the stack should it be enforced that my cute desktop clock doesn't pull a Copilot and take a screenshot of the entire desktop every 15 seconds to send to a remote service?


In Xorg and the WM, using the XACE extension.

https://www.x.org/archive/X11R7.5/doc/security/XACE-Spec.htm...


A defense-in-depth approach, obviously. Run less, use vetted sources, and when you must run suspect software, execute it in a properly sandboxed context. Seriously, what's the point of securing against screenshots and keyloggers if a malicious process has full access to the user's home directory, audio stack, webcam, and network?

If you can't trust the process don't run it. If you have to run it, isolate all of it.

Wayland gives you neither the freedom to safely tailor your security policy, nor the security guarantees to warrant its inflexibility.


If your system is already running malware, why wouldn't the malware use a privilege escalation exploit (which are relatively numerous on Linux) to access your data, rather than some X11 flaw that depends on its code getting started by the user?


Because it's not an X11 "flaw" or exploit; it's just how X works. I also just don't buy the whole "well, other stuff has exploits too" mentality.

I mean, yeah, it does, maybe. So why bother creating a password for a service if their database is probably running Linux anyway and the RDBMS is probably compromised, and yadda yadda yadda? It's the kind of argument you can make for anything.

Also, no: privilege escalation exploits are not "numerous" on Linux. They're very difficult to pull off in practice. It's only really a problem on systems built on old kernels that refuse to update. But those will always be insecure, just like running Windows 7 will be insecure.


A quick search for "linux local privilege escalation" in the CVE database (https://www.cve.org/CVERecord/SearchResults?query=linux+loca...) shows 25 results just for this year, so clearly these are very common.

So basically we have two issues here:

1. either focus on security even though these changes don't really improve the threat model

2. or allow disabled users and anyone who uses accessibility features to use GUIs


Neither Copilot nor Recall does or did this.


The fact that desktop Linux is all-or-nothing in terms of privilege escalation is a design issue; however, arguably Wayland gives us the tools to be more granular. Android has a permission system that makes sense, and its display manager is definitely closer in design to Wayland than it is to X11.


Android is the antithesis of an open computing platform and if anything the Linux desktop should use it as an example of what not to do.


Android's sandboxing is doing work for the benefit of the user. This is similar to how your web browser sandboxes JavaScript. Not every app needs access to my location, and providing it access shouldn't require root. The Linux ecosystem understands this, and it's why there is a large push for sandboxing models in software such as Flatpak. Even if you disagree with Android at some level, it's hard to argue that users don't benefit from being able to control what the software they run is capable of doing. Otherwise we wouldn't have filesystem permissions to begin with, in the name of "freedom".


But then it's a question of how trustworthy an app is. Wouldn't it be better for software installed from your own distro repository to be fully trusted and require few or no security popups? After all, it is vetted to a much, much higher standard than anything in an app store. Meanwhile, Flatpak apps and any random binary you've downloaded get the full security isolation, because you can't trust third-party devs.


That's not a scalable solution, as not every piece of software can pay the packaging cost for every Linux distro. Maybe it's fine for core system software, but it's too much to expect that model to work for all software. Imagine if every website you interacted with needed to ship updates by packaging them and getting them vetted.

I think you still need a centralized distribution model even for things like flatpak to ensure some level of centralized auditing and revocation for software that has access to sensitive capabilities. However this doesn't necessarily need to be as large of a barrier for shipping updates as trying to package your software for a distro (and playing the game of trying to get your shared library versions aligned).


> that's better done further down the stack

If you do it further down the stack, you break accessibility and automation even more... this has been tried. Doesn't work.

The end goal is to have actually working Android-like sandboxing rather than some broken firejail crap.


So we don't get the security benefits or accessibility. I'm not sure what is being solved. I'm all for a modern display system, I'm just not convinced the security claims are in anyway justified.


How is preventing apps from spying on each other through the display manager not justified? That's the lowest hanging fruit for desktop sandboxing.

