
TFA mentions this option and then goes on at some length to explain that this doesn't help for transitive dependencies, which is how these attacks usually work.

AV1 has been around for a decade (well, it was released 7 years ago but the Alliance for Open Media was formed a decade ago).

It's fine that you haven't heard of it before (you're one of today's lucky 10,000!) but it really isn't that niche. YouTube and Netflix (from TFA) also started switching to AV1 several years ago, so I would expect it to have similar name recognition to VP9 or WebM at this point. My only interaction with video codecs is having to futz around with ffmpeg to get stuff to play on my TV, and I heard about AV1 a year or two before it was published.


So the solution is to stop doing code reviews and just YOLO-merge everything? After all, everything is fucked already, how much worse could it get?

For the record, there are examples where human code review and design guidelines can lead to very low-bug code. NASA published their internal guidelines for producing safety-critical code[1]. The problem is that the development cost of software when using such processes is too high for most companies, and most companies don't actually produce safety-critical software.
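To give a flavour of what those guidelines look like in practice, here's a minimal sketch of my own (not taken from any actual NASA code) in the spirit of a few of the Power of 10 rules -- a fixed upper bound on every loop, assertions on the interface contract, and rejecting invalid parameters instead of ignoring them:

    #include <assert.h>
    #include <stddef.h>

    /* Rule 2: every loop gets a statically known upper bound. */
    #define MAX_SAMPLES 64

    static int sum_samples(const int *samples, size_t len, long *out)
    {
        /* Rule 5: assert that the caller honoured the interface contract. */
        assert(samples != NULL);
        assert(out != NULL);

        /* Rule 7: invalid parameters are rejected, not silently ignored. */
        if (len > MAX_SAMPLES)
            return -1;

        long total = 0;
        for (size_t i = 0; i < len; i++)
            total += samples[i];

        *out = total;
        return 0;
    }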

My experience with the vast majority of LLM code submitted to projects I maintain is that it has subtle bugs that I managed to find through fairly cursory human review. The copilot code review feature on GitHub also tends to miss actual bugs and report nonexistent bugs, making it worse than useless. So in my view, the death of the benefits of human code review has been wildly exaggerated.

[1]: https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...


No, that's not what I wrote, and it's not the correct conclusion. What I wrote (and what you, in fact, also wrote) is that in reality we generally do not actually need provably correct software except in rare cases (e.g., safety-critical applications). Suggesting that human review cannot be reduced or phased out at all until we can automatically prove correctness is wrong, because fully 100% correct and bug-free software is not needed for the vast majority of code being produced. That does not mean we immediately throw out all human review, but the bar for changing how we review code is certainly much lower than the above poster suggested.

I don't really buy your premise. What you're suggesting is that all code has bugs, and those bugs have equal severity and distribution regardless of any forethought or rigor put into the code.

You're right, human review and thorough design are a poor approximation of proving assumptions about your code. Yes, bugs still exist. No, you won't be able to prove the correctness of your code.

However, I can pretty confidently assume that malloc will work when I call it. I can pretty confidently assume that my thoroughly tested linked list will work when I call it. I can pretty confidently assume that following RAII will avoid most memory leaks.

Not all software needs meticulous careful human review. But I believe that the compounding cost of abstractions being lost and invariants being given up can be massive. I don't see any other way to attempt to maintain those other than human review or proven correctness.


I did suggest all code has bugs (up to some limit -- while I wasn't careful to specify this, as discussed above, there does exist an extraordinary level of caution and review that, if used, can approximate perfect bug-free code, as in your malloc example and in the example of NASA, but that standard is not currently applied to 99.9% of human-generated and human-reviewed code, and it doesn't need to be). I did not suggest anything else you said I suggested, so I'm not sure why you made those parts up.

"Not all software needs meticulous careful human review" is exactly the point. The question of exactly what software needs that kind of review is one whose answer I expect to change over the next 5-10 years. We are already at the point where it's so easy to produce small but highly non-trivial one-off applications that one needn't examine the code at all -- I completely agree with the above poster that we're rapidly discovering new examples of software development where output-verification is all you need, just like right now you don't hand-inspect the machine code generated by your compiler. The question is how far that will be able to go, and I don't think anybody really knows right now, except that we are not yet at the threshold. You keep bringing up examples where the stakes are "existential", but you're underestimating how much software development does not have anything close to existential stakes.


Not only that, almost every software license (FOSS or proprietary) has a similar clause (often in all-caps).

It's funny because I remember the first time I tried Linux (Debian) nearly 25 years ago, that "NO WARRANTY, EVEN FITNESS FOR A PARTICULAR PURPOSE" warning in all caps really did leave a bad taste in my mouth, enough to deter me from using it much more until a couple of years later. Nowadays such things are just line noise to me, but back then I remember being lightly concerned.

There are quite a few open source or royalty free Japanese fonts (Google Fonts has 50[1]).

But, as everyone else has mentioned, font usage in games (and most creative visual works) is more particular than just the bare minimum of "does it actually render the glyphs". Imagine if all text in your favourite game was Times New Roman; it would make the game worse.

[1]: https://fonts.google.com/?lang=ja_Jpan


I grew up in russia and every localized game always had godawful ugly thin fonts. I would always watch foreign gameplay videos with jealousy cause they had beautiful latin fonts.

Now it's been years since I played a game in russian (almost a decade at this point I think) and I am so glad I don't have to put up with that anymore. Once in a while I see a screenshot from a cyrillic-using language translated game and probably half of the time the fonts are still bad.


It's the same for some non-Japanese games that don't take the time to think it through; the worst-case scenario is when they use Chinese fonts.

Japanese and Chinese characters are slightly different, but Unicode decided to unify them under the same code points....

https://en.wikipedia.org/wiki/CJK_Unified_Ideographs


The same happens in reverse sometimes, e.g. the super skinny font in final fantasy pixel remasters:

https://images.rpgsite.net/image/da49c9a1/102696/original/FF...

Ironically, they had a better Latin font _in the Japanese language version_ for all the genre loan abbreviations like MP/HP/LV, etc. (https://terimaland.com/Memory/Steam_FinalFantasyPixelRemaste...), so that image is comparing the modded-in Japanese Latin font vs the font the game includes by default.

(They also have a retro blocky "pixellated" font option iirc, which doesn't have the super narrow widths)


They look like half-width characters. This is a historical issue, not a font style issue. You can check:

https://mailmate.jp/blog/half-width-full-width-hankaku-zenka...


Standard Latin characters are half width. Full-width Latin characters do exist, but that’s not the difference we’re seeing here; if anything, that would make the Latin font in the English release “quarter width”, except that’s not a real thing that exists. It’s just an ultra-condensed font, and the fix is replacing the font files with more standard fonts, not some kind of special font that treats half width as full width.

Realistically it’s only katakana that you can make this mixup on. My desktop IME will let me type カタカナ or (reluctantly) ｶﾀｶﾅ, though it turns out iOS doesn’t have a way to type the half width kana, and IMEs have differing opinions on whether they prefer full width digits, so you might see full width numbers like ５０００。


I remember that PES6 was the only PES to get a Polish translation. The start screen wouldn't render correctly because the font was missing the glyph Ś.

It's because whatever font-stack-to-DirectX-renderer middleware the game engines were using was usually only developed for the Latin script, and probably the lowest-common-denominator version with no diacritics either.

Alternately, imagine if all text in your favorite game was IP owned by some random third party that could ruin everything one or two years down the line.

Perhaps it is time for more people to invest in royalty-free IP? We are seeing a bit of a tragedy-of-the-commons type of situation going down right now.


Sounds like you should be able to get a long way if you put down, say, 2 years of the new license fee and then never have to pay again.

I guess the trouble is that game companies can't really band together and pool money to make a good font, since the point is to look unique, so this only works for the largest studios. Otoh, at least for now, the smaller ones might stay below the 25k user limit.


Most typefaces would cover a basic sans (Helvetica/Liberation Sans) and a serif font, which is enough for the game to be playable.

Other LSMs are slowly switching to syscalls too, and while I in principle like (and have abused) the whole "everything is a file" principle, most security mechanisms really should be done via special-purpose syscalls. Way too many footguns with filesystem-based APIs. Also, you wouldn't be able to use Landlock to restrict filesystem access based on dirfds with a filesystem-based API.
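As a rough illustration of why the dedicated syscalls matter here, a minimal sketch of the Landlock flow (the /usr/share path and the access masks are just example choices on my part) -- the rule is anchored on an O_PATH dirfd via landlock_add_rule(2), which a filesystem-based API couldn't express as naturally:

    #include <fcntl.h>
    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* Declare which filesystem accesses this ruleset handles (and thus
         * denies by default unless a rule allows them). */
        struct landlock_ruleset_attr ruleset_attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                                 LANDLOCK_ACCESS_FS_READ_DIR |
                                 LANDLOCK_ACCESS_FS_WRITE_FILE,
        };
        int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                                 &ruleset_attr, sizeof(ruleset_attr), 0);
        if (ruleset_fd < 0)
            return 1;

        /* Allow read access beneath a directory identified by a dirfd. */
        struct landlock_path_beneath_attr path_beneath = {
            .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                              LANDLOCK_ACCESS_FS_READ_DIR,
            .parent_fd = open("/usr/share", O_PATH | O_CLOEXEC),
        };
        if (path_beneath.parent_fd < 0 ||
            syscall(SYS_landlock_add_rule, ruleset_fd,
                    LANDLOCK_RULE_PATH_BEneath, &path_beneath, 0) < 0)
            return 1;
        close(path_beneath.parent_fd);

        /* no_new_privs is required before enforcing the ruleset. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) < 0 ||
            syscall(SYS_landlock_restrict_self, ruleset_fd, 0) < 0)
            return 1;
        close(ruleset_fd);

        /* From here on, this process (and its children) can only perform the
         * handled access types beneath /usr/share, and only reads at that. */
        return 0;
    }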

The answers to your questions about seccomp depend on the rules. Well-written filters would return -ENOSYS in that case, so it would look to the program as though the syscall is unsupported.
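For example (a sketch using libseccomp rather than a raw BPF filter, and the allow-list is obviously nowhere near complete), the filter's default action is what decides how a denied or unknown syscall looks to the program:

    #include <errno.h>
    #include <seccomp.h> /* link with -lseccomp */
    #include <unistd.h>

    int main(void)
    {
        /* Anything not explicitly allowed fails with ENOSYS, so to the
         * program it looks like the kernel simply doesn't implement that
         * syscall (instead of the process being killed with SIGSYS). */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(ENOSYS));
        if (ctx == NULL)
            return 1;

        /* A deliberately tiny allow-list. */
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

        if (seccomp_load(ctx) < 0)
            return 1;
        seccomp_release(ctx);

        /* write(2) is allowed... */
        write(1, "hello\n", 6);
        /* ...but e.g. chdir(2) now just fails with errno == ENOSYS. */
        if (chdir("/") < 0 && errno == ENOSYS)
            write(1, "chdir looks unsupported\n", 24);
        return 0;
    }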


glibc has been reluctant to add new syscall wrappers for a few years. The situation did improve for a bit recently (and they added something like 5 years of syscalls from their backlog in the past few years), but I'm not surprised it's taking some time.

Thankfully we have had unified syscall numbers on Linux (for almost all architectures) for the past few years, so tracking them is less painful than it used to be.


You absolutely can; both systems are practically identical in this respect.

> In Go you know exactly what code you’re building thanks to gosum

Cargo.lock

> just create vendor dirs before and after updating packages and diff them [...] I don’t believe I can do the same with Rust.

cargo vendor
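
For the before/after diff workflow specifically, something like this should work (a sketch; the directory names are just examples):

    cargo vendor vendor-before    # snapshot the current dependency sources
    # ...edit Cargo.toml or run `cargo update`...
    cargo vendor vendor-after
    diff -ru vendor-before vendor-after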


Of course it doesn't provide backports by itself; it's a versioning system. But version number changes with SemVer are meant to indicate whether an update includes new features or not (a minor bump means new features, a patch bump means bugfixes only).

Of course, the actual issue is that maintaining backports isn't free, so expecting it from random single-person projects is a little unrealistic. Bug fixes in new code often need to be rewritten to work on old code. I do maintain old release branches for some projects, and backporting single patches can quite easily cause whole new bugs that were never present in the main branch.


On Debian you can use the local crate registry for Rust, which is backed by Debian packages.

Though I will say, even as someone who works at a company that sells Linux distributions (SUSE), while the fact we have an additional review step is nice, I think the actual auditing you get in practice is quite minimal.

For instance, quite recently[1] the Debian package for a StarDict plugin was configured to automatically upload all text selected in X11 to some Chinese servers if you installed it. This is the kind of thing you'd hope distro maintainers would catch.

Though, having build scripts executed on distribution infrastructure (with the resulting packages shipped to everyone) does mitigate the risk of targeted and "dumb" attacks. C build scripts can attack your system just as easily as Rust or JavaScript ones can (in fact it's probably even easier -- look at how the xz backdoor took advantage of the inscrutability of autoconf).

[1]: https://www.openwall.com/lists/oss-security/2025/08/04/1

