
Not really? Unedited would be some sort of raw. JPEG usually implies preprocessed by the camera

I guess I meant unedited by the photographer manually (e.g. using Lightroom etc.)

Either that or a photo that has been edited from a RAW and is a final version to be posted online.


Truth. I’m horrified at this backslide into, or at the very least newly unrestrained application of, might-makes-right.


Same, I just had to learn it from somewhere else to find it here.


healthwise i guess for the worse


Triple-cooked chips are usually parboiled and twice fried; if anything that's probably healthier than a single fry. The first fry is brief and hot, crisping up the exterior, so oil absorption is lower the second time, and perhaps lower overall than with a single (necessarily cooler) fry.


The chips in question are deeply brown and shiny, but your reasonable-sounding explanation suits my need for self-delusion so well that I will accept it whether it’s true or not.


The cones are the colour sensitive portion of the retina, but only make up a small percent of all the light detecting cells. The rods (more or less the brightness detecting cells) would still function in a deuteranopic person, so their luminance perception would basically be unaffected.

Also, there’s something to be said for the fact that the eye is a squishy analog device: even if the medium-wavelength cones are deficient, the long-wavelength (red-ish) cones overlap in their light sensitivities with the medium cones, so…


The rods are only active in low-light conditions; they're fully active under the moon and stars, or partially active under a dim street light. Under normal lighting conditions, every rod is fully saturated, so they make no contribution to vision. (Some recent papers have pushed back against this orthodox model of rods and cones, but it's good enough for practical use.)

This assumption that rods are "the luminance cells" is an easy mistake to make. It's particularly annoying that the rods have a sensitivity peak between the blue and green cones [1], so it feels like they should contribute to colour perception, but they just don't.

[1]: https://en.wikipedia.org/wiki/Rod_cell#/media/File:Cone-abso...


Consider myself educated, thanks!


i really do like kde, it just works and doesn’t get in my way!

and the fact that every update actually seems to improve the user experience is quite astonishing in the current climate

im still reluctant about wayland but the work the kde team have put in to make it just work (x11 dpi, fractional scaling, actually supporting extensions) is just lovely


25 GiB for a single binary sounds horrifying

at some point surely some dynamic linking is warranted


To be fair, this is with debug symbols. Debug builds of Chrome were in the 5GB range several years ago; no doubt that’s increased since then. I can remember my poor laptop literally running out of RAM during the linking phase due to the sheer size of the object files being linked.

Why are debug symbols so big? For C++, they’ll include detailed type information for every instantiation of every type everywhere in your program, including the types of every field (recursively), method signatures, etc. etc., along with the types and locations of local variables in every method (updated on every spill and move), line number data, etc. etc. for every specialization of every function. This produces a lot of data even for “moderate”-sized projects.

Worse: for C++, you don’t win much through dynamic linking because dynamically linking C++ libraries sucks so hard. Templates defined in header files can’t easily be put in shared libraries; ABI variations mean that dynamic libraries generally have to be updated in sync; and duplication across modules is bound to happen (thanks to inlined functions and templates). A single “stuck” or outdated .so might completely break a deployment too, which is a much worse situation than deploying a single binary (either you get a new version or an old one, not a broken service).


I've hit the same thing in Rust, probably for the same reasons.

Isn't the simple solution to use detached debug files?

I think Windows and Linux both support them. That's how phones like Android and iOS get useful crash reports out of small binaries, they just upload the stack trace and some service like Sentry translates that back into source line numbers. (It's easy to do manually too)

I'm surprised the author didn't mention it first. A 25 GB exe might be 1 GB of code and 24 GB of debug crud.


> Isn't the simple solution to use detached debug files?

It should be. But the tooling for this kind of thing (anything to do with executable formats including debug info and also things like linking and cross-compilation) is generally pretty bad.


> I think Windows and Linux both support them.

Detached debug files have been the default (only?) option in MS's compiler since at least the 90s.

I'm not sure at what point it became hip to do that around Linux.


Since at least October 2003 on Debian:

[1] "debhelper: support for split debugging symbols"

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=215670

[2] https://salsa.debian.org/debian/debhelper/-/commit/79411de84...


Can't debug symbols be shipped as separate files?


The problem is that when a final binary is linked, everything goes into it. Then, after the link step, all the debug information gets stripped out into the separate symbols file. That means at some point during the build the target binary file will contain everything. I cannot, for example, build clang in debug mode on my work machine, because I have only 32 GB of memory and the OOM killer comes out during the final link phase.

Of course, separate debug files make no difference at runtime, since only the LOAD segments get loaded (by either the kernel or the dynamic loader, depending). The size of a binary on disk has little to do with the size of a binary in memory.


> The problem is that when a final binary is linked everything goes into it

I don't think that's the case on Linux, when using -gsplit-dwarf the debug info is put in separate files at the object file level, they are never linked into binaries.


Yes, but it can be more of a pain keeping track of pairs. In production though, this is what's done. And given a fault, the debug binary can be found in a database and used to gdb the issue given the core. You do have to limit certain online optimizations in order to have useful tracebacks.

This also requires careful tracking of prod builds and their symbol files... A kind of symbol db.


Yes, absolutely. Debuginfo doesn't impact .text section distances either way, though.


I’ve seen LLVM dependent builds hit well over 30GB. At that point it started breaking several package managers.


To be fair, they worked at Google, their engineering decisions are not normal. They might just decide that 25 GiB binaries are worth a 0.25% speedup at start time, potentially resulting in tens of millions of dollars' worth of difference. Nobody should do things the way Google does, but it's interesting to think about.


The overall size wouldn't get smaller just because it is dynamically linked; on the contrary (DLLs are a dead-code-elimination barrier). 25 GB is insane either way; something must have gone horribly wrong very early in the development process. (Also: why even ship with debug information included? That doesn't make sense in the first place.)


Won't make a bit of difference, because everything is in a sort of container (not Docker) anyway. Unless you're suggesting those libraries be distributed as a base image to every possible Borg machine your app can run on, which is an obvious non-starter.


the :1 is short for :0001 basically and then just put that bit of the address at the very end and put the first bit of the address at the front, and then just fill each missing group in between with 0000


"just"


Yes, in fact "just". This isn't remotely hard.


These types of complaints are how I know the objection to v6 is not serious.


Well, okay, show us how to follow those instructions then.

"the :1 is short for :0001 basically" is easy enough: you get 2001::0001::0001.

Then "just put that bit at the very end" -- but which bit? If it means the ":0001", then there's two of them and they can't both go at the very end. If not, then it fails to specify which bit. Either way I don't see how these instructions are followable at all, let alone easily.
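
For what it's worth, Python's stdlib `ipaddress` module implements the canonical rules the instructions above are groping toward: `::` stands for exactly one run of all-zero 16-bit groups, and each remaining group is padded to four hex digits. A short illustration:

```python
# Expand and re-compress an abbreviated IPv6 address using the stdlib.
import ipaddress

addr = ipaddress.ip_address("2001:db8::1")
# '::' fills in the missing all-zero groups; each group pads to 4 hex digits.
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)  # 2001:db8::1
```

Note that `::` may appear at most once in an address, which is exactly why a string like `2001::0001::0001` is invalid: two runs of zeros would make the expansion ambiguous.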


along those lines, I saw a video where a light-strobing pattern from Quake was still used in a section of Half-Life: Alyx (with much better lighting lol)

the fact the quake engine’s internals still live on in so many modern games is quite a feat


Half-Life was built on Quake 1 code, and the codebase was never abandoned; they kept improving and improving on it. Essentially all Valve games have inherited something from it, from TF2 to Dota to Alyx.


goodness that’s a lot of ads

this is probably one of the most promising candidates for an init system aside from systemd, right? I know OpenRC still has trouble with parallel startup (which is almost a must at this point) as a result of its design. s6 is in the works from what I hear, but not configurable via simple config files yet.


The best challenger to systemd in terms of feature parity is probably dinit: https://davmac.org/projects/dinit/

Have a look at Chimera Linux if you want to give it a try: https://chimera-linux.org/

runit, s6, and OpenRC don't have the downsides of systemd, but they also only cover a subset of its features


> s6 is in the works from what I hear,

s6 is just a process supervisor, meant to work alongside an init. That said, an init was developed specifically for s6, and it's ready according to the author's website. You could in theory set up a full Linux system with it.

The author is currently working on a user-friendly UI/frontend for the system. But that's not an essential component.

> but not configurable via simple config files yet.

I don't know if that's the goal of the project at all. Config files defeat the entire purpose of its design.


Systemd isn't merely an init system though, so I always find these comparisons unfair.

They should focus on one simple and good alternative to the startup-functionality for non-systemd infected systems though. Void has one advantage: they have many clever people, a bit like how Arch used to be before they succumbed to systemd (today's arch is different from when Judd was in charge).


I made https://github.com/andrewbaxter/puteron/ ! It's more similar to systemd in that it has a dependency graph, but I think it's simpler to use and better aligned with typical use cases. I haven't used it as a full init system though, only on top of systemd (same as I've seen runit used).


dinit is, imo, the one I'd pick. It's focused but it has dependencies and the services are defined by a DSL rather than startup scripts.

Having said that, I haven't used runit and from the look of it, it's a big improvement over SystemV at the very least.

