gardaani's comments | Hacker News

Debian Trixie drops 32-bit x86 support. Ubuntu dropped 32-bit support even earlier, which means that the lightweight Lubuntu and Xubuntu flavours don't support it either. It's sad to see old hardware support getting dropped like that. They are still good machines as servers and desktop terminals.

Are there any good Linux distros left with 32-bit x86 support? Do I have to switch to NetBSD?


> They are perfectly good machines as servers and desktop terminals.

On power usage alone, surely even a still extremely old 64-bit machine would be a significant upgrade. For a server that you run continuously, a 20+ year old machine will consume quite a bit.
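
To put rough numbers on it (purely illustrative assumptions): an old desktop drawing ~70 W at the wall and running 24/7 uses about 70 W × 8760 h ≈ 613 kWh per year, roughly $180 at $0.30/kWh, while a ~10 W mini PC doing the same job uses about 88 kWh, roughly $26.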


Among the possible reasons for keeping old machines are specific expansion cards or peripherals that only work on old motherboards.

Whether that motivates Debian support is another question.


Indeed. I still keep around a couple of old computers, because they have PCI slots (parallel, not PCIe), unlike any newer computer that I have, and I still use some PCI cards for certain purposes.

However, those computers are not so old as to have 32-bit CPUs; they are only about 10 years old. That is because I was careful at the time to select motherboards that still had PCI slots, in order to be able to retire all of the older computers.


The only peripherals that truly don't work with more modern boards would be AMR/ACR/CNR ones? For ISA and plain PCI it's reasonably easy to acquire expansion boxes (EISA might be a problem though, I guess...).


I mean, if you have antique hardware why not run an antique OS on it? What's the point of upgrading?


> Are there any good Linux distros left with 32-bit x86 support?

I would be rather surprised if there weren't. I think antiX is one option,[1] along with Puppy Linux and probably Alpine Linux.

[1]: https://www.antixforum.com/forums/topic/will-antix-24-suppor...


> It's sad to see old hardware support getting dropped like that.

The problem is that someone needs to do the work to support different hardware architectures. The more exotic the hardware, the more complicated and expensive the work becomes.

People who run these 32-bit machines are unlikely to chip in, in terms of work or money contributed, to get people paid for this work, so it is better to drop the support and focus the same developer resources on areas that benefit a larger user base.


The real problem is that Linux is written in such a manner that internal APIs change very often, and code all over the place, including drivers, gets modified without testing and is often broken, sometimes in very subtle and hard-to-debug ways.

It is an overall design issue. Linux has a huge maintenance burden.


Indeed it is. I am not sure whether the BSD people have better support for legacy hardware. Outside the BSDs, there is no competition for open-source operating systems, or for legacy PC hardware generally, as Windows support was dropped a long time ago.


The kernel's APIs never change, and 32-bit support is still in the kernel without any issue at all. How does this implicate drivers?

The rest is user space code. Usually in C. Which has plentiful facilities for managing this. We've been doing it for decades. How is it a sudden burden to continue?

Where are the examples of distributions failing because of this "issue?"


The internal kernel API used by device drivers changes at every version, even at minor versions.

If you maintain an out-of-tree driver, it is extremely likely that it will fail to compile at every new kernel release.

The most common reasons are that some definitions have been moved from one kernel header to another or some members have been added to a structure or deleted from it, or some arguments have been added to a function invocation or deleted from it.

While the changes required to fix a broken device driver are typically small, finding out what they are can waste a lot of time. The reason is that I have never seen the kernel developers who make these changes, which break unrelated device drivers, write any migration document instructing driver maintainers how to change the old source code to be compatible with the new interfaces.

Normally there is absolutely no explanation of the purpose of the changes or of what the users of the old API should do. Only in rare cases does searching the kernel mailing list turn up some useful information. In other cases you just have to read the kernel source to discover the intentions of whoever changed the API.

That does the job eventually, but it consumes far more time than it should.

Even in-tree drivers for old devices, which may no longer have a current maintainer, will become broken eventually.


If you think it is not a burden, you are free to maintain it yourself. This is the core value promise of open source.


> are unlikely to chip in, in terms of work or money contributed

Unlikely? Did anyone bother to ask?


Yes


They must not have done a very good job then.


> Are there any good Linux distros left with 32-bit x86 support?

When your kernel stopped support over a decade ago (IIRC), it does seem inevitable that distros will slowly evaporate.


Linux didn't drop support for 32bit x86.

You might be thinking of i386 or something.


OK, I have to be more precise, because this comment is not super clear...

They're no longer providing 32-bit images. However if you have a 64-bit processor, and a 64-bit image, you can still run 32-bit binaries.


I have actually upgraded a really old 32-bit-only laptop from Bookworm to Trixie - it works. Two packages that are important for the desktop environment required SSE3, which the CPU doesn't support, so... I installed the package "sse3-support" with the environment variable IGNORE_ISA=1 set. The kernel was not upgraded because Trixie doesn't contain an i386 kernel. The laptop works surprisingly okay, though of course it's not much fun with its weak performance, low RAM and mechanical disk.


Void Linux, MX Linux, antiX, and Slackware still maintain 32-bit x86 support with active development.


Isn't MX Linux Debian based? Would this mean they'll also be dropping it when they shift their base to Trixie?


They could always build upon the remaining 32 bit support and provide a kernel build themselves (I don't know the project though)


32-bit userspace packages are still supported, though with increased hardware requirements compared to Bookworm. You may find that you're able to run a more-or-less complete 32-bit Trixie userspace, while staying on Bookworm wrt. the kernel and perhaps a few critical packages.

If 32-bit support gets dropped altogether (which might happen for 'forky' or 'duke') it can probably move to the unofficial Debian Ports infrastructure, provided that people are willing to keep it up-to-date.


I kept my old 32-bit laptop alive for quite a bit using archlinux32 but in the end more and more software started breaking, so that is not really a route I can recommend anymore. I was using the laptop mainly if I was traveling or on holidays, so not so very often so it felt wasteful to to buy a new one. But this year the software breakages really started costing too much time, so I bought a new one. RIP old laptop (2007-2025).


Alpine still supports x86 and other 32bit platforms. It’s also very lightweight, though I’d say it targets a very different userbase than Debian or Ubuntu — the default install is quite minimal and requires more setup.


Does dropping 32-bit support just mean that there are no supported x86-32 OS images, or that 32-bit applications are generally not supported?

If it means that no 32-bit apps are supported, how does Steam handle this? Does it run 32-bit games in a VM? Is the Steam client itself a 64-bit application these days or still stuck on 32-bits?


There is no 32-bit kernel build or ISO image anymore but you can upgrade to Trixie from Bookworm.


No x86-32 images.


Ah ok, that sounds much less dramatic than 'dropping 32-bit support' :)


The "i386" is now basically x86-32 compatibility. I think there were suggestions of an unofficial port which hosted x86-32.


Just keep in mind that this does not apply to armhf, which is 32-bit and which all old Raspberry Pi boards use.

What's your use case for 32-bit x86 where you are still keeping Debian at its latest version? Alone for the power consumption you might be better off by switching to a newer low-spec machine.


Gentoo supports pretty much everything under the sun.


>It's sad to see old hardware support getting dropped like that. They are still good machines as servers and desktop terminals

How many 32-bit PCs are still actively in use at scale to make that argument? Linux devs are now missing kernel regression bugs on 64-bit Core 2 Duo hardware because not enough people are using them anymore to catch and report those bugs, and those systems are newer and way more capable for daily driving than 32-bit ones. So if nobody uses Core 2 Duo machines anymore, how many people do you think are using 32-bit Pentium 4/Athlon XP era machines to make that argument?

But let's pretend you're right, and assume there are hordes of Pentium 4 users out there refusing to upgrade for some bizarre reason and unhappy that they can't run the latest Linux. Then using a Pentium 4 with its 80W TDP as a terminal would be an insane waste of energy when that's less capable than some no-name Android tablet with a 5W ARM SoC which can even play 1080p Youtube while the Pentium 4 cannot even open a modern JS webpage. Even most of the developing world now has more capable mobile devices in their pockets and has no use for the Pentium 4 machines that have long been landfilled.

And legacy industrial systems still running Pentium 4 HW are just happy to keep running the same Windows XP/Embedded they came with from the factory, since those machines are airgapped and don't need the latest Linux kernel for their purpose.

So sorry, but based on this evidence, your argument is literally complaining for the sake of complaining about problems that nobody has, outside of retro-computing hobbyists who like using old HW for tinkering with new SW as a challenge. It's not a real issue for anyone else. So what I don't get is the entitled expectation that the SW industry should keep writing new SW for free to keep it working on your long-outdated 25+ year old CPU just because you, for some reason, refuse to upgrade to more modern HW that can be had for free.

There are good faith arguments to be had about the state of forced obsolescence in the industry with Microsoft, Apple, etc, but this is not one of them.

>Are there any good Linux distros left with 32-bit x86 support? Do I have to switch to NetBSD?

Yes there are, tonnes: AntiX, Devuan, Damn Small Linux, Tiny Core Linux, etc


> 80W TDP as a terminal would be an insane waste of energy when that's less capable than some no-name Android tablet with a 5W ARM SoC which can even play 1080p Youtube while the Pentium 4 cannot

It's insane how far hardware has come: the Pentium 4 engineers probably felt like the smartest people alive, and now a Pentium 4 looks almost as ridiculous and outdated to us as vacuum tube computers.


What I find truly insane is how we're "wasting" the hardware.

I ran a mailserver for thousands of people on a 486 DX 33 MHz. It had SMTP, POP3 and IMAP. It was more than powerful enough to handle it with ease.

I had a Pentium 3 w/1GB of RAM, and it was a supremely capable laptop.

These days I have a machine from 2018 or 2019, which I upgraded to 32G of RAM, and I added an NVMe drive in addition to the spinning rust earlier this year, because Firefox got extremely sluggish (more than a minute to start the browser) on an HDD instead of NVMe.

Now, it's obvious that an NVMe drive is superior, but it surprises me how incredibly lackadaisical we've gotten with resource usage. It surprises me how little "extra" we get, except ads that require more and more resources. Sure, we've got higher-resolution photos, higher-resolution videos, and now AI will require vast resources (which of course is cool). At the same time, we don't get that much more utility out of our computers.


>What I find truly insane is how we're "wasting" the hardware.

Only if you ignore the business and economic realities of the world we live in. Unless you work at a hyperscaler like MS/Google/Meta, where every fraction of a percent of optimization saves millions of dollars, nobody pays SW engineers to optimize SW for old consumer devices, because it's money wasted that you won't recoup, so you offload it onto the customer to buy better HW. Rinse and repeat.

>I had a Pentium 3 w/1GB of RAM, and it was a supremely capable laptop.

Why isn't it supremely capable anymore? It will still be just as capable if you run the same software from 1999 on it. Or do you expect to run 2025 SW on it?


Chips that old would be on a ~100nm process node, which is ancient. Anything using Flintstone transistors like those isn't going to hold up.

> the pentium 4 engineers probably felt like the smartest people alive

The P4's NetBurst architecture had some very serious issues, so that might not be true. NetBurst assumed that 10GHz clock speeds would soon be a thing, and some of the engineers back then might have had enough insight to guess that this approach was based on an oversimplified view of semiconductor industry trends. The design took a lot of stupid risks, such as the ridiculously long instruction pipeline, and they were always unlikely to pay off.


> that's less capable than some no-name Android tablet

You cannot run the software you want on that device, so I don't see how you can claim that.


Did you read the comment I was replying to? They said old 32 bit systems can still be kept around to be used as a terminal. I replied saying that Android tablets can also be used as terminals with the right SW if that's what you're after.

Or if you have specific X86 terminal SW, use a PC with a 64 bit CPU if you want to run modern PC SW. They've been making them since 2003. That's 23 years of used 64bit HW on the market that you can literally get for free at this point.

Or keep using your old 32 bit system to run your old 32 bit terminal SW. Nobody's taking away your ability to run your old 32 bit version of the SW when new 64 bit SW comes out.


> Oklab is a nightmare in practice - it's not linked to any perceptual color space, but it has the sheen of such in colloquial discussion. It's a singular matmul that is supposed to emulate CAM16 as best as it can.

Oklab is perceptually uniform and addresses issues such as unexpected hue and lightness changes in blue colors present in the CIELAB color space. https://en.wikipedia.org/wiki/Oklab_color_space

Oklab is used in CSS because it creates smoother gradients and better gamut mapping of out-of-gamut colors than Lab. Here's a picture of how Oklch (on the left) creates smoother gamut mapping than CIE Lch (on the right) ("Explore OKLab gamut mapping in oklch"): https://github.com/w3c/csswg-drafts/issues/9449#issuecomment...


> Oklab is perceptually uniform and addresses issues such as unexpected hue and lightness changes in blue colors present in the CIELAB color space. https://en.wikipedia.org/wiki/Oklab_color_space

This isn't true. Oklab is a singular matmul meant to approximate a perceptually accurate color space.

> Oklab is used in CSS because it creates smoother gradients and better gamut mapping of out-of-gamut colors than Lab.

That's not true, at all. Not even wrong. Gamut mapping is separate from color space.

> Here's a picture of how Oklch (on the left) creates smoother gamut mapping than CIE Lch (on the right)

I love the guy who wrote this, but we have an odd relationship: people would tell me all the time that he wondered why I wasn't reaching out to him, yet we've never met, he's never contacted me, etc.

If you're him, we should talk sometime.

I doubt you're him, because you're gravely misunderstanding the diagram and work there. They're comparing gamut mapping algorithms, not comparing color spaces, and what is being discussed is gamut mapping, not color spaces.


Oklab is not perceptually uniform. It's better than other color spaces with equally simple conversion functions, but in the end, it was created as a simple approximation to more complex color spaces, so compared to the best you could do, it's merely OK (hence the name).
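
For a sense of how simple that conversion is, here's a minimal sketch of the forward linear-sRGB to Oklab transform: two small 3x3 matrix multiplies with a cube root in between. The coefficients are the ones published in Ottosson's post, reproduced from memory here, so treat the snippet as illustrative rather than a vetted implementation:

    /// Linear sRGB -> Oklab, as a sketch: matrix multiply, cube root, matrix multiply.
    /// Coefficients as published in Ottosson's Oklab post (double-check before relying on them).
    fn linear_srgb_to_oklab(r: f64, g: f64, b: f64) -> (f64, f64, f64) {
        // Linear sRGB -> approximate cone (LMS) responses.
        let l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b;
        let m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b;
        let s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b;

        // The only non-linearity: a cube root standing in for a full adaptation model.
        let (l_, m_, s_) = (l.cbrt(), m.cbrt(), s.cbrt());

        // LMS' -> lightness and the two opponent axes.
        (
            0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_, // L
            1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_, // a
            0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_, // b
        )
    }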


> Oklab is not perceptually uniform

By what metric? If the target is parity with CAM16-UCS, OKLab comes closer than many color spaces also designed to be perceptually uniform.


If the target is parity with CAM16-UCS, CAM16-UCS is best, tautologically. Sure, if you need a fast approximation, by all means fall back to Oklab, but that optimization isn't going to be necessary in all cases.


Obviously; but this doesn’t suggest that OKLab is not a perceptually uniform color space.

There is no “one true” UCS model - all of these are just approximations of various perception and color matching studies, and at some point CAM16-UCS will probably be made obsolete as well.


No offense, but I do find the interlocution here somewhat hard-headed.

In a sentence, color science is a science.

The words you are using have technical meanings.

When we say "Oklab isn't a perceptually accurate color system", we are not saying "it is bad" - we are saying "it is a singular matmul that is meant to imitate a perceptually accurate color system" -- and that really matters, really -- Google doesn't launch Material 3 dynamic color if we just went in on that.

The goal was singular matmul. Not perceptual accuracy.

Let me give you another tell that something is really off, one you'll understand intuitively.

People love quoting back the Oklab blog post, you'll also see in a sibling comment something about gradients and CAM16-UCS.

The author took two colors across the color wheel, blue and yellow, then claimed that because the CAM16-UCS gradient has gray in it, Oklab is better.

That's an absolutely insane claim.

Blue and yellow are across the color wheel from each other.

Therefore, a linear gradient between the two has to pass through the center of the color wheel.

Therefore a gradient, i.e. a lerp, will have gray in it -- if it didn't, that would be really weird and indicate some sort of fundamental issue with the color modeling.

So of course, Oklab doesn't have gray in the blue-yellow gradient, and this is written up as a good quality.

If they had known what they were talking about at the time, they wouldn't have been doing gradients in CAM16-UCS as a lerp, but would have used the standard CSS gradient technique of "rotating" to the new point.

Because that's how you avoid gray.

Not making up a new color space, writing it up with a ton of misinfo, then leaving it up without clarification so that otherwise-smart people end up completely confused for years, repeating either the blog post or "nothing's perfect" ad nauseam as an excuse to never engage with anything past it. They walk away with the mistaken understanding that a singular matmul somehow magically blew up 50 years of color science.

I just hope this era passes within my lifetime. HSL was a tragedy. This will be worse if it leaves the ability to do actual color science looking like some sort of fringe, slow thing in people's heads.


Yes, it's a matmul; many color models just boil down to simple math. For example, look at Li and Luo's 2024 "simple color appearance model"[1], which is very similar to OKLab (just matmul!), and created for many of the same reasons (just an approximation!). Like OKLab, it also improves upon CAM16-UCS hue linearity issues in blue. Ironically, Luo was one of the authors who proposed CAM16-UCS in 2017. And, although it certainly improves upon CAM16-UCS for many applications, I'm not yet convinced it is superior to OKLab (you can see my implementation here: [2]).

And I think you might be mis-remembering Ottosson's original blog post; he demonstrates a gradient between white and blue, not blue and yellow.

[1] https://opg.optica.org/oe/fulltext.cfm?uri=oe-32-3-3100

[2] https://github.com/texel-org/color/blob/main/test/spaces/sim...


> Yes, it's a matmul; many color models just boil down to simple math.

All do. :)

> For example, look at Li and Luo's 2024 "simple color appearance model"[1], which is very similar to OKLab (just matmul!), and created for many of the same reasons (just an approximation!)

I don't understand what this shows me. I don't see anyone in the thread arguing there can only be one color model with one matmul. I feel self-conscious that I'm missing something, so I thought maybe the implication is that this is a real scientist working on a real space, and therefore our haranguing about "actual" perceptual spaces is hair-splitting, since we see a color scientist making an approximation. You also note that it is an approximation, as does the paper, so I don't think that's the case... idk :(

> Like OKLab, it also improves upon CAM16-UCS hue linearity issues in blue. Ironically, Luo was one of the authors who proposed CAM16-UCS in 2017.

What's the ironic part? (my understanding: you read this as a competition, so you find it ironic, in the colloquial sense, that the color space you perceive us advocating for, or that Oklab can replace, was created by someone who made a singular matmul type-space like Oklab in a paper?)

> I'm not yet convinced it is superior to OKLab (you can see my implementation here: [2]).

I appreciate your work and desire here, and you have a fiery curiosity. In practice, color science uses UCS spaces to measure color difference, not to render colors. (He uses CAM16-UCS and CAM16 interchangeably as well, which is confusing.)

> And I think you might be mis-remembering Ottosson's original blog post; he demonstrates a gradient between white and blue, not blue and yellow.

You're right! That makes it a whole lot less obvious that there's something wrong. :( Here, the sin is throwing away the whole science bit and saying that's fine, look at this example.

See gradients here. https://m3.material.io/blog/science-of-color-design

Note particularly the black and white one. It gives a great sense of how much of an outlier Oklab is, and you can't fuck around with lightness that much; that's how you measure contrast.


I have been using lima to run Linux VMs on macOS. This new Apple tool looks very similar and I might replace lima with it.


Does your crawler respect robots.txt? Does it request pages with nice delays so that it doesn't bring down servers? Related: https://news.ycombinator.com/item?id=43422413


We don’t respect robots.txt because estate agents often block listing pages - not necessarily to prevent indexing, but for SEO reasons (Google penalises large sites with transient pages).

That said, we do crawl responsibly i.e. we use reasonable request delays, respect rate limits etc. We want agents to like us, ultimately, and blowing up their servers doesn't help with that. If an agent prefers opt-out, we always honour it.


Many modern file formats are based on generic container formats (ZIP, RIFF, JSON, TOML, XML without namespaces, ...). Identifying those files requires reading the entire file and then guessing the format from the contents. Magic numbers are becoming rare, which is a shame.


Wikipedia has a good explanation why the PNG magic number is 89 50 4e 47 0d 0a 1a 0a. It has some good features, such as the end-of-file character for DOS and detection of line ending conversions. https://en.wikipedia.org/wiki/PNG#File_header
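
To make that concrete, here's a minimal sketch (illustrative only, not from any particular library; the "image.png" path is just a placeholder) of checking the signature and recognising the two most common line-ending corruptions:

    // The 8-byte PNG signature: 0x89, "PNG", CR LF, EOF (Ctrl-Z), LF.
    const PNG_SIG: [u8; 8] = [0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A];

    /// Rough diagnosis of the first bytes of a would-be PNG file.
    fn diagnose_png_header(bytes: &[u8]) -> &'static str {
        if bytes.starts_with(&PNG_SIG) {
            "signature intact"
        } else if bytes.starts_with(&[0x89, b'P', b'N', b'G', 0x0A, 0x1A, 0x0A]) {
            // The CR LF collapsed to LF: a DOS-to-Unix text-mode conversion.
            "corrupted: CRLF was converted to LF"
        } else if bytes.starts_with(&[0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0D, 0x0A]) {
            // The lone LF after the EOF byte gained a CR: a Unix-to-DOS conversion.
            "corrupted: LF was converted to CRLF"
        } else {
            "not a PNG, or damaged in some other way"
        }
    }

    fn main() -> std::io::Result<()> {
        let bytes = std::fs::read("image.png")?; // placeholder file name
        println!("{}", diagnose_png_header(&bytes));
        Ok(())
    }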


The old PNG specification also explained the rationale: http://www.libpng.org/pub/png/spec/1.2/PNG-Rationale.html#R....

But the new spec doesn't explain: https://www.w3.org/TR/2003/REC-PNG-20031110/


That is unfortunate. Not enough standards have rationale or intent sections.

On the one hand, I sort of understand why they don't: "If it is not critical and load-bearing to the standard, why is it in there? It is just noise that will confuse the issue."

On the other hand, it can provide very important clues as to the why of the standard, not just the what. While the standard's authors understood why they did things the way they did, many years later, when we read it, we are often left with more questions than answers.


Clear rationales don't sell well: better to obfuscate specs with hidden design decisions and build a complexity moat.


At first I wasn't sure why it contained a separate Unix line feed when you would already be able to detect a Unix to DOS conversion from the DOS line ending:

0D 0A 1A 0A -> 0D 0D 0A 1A 0D 0A

But of course this isn't to try and detect a Unix-to-DOS conversion, it's to detect a roundtrip DOS-to-Unix-to-DOS conversion:

0D 0A 1A 0A -> 0A 1A 0A -> 0D 0A 1A 0D 0A

Certainly a very well thought-out magic number.


Unix2dos is idempotent on CRLF; it doesn't change it to CRCRLF. Therefore converted singular LFs elsewhere in the file wouldn't be recognized by the magic-number check if it only contained CRLF. This isn't about roundtrip conversion.


It's also detecting when a file on DOS/Windows is opened in "ASCII mode" rather than binary mode. When opened in ASCII mode, "\r\n" is automatically converted to "\n" upon reading the data.


I can count the number of times I've had binary file corruption due to line ending conversion on zero hands. And I'm old enough to have used FTP extensively. Seems kind of unnecessary.


“Modern” FTP clients would auto-detect whether you were sending text or binary files and thus disable line-ending conversion for binary.

But go back to the 90s and before, and you’d have to manually select whether you were sending text or binary data. Often these clients defaulted to text and so you’d end up accidentally corrupting files if you weren’t careful.

The pain was definitely real


And, if you were using a Windows client talking to a Unix server, you didn't want to get a text file in binary mode, since most programs at the time couldn't handle Unix line endings. This is much better nowadays, to the point that it rarely matters on either side of the platform divide which type of line endings you use.


It can easily happen with version control across Windows and Unix clients. I’ve seen it a number of times.


Not for binary files. I've seen it for text files, sure.


I’ve seen it with binary files a number of times.


Have you tried using git with Windows clients?

There are so many random line conversions going on, and the detection of what is a binary file is clearly broken.

I don't understand why the default would be anything but "commit the file as is"


> I don't understand why the default would be anything but "commit the file as is"

Because it’s not uncommon for dev tools on Windows to generate DOS line endings when modifying files (for example, when adding an element to an XML configuration file, all line endings of the file may be converted when it is rewritten from its parsed form), and if those were committed as-is, you’d get a lot of gratuitous changes in the commit and also complaints from the Unix users.

For Git, the important thing is to have a .gitattributes file in the repository with “* text=auto” in it (plus more specific settings as desired). The text/binary auto-detection works mostly fine.


Up until just a few years ago, Notepad on Windows could not handle Unix-style line endings. It probably makes sense now to adopt the as-is convention, but for a while, it made more sense to convert when checking out, and then to prevent spurious diffs, convert back when committing.


Line endings between Windows and Unix-like systems were so painful that when I started development on my shell scripting language, I wrote a bunch of code to allow Linux to handle Windows files and vice versa.

Though this has nothing to do with FTP. I’d already abandoned that protocol by then.


Yes I have. For binary files it's never an issue, because Git detects those and doesn't do line-ending conversion.

And I agree "commit the file as-is" should be the default - what programming editor can't handle unix newlines?


Bluetooth is also turned on after every OS update. I don't understand why macOS does these things. They can't be bugs, because they have been around for years.


Because a newb might complain that their earbuds/pencil is not working.


That they cannot be bugs definitely does not follow. One look at Windows tray icons, monitor recognition, sound volume management, and many other things will tell you that much. Broken since forever. So definitely big companies and tech giants can keep bugs in there for many years. Also note the bugs on iOS described by someone else here in this thread.


That's why I've decided that my next mouse won't have any rubber in it, but it's difficult to find a good mouse without rubber. I'm still looking for one.

Generally, I try to avoid buying anything with rubber. It is usually the first part that goes bad. Either it gets sticky and starts melting or it gets hard and dry and breaks up. Also, I avoid using rubber bands. They usually end up damaging objects they hold together.

Here's a good page about conservation of rubbers and plastics for those who like to preserve their vintage stuff for a long time. https://www.canada.ca/en/conservation-institute/services/con...


If you don't care about the price, there's always FinalMouse. Their mice are carbon fiber, with PTFE feet. I forget if the wheel has any rubber; I'd have to check once I'm home.


Another annoying Xcode feature is that building a macOS app with Xcode command line tools (xcodebuild) sometimes complains that my iPhone is locked:

"An error occurred whilst preparing device for development -- Failed to prepare the device for development. Domain: com.apple.dtdevicekit Code: 806 Recovery Suggestion: Please unlock and reconnect the device. The device is locked. Domain: com.apple.dt.MobileDeviceErrorDomain"


For those that don't understand exactly how annoying this is, I once took the time to reverse engineer Xcode to figure out exactly where it was coming from and then inject code into it to make it shut up: https://github.com/saagarjha/dotfiles/blob/master/xcodebuild...


Thanks for sharing. Do you build that into your app or should it run as a command line program on its own?


You need to disable SIP and then you can DYLD_INSERT_LIBRARIES (or equivalent) it into xcodebuild


> Ideally, you can constrain the set of inputs to only valid ones by leveraging types. But if that's not possible and a truly invalid input is passed, then you should panic.

But how can the caller know what is "a truly invalid input"? The article has an example: "we unfortunately cannot rely on panic annotations in API documentation to determine a priori whether some Rust code is no-panic or not."

It means that calling a function is like a lottery: some input values may panic and some may not panic. The only way to ensure that it doesn't panic is to test it with all possible input values, but that is impossible for complex functions.

It would be better to always return an error and let the caller decide how to handle it. Many Rust libraries have a policy that if the library panics, then it is a bug in the library. It's sad that the Rust standard library doesn't take the same approach. For println!(), it would mean returning an error instead of panicking.
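
As a rough sketch of the difference (the function names here are made up for illustration; only the println!/writeln! behaviour is taken from the standard library):

    use std::io::Write;

    // Panicking style: nothing in the signature tells the caller which
    // inputs are "truly invalid".
    fn chunk_len_or_panic(total: usize, parts: usize) -> usize {
        assert!(parts > 0, "parts must be non-zero"); // panics on bad input
        total / parts
    }

    // Error-returning style: the failure case is part of the contract and
    // the caller decides what to do with it.
    fn chunk_len(total: usize, parts: usize) -> Result<usize, &'static str> {
        if parts == 0 {
            return Err("parts must be non-zero");
        }
        Ok(total / parts)
    }

    fn main() -> std::io::Result<()> {
        let _ = chunk_len_or_panic(10, 2); // fine, but only because the input happens to be valid

        match chunk_len(10, 0) {
            Ok(n) => println!("chunk length: {n}"),
            Err(e) => eprintln!("invalid input: {e}"),
        }

        // println! itself panics if writing to stdout fails (e.g. a closed pipe);
        // writing through a locked handle with writeln! returns the error instead.
        let mut out = std::io::stdout().lock();
        writeln!(out, "hello")?;
        Ok(())
    }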

