binaryturtle's comments | Hacker News

I run a setup like that on my (outdated) Yosemite machine to provide multiple private TLDs for local deployment/development needs.

I set that up in like 2014? Even back then it was already known that the quick /etc/resolver approach was deprecated. So I guess they finally killed that feature off?

The proper (more awkward) way is to use scutil directly (which then stores the settings in some binary plist somewhere, I assume).

Maybe try the scutil route and see if it still works afterwards?
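A minimal sketch of what that looks like (the service key name "LocalDNS" and the .test TLD are placeholders, and I'm assuming the State:/Network/Service/…/DNS key layout hasn't changed):

    sudo scutil
    > d.init
    > d.add ServerAddresses * 127.0.0.1
    > d.add SupplementalMatchDomains * test
    > set State:/Network/Service/LocalDNS/DNS
    > quit

That should route lookups for *.test to a resolver on 127.0.0.1 without touching /etc/resolver.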


The critique would be less hypocritical if it weren't posted on a website that itself loads unnecessary 3rd-party resources (e.g. Cloudflare Insights).

Luckily I use a proper content blocker (uBlock Origin in hard mode).


Even with ad blocking it's transferring over 200KB of data, half of which is to load a couple of fonts. Not terrible, but the basic HTML is only 17KB.


So does OpenTTD get a part of the generated income then? The team has costs to support the game, too.


That's the kind of content I check HN for! :)


I can't select text, which is a huge accessibility issue; a site like that is disqualified by default. The whole content sits in a thin strip in the middle, with huge unused areas to the left and right. Resizing the browser window causes weird effects.

If that's a case study in how not to do things: Congrats! Job well done!


Text selection works for me (using Firefox).

The squash and stretch effect is certainly not to my taste, but it's a personal site, where people should feel free to try things out. I do think the effect should be disabled when users have "reduce motion" enabled.
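Respecting that preference is cheap in CSS; a minimal sketch, assuming the effect is driven by a CSS animation on a hypothetical .squash class (a JS-driven effect would instead need to check matchMedia("(prefers-reduced-motion: reduce)")):

    @media (prefers-reduced-motion: reduce) {
      .squash {
        animation: none;
        transition: none;
      }
    }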


Actually, it seems I can select text too, but I can't see the selection and it acts weirdly. Probably some JavaScript overriding the default user interactions.

I need to use the "Allow Right Click" extension (it also fixes other annoyances beyond pure right-click blocking) to make it act sanely.


Those are some intense system requirements.


Ah, yes, I’m not able to test on Intel Macs. Will try to make it available on older macOS versions at least.
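If it's just build settings, something like this hypothetical xcodebuild invocation should do it (scheme name and target version are placeholders, untested):

    xcodebuild -scheme MyApp \
        ARCHS="arm64 x86_64" ONLY_ACTIVE_ARCH=NO \
        MACOSX_DEPLOYMENT_TARGET=11.0

The catch is anything that actually needs verifying on Intel hardware.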


You might find some volunteers here willing to make Intel builds :)


it's open source, anyone can help with it, I'm happy to merge PRs


Just added support for macOS Sequoia.


(unintentional) satire? almost a meditation on the seemingly inevitable bloat of software


Pretty much any Mac bought in the past 5 years can fulfil the requirements, which doesn't feel terribly unreasonable, and I bet the Intel case would be straightforward to cover too; then you'd be catering for every Mac bought in the past 6 years.

Apple are dicks about making it easy to test on older macOS revisions, but I'm sure that'd actually be easier than you'd expect too. (I have a FOSS project that has macOS as a supported target. It targets OS X Mavericks, released in 2013. I don't have any easy way of testing anything older than macOS Ventura, released in 2022, and to be honest I don't really care to figure out how to do any better. But the last time somebody complained about OS X Mavericks incompatibility, I fixed the problem, which was actually very easy to do, and it did apparently work.) Put in a modicum of effort and I'm sure you can make this thing work for every Mac sold since like 2015, and there'd be a non-zero chance it'd work for some older ones too.

Thinking back to when BBSs were a thing, since that'd be on topic: perhaps Americans got a lucky break with the Apple II, in production from 1977-1993 (says Wikipedia) and seemingly a viable commercial platform for a measurable fraction of the period? For me, growing up in the UK in the late 20th century, the whole computer ecosystem seemed to get pretty much completely refreshed about every 10 years at the very most. Buy a BBC Micro in 1983: platform dead by 1990. Buy a ZX Spectrum in 1983: platform dead by 1991. Buy an Atari ST in 1985: platform dead by 1992. Buy an Amiga in 1986: platform dead by 1994. The PC was a bit of an exception, as the OS remained the same (for good or for ill...) for longer, but the onward march of hardware progress meant that you'd need new hardware every few years if you wanted to actually run recent software.

Anyway, my basic thinking is that if in March 2026 you are releasing some software that requires you to have a computer manufactured at some point in the 2020s, then this is hardly without historical precedent. It might even simply be the natural order of things.

Me? I set the displays to go to sleep after N minutes.


.onion, aka a Tor internal URL. They look like this.


Documentation: https://support.torproject.org/about-tor/onion-services/what...

There are also many web sites that provide an onion address in addition to their clearnet address. For example, the BBC: https://www.bbc.com/news/technology-50150981


If I ever get a .onion (everyone should have an onion probably) I'll also register the same domain "dot net, it's dot com" just for the lols.


It doesn't cost anything to have a .onion. You just run Tor and enable hidden service mode.
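Concretely, it's two lines in torrc (directory and ports here are placeholders):

    HiddenServiceDir /var/lib/tor/my_onion/
    HiddenServicePort 80 127.0.0.1:8080

On the next start, Tor generates the keypair and writes the resulting address to the hostname file inside HiddenServiceDir.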


And onion URLs are a SHA hash of (I think) the site's private key.


They contain the full key now, which is why they're longer. Apparently that was necessary to fix vulnerabilities in the previous version.

Public key obviously, not private.
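For the curious, a v3 address is the 32-byte ed25519 public key plus a truncated SHA3-256 checksum and a version byte, all base32-encoded; a minimal sketch in Python, per my reading of the v3 rendezvous spec:

    import base64
    import hashlib

    def onion_v3_address(pubkey: bytes) -> str:
        """Derive a v3 .onion address from a 32-byte ed25519 public key."""
        version = b"\x03"
        # checksum = SHA3-256(".onion checksum" || pubkey || version), truncated to 2 bytes
        checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
        # 35 bytes of payload -> 56 base32 characters
        return base64.b32encode(pubkey + checksum + version).decode("ascii").lower() + ".onion"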


Isn't the real issue here that tons of projects that depend on "chardet" now drag in some crappy, still-unverified AI slop? AI forgery poisoning, IMHO.

Why did this new project need to replace the original in this dishonourable way? The proper way would have been to create a separate new project.

Note: even Python's own pip drags this in as a dependency, it seems (hopefully they'll stick to a proper version).
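If you're worried, pinning is the obvious stopgap; e.g. in requirements.txt (the exact version bound below is hypothetical):

    # stay on the last release line from before the rewrite
    chardet<6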


This is indeed the real issue (not the AI angle per se, but the wholesale replacement; the licensing issue is real, but less important IMO).

Half a million lines of code have been deleted and replaced over the course of four days, directly on the main branch with no opportunity for community review and testing. (I've no idea whether dependent projects use main or the stable branch, but stable is nearly 4 years old at this point, so while I hope it's the version dependent projects use, I wouldn't put money on it.)

The whole thing smells a lot like a supply chain attack - and even if it's in good faith, that's one hell of a lot of code to be reviewed in order to make sure.


Woah. As someone not in this particular community but dependent on these tools, this is exactly the terrifying underbelly we've all discussed with the trust architecture of tools like pip and npm. It's horrifying that a major component just got torn apart, rebuilt, and deployed to anyone who uses those Python ecosystems (... many millions? ... billions of people?)


The test coverage is going to be entirely different, unless of course they copied the tests, which would then preclude them from changing the license. They didn't even bother to make sure the CI passed when merging a major version release: https://github.com/chardet/chardet/actions/runs/22563903687/...


The "drop-in" compatibility claims are also just wrong. I ran it on the old test suite from 6.0 (which is completely absent now) with a quick harness (sketched at the end of this comment), and quickly checking:

- the outputs, even if correctly deduced, are often incompatible: "utf-16be" turns into "utf-16-be", "UTF-16" turns into "utf-16-le", etc. FWIW, the old version appears to have been a bit of a mess (having had "UTF-16", "utf-16be" and "utf-16le" among its outputs) but I still wouldn't call the new version _compatible_,

- similarly, all `ascii` results turn into `Windows-1252`,

- sometimes it really does appear more accurate,

- but sometimes it appears to flip between wider families of closely related encodings, like one SHIFT_JIS test (confidence 0.99) turns into cp932 (confidence 0.34), or the whole family of tests that were determined as gb18030 (chinese) are now sometimes determined as gb2312 (the older subset of gb18030), and one even as cp1006, which AFAIK is just wrong.

As for the performance claims, they appear not entirely false - analyzing all files took 20s, versus 150s with v6.0. However, it looks like the library sometimes takes 2s to lazily initialize something, which means that if you use the `chardetect` CLI instead of the Python API, you pay that cost on every invocation and end up several times slower instead.

Oh, and this "Negligible import memory (96 B)" is just silly and obviously wrong.
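For reference, the harness was nothing fancy; roughly this (the corpus path and the directory-per-expected-encoding layout are assumptions about the old test suite):

    import pathlib
    import chardet

    # Walk the old test corpus; layout assumed to be tests/<expected-encoding>/<file>
    for path in sorted(pathlib.Path("tests").rglob("*")):
        if not path.is_file():
            continue
        expected = path.parent.name
        result = chardet.detect(path.read_bytes())
        # Normalize so "utf-16be" vs "utf-16-be" style differences don't count as mismatches
        got = (result["encoding"] or "").lower().replace("-", "")
        if got != expected.lower().replace("-", ""):
            print(f"{path}: expected {expected}, got {result['encoding']} "
                  f"(confidence {result['confidence']:.2f})")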


Yeah, there's some really low-quality code in there if you take a look.


Why do we assume this was created with AI? Is there some marker we can use to detect that?


The amount of code committed per day suggests some kind of automation.

Also, a passionate programmer will usually add a "why this exists" to their README.

I'd be very surprised if this wasn't AI.


When I load the site in my (slightly older) Firefox, I just get some random junk and gibberish (Markov-chain-generated nonsense?).

<bleep> that nonsense!


I suspect you're hitting the page where they're running https://iocaine.madhouse-project.org/

Perhaps you got bot-flagged or something.


That URL gives me a 418 "I'm a teapot" error with no body. I'm guessing they don't like my VPN.

