
I agree in broad strokes. If I am incapacitated, that is when things like durable power-of-attorney, medical advance directives, and living trusts come into play.

The important thing is to ensure your computer is not a single point of failure. Beyond losing a password, you could have theft, flood, fire, etc. Or for online accounts, you are one vendor move away from losing things. None of these should be precious and impossible to replace. I've been on the other side of this, and I think the better flow is to terminate or transfer accounts, and wipe and recycle personal devices.

A better use of your time is to set up a disaster-recovery plan you can write down and share with people you trust. Distribute copies of important data to make a resilient archive. This could include confidential records, but shouldn't really need to include authentication "secrets".

Don't expect others to "impersonate" you. Delegate them proper access via technical and/or legal methods, as appropriate. Get some basic legal advice and put your affairs in order. Write down instructions for your wishes and the "treasure map" to help your survivors or caregivers figure out how to use the properly delegated authority.


I think the "genie" that is out of the bottle is that there is no broad, deeply technical class who can resist the allure of the AI agent. A technical focus does not seem to provide immunity.

In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects.

Though enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort.

So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking. I.e. some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us?

Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means they will become commoditized and able to run locally on consumer-level equipment. In the long run, it won't require a financial bubble or dedicated powerplants to run, nor be limited to priests in high towers. It will be pervasive like wireless ear buds or microwave ovens, rather than an embodiment of capital investment.

The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities.


The first half of your post, I broadly agree with.

The last part...I'm not sure. The idea that we will be able to compute-scale our way out of practically anything is so taken for granted these days that many people seem to have lost sight of the fact that we have genuinely hit diminishing returns, first in general-purpose compute scaling (the end of Moore's Law, etc.), and more recently in the ability to scale LLMs. There is no longer any guarantee that we can improve training performance, at the very least for the largest models, by more than a few percent, no matter how much new tech we throw at it. At least not until we hit another major breakthrough (either hardware or software), and by their very nature those cannot be counted on.

Even if we can squeeze out a few more percent, or even a few more tens of percent, of optimization on training and inference, to the best of my understanding that's still orders of magnitude short of what it would take to run the full-size major models on consumer-level equipment.
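
To put rough numbers on that (the parameter count and quantization level below are assumptions for illustration, since frontier model sizes aren't public):

    # Back-of-envelope sketch; parameter count and quantization are assumed,
    # not published specs for any particular model.
    params = 1e12            # assume ~1 trillion parameters for a "full-size" model
    bytes_per_param = 1      # assume aggressive 8-bit quantization
    consumer_vram_gb = 24    # memory on a high-end consumer GPU today

    weights_gb = params * bytes_per_param / 1e9
    print(f"weights alone: ~{weights_gb:,.0f} GB")                           # ~1,000 GB
    print(f"gap vs one consumer card: ~{weights_gb / consumer_vram_gb:.0f}x")  # ~42x

Tens of percent of optimization don't close a roughly 40x memory gap just for the weights, let alone the compute gap.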


This is so objectively false. Sometimes I can't believe I'm even on HN anymore, with the level of confidently incorrect assertions made.

You, uh, wanna actually back that accusation up with some data there, chief?

Compare models from one year ago (GPT-4o?) to models from this year (Opus 4.5?). There are literally hundreds of benchmarks and metrics you can find. What reality do you live in?

Comparing two data points gets you a line.

If you want to prove that there are not diminishing returns, you need to add at least one more data point in there.

You're really not showing any evidence that you even understand how this kind of math works, let alone that my statement is false.
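
To spell the math out with toy numbers (the scores below are entirely made up, purely to illustrate the shape of the argument):

    import numpy as np

    # Hypothetical benchmark scores, made up purely to illustrate the argument.
    years  = np.array([0, 1, 2])
    scores = np.array([60.0, 80.0, 88.0])   # +20 in year one, +8 in year two

    # Two points always define a line exactly, so two points alone can never
    # show whether returns are diminishing.
    slope_from_two = (scores[1] - scores[0]) / (years[1] - years[0])
    print("slope implied by two points:", slope_from_two)   # 20.0 per year

    # A third point is the minimum needed to compare successive gains.
    print("successive gains:", np.diff(scores))             # [20.  8.] -> slowing

Two points give you a slope; only a third (and ideally more) tells you whether that slope is holding up.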


A lot of (older than me) enthusiasts I knew got an MSDN subscription even though they weren't really developing apps for Windows. This gave them a steady stream of OS releases on CD-ROMs, officially for testing their fictional apps. So, they were often upgrading one or more systems many times rather than buying new machines with a bundled new OS.

Personally, yeah, I had Windows 3.0/3.11 on a 386. I think I may have also put an early Windows NT (beta?) release on it, borrowing someone's MSDN discs. Not sure I got much value out of it beyond seeing the "pipes" software OpenGL screensaver. Around '93-94, I started using Linux, and after that it was mostly upgrading hardware in my Linux Box(en) of Theseus.

I remember my college roommate blowing his budget upgrading to a 180 MHz Pentium Pro, and he put Windows 95 on it. I think that was the first time I heard the "Start me up!" sound from an actual computer instead of a TV ad.

After that, I only encountered later Windows versions if they were shipped on a laptop I got for work, before I wiped it to install Linux. Or eventually when I got an install image to put Windows in a VM. (First under VMware, later under Linux qemu+kvm.)


It's these discussions where I realize people use phones in such different ways.

I abandoned Nova last year when I read about this looming problem. I found that Fossify Launcher beta (from F-Droid) works well enough for me on my Pixel 8a.

I don't really need much out of a launcher. My main goal was to have one like my older Android and not be forced to have a search bar or assistant triggers on my home screen.

All I need from the home screen is to be able to place basic widgets like clock and calendar and shortcuts for the basic apps I use frequently. A plain app drawer is fine for the rest, because I don't really install that many apps and instead disable/remove many. My app drawer shows 35 apps and has several blank rows remaining on the first page with 5 icons per row.


Since well before the pandemic, I've had dual 28" 4K screens on my desk. When ordering them, I liked the fact that they had the same pixel pitch as my 14" 2K laptop screen. One monitor was like a borderless 2x2 grid of those laptop screens.

I ended up repositioning things so that one was in front of the keyboard as a primary screen and the other was further off to the side as a secondary dumping ground. I neglected the second display most of the time, so it was just a blank background. Eventually, I noticed I wasn't even using the entire primary screen. I favored a sector of it and pushed some windows off to the edges.

Ironically, with work from home, I've started roaming around the house with the laptop instead of staying at my desk. So I'm mostly back to working in a 14" screen with virtual desktops, like I was 20 years ago. I am glad that laptops are starting to have 16:10 again after the long drought of HDTV-derived screens.


The popular HTTP validation method has the same drawback whether it's validating DNS names or IP addresses? Namely, if you can compromise routes to hijack traffic, you can also hijack the validation requests. Right?
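
For anyone unfamiliar: HTTP-01 validation just means the CA fetches a token from a well-known path over plain HTTP on port 80, so whoever actually receives that traffic can answer the challenge. A minimal sketch of the responder side (the token and key-authorization values are placeholders, not real ACME data):

    # Sketch of an ACME HTTP-01 responder; TOKEN and KEY_AUTH are placeholders.
    # The point: any party that receives traffic for the target address can serve this.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TOKEN = "example-token"                 # placeholder; the CA issues one per order
    KEY_AUTH = "example-token.thumbprint"   # placeholder; token + account key thumbprint

    class Challenge(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == f"/.well-known/acme-challenge/{TOKEN}":
                self.send_response(200)
                self.end_headers()
                self.wfile.write(KEY_AUTH.encode())
            else:
                self.send_error(404)

    HTTPServer(("", 80), Challenge).serve_forever()  # plain HTTP on port 80

So a route hijack that redirects port 80 traffic for the target lets the attacker answer the challenge and get a certificate issued.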


Yes, there have been cases where this has happened (https://notes.valdikss.org.ru/jabber.ru-mitm/), but it's really now in the realm of:

1) How to secure routing information: some say RPKI, some argue that's not enough and are experimenting with something like SCION (https://docs.scion.org/en/latest/)

2) Principal-Agent problem: jabber.ru's hijack relied on (presumably) Hetzner being compelled to do it by German law enforcement under the powers provided by the German Telecommunications Act (TKG)


> some say RPKI

Part of the issue with RPKI is that it's taking time to fully deploy. Not as glacial as IPv6, but slower than it should be.

If there were 100% coverage, RPKI would have a good effect.


Well, not exactly, in that there are cultivar and farm differences. In that way it is a little bit like grape wine, where different processing can produce very different wines from the same grapes, but there are also differences in the grapes themselves that can come through within a style.


In a way, yes; a Wuyi rock oolong will be different from a high mountain Taiwanese oolong. But with what most people think of as green vs. black tea, they don't realize it's the exact same plant. Camellia sinensis has only two main varieties, var. sinensis (the main one) and var. assamica.


Right. A lot of people also don't realize red and white wines often come from the same red grapes.


This is quite incorrect. Of the top 10 planted wine varietals in the world [0], all ten are red grapes to red wine or white grapes to white wine:

Top grape varieties by planted hectares:
1. Cabernet Sauvignon - red grape, red wine.
2. Merlot - red grape, red wine.
3. Tempranillo - red grape, red wine.
4. Airén - white grape, white wine.
5. Chardonnay - white grape, white wine.
6. Syrah - red grape, red wine.
7. Grenache Noir - red grape, red wine.
8. Sauvignon Blanc - white grape, white wine.
9. Pinot Noir - red grape, red wine.
10. Trebbiano Toscano / Ugni Blanc - white grape, white wine.

There are some wines produced from red grapes that are not left on the skins, so no red colour is imparted, but they are really not common, and the result is most of the time closer to a light rosé than to what would be considered a white wine. Perhaps the only style that is semi-frequently encountered is some French Blanc de Noirs, various champagne examples being the most common of these. (And of course standard champagne itself, but I am not sure that is really considered a white wine.) Still, rare. It is also not possible to produce a red wine from a white grape; there is no colour in the skin to impart.

[0]: https://londonwinecompetition.com/en/blog/insights-1/how-the...


Thanks for the correction!

This was some trivia I learned long ago, but I guess without enough context for how often that process is done. Clearly, I am not a wine expert...


I feel like some of these proponents act as if a poet's goal were simply to produce an anthology of poems, so the poet should be happy to act as publisher and editor, sifting through the outputs of some LLM stanza generator.

The entire idea of using natural language for composite or atomic command units is deeply unsettling to me. I see language as an unreliable abstraction even with human partners I know well. It takes a lot of work to communicate anything nuanced, even with vast amounts of shared context. That's the last thing I want to add between me and the machine.

What you wrote further up resonates a lot with me, right down to the aphantasia bit. I also lack an internal monologue. Perhaps because of these, I never want to "talk" to a device as a command input. Regardless of whether it is my compiler, smartphone, navigation system, alarm clock, toaster, or light switch, issuing such commands is never going to be what I want. It means engaging an extra cognitive task to convert my cognition back into words. I'd much rather have a more machine-oriented control interface where I can be aware of a design's abstraction and directly influence its parameters and operations. I crave the determinism that lets me anticipate the composition of things and nearly "feel" transitive properties of a system. Natural language doesn't work that way.

Note, I'm not against textual interfaces. I actually prefer the shell prompt to the GUI for many recurring control tasks. But typing works for me and speaking would not. I need editing to construct and proofread commands, which may not come out of my mind and hands with the linearity the command buffer assumes. I prefer symbolic input languages where I can more directly map my intent into the unambiguous, structured semantics of the chosen tool. I also want conventional programming syntax, with unambiguous control flow and computed expressions, for composing command flows. I do not want the vagaries of natural language interfering here.


If the threat is observation and tracking, you really want to turn off all radios, right? Cellular, wifi, bluetooth, NFC. Otherwise you are hoping some anonymization/obfuscation is preventing your signal from being correlated to those captured at other locations and times.

If the threat is self-incrimination after the fact, you also don't want to carry any device that is determining and persisting its own location info. Don't track your protest as a fitness activity on your GPS sports watch...


In my western US dialect, it is abnormal to use it as a subject-verb-object (SVO) construct. I have to guess at intent.

For me, there are three idiomatic forms:

1. Using "lag behind" gives a target/reference as a prepositional relationship, not as an object of the verb "to lag".

2. Using "caused to lag" allows one to specify a causal agent, but again not as an object of the verb "to lag".

3. Using "lag" alone is a subject-verb construct, leaving an implicit target/reference from context expectations. A coach or supervisor might scold someone for lagging.

As a bit of a tangent, I actually wonder if the etymology of "to lag" is more Germanic than some people assume. The verb lagern has many uses for placing, storing, and leaving behind. It's where our English concept of a "lager" beer comes from too, referencing the way the beer is fermented in (cold) storage. If this linguistic connection remained fresh, we might think of an SVO construct of lagging as the opposite of the intent in this article. The leader would lag the follower by leaving them behind!

