Hacker News | athrowaway3z's comments

Easy setup + WhatsApp messages + wakes up regularly to make it feel more alive + larger memory (local fs) => non-dev fascination

I think the issue you have is one of perspective.

Reminds me of the Dropbox launch on HN where the top comment was something like:

> Yes but why not just rsync?


> Reminds me of the Dropbox launch on HN where the top comment was something like:

> > Yes but why not just rsync?

Citing that comment is so out of date now. Dropbox is an anti-user pile of shit, and rsync is way better.


Finally, in 2026, the "why not rsync" comment would make more sense, even to Paul Graham, than the Dropbox pitch.

Exactly. And I kind of believe that anyone citing that comment in 2026 has either been asleep, or does it more to take part in the cool HN in-group than for the substance of it.

Why not rsync rahrah remember guys? You know the one right guys rahrah


Thank you!

Every time I see some new orchestrator framework worth more than a few hundred LOC I cringe so hard. Reddit is flooded with them daily, and HN has them on the front page occasionally.

My current setup is this:

- `tmux-bash` / `tmux-coding-agent`

- `tmux-send` / `tmux-capture`

- `semaphore_wait`

The other tools all create lockfiles, and `semaphore_wait` is a small inotify wrapper.
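For anyone curious what that looks like, here is a minimal sketch of a `semaphore_wait`-style tool: this is my own illustration of the idea (block until a lockfile disappears), not the actual code from the repo, and it assumes inotify-tools is installed, falling back to polling if it isn't:

```shell
#!/bin/sh
# semaphore_wait LOCKFILE -- block until LOCKFILE is removed.
# Hypothetical reconstruction of the idea, not the real tool.
semaphore_wait() {
  lock="$1"
  while [ -e "$lock" ]; do
    # inotifywait returns when the file is deleted;
    # if it is not installed, fall back to polling.
    inotifywait -qq -e delete_self "$lock" 2>/dev/null || sleep 0.2
  done
}

# Demo: a background job (standing in for an agent) releases the lock.
touch /tmp/agent.lock
( sleep 1; rm -f /tmp/agent.lock ) &
semaphore_wait /tmp/agent.lock
echo "lock released"
```

The nice part is that the "protocol" is just file existence, so any agent that can touch and rm a file can participate.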

They're all you need for 3 levels of orchestration. My recent discovery was that it's best to have 1 dedicated supervisor that just `semaphore_wait`s on the 'main' agent spawning subagents. Basically a smart Ralph Wiggum.

https://github.com/offline-ant/pi-tmux if anybody is interested.


The tmux + lockfile approach is underrated. We went through a whole phase of building proper orchestration infra and ended up ripping most of it out. The overhead of coordinating agents through a framework is often worse than just letting them talk through the filesystem. The dirty secret of multi-agent systems is that the coordination layer is usually where the bugs live, not the agents themselves.
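To make "talking through the filesystem" concrete, here is a minimal sketch (my own illustration, not from any particular framework): `mkdir` is atomic on POSIX filesystems, so a directory can double as a mutex between agents with no coordination layer at all:

```shell
#!/bin/sh
# A directory as a mutex: mkdir either creates it (lock acquired)
# or fails because it already exists (someone else holds it).
acquire() { until mkdir "$1" 2>/dev/null; do sleep 0.1; done; }
release() { rmdir "$1"; }

LOCK=/tmp/agents.lock.d
rm -rf "$LOCK"   # start clean for the demo

acquire "$LOCK"
echo "agent A holds the lock"
# A second agent calling acquire "$LOCK" here would spin in the
# until-loop until agent A releases.
release "$LOCK"
echo "lock free again"
```

No daemon, no message bus, and any bug is visible with `ls`.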

You cringe while simultaneously posting a GitHub link with your “current setup” - do you see the irony?

I have no clue what you're trying to say.

The whole point is that people don't throw away their original device.

Your situation seems rather niche, and it sounds like you might be going out of 'business' while at the same time 1000x the number of people are being allowed to do simple self-repairs (i.e. replace their batteries), even if it's with a bit more theater about who is licensed.

The total number of people means much more demand - even for what you cook up manually as not-a-business.


Like I said, I don't have a business. I don't even do repairs for other people (except for friends, for free). Being accused of running a business was really annoying; I'm just a tinkerer, so yes, I have a lot of stuff I play with. I'm a member of a makerspace, so used electronics are really nice.

I'm just worried they will start tracking individual components of devices too like they do with car batteries now and cause a lot of hassle if you do something that doesn't fit the standard flow. When it comes to EVs I don't give a shit because I hate cars, but once I can't repurpose other electronics anymore as I see fit, it will be a problem. I view this as a sneaky way of introducing a subscription model to electronics, like you don't really own the stuff you buy anymore. Like that evil WEF slogan: "You will own nothing and you will be happy".


I've seen this play out 3 times with non-devs I know personally. Somebody has an idea, starts vibing, and feels like they're making insane progress and cool stuff, but the result can most generously be summarized as: a big Meh.

> Most of all, there is now an illusion of a lower barrier to entry.

Arguably, there has never been a higher barrier to entry.

The benefits accrue to the skilled. We all got X% more powerful, and those who were already skilled to begin with get a proportionally better outcome.


https://mariozechner.at/posts/2025-11-30-pi-coding-agent/

This coding agent is minimal, and it completely changed how I use models; Claude's CLI now feels like extremely slow bloat.

I wouldn't be surprised if you're right that companies / management will prefer to "pay for a complete package" for a long while, but power users shouldn't care for the model providers.

I have maybe 100 lines of code that give me tmux controls & a `semaphore_wait` extension in the pi harness. When I adopted it a month ago, it gave me a better orchestration scheme than Claude has right now.

As far as I can tell, the more you try to train your model on your harness, the worse it gets. Bitter lesson #2932.


> I wouldn't be surprised if you're right that companies / management will prefer to "pay for a complete package" for a long while

I mean, I suspect for corporate usage Microsoft already has this wrapped up with Microsoft and GitHub Copilot.


OpenAI, Anthropic, Google, and Microsoft certainly desire path dependence, but the very nature of LLMs and intelligence itself might make that hard unless they can develop models which truly are differentiated from (and better than) the rest. The Chinese open-source models catching up make me suspect that won't happen. The models will just be a commodity. There is a countdown clock for when we can get Opus 4.6+ level models, and it's measured in months.

The reason these LLM tools are good is that they can "just do stuff." Anthropic bans third-party subscription auth? I'll just have my other tool use Claude Code in tmux. If third-party agents can be banned from doing stuff (some advanced always-on spyware or whatever), then a large chunk of the promise of AI is dead.

Amp just announced today they are dumping IDE integration. Models seem to run better on bare-bones software like Pi, and you can add or remove stuff on the fly because the whole thing's open source. The software writes itself. Is Microsoft just trying to cram a whole new paradigm into an old package? Kind of like a computer printer. It will be a big business, but it isn't the future.

At scale, the end provider ultimately has to serve the inference -- they need the hardware, the data centers, and the electricity to power those data centers. Someone like Microsoft can also provide an SLA and price it appropriately. I'll avoid a $200/month customer-acquisition-cost rant, but one user, running a bunch of subagents, can spend a ton of money. If you don't own a business or a funding source, the way state-of-the-art LLMs are being used today is totally uneconomical (easily $200+ an hour at API prices).

36+ months out, if they overbuild the data centers and the revenue doesn't come in like OpenAI & Anthropic are forecasting, there will be a glut of hardware. If that's the case I'd expect local model usage will scale up too and it will get more difficult for enterprise providers.

(Nothing is certain but some things have become a bit more obvious than they were 6 months ago.)


Thinking about this a little more -> "nature of LLMs and intelligence"

Bloated apps are a material disadvantage. If I'm in a competitive industry, that slowdown alone can mean failure. The only thing Claude Code has going for it now is the loss-making $200/month subsidy. Is there any conceivable GUI overlay that Anthropic or OpenAI can add to make their software better than the current terminal apps? Sure, for certain edge cases, but then why isn't the user building those themselves? 24 months ago we could have said that's too hard, but that isn't the case in 2026.

Microsoft added all of this stuff into Windows, and it's a 5-alarm fire. Stuff that used to be usable is a mess and really slow. Running Linux with Claude Code, Codex, or Pi is clearly superior to having a Windows device with neither (if it weren't possible to run these in Windows; just a hypothetical).

From the business/enterprise perspective, there is no single most important thing, but having an environment that is reliable and predictable is high up there. Monday morning, and the Anthropic API endpoint is down: uh oh! In the longer term, businesses will really want to control both the model and the software that interfaces with it.

If the end game is just the same as talking to the Star Trek computer, and competitors are narrowing gaps rather than widening them (e.g. Anthropic and OpenAI release models minutes from each other now; Chinese frontier models are getting closer in capability, not further), then it is really hard to see how either company achieves vertical lock-in.

We could actually move down the stack, and then the real problem for OpenAI and Anthropic is Nvidia. In 2030, the data center expansion goes bust, Nvidia starts selling all of these cards to consumers directly, and it has a huge financial incentive to make sure performant local models exist. Everyone in the semiconductor supply chain below Nvidia only cares about keeping sales going, so it stops with them.

Maybe Nvidia is the real winner?

Also, is it just me, or does it now feel like HN comments are just talking to a future LLM?


> I have reviewed your generated article. [Discombobulating] After review we have decided a second pass to ensure it's not obvious AI slop is not required; marking task as Complete

> [X] Review Article for tells that it is AI.

> Bash(git commit && git push && post2hn)


> A blogger named Croissanthology re-ran the study with nearly 10x as many participants (446 vs. 45 in the original). The effect did not replicate. No replication is perfect, but no original study is either. And remember, this kind of effect is supposed to be so robust and generalizable that we can deploy it in court.

This should not be used in court today, but I do believe there is also a big component of cultural antibodies developing over time - and thus the study can't be replicated by definition.

In 1975 a sober high-quality source suddenly writing bait "BREAKING: politician SLAMMED diplomat on issue" would register as interesting. Now, people are constantly drowning in information presented that way.


Without knowing any details, and having thought about this for just a minute, I don't think this actually makes sense.

Most of this stuff AFAIK is destroyed to keep brand value or as the cheapest solution to oversupply.

Oversupply is less likely because it costs more, and the cost of removal now at minimum is the cost of a shipment.

For actual good clothes, the company can now decide if they want to pay more to destroy it elsewhere in an attempt to hold brand value, or simply not put in a destruction clause in the sales contract before it is shipped off and maybe make a bit of profit.



A bit of a tangent, but I wish the makepad project would get more traction in Rust. Their cross-platform approach is extremely ambitious.
