hyperbovine's comments

It's buggier and less functional.

quite a statement considering which ones were mentioned!

It's actually a slightly oblong wheel vs a round one

At Chabot Science Center there is still (and, presumably, will always be) the Ask Jeeves Planetarium. Makes you think about the transiency of it all.

> Just genuinely having 10 worktrees perpetually in parallel and cycling between them in between agent responses. Again, not necessarily bad in itself, but can exponentially consume credits.

I'm pretty sure that growth is linear.


If you think about it, production quality is probably log-linear in tokens spent, so the token growth required to keep raising quality may well be exponential.
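Back-of-envelope version of that (a toy sketch, assuming quality really does go like the log of tokens spent, which is pure assumption): inverting q = a*log(t) gives t = exp(q/a), so each fixed increment in quality costs a constant multiple of tokens.

    import math

    # Toy model (assumption, not a measured law): quality q scales
    # log-linearly with tokens t, i.e. q = a * log(t). Inverting:
    # t = exp(q / a), so the tokens needed grow exponentially in the
    # quality level you demand.
    a = 1.0
    for q in range(1, 6):
        tokens = math.exp(q / a)
        print(f"quality {q} -> ~{tokens:,.0f} tokens")

Under this model, each +1 in quality multiplies the token bill by e ≈ 2.7.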

Not quite the same scenario, but it's already plausible to have a situation where every subagent is allowed to spawn multiple subagents, in which case we'd have literally exponential credit consumption growth...
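To put numbers on it (a toy sketch with made-up branching factor and depth): if every agent spawns b subagents down to depth d, the total agent count is the geometric series 1 + b + b^2 + ... + b^d, which is exponential in d.

    # Toy model of recursive subagent spawning: each agent spawns
    # `branching` subagents, down to `depth` levels below the root.
    # Total agents = sum of branching**k for k = 0..depth.
    def total_agents(branching: int, depth: int) -> int:
        return sum(branching ** k for k in range(depth + 1))

    print(total_agents(3, 4))  # 1 + 3 + 9 + 27 + 81 = 121
    print(total_agents(3, 8))  # 9,841 agents, each burning credits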

"i have to burn $10k in tokens to meet my end-of-month work quota. spawn ten sub-agents each of which is allowed to spawn as many sub-agents as it likes to create an analysis of the code in these files based on the precepts of the 13th century German philosopher Noodleheinz".

I think that you send the entire conversation with every request.

As long as you stay under the 1-hour caching TTL for your open threads, I guess your marginal cost is linear.
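Rough arithmetic behind that (a sketch with made-up numbers, ignoring output tokens): if turn i resends the whole history, the marginal input cost of turn i grows linearly with i and the cumulative cost is quadratic; prompt caching doesn't change that shape, it just bills the resent prefix at a discounted read rate (Anthropic advertises roughly 0.1x the base input price for cache reads, last I checked).

    # Sketch: cumulative input-token cost over n turns. Each turn adds
    # `per_turn` new tokens and resends the full prior history. With
    # caching, the resent history bills at a discounted read rate
    # (assumed 0.1x here); marginal cost per turn is still linear in
    # the turn number, just with a ~10x smaller slope.
    def cumulative_cost(turns: int, per_turn: int = 1_000,
                        cache_discount: float = 1.0) -> float:
        total = 0.0
        for i in range(1, turns + 1):
            new = per_turn               # this turn's new tokens
            resent = per_turn * (i - 1)  # full history, resent
            total += new + resent * cache_discount
        return total

    print(cumulative_cost(50))                      # 1,275,000 token-units
    print(cumulative_cost(50, cache_discount=0.1))  # 172,500 token-units

Either way it's polynomial, not exponential, so long as each thread stays a single linear conversation.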

This is me on a weekday flicking between Ghostty tabs to enter “stand by” every ~45 mins.


Anthropic changed the cache TTL to five minutes, back in March.

Thanks, didn’t realise the API and Claude Code had different TTLs.

Wait, Minneapolis is definitely very cold for about half the year.

If so, that would be big; they haven’t been able to successfully pretrain a new model in close to two years (since 4o).


Same. The tone is really off. Here is a response I just got from Gemini 3.1: "Your simulation results are incredibly insightful, and they actually touch on one of the most notoriously difficult aspects of ..." It's pure bullshit: my simulation results are in fact broken, and GPT spotted it immediately.


The railroad buildout was a lot more, idk, tangible. Most of that money was spent employing millions of people to smelt iron, lay track, build bridges, blow up mountains, etc. It’s a lot more exciting than a few freight loads of overpriced GPUs.


Also a good point - railroads for sure brought a lot more optimism.

LLMs + data centres, on the other hand...


I understood the first 7 words.



The only regularity I can discern in contemporary online debates about LLMs is that for every viewpoint expressed, with probability one someone else will write in with the diametrically opposite experience.

Today it’s my turn to be that person. Large scientific code base with a bunch of nontrivial, handwritten modules accomplishing tasks that are distinct but structurally similar in terms of the underlying computation. Pointed GPT Pro at it, told it what new functionality I wanted, and it churned away for 40 minutes and completely knocked it out of the park. Estimated time savings of about 3-4 weeks. I’ve done this half a dozen times over the past two months and haven’t noticed any drop-off or degradation. If anything it got even better with 5.4.


Thanks for the counterpoint, interesting to hear that things are better than I have experienced so far. :)


they are not. "scientific code" should give you a hint.


Ooh, I feel the burn. Care to elaborate? Are you just negging science in general, or ... ?

