Hacker News | new | past | comments | ask | show | jobs | submit | MichaelDickens's comments

Economics has the Journal of Comments and Replications in Economics: https://jcr-econ.org/

Altman has personally claimed that we are close to AGI. Therefore, according to him, OpenAI should invoke the self-sacrifice clause.

Of course he claims that; he seeks money from investors. But the charter was likely written by people who took it seriously.

OP says one query uses 0.3 Wh. Driving an electric car for 10 miles = 3,000 Wh, which at ~33 mph is roughly 10,000 Wh per hour.

I'm not sure how many queries are equivalent to an hour of Claude Code use, but maybe one query every 5 seconds, which means an hour of continuous use = 216 Wh, or ~50x less than an electric car.
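The back-of-envelope above, spelled out (all inputs are the guesses from this comment: 0.3 Wh/query, one query every 5 seconds, and ~33 mph to turn Wh/10mi into Wh/hour):

```python
# Claude Code vs. electric car, per hour of use
wh_per_query = 0.3              # OP's estimate for one query
queries_per_hour = 3600 / 5     # assume one query every 5 seconds
claude_wh_per_hour = wh_per_query * queries_per_hour  # 216 Wh

ev_wh_per_10mi = 3000
ev_wh_per_hour = ev_wh_per_10mi * (33.3 / 10)  # ~10,000 Wh at ~33 mph

print(claude_wh_per_hour)                    # 216.0
print(ev_wh_per_hour / claude_wh_per_hour)   # ~46x
```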

OP has a longer article about LLM energy usage: https://hannahritchie.substack.com/p/ai-footprint-august-202...


Beside the point, but 10,000 Wh per hour is kind of an insane unit. It's 10,000 watts. Or 10 kW if you're really into the whole brevity thing.

My point is that Claude might easily be about 50x more energy intensive than normal ChatGPT prompting.

A coding agent runs near-constantly, so of course it'd require a lot more compute than running even, say, a multi-minute query with a thinking model every hour. How much exactly is pretty hard to calculate because it requires some guesswork, but...

For a long input of n tokens from a model with N active parameters, the cost should scale as O(N n^2) (this is due to computing attention - for non-massive n, the O(N n) term is bigger, which is why API costs per token are fixed until a certain point and then start to rise). From the estimates in [1], it's around 40 Wh for n=100k, N=100B. I multiply by 2.5 to account for Opus probably being ~2.5x larger than gpt-4o, and also multiply by 2 to pessimistically assume we're always close to Opus's soft context limit of 200k (it's possible to get a bigger context for extra cost, but I suspect people compact aggressively to avoid it). That gets me 7.2 J/t, which at a rough throughput estimate of 20 t/s gives a power of 144 W. Like a powerful CPU or a mediocre GPU, and still orders of magnitude lower than a car.

[1] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...
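Reproducing the arithmetic in that estimate (every number is a guess from the comment above, not measured data):

```python
# Per-token energy estimate for a long-context Opus session
WH_TO_J = 3600
base_wh = 40              # [1]'s estimate for n=100k tokens, N=100B params
tokens = 100_000
j_per_token = base_wh * WH_TO_J / tokens  # 1.44 J/t
j_per_token *= 2.5        # assume Opus ~2.5x larger than gpt-4o
j_per_token *= 2          # pessimistically assume ~200k context throughout
throughput = 20           # tokens/second
power_w = j_per_token * throughput
print(j_per_token, power_w)  # 7.2 144.0
```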


> It's a piece of software that predicts the most likely token, it is not and can never be conscious.

A brain is a collection of cells that transmit electrical signals and sodium. It is not and can never be conscious.


I think this is a useful way to look at things. We often point out that LLMs are not conscious because of x, but we tend to forget that we don't really know what consciousness is, nor do we really know what intelligence is beyond Justice Potter Stewart's "I know it when I see it" definition. It's helpful to occasionally remind ourselves how much uncertainty is involved here.


Except an LLM actually is a piece of software. And the brain is not what you said.


Which part of what he said is wrong?

> A brain is a collection of cells that transmit electrical signals and sodium. ...

That it is a collection of cells? Or that they transmit electrical signals and sodium?

Or do you feel that he's leaving out something important about how it works (like generated electrical fields or neural quantum effects)?


> I think agents should manage their own context too.

My intuition is that this should be almost trivial. If I copy/paste your long coding session into an LLM and ask it which parts can be removed from context without losing much, I'm confident that it will know to remove the debugging bits.
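A minimal sketch of that idea: number the transcript chunks, ask a model which ones are safe to drop, and filter. `ask_llm` here is a hypothetical stand-in for whatever completion call you use, not a real API:

```python
# Hypothetical context-pruning helper; `ask_llm` is a stand-in, not a real API.
def prune_context(chunks: list[str], ask_llm) -> list[str]:
    numbered = "\n".join(f"[{i}] {c[:200]}" for i, c in enumerate(chunks))
    prompt = (
        "Below is a numbered coding-session transcript. Reply with the "
        "comma-separated indices of chunks that can be removed without "
        "losing information needed going forward (e.g. dead-end debugging "
        "detours):\n" + numbered
    )
    reply = ask_llm(prompt)
    drop = {int(i) for i in reply.split(",") if i.strip().isdigit()}
    return [c for i, c in enumerate(chunks) if i not in drop]
```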


I generally do this when the agent gets stuck in a test loop or whatever after I've injected some later requirement and tweaked. Once I hit a decent place, I have the agent summarize, discard the branch (it's part of the context too!), and start with the new prompt.


In my experience virtually every magazine is like this, not just Quanta. I open an article hoping to learn something about some scientific or mathematical discovery, but instead the article is almost entirely about the discoverer.

For learning about actual discoveries, YouTube is much better (Veritasium, Numberphile, 3Blue1Brown, ...).


> Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":

Can you imagine a world where Anthropic says "we are changing our RSP; we think this increases AI risk, but we want to make more money"?

The fact that they claim the new RSP reduces risk gives us approximately zero evidence that the new RSP reduces risk.


Well, the original claim of risk was also evidence-free.

It’s fair because the folks who are making the claim never left the armchair.


That misses my point: the evidence is the extensive argumentation provided for why it reduces risk. To quote Karnofsky:

> I wish people simply evaluated whether the changes seem good on the merits, without starting from a strong presumption that the mere fact of changes is either a bad thing or a fine thing. It should be hard to change good policies for bad reasons, not hard to change all policies for any reason.


It's both. Saving time is a form of status signaling. Professionalism usually entails spending longer on something than is optimal for effective communication, which is a way of signaling "my time is less valuable than yours". Writing short messages with grammatical errors is a way of signaling "my time is more valuable than your comprehension".


> On the positive side of this, research papers by competent people read very clearly with readable sentences, while those who are afraid that their content doesn't quite cut it, litter it with jargon, long complicated sentences, hoping that by making things hard, they will look smart.

I often find that to be true. Another important factor is that research skill is correlated with writing skill. Someone who's at the top of their field is likely to be talented in other ways, too, and one such talent is making complex topics easier to understand.


I would think that, by default, noise would not have a bias? Adding noise doesn't change the mean, it just increases the variance, right?


The Wikipedia page on this is not bad: https://en.wikipedia.org/wiki/Regression_dilution


It pushes confidence bounds closer to the null hypothesis.
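A quick simulation (assuming numpy) shows the regression-dilution effect: noise added to the outcome doesn't bias the slope, but measurement noise in the predictor attenuates it toward zero by the factor var(x)/(var(x)+var(noise)):

```python
# Regression dilution: noise in the predictor biases the slope toward zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)       # true slope = 2
x_noisy = x + rng.normal(size=n)       # measurement noise on x, variance 1

slope_clean = np.polyfit(x, y, 1)[0]        # ~2.0
slope_noisy = np.polyfit(x_noisy, y, 1)[0]  # ~1.0 = 2 * 1/(1+1)
print(slope_clean, slope_noisy)
```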

