> We assert that artificial intelligence is a natural evolution of human tools.
While the paper never actually asserts this anywhere but the abstract, a Whiggish narrative of a genuinely unprecedented technology, one that can replace and supersede human "labour" altogether (one is reminded of "The Evolution of Human Science" by Ted Chiang), sounds naive at best and dangerous at worst.
A common error in historical thinking is to see human tools as points on a straight, upward line from time to progress. But until AI, these tools had the common property of enhancing human cognition, because they couldn't do the thinking _for you_. AI can do just that, and for all the benefits it brings, seeing it simply as the next step in the "natural evolution of human tools" is alarmingly disarming coming from frontier thinkers.
> until AI, these tools had the common property of enhancing human cognition, because they couldn't do the thinking for you
I have a different take, centered around this idea: Not everyone was into thinking about everything all the time even before AI. I'd say most people most of the time outsourced actual thinking to someone else.
1) Reading non-fiction books:
Not all books, even the non-fiction ones, necessarily require any thinking by the reader. A book that narrates history, for example, requires much less thinking than something like "The Road to Reality" or "Gödel, Escher, Bach."
Most of us outsourced the thinking and the historical method to the authors of the history book and just passively consumed some facts or factoids. Some of us memorize these factoids well, but that's not thinking, just knowledge storage.
Philosophically, what's the difference between consuming books this way and reading an LLM's output?
2) Reading research papers:
Most people don't read any research papers at all. No thinking there.
Most people don't head to some forum to ask about the latest research either.
Also, researchers in most fields don't regularly do public outreach.
Indeed, an LLM may actually be the only pathway for a lot of people to get at least _some_ knowledge and awareness of the latest research.
Those of us in scientific, engineering, humanities, or healthcare fields may read anywhere from a few papers to many.
But only a small subset reads very critically, looking for data errors, inconsistencies, etc.
For most of us, the knowledge and techniques may be beyond our current understanding, and we may have no interest in understanding them in the future either.
Most of us are just interested in the observations, conclusions, or applications. Those may involve some thinking, but they may also involve none: just blind acceptance of the paper's claims and possible applications.
3) Coding:
Again, deep thinking is done by only a small set of programmers, like the ones who write kernels, compilers, distributed algorithms, and complex libraries.
But most are just passive consumers who read some examples online or ask Stack Overflow or Reddit for direct answers.
Some even outsource their coding entirely to gig sites. Not much thinking there beyond pricing and scheduling.
What's the difference between that and asking an LLM or copying an LLM's answers? At least LLMs patiently explain their code, unlike salty SO users!
----
IMO, most people weren't doing much thinking even pre-AI.
Post-AI, it's true that some people who did do some thinking may now do less of it.
But it's equally true that those people who weren't doing much thinking due to access or language barriers can actually start doing some thinking now with the help of AI.
> I'd say most people most of the time outsourced actual thinking to someone else.
Someone else being human, until now. That may change. That's the whole point!
But I concur with your general point on the upstream production of thinking and knowledge. Indeed, such elite thinkers are what economic historians call "upper-tail human capital". That Terence Tao, one of them, gives license to the kind of thinking that accepts AI as a simple tool, one that does not fundamentally break our relationship with technology, is exactly what I am protesting.
> But it's equally true that those people who weren't doing much thinking due to access or language barriers can actually start doing some thinking now with the help of AI.
That holds only as long as thinking remains a comparative advantage of our species, I suppose!
> 1. Let's be clear: what you're describing is faith.
And what I am countering is not? You can easily argue technology generally improves lives. Is it optimal? Beyond abuse? No. Does it have unintended consequences? Yes. Are we better off without it? No.
> 2. And what are you smoking to assert "all technology before ... AI [improved] most people's lives"?
OK, social media is an unmitigated disaster, I'll give you that.
But, for example, nukes have given us the longest period of relative peace in history.
>> 1. Let's be clear: what you're describing is faith.
> And what I am countering is not? You can easily argue technology generally improves lives. Is it optimal? Beyond abuse? No. Does it have unintended consequences? Yes. Are we better off without it? No.
The faith part is assuming it will always turn out as well as it has in the past. But every technology is different, and the patterns of the last 100-200 years are not some fixed law.
Look at each technology and its characteristics with fresh eyes, and think of the consequences without that assumption. That's not faith.
> But, for example, nukes have given us the longest period of relative peace in history.
I think celebrating nuclear weapons is a bit premature. A non-zero chance of obliteration over a long enough period of time becomes a near-certainty.
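To put numbers on that: a back-of-the-envelope sketch, assuming (purely for illustration, not as an estimate) a 1% chance of catastrophe per year.

```python
# Chance of at least one catastrophe over n years is 1 - (1 - p)^n,
# where p is an assumed, purely illustrative per-year probability.
p = 0.01  # 1% per year -- an assumption for the sake of the arithmetic
for n in (10, 50, 100, 200):
    print(f"{n} years: {1 - (1 - p) ** n:.0%}")
# 10 years: 10%, 50 years: 39%, 100 years: 63%, 200 years: 87%
```

Even a small annual risk compounds: at these assumed odds, two centuries of deterrence looks less like settled peace and more like a coin you keep flipping.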
> it's becoming very likely the user is becoming delusional or may engage in dangerous behavior.
Talking to AI might be the very thing that keeps those tendencies below the threshold of dangerous. Simply flagging long conversations would not be a way to deal with these problems, but AI learning how to talk to such users may be.
As an avid coffee-lover, I am sympathetic to the efforts to make coffee-brewing more of a science than an art. After all, it seems that doing so improves science just as much! https://today.ucsd.edu/story/coffee-and-turbulence
That said, I am less sympathetic to the concept of a "perfect cup", which seems to make coffee an exclusively competitive endeavour. I mean, why not enjoy coffee-brewing as one might enjoy tinkering?
People have widely varying tastes in coffee (as in everything). There is no "perfect cup." But I think they might be talking about improving extraction from the ground coffee. This might not be so relevant to the home brewer, but could be useful for, say, manufacturers of coffee machines.
I see the hyperbole is the point, but surely what these machines do is literally predict? The entire prompt-engineering endeavour is to get them to predict better and more precisely. Of course, these are not perfect solutions; they are stochastic, after all, just not unpredictably so.
Prompt engineering is voodoo. There's no sure way to determine how well these models will respond to a question. Of course, giving additional information may be helpful, but even that is not guaranteed.
Also, every model update changes how you have to prompt to get the answers you want. Setting up pre-prompts can help, but with each new version you have to figure out, through trial and error, how to get it to respond to your type of queries.
I can't wait to see how badly my finally sort-of-working ChatGPT 5.1 pre-prompts work with 5.2.
It definitely isn't voodoo; it's more like forecasting the weather. Some forecasts are easier to make, some are harder (it'll be cold when it's winter vs. the exact location and wind speed of a tornado, for an extreme example). The difference is that you can try to mix things up in the prompt to maximize the likelihood of getting what you want out, and there are feasibility thresholds for use cases: e.g., getting a good answer 95% of the time is qualitatively different from 55%.
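To make that threshold concrete, here is a minimal sketch of how one might estimate it empirically. `call_llm` and `is_good_answer` are hypothetical stand-ins for whatever client and acceptance check you actually use; the randomness here just simulates nonzero-temperature sampling variance.

```python
import random

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for a real LLM client call.
    Simulated with randomness to mimic sampling variance."""
    return "good" if random.random() < 0.95 else "bad"

def is_good_answer(answer: str) -> bool:
    """Hypothetical acceptance check for your use case."""
    return answer == "good"

def success_rate(prompt: str, trials: int = 100) -> float:
    """Estimate how often a prompt yields an acceptable answer."""
    hits = sum(is_good_answer(call_llm(prompt)) for _ in range(trials))
    return hits / trials

print(success_rate("Summarize this contract in three bullet points."))
```

A prompt that clears 95% on a held-out set of tasks can go into a pipeline; one hovering around 55% is a coin flip with extra steps.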
No, it's not. Nowadays we know how to predict the weather with great confidence. Prompting may get you different results each time. Moreover, LLMs depend on the context of your prompts (because of their memory), so a single prompt may be close to useless and two different people can get vastly different results.
Around May, Altman said to FT that his job was the "most important job maybe in history" (FT: https://www.ft.com/content/a3d65804-1cf3-4d67-ac79-9b78a10b6...). He did come back from the brink of death before as well. But steering OpenAI into an "ecosystem" rather than focusing on the product when you are up against the likes of Google? Seems like cashing in on the hype too early.
> Altman said to FT that his job was the "most important job maybe in history"
The lack of self-awareness in some experienced senior execs (who should know better) continues to stun me. Even if I suspected that sentiment about the importance of the job I was doing might be correct, I'd never say it - especially in an on-the-record media interview. Two reasons: 1. Any statement like that has a high probability of coming across very poorly, 2. It's a proposition that can only be judged in retrospect by people external to the context.
Frankly, I think the odds are at least 50/50 that OpenAI will someday be considered the Pets.com (or even Enron) of early AI.
> OpenAI will someday be considered the Pets.com (or even Enron) of early AI.
Very unlikely.
Pets.com never produced anything of note. OpenAI was the leader in AI for 3 years. Maybe now Gemini 3 and Opus 4.5 are slightly ahead, but that's a bit subjective. For all practical purposes OpenAI is in a 3-way tie with Google and Anthropic.
Google would love to sink both OpenAI and Anthropic, and remain the monopoly in the AI space, like they are in the search space.
But Microsoft, Nvidia, AMD, Oracle, and a few others would be much less happy with a monopolistic Google, and they'll do their best to help OpenAI and Anthropic stay afloat (as long as it does not cost their business too much to do so).
As for Enron, I don't think another Enron is likely to happen, not at this level of visibility. People simply don't like going to prison. Could one person try to fudge some numbers? Maybe; Elizabeth Holmes comes to mind. But Theranos did not have 800 million active users and did not have every analyst in the world scrutinizing it. At the level of OpenAI, you would need many people to be in on a conspiracy to fudge the numbers, and where there are many people, there are many opportunities for someone to blow the whistle.
True, but the horse population started (slightly) rising again when horses went from economic tools to recreational ones for humans. What will happen to humans?