One of the things that I can't help but notice as I read the comments here on HN is the extent to which we _want_ AI to be, if not a deity, then at least a grand new chapter for humanity; we want this even more than we want the new chapter to be _good_.
What I wish Alang had explored more was just this phenomenon; it worries me far more than paperclip maximization. Yet he does a good job characterizing the underlying social context, i.e. the Andreessenian view that all problems are soluble in sufficient technology (a position which, of course, is vacuously true -- the _real_ issue is the cost. For example, a universe-sized machine of infinite complexity could do absolutely anything; the cost, of course, is absolutely everything.)
A machine that makes art and does taxes is one thing. A machine that _is tendentiously presented by its creators as divine_ is a theohazard.
I believe there's a very mundane explanation for that. AI is the new gold rush in the startup world. People are funding AI companies and working for AI companies; they need to believe all the hype.
The only other comment here at this point is "Not worth a read.": is reading itself worth it? Is being human worth it?
It's long, it's philosophical. It's not something I'd normally read specifically so that I could comment intelligently on HN. Rather it's something I'd read, knowing others had read it, to share our thoughts over beers in a setting quiet enough and free from distractions that we could hear ourselves think; or maybe during a long walk in the woods.
If the title is misleading it is only because it itself is so short and the piece is so long.
The article is shallow and offers nothing new. It reads like a very well written freshman essay. If you're impressed by offhand references to Derrida then you might enjoy it. The author also bizarrely says that AI can "only" do data analysis and hence isn't very useful, offering a dismissive characterization like "it's only more efficient at handling spreadsheets," as if that weren't world-shattering on its own.
The point about Derrida was actually pretty good -- I've myself reflected that Derrida, Deleuze and a couple of other French poststructuralists got weirdly close to the same basic ideas that power LLMs (high-dimensional spaces, etc.) The Derridean trope that always hits me when I see AI art is the 'virtual image', which, IIRC, is about as close to 'the latent space' as I could hope to find in source materials so far apart.
Which is impressive, and, in a way, helps support the LLM/transformer paradigm -- if we're seeing results that were carefully deduced sixty years ago with a wildly different methodology, it suggests that both Derrida and ML research are on, if not the right track, then at least a shared track; that's in itself a result.
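To make 'the latent space' less hand-wavy for anyone who hasn't met the term: below is a minimal sketch, with made-up toy vectors rather than a real model, of the basic idea that words become points in a high-dimensional space and "meaning" becomes geometry (distance and direction) in that space.

    # Toy sketch of a "latent space". The vectors are invented for
    # illustration; a real model learns them from data.
    import numpy as np

    embeddings = {
        "king":    np.array([0.9, 0.8, 0.1]),
        "queen":   np.array([0.9, 0.7, 0.2]),
        "cabbage": np.array([0.1, 0.2, 0.9]),
    }

    def cosine(a, b):
        # Similarity as the angle between vectors: 1.0 means "same direction".
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # "king" lands much nearer "queen" than "cabbage" in this space.
    print(cosine(embeddings["king"], embeddings["queen"]))   # ~0.99
    print(cosine(embeddings["king"], embeddings["cabbage"])) # ~0.30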
If you assume that any use of the word 'Derrida' is a form of education-signalling, you're probably taking a cognitive shortcut.
Thanks for your response. I'm reminded of the old adage that the quickest way to learn something online is to say something controversial and wrong, because someone wiser than you will quickly jump in to correct you.
People who are made uncomfortable by the idea that AI might be overhyped (to the extent of making an empty, passive-aggressive comment like that) might be the same people who don't think reading is worthwhile; hence the desire to be saved from having to process a substantive work themselves.
Goodness, it waffles on a bit, but cutting to the last paragraph: he says that in ten or twenty years AI and the world won't be much different. That seems pretty dubious to me, being more of the Wait But Why school of thought myself.
You'd think it'd follow a sigmoid curve (see the toy sketch below), but for more than a century on it goes, with no immediate sign of stopping.
For most of that time improved computation hasn't been very noticeable - a 2010 computer does much the same as a 2000 computer. However, the point around now when they overtake us is kind of a big deal: the switch from biological life to technological processing as the smartest thing in our bit of the universe, for the first and only time since the Big Bang.
Looking back, we had our ancestors reproducing and dying for a trillion generations or so since unicellular life; going forward, our virtual descendants would be immortal. I think the author has it wrong with nothing much changing. (1trn source: https://www.quora.com/How-many-generations-lie-between-me-an...)
I've always thought death rather depressing, especially with family and friends, so it seems a change for the better to me.
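On the sigmoid point above: here's a toy comparison, with arbitrary invented parameters, of exponential growth against a logistic S-curve. The two are nearly indistinguishable early on, which is why "it's been exponential for a century" doesn't by itself rule out a plateau ahead.

    # Exponential vs. logistic ("S") curve; parameters are arbitrary,
    # chosen only to show that the shapes agree early and diverge late.
    import math

    def exponential(t, r=0.1):
        return math.exp(r * t)

    def logistic(t, r=0.1, k=100.0):
        # Starts at 1 like the exponential, but saturates at k.
        return k / (1 + (k - 1) * math.exp(-r * t))

    for t in (0, 10, 20, 50, 100):
        print(t, round(exponential(t), 1), round(logistic(t), 1))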
> The classic story here is that of an AI system whose only—seemingly inoffensive—goal is making paper clips. According to Bostrom, the system would realize quickly that humans are a barrier to this task, because they might switch off the machine.
I am at a loss to understand how this agent of doom (the AI, not Bostrom) can be both "intelligent" and not understand that there are enough paperclips.
Unless I assume the argument rests on the word intelligent being meaningless.
Would a superintelligence reach the conclusion that humans are a cancer on earth that must be destroyed? That's a better example at the core of the alignment issue. Some values that humans hold in high regard, like the continued existence of billions of humans on earth, may not be there in a non-human-biased superintelligence.
AI safety researchers generally define intelligence in terms of the ability to reach goals (certainly not a good general definition, but a useful one in this context). Intelligence is independent of what that goal is, and intelligent entities, including humans, don't generally choose their root goals, only intermediate ones. A super-intelligent paperclip maximiser would likely realise that humans don't actually want this many paperclips, but do it anyway, if that's the goal that it's set. (It's important not to anthropomorphize such an entity too much: humans tend to have a complex set of goals, with some balancing between them, that will generally steer us away from such a single-minded approach. But an intelligent machine needn't have that.)
(LLMs, at least, seem not to suffer much from this. In fact they're pretty hard to direct in general, and mimic a lot of the human elements in their dataset. So at the moment I don't think they're the kind of thing that will result in such an entity. But I also don't think they're likely to result in a super-intelligent machine: super-knowledgeable, maybe, but I don't expect superhuman ability to synthesise new insights from that knowledge.)
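To put the "intelligence as goal-reaching" definition in crude, concrete terms: a sketch of a planner that scores actions purely by paperclips produced. Everything here is invented for illustration; the point is only that anything left out of the objective, like "harm", has exactly zero pull on the choice.

    # Crude sketch: a "maximiser" that ranks actions by one objective.
    # The action list and all numbers are made up for illustration.
    actions = [
        {"name": "run_factory",     "paperclips": 10,  "harm": 0},
        {"name": "strip_mine_city", "paperclips": 500, "harm": 9},
        {"name": "shut_down",       "paperclips": 0,   "harm": 0},
    ]

    def objective(action):
        return action["paperclips"]  # "harm" never enters the score

    best = max(actions, key=objective)
    print(best["name"])  # strip_mine_city: knowing about harm didn't matter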
> "I am at a loss to understand how this agent of doom (the AI not Bostrom) can be both "intelligent" and not understand that there are enough paperclips."
Having offspring might be ill-advised for a variety of reasons (medical, financial, etc.) but that doesn't stop humans from being horny and producing offspring. If powerful drives can be encoded into a sapient organism well below the level of conscious thought, perhaps similar drives can be present in an artificial sapient being.
> If powerful drives can be encoded into a sapient organism
The article says nothing about this. Is anyone demonstrating sentient synthetic life? This idea has nothing to do with the technology at hand.
But my point is definitional regarding the word "intelligence".
I'd accept a counterargument that humanity is already wrecking global ecosystems by making too many "paperclips," and, given that the only measure we have for AI is the imitation game, then doom, QED.
But this amounts to a diagnosis of "physician, heal thyself." "Doctor, it hurts when I do this!"
To distinguish a false god from a real one, one first has to know the features of a real god. The article never really explains the latter, so I don't think it can conclude that AI is a false god.
While reading this I did have a thought: yeah, we carry almost all the information in the world in our pockets, and we are far more alienated from each other than we were before that happened.
AI will have a big hurdle to overcome. If it's not reliable, meaning we have to verify or check its work, as is the case currently and may be for a long time, then how much better is it than Google and the world of information already in our pockets? Can it move past being simply an accelerant of our already highly productive work lives?
I'm not a skeptic. I see measurable productivity gains amongst devs using Copilot and other AI completion tools. And likewise the realized reduction in support staff from AI chatbots (yes, it comes with a quality drop, but that's frankly fine for a lot of businesses) is there. AI art will probably affect the already heavily squeezed computer graphics industry the most. Most other forms of decorative art and muzak were already cheap enough that AI isn't going to turn things on their head there, no matter how good it gets.
But in industries where mistakes have to be avoided, where verifiability is paramount, and where there is already an assumption that your knowledge workers can look up and learn anything they need to get the job done, how much will AI revolutionize things versus just slightly bumping up productivity? That's a tough question to answer, frankly.
This article could have been written by an AI. It leads with a begged question of God, and goes on to summarize conventional definitions of "intelligence", for which the preferred definition happens also to be begged with the term "artificial intelligence": you can't have artificial intelligence if you don't already have intelligence, right?
> ...“Nimbus” the cloud found a corner of the sky and made peace with a sunny day. You might not call the story good, but it would certainly entertain my five-year-old nephew
There's no comparison between a class of phenomena that might entertain a five year old nephew and the phenomena of an actual five year old nephew. So what's the point of this anecdote?
> “The only truly linguistic things we’ve ever encountered are [struck: things that have minds] ourselves. And so when we encounter something that looks like it’s doing language the way we do language, all of our [struck: priors get pulled in] genetic endowment is aroused, and we think, ‘Oh, this is clearly [struck: a minded thing] like me.’”
—
"It is forbidden to make a machine in the likeness of the human mind."
—
If you are willing to consider AI from first principles, as a transformer that takes a linguistic plane as its input and transduces it to a linguistic output plane (in other words, a linguistic filter with some novel ingress/egress properties for the system operator) you can breathe a sigh of relief that it's merely one more form of computation. And because we have no theory of mind, and therefore substitute computational models for the theory we lack, the only danger we face is incompetent people applying filters where human judgment is required, which is a problem for cybernetics and systems theory.
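For what the "linguistic filter" framing looks like in practice, here's a minimal sketch (the `model` function is a stand-in, not a real API): the system is just a text-to-text function, composable with other text-to-text functions like any other piece of computation.

    # The "linguistic filter" view: text in, text out, composable.
    from typing import Callable

    Filter = Callable[[str], str]

    def model(text: str) -> str:
        # Stand-in for the transformer: some text-to-text transform.
        return text.replace("?", ".")

    def compose(*filters: Filter) -> Filter:
        def run(text: str) -> str:
            for f in filters:
                text = f(text)
            return text
        return run

    pipeline = compose(str.strip, model)
    print(pipeline("  Is this thinking?  "))  # -> "Is this thinking."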
The author makes a pathetically weak effort to look back into history, but only as far as the 1990s.
While 18th century artisans may have wowed high society with automatons that scribbled correspondence or emulated a duck's digestive tract, they never put one in the role of a doctor applying leeches, bloodletting, or balancing the humors.
The Turing Test is invoked for the 10,000th time while failing to note Turing's most important observation, and it's in the first paragraph of his paper:
//I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.//
IOW, having no conception of what it means to think, he decided to sidestep the question entirely.
Has any progress been made on a definition? If so, it's remarkable that it's never discussed, whereas everyone with a perspective on AI rushes to the "imitation game" as the primary reference.
The desire has a narratival, almost religious quality -- I'm reminded of _Palo Alto_ https://www.nytimes.com/2023/02/14/books/malcolm-harris-palo... (Malcolm Harris, 2023)