You can't attack others like this on HN, regardless of how wrong they are or you feel they are. It's not what this site is for, and destroys what it is for.
Btw, it's particularly important not to do this when your argument is the correct one, since if you happen to be right, you end up discrediting the truth by posting like this, and that ends up hurting everybody. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
I agree with your comment, FWIW - I have no idea what OP is trying to demonstrate - but to offer some context: Gödel incompleteness is commonly suggested as a "proof" that computers can't be intelligent in the way humans are, because (very, very roughly) humans can "step out" of formal systems and so escape the Gödel trap. Regardless of your feelings about AI, it's a silly argument; the popularizer of this line of thinking was probably Roger Penrose in "The Emperor's New Mind".
I haven't re-read Searle since college but as far as I recall he never brings it up.
What about Gödel incompleteness? Computers aren't formal systems. Turing machines have no notion of truth; their programs may. So a program can work in a system with M > N axioms, where one of the additional axioms recognizes the truth of G ≡ ¬ Prov_S(⌜ G ⌝), because G was constructed to be true. Alternatively, construct a system that generates "truth" statements, subject to further verification. After all, some humans think that "Apollo never put men on the moon" is a true statement.
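For concreteness, here is the standard construction being gestured at (a sketch only, assuming S is a consistent, effectively axiomatized theory containing enough arithmetic):

  S \vdash G \leftrightarrow \neg\mathrm{Prov}_S(\ulcorner G \urcorner)   % diagonal lemma: G "says" it is unprovable in S
  S \nvdash G                                                             % first incompleteness theorem
  S + \mathrm{Con}(S) \vdash G                                            % a larger theory recognizes the truth of G
  S + \mathrm{Con}(S) \nvdash G'                                          % but, if consistent, it has its own Gödel sentence G'

So a program reasoning in the larger theory can "see" the truth of G, at the price of generating a new unprovable sentence one level up.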
As for intentionality, programs have intentionality.
The CRA proves that an algorithm by itself can never understand anything through symbol manipulation alone. Syntax, by itself, is insufficient to produce semantics.
It proves nothing, and has, in fact, been overtaken by events (see another of my replies [1]).
As for its alleged conclusion, I love David Chalmers’ parody “Cakes are crumbly, recipes are syntactic, syntax is not sufficient for crumbliness, so implementation of a recipe is not sufficient for making a cake.”
But your Chalmers line is also literally true. If you're a Martian on Mars and you don't have cake ingredients available, the recipe won't work. If you're on Earth and you have the ingredients, it works fine. Even if (like me) you have almost no understanding of what the ingredients are, how they are made, or why the recipe works.
If I'm following you correctly, you are saying that the conclusion of Chalmers' parody is actually correct, as having a recipe is indeed not sufficient to successfully bake a cake: you will not succeed without the ingredients, for example.
This is indeed true, but we should bear in mind that Chalmers' parody is just that: a parody, not a rigorous argument. It seems clear that, if Chalmers wanted to make it more rigorous, he would have concluded with something like "therefore, even if you have all the prerequisites for baking a cake (ingredients, tools, familiarity with basic cooking operations...), no recipe is sufficient to instruct you in successfully completing the task." This would be a better argument, but a flabbier, less to-the-point parody, and it is reasonable for Chalmers to leave it to his readers to get his point.
The question of whether, or to what extent, LLMs understand anything is an interesting one, tied up with our difficulty in saying what 'understanding' means, beyond broad statements along the lines of it being an ability to see the implications of our knowledge and use it in non-rote and creative ways.
The most honest answer to these questions I can give is to say "I don't know", though I'm toying with the idea that they understand (in some sense) the pragmatics of language use, but not that language refers to an external world which changes according to rules and causes that are independent of what we can and do say about it. This would be a very strange state to be in, and I cannot imagine what it would be like to be in such a state. We have never met anybody or anything like it.
Simulation of a recipe is not sufficient for crumbliness, and simulation is the only thing a computer can do at the end of the day. It can perform arithmetic operations & nothing else. If you know of a computer that can do more than boolean arithmetic then I'd like to see that computer & its implementation.
Searle tries this approach against the 'simulation argument' in this paper (see page 423) and also elsewhere, saying "a simulation of a rainstorm will not get you wet" (similarly, Kastrup says "a simulation of a kidney won't pee on my desk"), to which one can reply "yet a simulation of an Enigma machine really will encode and decode messages."
The thing is, minds interact with the outside world through information transmitted over the peripheral nervous system, so the latter analogy is the relevant one here.
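To make the Enigma point concrete, here is a minimal sketch (my own toy example, a single rotor plus reflector, nothing like a full Enigma): the simulation really does encrypt, and running it again on its own output really does decrypt, because encryption just is information manipulation and so survives the change of substrate.

  import string

  ALPHABET = string.ascii_uppercase

  # Wirings taken from the commonly published Enigma I tables (rotor I and
  # reflector B); the reflector is a fixed-point-free involution, which is
  # what makes the machine self-reciprocal.
  ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"
  REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

  def toy_enigma(text, offset=0):
      """Encrypt or decrypt with one stepping rotor and a reflector.

      The per-letter map is rotor -> reflector -> rotor-inverse, so it is
      its own inverse: applying the function twice with the same starting
      offset returns the original text.
      """
      out = []
      pos = offset
      for ch in text.upper():
          if ch not in ALPHABET:
              out.append(ch)          # pass spaces/punctuation through
              continue
          pos = (pos + 1) % 26        # the rotor steps before each letter
          c = ROTOR[(ALPHABET.index(ch) + pos) % 26]    # forward through rotor
          c = REFLECTOR[ALPHABET.index(c)]              # bounce off reflector
          c = ALPHABET[(ROTOR.index(c) - pos) % 26]     # back through rotor
          out.append(c)
      return "".join(out)

  msg = "ATTACK AT DAWN"
  ct = toy_enigma(msg)
  assert toy_enigma(ct) == msg   # the simulation really does decode its own output
  print(ct)

The point of the sketch: the simulated machine and the physical machine compute the same permutation of letters, so for information processing the simulation does the real work.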
You're not addressing the actual argument & not bridging the explanatory gap between arithmetic simulation & reality. Saying people can read & interpret numbers by imbuing them w/ actual meaning & semantics is begging the question.
I'm showing that Searle's argument against the simulation reply doesn't hold up against relatively straightforward scrutiny. If you think you know of a better one, present it.
You can believe whatever you want about arithmetic as a foundation of your own metaphysics. I personally think it's silly but I'm not interested in arguing this any further b/c to close the explanatory gap from extensionality to intentionality you'd essentially have to solve several open problems in various branches of philosophy. Write out your argument in full form & then maybe you'll have something worth discussing.
The great thing about your latest reply is that it takes no time at all to see that you have still not offered any justification or explanation of anything you have claimed.
Then you will have no difficulty in pointing out where that happens. While you are about it, you can point out where you think I said people can read & interpret numbers by imbuing them w/ actual meaning & semantics.
It's in your response. You're welcome to elaborate your argument about Enigma ciphers in other terms if you want but you'll reach the same conclusion as I did.
So, nothing to see here so far - I can't really respond to allegations that are imaginary.
You also claim that I am begging the question. How do you justify that? It is not, of course, begging the question for opponents of Searle to suppose his conclusion is wrong: everyone disputing any argument does that.
The Enigma response is very straightforward. While, in general, simulations are not equivalent to what is being simulated, it is often the case that for information manipulation they are, owing to the substrate independence of information. It is Searle who needs a better argument here, and he never came up with one.
I agree there is nothing to see in any substrate-independent computation unless there is a conscious observer involved, which is why you are confused about your own argument.
Ah, now we are making some progress on where the confusion lies (not that saying "you're confused" without justification was ever much of an argument.)
The first thing to note is that it is not necessary to dispute the notion that semantics come from a conscious observer in order to demonstrate that Searle's argument fails, as that argument is precisely about whether that conscious observer could be an entity deriving its consciousness from a digital computation. As things stand, the only thing saving you from formally begging the question is that you have still not presented a specific argument against my objection[1] to Searle's response to the simulation reply; you still seem to be trying to insinuate that it fails without being specific.
Maybe you also mistakenly think that what I am saying in this thread is supposed to be an argument for computationalism? It is just an argument that Searle failed to make his case against computationalism on account of (among other things) an unsuccessful response to the simulation reply. (I suspect that computationalism is essentially correct, but I do not claim to know that it is.)
[1] It's not just my objection; Dennett, the Churchlands, even Putnam and David "hard problem" Chalmers have raised similar objections.
Strictly speaking, it does not even prove that: the claim that the guy in the room will not end up understanding Chinese is a premise, and some people argue that it is an unjustified one. Personally, I think Searle's argument fails without one having to resort to such nit-picking.
Fine, it doesn't prove that. But I'm comfortable assuming it. Searle doesn't need to say the guy doesn't end up understanding Chinese. All he has to say is the guy doesn't need to understand Chinese. And then ... something something ... and then ... suddenly Chinese isn't understood by the algorithm either.
It's that last part that I can't follow and (so far) totally disbelieve.
To be clear, I don't think the guy in the room will end up understanding whichever Chinese language this thought experiment is being conducted in, either.
You have put your finger on the fundamental problem of the argument: Searle never gave a good justification for the tacit and question-begging premise that if the human operator did not understand the language, then nothing would (there is a second tacit premise at work here: that performing the room's task required something to understand the language. LLMs arguably suggest that the task could be performed without anything resembling a human's understanding of language.)
Searle's attempt to justify this premise (the 'human or nothing' one) against the so-called 'systems reply' is to have the operator memorize the book, so that the human is the whole system. Elsewhere [1] I have explained why I don't buy this.
"there is a second tacit premise at work here: that performing the room's task required something to understand the language. LLMs arguably suggest that the task could be performed without anything resembling a human's understanding of language."
Yeah. I used to assume that. But it's much less obvious now. Or just false or something.
It's actually kind of spooky how well Searle did capture/foreshadow something about LLMs decades ago. No part of the system seems to understand much of anything.
My theory is that Searle came up with the CR while complaining to his wife (for the hundredth time) about bright undergrads who didn't actually understand anything. She finally said "Hey, you should write that down!" Really she just meant "holy moly, stop telling it to me!" But he misunderstood her, and the rest is history.