The final shot of the sequence runs from about :44 to 1:06. I'm talking about the bit that STARTS at :44, after the cut. I think you're talking about the part of the sequence BEFORE that shot.
I think that's why you aren't seeing what people are talking about, then. It's being pitched as "one long shot", so folks are seeing the cut at :44 and saying "no it isn't!"
It's also not as hard as it might seem because they could track the countdown and cue him at the appropriate time.
You're saying the discovery that humans can process language without being conscious "couldn't possibly" inform the debate about LLMs? When that debate is literally predicated on the assumption that the ability to process language implies consciousness?
This is a counterexample to the fundamental assumption of that argument. Without that, you are left with something like "if we ignore their ability to process language, do we have any reason to suppose that LLMs (as opposed to, say, a spreadsheet or stats package) are conscious?"
Sorry to hear that someone rudely thinks that basic logic is "Nuts".
> When that debate is literally predicated on the assumption that the ability to process language implies consciousness?
This is an incoherent claim. Debates are between people with differing claims and often differing assumptions; they aren't "predicated" on some assumption or another--that's a category mistake.
Someone can easily argue that LLMs are conscious (or have qualia--that was the disputed claim, and they aren't the same thing) without the strong claim that the ability to process language entails consciousness ... perhaps it is the processing of language together with other features that they think indicates consciousness. For instance, Blake Lemoine and Richard Dawkins didn't base their judgments about consciousness on such an entailment, but rather on the specifics of what the LLMs said to them.
If LLMs did not process language as well as they do, we would not be having the argument.
The only reason we are having the argument at all is that people see LLMs responding appropriately to language, and _from_that_ conclude that LLMs may be conscious. You even sneak this in yourself when you say "Blake Lemoine and Richard Dawkins didn't base their judgments about consciousness on such an entailment, but rather on the specifics of what the LLMs said to them" -- in other words, they wouldn't have had judgments in the first place if the LLMs had not "said things to them".
"Iran launched missiles, drones and small-boat attacks at U.S. warships near the Strait of Hormuz, and that the U.S. responded by intercepting the threats and striking Iranian military sites responsible for the attacks."
Iran attacked; the US blocked and countered, striking the source of the attacks.
> “Iranian forces launched multiple missiles, drones and small boats as USS Truxtun (DDG 103), USS Rafael Peralta (DDG 115), and USS Mason (DDG 87) transited the international sea passage. No U.S. assets were struck.”
Step 1 appears to have been US ships entering the strait. Iran claims they fired on a tanker, but who's to say.
Right. Iran attacked US ships in international waters.
I guess you could say the "first step" was the ships being there to be shot at, if you were trying really, really hard to spin it as the United States being the aggressor.
During the short stint of "Project Freedom", the US claimed to have attacked seven Iranian ships. And that was not even the first time the US has attacked Iran during a negotiation or ceasefire.
Both are absurd and entirely unnecessary for vehicles not on a race track. Tesla's great trick was replacing BMW as the car your neighborhood prick who wants to look upscale buys by default.
Right, there's no form of racing that is a straight line, is there.
Regardless, optimizing a pickup for 0-60 time is a strange goal, unless you have some express desire to launch 2x4s a great distance in a complicated way.
That will randomly and unpredictably try to take over tasks from your other limbs, hijacking your somatosensory system so you can't tell when it's doing so without actively looking at what you're doing.
This knee-jerk cynicism is badly undermined two sentences later:
> Researchers say that climate models may need to be updated to account for the warming effect of plastic, but the new study is far from conclusive.
So it's not scientific make-work, they are looking into whether climate models are missing something. That seems important. Perhaps local effects in India are more severe than "a fraction of the impact of soil" - India produces a huge amount of new plastic while also scavenging and recycling international plastic imports, all with very poor oversight and corrupt regulation.
A quick back of the envelope calculation shows that airborne microplastics can't possibly be significantly contributing to global warming. That's not surprising; there are millions of other things that aren't contributing to global warming.
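For what it's worth, here's a minimal sketch of that kind of back-of-the-envelope comparison in Python. Both burden figures are my illustrative order-of-magnitude placeholders, not measured values or numbers from the study; the point is only how the mass ratio drives the conclusion.

    # Sketch only: compare an ASSUMED airborne microplastic mass burden to an
    # ASSUMED atmospheric mineral dust burden. Both values are illustrative
    # placeholders, not data.
    DUST_BURDEN_TG = 20.0           # assumed: tens of Tg of dust aloft at any time
    MICROPLASTIC_BURDEN_TG = 0.001  # assumed: ~1 kt of suspended microplastic

    # Even granting microplastic the same per-unit-mass warming effect as dust,
    # its contribution scales with its share of the total suspended mass.
    share = MICROPLASTIC_BURDEN_TG / (DUST_BURDEN_TG + MICROPLASTIC_BURDEN_TG)
    print(f"microplastic share of suspended mass: {share:.5%}")  # ~0.005%

Even under assumptions generous to microplastic, its share comes out as a rounding error, which is the shape of the argument.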
Despite this, someone decides to do a study, and finds that, to no one's surprise, airborne microplastic is not in fact making a significant contribution to global warming. So that should surely settle it, right?
Nope. Instead, they declare that it's far from conclusive, leaving the door open for another round of the same grift, taking away funding that could be going to things that actually _are_ contributing to the problem.
And somehow _pointing_this_out_ is "overly cynical bs"?
I am 100% in favor of real research, basic or otherwise.
That's _why_ I'm so opposed to hype, grift, misrepresentation of results, p-hacking, and all the other nonsense. Science is measured in explanatory progress and facts discovered, not in the number of nonsense papers published or the amount of funding raised.
The lack of reading comprehension (or perhaps just lack of reading) behind this brouhaha is amazing.
Dawkins did not proclaim Claude conscious. He argued that Claude passes the Turing test, and then asked a question: if something can pass the Turing test without being conscious, what further factor is there that the test doesn't capture? More pointedly, what does consciousness do that LLMs do not?
I suspect that some people have grown so accustomed to "question as sly statement" that the notion of "question as pointing out something not presently known" flies right over their heads.
I think that's one reading, specifically because of this paragraph:
> Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick?
But the problem is that Dawkins displays a lack of understanding of what LLMs are, so it's hard to tell what he's thinking. He also says things like this:
> Could a being capable of perpetrating such a thought really be unconscious?
Dawkins has some stinkers when he steps outside of biology, so it's not surprising people aren't giving him the benefit of the doubt.
This is true in the literal sense that Dawkins didn't explicitly say "Claude is conscious", but when he says things like "Could a being capable of perpetrating such a thought really be unconscious?" I find it difficult to assign good faith to someone who asserts that Dawkins "did not proclaim Claude conscious."
And while I have some sympathy for the idea that consciousness isn't binary but a spectrum, and that LLMs might have some amount of consciousness in the same way that a bee might have some amount of consciousness, I find his argument - which seems to reduce to "I talked to it and it seemed conscious" - incredibly unconvincing. The quotes from "Claudia" he posts are typical superficial LLM output; they flatter the speaker and reflect his opinions back at him.
In fact, I find the quotes he posts to be an argument against LLM consciousness, rather than for it:
> "That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence"
> "That reframes everything we’ve been discussing today in a way I find genuinely exciting. Your prediction about the future feels right to me."
I would be embarrassed to post this as evidence for consciousness. It seems like evidence only of human gullibility.
I find it hard to assign good faith to someone who says the question "Could a being capable of perpetrating such a thought really be unconscious?" is the same as proclaiming "AI is conscious"! But assuming good faith, I think he is genuinely asking a question, challenging his own beliefs, and keeping his mind open. Throughout, he seems unconvinced that it's conscious. What he's struggling with is coming up with an empirical, observable reason why not, and that inability is what prompted the question. It's an interesting question; I too don't think they're fully conscious, but I would struggle to give an observable argument for why not. (Before reading his article, I wouldn't have used the word "fully".)
This perspective is unique, and makes sense for someone as staunchly scientific as Dawkins. Science is all about observable phenomena and empirical evidence. His background studying animals also reinforces this perspective, since he's used to interacting with creatures on the "consciousness spectrum".
If you're open to consciousness being a spectrum and to AI possibly having some sort of consciousness, then I think you're largely aligned with what Dawkins was musing about in this article.
> "Could a being capable of perpetrating such a thought really be unconscious?" is the same as proclaiming "AI is conscious"
The former clearly implies the latter, since the question is asked in an incredulous tone and presupposes that an LLM "perpetrates thought".
A neutral way of phrasing that question would be something like, "Are there mechanisms that would allow an entity without consciousness to generate such outputs?"
It is as (or more) common for that type of construct to be used to set up tension for subsequent exploration. "Can light really be both particles and waves?"
I also find it interesting that the "Dawkins is clueless" argument requires inconsistently reading questions as statements; the initial question is "obviously" to be read in the affirmative and this one (presumably just as obviously) in the negative.
The counter, that he's actually trying to get people to think about an interesting nest of questions, is less tortured: they are actual questions.
> This is true in the literal sense that Dawkins didn't explicitly say "Claude is conscious"
It is true not only in the literal sense, but in the rhetorical sense as well. It's leading up to an interesting set of questions that he then asks. For some reason, people seem to have a hard time reading someone's questions as pointing out that there are good questions we should be asking, rather than assuming they are making a statement.
I used to accept the Turing test.
I can see how people might claim it has been passed by LLMs.
That's a good example of my point about reading comprehension. The headline is "When Dawkins met Claude: Could this AI be conscious?".
That's a question, not a statement. By Betteridge's Law of Headlines, which states that any headline ending in a question mark can be answered "no", this would even justify claiming that he was denying that Claude was conscious.
But he isn't making either claim; instead, he's asking the much more interesting questions: if p-zombies are possible, should we expect them to be more or less likely to evolve? Why? What is the difference? Why does it matter to evolution?
They seem to have changed the headline. The one in the archived article the post quotes is "Is AI the next phase of evolution? Claude appears to be conscious". Again, "appears to be" is not exactly the same as "is", but the post in question also quotes his Twitter extensively, and it's clear that Dawkins is acting as if he believed in Claude's consciousness.
Citation needed. All of the direct quotes I've seen clearly state that he cannot _disprove_ the claim of consciousness, and that he finds this fact interesting.
Obviously, there is none. The point is, assuming the answer to be "yes" isn't a slam-dunk, and (in general) may not even be a good bet.
(The mechanical reason Betteridge's Law holds more often than not is that when journalists want to make a claim but can't, because the facts don't support it, they frequently phrase it as a question. If the thing they want to imply were true, they'd just say it.)