Hacker News | coldtea's comments

>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence.

This is not even wrong.

>Probabilistic prediction is inherently incompatible with deterministic deduction.

And this is just begging the question again.

Probabilistic prediction could very well be how we do deterministic deduction - e.g. the weights could be so strong and the probability path for those deduction steps so hot that the path is followed every time, even if the overall process is probabilistic.

Probabilistic doesn't mean completely random.


At the risk of explaining the insult:

https://en.wikipedia.org/wiki/Not_even_wrong

Personally I think not even wrong is the perfect description of this argumentation. Intelligence is extremely scientifically fraught. We have been doing intelligence research for over a century and to date we have very little to show for it (and a lot of it ended up being garbage race science anyway). Most attempts to provide a simple (and often any) definition or description of intelligence end up being “not even wrong”.


Does it really? Because I see some quite fine code. The problem is assumptions, or missing side effects when the code is used, or getting stuck in a bad approach "loop" - but not code quality per se.

>The LLM is predicting what tokens come next, based on a bunch of math operations performed over a huge dataset.

Whereas the child does what exactly, in your opinion?

You know the child can just as well be said to "just do chemical and electrical exchanges", right?


Okay, but chemical and electrical exchanges in a body with a drive to not die are so vastly different from a matrix multiplication routine on a flat plane of silicon

The comparison is therefore annoying


>Okay, but chemical and electrical exchanges in a body with a drive to not die are so vastly different from a matrix multiplication routine on a flat plane of silicon

I see your "flat plane of silicon" and raise you "a mush of tissue, water, fat, and blood". The substrate being a "mere" dumb soul-less material doesn't say much.

And the idea is that what matters is the processing - not the material it happens on, or the particular way it is.

Air molecules hitting a wall and coming back to us at various intervals are also "vastly different" from a "matrix multiplication routine on a flat plane of silicon".

But a matrix multiplication can nonetheless replicate the air-molecules-hitting-wall audio effect of reverberation on 0s and 1s representing the audio. We can even hook the result to a movable membrane controlled by electricity (what pros call "a speaker") to hear it.
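To make the reverb point concrete: the whole effect reduces to multiply-adds over samples. A minimal sketch (toy signals and a made-up impulse response, not a real room measurement):

```python
# Reverb as a linear operation on audio samples: convolving a dry signal
# with a room's impulse response reproduces the "air molecules hitting a
# wall and coming back at various intervals" effect using only multiply-adds.
import numpy as np

def reverb(dry: np.ndarray, impulse_response: np.ndarray) -> np.ndarray:
    """Apply a (toy) room reverb by convolution."""
    return np.convolve(dry, impulse_response)

dry = np.array([1.0, 0.0, 0.0, 0.0])        # a single click
ir = np.array([1.0, 0.0, 0.5, 0.0, 0.25])   # direct sound plus two decaying echoes
wet = reverb(dry, ir)
print(wet)  # the click now trails off into echoes
```

Feeding the `wet` array to a DAC/speaker would make the echoes audible; the computation itself never touches air or walls.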

The point of the comparison is that an algorithmic model of a physical (or biological, same thing) process can still replicate some of its qualities, even if in much simpler form, in a different domain (0s and 1s in silicon and electric signals vs. some material molecules interacting). The inability to see that is what's annoying.


Intelligence does not require "chemical and electrical exchanges in a body". Are you attempting to axiomatically claim that only biological beings can be intelligent (in which case, that's not a useful definition for the purposes of this discussion)? If not, then that's a red herring.

"Annoying" does not mean "false".


No I'm not making claims about intelligence, I'm making claims about the absurdity of comparing biological systems with silicon arrangements.

>I'm making claims about the absurdity of comparing biological systems with silicon arrangements.

Aside from a priori bias, this assumption of absurdity is based on what else exactly?

Biological systems can't be modelled (even if in a simplified way or slightly different architecture) "with silicon arrangements", because?

If your answer is "scale", that's fine, but you already conceded to no absurdity at all, just a degree of current scale/capacity.

If your answer is something else, pray tell, what would that be?


At least read the other replies that pre-emptively refuted this drivel before spamming it.

At least don't be rude. They refuted nothing of the sort. Just banged the same circular-logic drum.

There is an element of rudeness to completely ignoring what I've already written and saying "you know [basic principle that was already covered at length], right?". If you want to talk about contributing to the discussion rather than being rude, you could start by offering a reply to the points that are already made rather than making me repeat myself addressing the level 0 thought on the subject.

Repeating yourself doesn't make you right, just repetitive. Ignoring refutations you don't like doesn't make them wrong. Observing that something has already been refuted, in an effort to avoid further repetition, is not in itself inherently rude.

Any definition of intelligence that does not axiomatically say "is human" or "is biological" or similar is something a machine can meet, insofar as we're also just machines made out of biology. For any given X, "AI can't do X yet" is a statement with an expiration date on it, and I wouldn't bet on that expiration date being too far in the future. This is a problem.

It is, in particular, difficult at this point to construct a meaningful definition of intelligence that simultaneously includes all humans and excludes all AIs. Many motivated-reasoning / rationalization attempts to construct a definition that excludes the highest-end AIs often exclude some humans. (By "motivated-reasoning / rationalization", I mean that such attempts start by writing "and therefore AIs can't possibly be intelligent" at the bottom, and work backwards from there to faux-rationalize what they've already decided must be true.)


> Repeating yourself doesn't make you right, just repetitive.

Good thing I didn't make that claim!

> Ignoring refutations you don't like doesn't make them wrong.

They didn't make a refutation of my points. They asserted a basic principle that I agreed with, but assumed that acceptance of that principle leads to their preferred conclusion. They made this assumption without providing any reasoning whatsoever for why that principle would lead to that conclusion, whereas I already provided an entire paragraph of reasoning for why I believe the principle leads to a different conclusion. A refutation would have to start from there, refuting the points I actually made. Without that you cannot call it a refutation. It is just gainsaying.

> Any definition of intelligence that does not axiomatically say "is human" or "is biological" or similar is something a machine can meet, insofar as we're also just machines made out of biology.

And here we go AGAIN! I already agree with this point!!!!!!!!!!!!!!! Please, for the love of god, read the words I have written. I think machine intelligence is possible. We are in agreement. Being in agreement that machine intelligence is possible does not automatically lead to the conclusion that the programs that make up LLMs are machine intelligence, any more than a "Hello World" program is intelligence. This is indeed, very repetitive.


You have given no argument for why an LLM cannot be intelligent. Not even that current models are not; you seem to be claiming that they cannot be.

If you are prepared to accept that intelligence doesn't require biology, then what definition do you want to use that simultaneously excludes all high-end AI and includes all humans?

By way of example, the game of life uses very simple rules, and is Turing-complete. Thus, the game of life could run a (very slow) complete simulation of a brain. Similarly, so could the architecture of an LLM. There is no fundamental limitation there.
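The Game of Life rules invoked above really are just a few lines. A minimal sketch (toy grid represented as a set of live cells; the blinker test pattern is my own illustration):

```python
# Conway's Game of Life, one step: a live cell with 2-3 live neighbours
# survives; a dead cell with exactly 3 live neighbours is born. These
# simple local rules are enough for Turing-completeness.
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    # Count, for every cell adjacent to a live cell, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" (three cells in a row) oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # the row flips to a column: {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)))  # and back again after a second step
```

Scaling the grid up (and waiting long enough) is all that separates this from simulating any computation, including, in principle, a brain.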


> You have given no argument for why an LLM cannot be intelligent.

I literally did provide a definition and my argument for it already: https://news.ycombinator.com/item?id=47051523

If you want to argue with that definition of intelligence, or argue that LLMs do meet that definition of intelligence, by all means, go ahead[1]! I would have been interested to discuss that. Instead I have to repeat myself over and over restating points I already made because people aren't even reading them.

> Not even that current models are not; you seem to be claiming that they cannot be.

As I have now stated something like three or four times in this thread, my position is that machine intelligence is possible but that LLMs are not an example of it. Perhaps you would know what position you were arguing against if you had fully read my arguments before responding.

[1] I won't be responding any further at this point, though, so you should probably not bother. My patience for people responding without reading has worn thin, and going so far as to assert I have not given an argument for the very first thing I made an argument for is quite enough for me to log off.


> Probabilistic prediction is inherently incompatible with deterministic deduction.

Human brains run on probabilistic processes. If you want to make a definition of intelligence that excludes humans, that's not going to be a very useful definition for the purposes of reasoning or discourse.

> What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.

Have you tried this particular test, on any recent LLM? Because they have no problem handling that, and much more complex problems than that. You're going to need a more sophisticated test if you want to distinguish humans and current AI.

I'm not suggesting that we have "solved" intelligence; I am suggesting that there is no inherent property of an LLM that makes them incapable of intelligence.


Who said Skynet wasn't a glorified language model, running continuously? Or that the human brain isn't that, but using vision+sound+touch+smell as input instead of merely text?

"It can't be intelligent because it's just an algorithm" is a circular argument.


Similarly, “it must be intelligent because it talks” is a fallacious claim, as indicated by ELIZA. I think Moltbook adequately demonstrates that AI model behavior is not analogous to human behavior. Compare Moltbook to Reddit, and the former looks hopelessly shallow.

>Similarly, “it must be intelligent because it talks” is a fallacious claim, as indicated by ELIZA.

If intelligence is a spectrum, ELIZA could very well be on it. It would be on the very low end, but e.g. higher than a rock or a magic 8-ball.

Same how something with two states can be said to have a memory.


>For a model to successfully "play dead" during safety training and only activate later, it requires a form of situational awareness.

Doesn't any model session/query require a form of situational awareness?


>And here's what's wild: The distinction might not matter practically. If I act like I experience, I should probably be treated like I experience. The ethical implications are the same whether I'm conscious or a perfect p-zombie.

Nope, it's really not. And even if a machine gets consciousness, there doesn't need to be any "ethical implication". Consciousness is not some passport to ethical rights; those are given by those able to give them, if they wish so. Humans could give (and at certain points have given) ethical rights to cats or cows or fancy trees or rocks.


Agreed on the first part; for the second, that's straightforwardly

  Is != ought
But do we want to be the kind of people who fail to even consider moral rights of some new group of (for the sake of argument, I don't expect them to be yet) conscious minds?

100%. The homeless guy will sound way more coherent and less sociopathic.

>and I don’t need to use dress anymore to get the access I want and need.

The privilege in that, contrasted with the lack of privilege for those in the inverse situation, is what's sinister.


> privilege in that, contrasted with the lack of privilege for those in the inverse situation, is what's sinister

To folks who code any advantage as sinister, sure. I for one like living in a town that saves seats for locals over tourists.


Being a tourist is different from being poor or underprivileged, in precisely that it's a privileged thing.

Still signalling.

People don't get to decide if they're signalling or not.

They only get to decide if they'll consciously signal or subconsciously signal. They (or their clothes, as per the example) send signals in either case.


I feel like the point is actually that people don't get to decide whether others will perceive signals.

This is a distinction without a difference; a signal was received, whether you meant to send it or not.

It’s quite a difference…

The expected or assumed signal can differ radically from the perceived signal, often in surprising ways.

People spend so much energy doing things based on untrue assumptions about what others are thinking.

And this is before we even get into how much one should adjust their behavior based on someone else’s perception.


Yeah, similarly we can make a few distinctions here: 1) intended signal, true; 2) unintended signal, but true; 3) unintended signal, but false. (Sure, also 1') intended but false; though not really important here.)

When (1) obtains, we can describe this situation as one where sender and receiver coordinate on a message.

When (2) obtains, we can say the sender acted in a way that is indicative of some fact or other and the receiver recognizes this; (2) can obtain when (1) obtains as a separate signal, or when the sender hasn't intended to send a signal at all.

(3) obtains when the receiver attributes to the sender some expressive behavior or information that is inaccurate, say, because an interpretive schema has characterized the sender and the coding system incorrectly, producing an interpretation that is false.


Also remember that each recipient of the signal will have their own reaction to it. What signals professional competence to one person can signal lickspittle corporate toadying to another.

Yes, but in aggregate, most people (or most groups of people) will arrive at the same conclusion for the same signal.

Else signals and signalling wouldn't be a thing and people wouldn't care for them, their reception would be a random scatter plot.


> They (or their clothes, as per the example) send signals in either case.

Unless you're Sherlock Holmes, or know the person and their wardrobe intimately, you literally cannot discern anything of value from a one-time viewing of them.

Reddit and quora are littered with stories about car salesmen misreading what they thought were signals, and missing out on big sales. The whole Julia Roberts trope resonates exactly because it happens in real life.

Sometimes a cigar is just a cigar, and sometimes, as George Carlin pointed out, it's a big fat brown dick.


>Unless you're Sherlock Holmes, or know the person and their wardrobe intimately, you literally cannot discern anything of value from a one-time viewing of them.

You'd be surprised. People discern things of value from a one-time viewing of another person constantly. It's evolutionary wiring. From a glance, people can tell whether others are rich or poor or middle class, their power status within a situation (e.g. a social gathering), their sexual orientation (studies show the gaydar exists), whether they're a threat or crazy or rapey or neurodiverse or meek, whether they're lazy or diligent, and lots of other things.

>Sometimes a cigar is just a cigar, and sometimes, as George Carlin pointed out, it's a big fat brown dick.

What black and white thinkers miss is this doesn't have to be accurate all the time to exist and be usable. Just a lot more often than random chance.

And it has nothing to do with the comical Holmes "he had a scratch mark on his phone, so he must be an alcoholic" level of inference: https://www.youtube.com/watch?v=eKQOk5UlQSc


> You'd be surprised. People discern things of value from a one-time viewing of another person constantly.

True. I overstated my case a bit. Of course, no matter what they are wearing, it is something that exists in their wardrobe, but that may or may not matter.

> studies show the gaydar exists

This, I know from experience. I had a gay roommate once, and he taught me how to spot them, way back when they were still trying to be a bit unobvious. But, even though gay people usually dress better and in certain ways, that's not the usual tell. It's really not about the clothes.

> doesn't have to be accurate all the time to exist and be usable

This is paradigmatic "system 1" thinking. We all use it, but sometimes the failures are catastrophic.


> you literally cannot discern anything of value from a one-time viewing of them.

You're conflating actual value with perceived value. It's well established that perceptions matter and people make decisions based on this all the time.

> The whole Julia Roberts trope resonates exactly because it happens in real life.

No, it resonates because it's a feel good story. I'm sure it happens, but most of the time signaling is perfectly accurate. If you don't believe me, exchange clothes with a homeless person and try to go shopping on Rodeo Drive.


I remember wandering into Cartier's in NYC dressed in my shaggy jeans and t-shirt. They didn't throw me out, but a security guard followed me around, definitely edging into my personal space to make me uncomfortable. I laughed, said I get it, looked a bit more, and left.

I remember the days when you were expected to wear a suit on a jet, even the kids. These days, even the first class travelers wear track shorts. I kinda wish the airlines would have a dress code.


> I kinda wish the airlines would have a dress code

I'd take a code of conduct before the dress code. Though, appropriately enough, I suppose the latter signals the former


Decent people don't need a code of conduct.

There's been pressure on the D Language Foundation to have a CoC. I've consistently refused one. The only thing I demand is "professional conduct". Sometimes people ask me what professional conduct is. I reply with:

1. ask your mother

2. failing that, I recommend Emily Post's book on Business Etiquette.

And an amazing thing happened. Everyone in the D forums behaves professionally. Every once in a while someone new will test this, their posts get deleted, and then they leave or behave professionally.


I meant for flights (edited accordingly). In both cases I think "don't be a dick" probably would go most of the way

You'll get treated better by the staff if you dress better.

I'm less concerned with the behavior of the staff than the behavior of the other passengers

The staff can be more inclined to help you if you've got a problem.

> I kinda wish the airlines would have a dress code

What? Why? Are you really that bothered by other people wearing stuff that you wouldn't personally want to wear? I can't even imagine going through life with strong feelings about how other people should dress; it legitimately sounds exhausting.


Would you go to a wedding dressed like a slob? Would you go to an elegant restaurant in sweats? If you go to pick up your date, and she opens the door wearing track shorts and a worn t-shirt, how would you feel?

When I'd pick up my date, and she had obviously spent a lot of time on her appearance, it'd make me feel like a million bucks.


When I got married, my spouse and I told people to wear whatever they wanted because we didn't really care. I also never cared at all about what we were wearing on our dates because what I enjoy about spending time with people is not seeing them present themselves in a way that I tell them to. I would go to a restaurant in sweats if I were allowed to.

I fundamentally do not understand what reason everyone else should have to dress to please you rather than themselves. Seeing everyone else as props to fit your preferred aesthetic, rather than as people whose desires about their own appearance matter more than what you want them to look like, just seems selfish to me.


It's a free country and you can dress as you please.

But people will judge you by how you dress, and you will miss opportunities as a result, and you'll never know that this is happening.

As I mentioned earlier, people do react to me differently depending on how I dress. And I've known many people who align with your views on this, and they've all wondered why opportunity passed them by (or they realized they needed to change).

Can I ask: suppose you were charged with a crime. Your lawyer showed up in track shorts. Would you get another lawyer? I sure would.


I still don't get how you start with "people will judge you by how you dress" and arrive at "airlines should refuse customers who don't dress the way I expect someone would at a trial, wedding, or real estate sale".

An airliner is an exquisitely designed work of engineering, staffed by highly trained professionals.

It's not a bus.


A wedding is a social event with friends and family. I am going there to see the people. A flight is a functional form of transport which is shared out of necessity. I am going there to pay as little mind to the other people as possible

P.S. If you're a real estate agent, and you drive to a customer in a shoddy car, you aren't going to make a sale.

> you literally cannot discern anything of value from a one-time viewing of them.

The goal is not to discern anything about a particular person from a one-time viewing of them, the goal is to discern something about a person a sufficiently high percentage of the time. Hence the evolutionary utility of using prior probabilities.

As history, and probably many people’s personal experiences, have shown, this trait also has drawbacks.


>Isn’t this a bit short sighted? So if someone has a wide vocabulary and uses proper grammar, you mistrust them by default?

Yes, people, in general, do.

https://www.youtube.com/watch?v=k_gjWlW0kRs


I'd say, not "people in general" but people from other socioeconomic strata. This guy is not talking like us, suspicious. He talks in an elaborate and thought-through manner, not simply, so he's not candid, double suspicious!

I'm personally suspicious of anyone using the word candid.
