Hacker News | cadabrabra's comments

If we know that it cannot work, then we have evidence of no X, where X is the possibility of homeopathy working.


Huh?


Correct


It’s a common confusion, often subconsciously deployed in the context of trying to cope with the possibility of something undesirable turning out to be true.


It's also important to distinguish "I don't believe X" from "I believe not-X".

English is poorly suited to expressing this distinction.


No. What happened here is you confused semantics for logic.


Glad you agree about Moltbook. We can leave it at that, we don’t have to agree on everything.

That’s because we’re in a recession. It has nothing to do with AI. AI can’t replace a goddamn drive-thru worker. McDonald’s literally tried and failed; that’s the funny part.

Weird. Most of the McDonald's where I live use AI. And they're not the only ones. Doesn't seem like they failed.

McDonald's is always trying "new" things intermittently in different markets. Removal from one market doesn't equal failure or permanence.

Most of the failures were a direct result of people intentionally trying to make the system fail. Ordering a gazillion Big Macs and then replacing a third of them with Egg McMuffins is hardly something people would normally do. "Discoveries" like its inability to handle fractional food orders are total nonsense, since literally nobody orders food that way unless they're trying to get lulz for their social accounts.



Huh? I'm not seeing it...

Ahh .. now I see it! ..Batman!

OOOHHH! Thank you!

I’m not being pedantic. I’m being precise in my use of language.

Maybe not in theory, but definitely in practice, as we’ve seen with GPT-5. These companies are lighting money on fire. If they reduce the cost, expect a proportional decrease in quality. All of the GPT-5 anecdotes confirm this. When the data and anecdotes disagree, the anecdotes are usually right, and the data is usually bullshit.

GPT-5's issues were due to router shenanigans which Claude models do not do.

No dude, the latest versions of the models it routes to are markedly poorer in performance than their predecessors.

I’m observing a law that states: There appears to be a direct relationship between model performance and cost, such that whenever a company claims to have reduced inference costs, customers immediately notice a corresponding decline in model performance.


It’s already obvious that it will be a scam. Higher benchmark scores and lower cost are two signs that customers are about to get scammed. We saw it with GPT-5.

Respectfully,

Claude 3 Opus: $15.00 (Input) / $75.00 (Output) per 1M tokens

Claude 4 Opus: $15.00 (Input) / $75.00 (Output) per 1M tokens

Claude 4.1 Opus: $15.00 (Input) / $75.00 (Output) per 1M tokens

Claude 4.5 Opus: $5.00 (Input) / $25.00 (Output) per 1M tokens


This actually proves my point because if you read the anecdotes, you will notice a marked decline in performance. The version number goes up but the actual performance declines. The benchmarks can tell any story you want them to.

Is it? It might well be a scam, but for something to be "obvious" it would have to be released first.

There are plenty of ways to reduce inference cost for a high-intelligence model. Sparser weights, for example, let you increase the total parameter count while reducing per-token inference cost and latency.
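As a rough back-of-the-envelope sketch of that point: in a sparse mixture-of-experts (MoE) layer, only a few experts run per token, so per-token compute tracks the *active* parameters rather than the total. All the numbers below are hypothetical illustrations, not any vendor's actual architecture:

```python
# Hypothetical comparison: dense model vs. sparse mixture-of-experts (MoE).
# In an MoE layer only a few experts are active for each token, so the
# compute per token scales with active parameters, not total parameters.

def total_params(num_experts, params_per_expert, shared_params):
    """All parameters stored in the model (memory footprint)."""
    return shared_params + num_experts * params_per_expert

def active_params(active_experts, params_per_expert, shared_params):
    """Parameters actually used in one token's forward pass (compute cost)."""
    return shared_params + active_experts * params_per_expert

# Dense 70B model: every parameter participates in every token.
dense = 70e9

# Made-up MoE: 128 experts of 1B params each, 8 active per token, 10B shared.
moe_total = total_params(128, 1e9, 10e9)    # 138B stored
moe_active = active_params(8, 1e9, 10e9)    # 18B used per token

print(f"MoE total:  {moe_total / 1e9:.0f}B")   # MoE total:  138B
print(f"MoE active: {moe_active / 1e9:.0f}B")  # MoE active: 18B
```

With these toy numbers, the sparse model holds roughly twice the dense model's parameters but spends about a quarter of the compute per token, which is the cost-versus-capacity trade the comment is alluding to.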


I get what you’re saying, but I still think that it will be a scam. Bookmark this thread and let’s continue the conversation after it’s released.

I think you are informed by more of an emotional interest than a technical one, here. You've written several such posts and many of them are astronomically unlikely predictions.

Ok but didn’t Karpathy make it clear that we live in the vibe era? I’m inclined to trust vibes more than technical jargon, and boy are the vibes off with what’s been happening!

Let’s see what happens :)

