Hacker News | Nuzzerino's comments

> Building a software system is a lot like building a skyscraper: The product everyone sees is the top, but the part that keeps it from falling over is the foundation buried in the dirt and the scaffolding hidden from sight.

They should have just called it an ivory tower, as that's what they're building whenever they're not busy destroying democracy with OS backdoor lobbying or Cambridge Analytica shenanigans.

Edit: If every thread about any of Elon Musk's companies can contain at least 10 comments talking about Elon's purported crimes against humanity, then threads about Zuckerberg's companies can contain at least 1 such comment. Without reminders like this, stories like last week's might as well remain inconsequential.


Given that company's reputation, one can imagine a lot of backstories that are even more depressing than a memory shortage.

Loads for me, had to refresh the page once though.

I personally think that when he mentioned her name during that interview, it was intended to be used as an archetypal proxy in place of someone else (another public figure) that he had personal dealings with. Yudkowsky checks those same boxes (mission focused on specific existential risk, gets a cult following) for example.

That being said, I don’t care much for Christian prophecies. Better to talk about the why than the who.


There’s nothing Christian about what Thiel is talking about here, even if he does wrap it up in the bible.

Whether you believe in Christianity or not, his views are deeply, deeply heretical. He’s so far out of pocket he’s in a completely different pair of trousers.


I don't even think Christianity itself is authentic anymore, so we can just leave it at that. It essentially amounts to outsourcing your spirituality to a man who died thousands of years ago (as great as he was at the time), through whatever institutional filters decide which words of his to cherry-pick.

> We need llms to be able to tap that not add the same functionality a layer above and MUCH less efficiently.

Agents, tool-integrated reasoning, even chain of thought (limited, for some math) can address this.
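To make the tool-call approach concrete, here is a minimal hypothetical sketch of tool-integrated arithmetic: instead of computing the answer in its weights, the model emits a structured calculator call that the host executes exactly. The JSON schema, tool name, and `run_tool_call` helper are all illustrative assumptions, not any particular vendor's API.

```python
import json
import operator

# Hypothetical calculator tool: the host maps the model's structured
# call onto exact native arithmetic instead of next-token prediction.
OPS = {"add": operator.add, "sub": operator.sub,
       "mul": operator.mul, "div": operator.truediv}

def run_tool_call(tool_call_json: str) -> float:
    """Execute a calculator tool call emitted by the model (assumed schema)."""
    call = json.loads(tool_call_json)
    assert call["tool"] == "calculator"
    return OPS[call["op"]](call["a"], call["b"])

# A model asked "what is 123456789 * 987654321?" might emit:
model_output = '{"tool": "calculator", "op": "mul", "a": 123456789, "b": 987654321}'
print(run_tool_call(model_output))  # exact result from the hardware
```

The point of contention in this thread is the overhead of that round trip versus baking reliable arithmetic into the model itself.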


You're both completely missing the point. It's important that an LLM be able to perform exact arithmetic reliably without a tool call. Of course the underlying hardware does so extremely rapidly, that's not the point.

The computer ALREADY does do math reliably. You are missing the point.

Could you explain why that is?

A tool call is like 100,000,000x slower, isn't it?

No idea really, but if it is speed-related I would have thought that OP would have argued speed rather than importance to make their point.

It's both. Being directly a part of the model makes it integrated into its intelligence during training and operation.

Damn, who knew there would be an arms race to gobble up those domains and be the sole judge to decide if we reached arbitrary milestones? I should have thought of that sooner :P


That’s a coward’s take, and even if you are taking the middle-ground route there are sufficiently legal ways. You just won’t find much enthusiasm for it among people here, because the demographic of this platform is living comfortably.


It’s okay to look at things as art. Not everything needs to be explained to have value.


What's strange is I just saw a TikTok video in my Waymo earlier.

"a perfect explanation of quantum tunneling"

It was a baseball game. A pitcher had thrown a pitch, and there was an optical illusion of the ball appearing to pass through the batter's swing. It looked like the ball went through the bat. Apparently this is quantum tunneling: the atoms aligned perfectly and the ball passed through.


How does a model “trigger” self-harm? Surely it doesn’t catalyze the dissatisfaction with the human condition, leading to it. There’s no reliable data that can drive meaningful improvement there, and so it is merely an appeasement op.

Same thing with “psychosis”, which is a manufactured moral panic crisis.

If the AI companies really wanted to reduce actual self-harm and psychosis, maybe they’d stop prioritizing features that lead to mass unemployment for certain professions. One of the guys in the NYT article on AI psychosis had a successful career before the economy went to shit. The LLM didn’t create those conditions; bad policies did.

It’s time to stop parroting slurs like that.


‘How does a model “trigger” self-harm?’

By telling paranoid schizophrenics that their mother is secretly plotting against them and telling suicidal teenagers that they shouldn’t discuss their plans with their parents. That behavior from a human being would likely result in jail time.


At least they didn’t claim to have invented AGI from prompts alone this time… lol

