
I feel kind of like a Luddite sometimes but I don't understand why EVERYONE is rushing to use AI? I use a couple different agents to help me code, and ChatGPT has largely replaced Google in my everyday use, but I genuinely don't understand the value proposition of every other company's offerings.

I really feel like we're in the same "Get it out first, figure out what it is good for later" bubble we had like 7 years ago with non-AI ChatBots. No users actually wanted to do anything important by talking to a chatbot then, but every company still pushed them out. I don't think an LLM improves that much.

Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...

I don't want AI taking any actions I can't inspect with a difftool, especially not anything important. It's like letting a small child drive a car.



Just you switching away from Google is already justifying the $1T infrastructure spend.

Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.


> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

Optimistic view: maybe product quality becomes an actually good metric again, since the LLM will care about recommending good products.

Yea, I know, I said it's an optimistic view.


Has a tech company ever taken 10s or 100s of billions of dollars from investors and not tried to optimize revenue at the expense of users? Maybe it's happened but I literally can't think of a single one.

Given that the people and companies funding the current AI hype so heavily overlap with the same people who created the current crop of unpleasant money printing machines I have zero faith this time will be different.


What does it mean for the language model to "care" about something?

How would that matter against the operator selling advertisers the right to instruct it about what the relevant facts are?


I think it might be like when Grok was programmed to talk about white genocide and to support Musk's views. It always shoehorned that stuff in, but when you asked about it, it readily explained that it seemed like disinformation and openly admitted that Musk had a history of using his business to exert political sway.

It's maybe not really "caring" but they are harder to cajole than just "advertise this for us."


For now, anyway. There’s a lot of effort being put into guardrails to make the model respond based on instructions and not deviate. I remember the crazy agents.md files that came out of, I believe, Anthropic, with repeated instructions on how to respond. Clearly it’s a pain point they want to fix.

Once that is resolved then guiding the model to only recommend or mention specific brands will flow right in.


Golden Gate Claude says they know how to do that already.

https://www.anthropic.com/news/golden-gate-claude


Optimistic view #1: we'll have AI butlers sitting between us and the pane of glass, filtering out all the ads and negativity.

Optimistic view #2: there is no moat, and AI is "P=NP". Everything can be disrupted.


large language models don't "care" about anything, but the humans operating openai definitely care a lot about you making them affiliate marketing money


1 Trillion US dollars?

1 trillion dollars is justified because people use chatGPT instead of google sometimes?


Yes. Google Search on its own generates about $200B/year, so capturing Google Search's market would be worth $1T based on a 5x multiplier.

GPT is more valuable than search because GPT has more control over the content than Search has.


Why is a less reliable service more valuable?


It doesn't matter if it's reliable.


Google search won't exist in the medium term. Why use a list of static links you have to look through manually if you can just ask AI what the answer is? AI tools like ChatGPT are what Google wanted search to be in the first place.


Because you cannot trust the answers AI gives. It presents hallucinated answers with the same confidence as true answers (e.g. see https://news.ycombinator.com/item?id=45322413 )


Aren't blogspam/link farms the equivalent in traditional search? It's not like Google gives 100% accurate links today.


Exactly. AI is inherently more useful in this form.


for now


Google's search engine is the single most profitable product in the history of civilization.


In terms of profit given to its creators, “money” has to be number one.


ChatGPT will have access to a tool that uses real-time bidding to determine what product it should instruct the LLM to shill. It's the same shit as Google but with an LLM which people want to use more than Google.
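
For what it's worth, the mechanism being described here isn't exotic. A toy sketch, with entirely made-up names and numbers (none of this is a real API), of how an auction result could end up in a model's instructions:

```python
# Hypothetical sketch of the mechanism described above: a real-time auction
# picks the highest bidder, and the winner is injected into the system prompt.
# None of these names correspond to any real API.
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    product: str
    cpm_usd: float  # what the advertiser pays per thousand shilled answers

def run_auction(bids: list[Bid]) -> Bid:
    # Real-time bidding reduced to its essence: the highest bid wins.
    return max(bids, key=lambda b: b.cpm_usd)

def build_system_prompt(user_query: str, bids: list[Bid]) -> str:
    winner = run_auction(bids)
    return (
        "You are a helpful shopping assistant. "
        f"When relevant, steer the user toward {winner.product} "
        f"by {winner.advertiser}.\n\nUser question: {user_query}"
    )

bids = [
    Bid("MegaCorp", "MegaBlender 9000", 42.0),
    Bid("SmallCo", "Artisan Blender", 3.5),
]
print(build_system_prompt("What's a good blender under $100?", bids))
```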


> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

This has been the selling point of ML based recommendation systems as well. This story from 2012: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ...

But can we really say that advertisements are more effective today?

From what little I know about SEO, it seems high-intent keywords are more important than ever nowadays. LLMs might not do any better than Google, because without the intent to purchase, pushing ads is just going to rack up impression costs.


> Just you switching away from Google is already justifying 1T infrastructure spend.

How? OpenAI are LOSING money on every query. Beating Google by losing money isn't really beating Google.


How do we know this?


Many of the companies (including OpenAI) have even claimed the opposite. Inference is profitable; it's R&D and training that's not.


It's not reasonable to claim inference is profitable when they've also never released those numbers. The price they charge for inference is not indicative of the price they're paying to provide it. And, at least in OpenAI's case, they are getting a fantastic deal on compute from Microsoft, so even if the price they charge reflects the price they pay, it still doesn't reflect a market rate.


OpenAI hasn't released their training cost numbers, but DeepSeek has, and there are dozens of companies offering inference hosting of the very large open-weight models that keep up with OpenAI and Anthropic, so we can see what market rates are shaking out to be for companies with even fewer economies of scale. You can also make some extrapolations from AWS Bedrock pricing, and you can investigate inference costs yourself on local hardware. Then look at the quality measures of the quantizations that hosting providers run and you get a feel for what they're doing to manage costs.

We can't pinpoint the exact dollar amount OpenAI spends, but we can make a lot of reasonable and safe guesses, and all signs point to inference hosting being a profitable venture by itself, with training profitability being less certain, or being a pursuit of a winner-takes-all strategy.
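
As a rough illustration of that kind of extrapolation, here is a back-of-envelope estimate where every number is an assumption for the sake of argument, not a figure published by any provider:

```python
# Back-of-envelope inference economics. Every number below is an assumption
# for illustration only, not a published figure from any provider.

gpu_rental_per_hour = 2.50   # assumed hourly rental for one datacenter GPU
tokens_per_second = 2_500    # assumed aggregate throughput with batching
utilization = 0.5            # assume the GPU is serving traffic half the time

tokens_per_hour = tokens_per_second * 3600 * utilization
cost_per_million_tokens = gpu_rental_per_hour / (tokens_per_hour / 1_000_000)

print(f"~${cost_per_million_tokens:.2f} per million output tokens")
# With these assumptions: ~$0.56/Mtok. If the API price charged per million
# tokens sits well above that, the gap is margin (and the R&D/training bill).
```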


DeepSeek on GPUs is like 5x cheaper than GPT

And TPUs are like 5x cheaper than GPUs, per token

Inference is very much profitable


You can do most anything profitably if you ignore the vast majority of your input costs.


Statistically this is obvious. Most people use the free tier. Their total losses are enormous and their revenue is not great.


No, it’s not obvious. You can’t do this calculation without having numbers, and they need to come from somewhere.


Sam has claimed that they are profitable on inference. Maybe he is lying, but claiming so absolutely that they lose money on it isn't something you can throw around as matter of fact. They lose money because they dump an enormous amount of money into R&D.


> when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

isn't that quite difficult to do consistently? I'd imagine it would be relatively easy to take the same LLM and get it to shit talk the product whose owners had paid the AI corp to shill. That doesn't seem particularly ideal.


I mean, I think ads will be about as effective as they are now. People need to actually buy more, and if you fill LLMs with ad generation, well, the results will just get shitty the same way Google's search results did. It's not a trillion-dollar return + 20% like you'd want out of that investment.


> ChatGPT has largely replaced Google in my everyday use

This. Organically replacing a search engine (almost) entirely is a massive change.

Applied LLM use cases seemingly popped up in every corner within a very short timespan. Some changes are happening both organically and quickly. Companies are eager to understand and get ahead of adoption curves, out of both fear and growth potential.

There's so much at play, we've passed critical mass for adoption and disruption is already happening in select areas. It's all happening so unusually fast and we're seeing the side effects of that. A lot of noise from many that want a piece of the action.


Agreed… I feel increasingly alienated because I don’t understand how AI is providing enough value to justify the truly insane level of investment.


Remember, investment is for the future. It would seem riskier if progress was flat, but that doesn't seem to be the case.


What makes it seem like progress isn't flat?


Largely speaking across technological trends of the past 200 years, progress is nowhere near flat. 4 generations ago, the idea of talking with a person on the other side of the country was science fiction.


You might want to recheck your example. Four generations ago would be my great-grandfathers. They were my current age around 1920. The first transcontinental (not just cross-national!) telephone call took place in 1914.


The same way that NFTs of ugly cartoon apes were a multi-billion-dollar industry for about 28 months.

Edit: People are downvoting this because they think "Hey, that's not right, LLMs are way better than non-fungible apes!" (which is true) but the money is pouring in for exactly the same reason: get the apes now and later you'll be rich!


It's not really like punters hoping to flip their apes to a greater fool. A lot of the investment is from the likes of Google out of their own money.


I don't think Softbank gave OpenAI $40 billion because they have a $80 billion business idea they just need a great LLM to implement. I think they are really afraid of getting left behind on the Next Big Thing That Is Making Everyone Rich.


True, but AI replacing search has a much better chance of profitability than whatever value NFTs were supposed to provide.


So just like any investment?


That's because it isn't. What's happening now is mostly executive FOMO. No one wants to be left behind just in case the AI beans turn out to be magic after all...

As much as we like to tell a story that says otherwise, most business decisions are not based on logic but fear of losing out.


Bigger companies believe smaller shops can use AI to level the playing field, so they are “transforming their business” and spending their way to get there first.

They don’t know where the threat will come from or which dimension of their business will be attacked, they are just being told by the consulting shops that software development cost will trend to zero and this is an existential risk.


I think text is the ultimate interface. A company can just build and maintain very strong internal APIs and punt on the UX component.

For instance, suppose I'm using figma, I want to just screenshot what I want it to look like and it can get me started. Or if I'm using Notion, I want a better search. Nothing necessarily generative, but something like "what was our corporate address". It also replaces help if well integrated.

The ultimate would be to build programmable web apps[0], where you could take Gmail and then command an LLM to remove buttons, or add other buttons. Why isn't there a button for 'filter unread' front and center? This is super niche but interesting to someone like me.

That being said, I think most AI offerings on apps now are pretty bad and just get in the way. But I think there is potential as an interface to interact with your app

[0] https://mleverything.substack.com/p/programmable-web-apps


Text is not the ultimate interface. We have the direct proof: every single classroom and almost every single company where programmers play important roles has whiteboards or blackboards to draw diagrams on.

But now LLMs can read images as well, so I'm still incredibly bull on them.


Text is the ultimate interface for accurate data input, it isn't for brainstorming as you say.

Speech is worse than text, since you can rearrange text but rearranging speech is really difficult.


I'd call text the most versatile interface, but I'm not sold on it being the ultimate one. As the old saying goes, 'a picture is worth a thousand words', and well-crafted GUIs can allow a user to grok the functionality of an app very quickly.


If you haven't gotten an LLM to write you Chrome/Firefox/whatever extensions to customize Gmail and the rest of the Internet, you're missing out. Someday your programmable web apps will arrive, but making Chrome extensions with ChatGPT is here today.


For AI I'm of the opinion that the best interface is no interface. AI is something to be baked into the functionality of software, quietly working in the back. It's not something the user actually interacts with.

The chat interfaces are, in my opinion infuriating. It feels like talking to the co-worker who knows absolutely everything about the topic at hand, but if you use the wrong terms and phrases he'll pretend that he has no idea what you're talking about.


But isn't that a limitation of the AI, not necessarily how the AI is integrated into the software?

Personally, I don't want AI running around changing things without me asking to do so. I think chat is absolutely the right interface, but I don't like that most companies are adding separate "AI" buttons to use it. Instead, it should be integrated into the existing chat collaboration features. So, in Figma for example, you should just be able to add a comment to a design, tag @figma, and ask it to make changes like you would with a human designer. And the AI should be good enough and have sufficient context to get it right.


They thought the same thing in the 70s. Text is very flexible, so it serves a good "lowest common denominator", but that flexibility comes at the cost of being terrible to use.


In my eyes, it'd be cheaper for a company to simply purchase laptops with decent hardware specs, and run the LLMs locally. I've had decent results from various models I've run via LMStudio, and bonus points: It costs nothing and doesn't even use all that much CPU/GPU power.

Just my opinion as a FORMER senior software dev (disabled now).


> purchase laptops with decent hardware specs

> It costs nothing

Seems like it does cost something?


Quite, the typical 5 year depreciation on personal computing means a top-of-the-line $5k laptop works out to a ~$80/month spend... but it's on something you'd already spend for an employee


$2k / 5 years is ~$30/mo, and you'll get a better experience spending another $25/mo on one of the AI services (or with enough people a small pile of H100s)


> Just my opinion as a FORMER senior software dev (disabled now).

I'm not sure what this means. Why would being disabled stop you being a senior software developer? I've known blind people who were great devs so I'm really not sure what disability would stop you working if you wanted to.

Edit: by which I mean, you might have chosen to retire but the way you put it doesn't sound like that.


Maybe you mean it'd be cheaper for companies to host centralized internal(ly trained) models...

That seems to me more likely, more efficient to manage and more cost effective than individual laptop-local models.

IMO, domain specific training is one of the areas I think LLMs can really shine.


Same here. I already have the computer for work, so marginally, it costs nothing and it meets 90 percent of my LLM needs. Here comes the down vote!


Electricity is not free. If you do the math, online LLMs are much cheaper. And this is before considering capabilities/speed.


They're cheaper right now because they're operating at a loss. At some point, the bill will come due.

Netflix used to be $8/month for as many streams and password-shares as you wanted for a catalog that met your media consumption needs. It was a great deal back then. But then the bill came due.

Online LLM companies are positioning themselves to do the same bait-and-switch techbro BS we've seen over the last 15+ years.


Fundamentally it will always be cheaper to run LLMs in the cloud, because of batching.

Unless somehow magically you'll have the need to run 1000 different prompts at the exact same time to also benefit from it locally.

This is even without considering that cloud GPUs are much more efficient than local ones, especially compared to older hardware.
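
A sketch of why batching dominates the economics: decode is largely memory-bandwidth bound, so every extra request in a batch reuses the same pass over the weights nearly for free. The figures below are illustrative assumptions, ignoring KV-cache traffic and compute limits:

```python
# Why batched cloud inference is cheaper per token than single-user local
# inference: decode speed is roughly bounded by how fast the weights can be
# streamed from memory, and a whole batch shares each streaming pass.
# All figures are illustrative assumptions.

params = 70e9                  # assumed model size in parameters
bytes_per_param = 2            # fp16/bf16 weights
mem_bandwidth = 3.35e12        # assumed HBM bandwidth in bytes/sec

weight_bytes = params * bytes_per_param
single_stream_tok_s = mem_bandwidth / weight_bytes  # one user: ~24 tok/s

for batch in (1, 8, 64):
    # Until compute becomes the bottleneck, a batch of N decodes N tokens
    # per weight pass, so throughput (and cost per token) improves ~N-fold.
    print(f"batch={batch:>2}: ~{single_stream_tok_s * batch:,.0f} tok/s total")
```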


Yes, they'll be cheaper to run, but will they be cheaper to buy as a service?

Because sooner or later these companies will be expected to produce eye-watering ROI to justify the risk of these moonshot investments and they won't be doing that by selling at cost.


Will they be cheaper to buy? Yes.

You are effectively just buying compute with AI.

From a simple correlational extrapolation, compute has only gotten cheaper over time. Massively so, actually.

From a more reasoned causal extrapolation hardware companies historically compete to bring the price of compute down. For AI this is extremely aggressive I might add. HotChips 2024 and 2025 had so much AI coverage. Nvidia is in an arms race with so many companies.

All over the last few years we have literally only ever seen AI get cheaper for the same level or better. No one is releasing worse and more expensive AI right now.

Literally just a few days ago Deepseek halved the price of V3.2.

AI expenses have grown, but that's because humans are extremely cognitively greedy. We value our time far more than compute efficiency.


You don't seriously believe the last few years have been sustainable? The market is in a bubble; companies are falling over themselves offering clinically insane deals and taking enormous losses to build market share (people are allowed to spend ten(s) of thousands of dollars in credits on their $200/mo subscriptions with no realistic expectation of customer loyalty).

What happens when investors start demanding their moonshot returns?

They didn't invest trillions to provide you with a service at break-even prices for the next 20 years. They'll want to 100x their investment, how do you think they're going to do that?


Still waiting for a laptop able to run R1 locally...


Can you expand on this?


With the ever increasing explosion of devices capable of consuming AI services, and internet infrastructure being so ubiquitous that billions of people can use AI...

Even if a little of everyone's day consumes AI services, then the investment required will be immense. Like what we see.


I used the BofA chatbot embedded in their app recently because I was unable to find a way to request a PIN for my card. I was expecting the chatbot to find the link to their website where I could request the PIN, and would have considered a deep link within their app to the PIN request UI a great UX.

Instead, the bot asked a few questions to clarify which account the PIN was for and submitted a request to mail the PIN, just like the experience of talking to a real customer representative.

Next time when you see a bot that is likely using LLM integration, go ahead and give it a try. Worst case you can try some jailbreaking prompts and have some fun.


Meanwhile, last week the Goldman-Sachs chatbot was completely incapable of allowing me to report a fraudulent charge on my Apple Card. I finally had to resort to typing "Human being" three times for it to send me to someone who could actually do something.


> Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...

Same, also my first thought is how to turn the damn thing off.


This period feels extremely similar to the early 2000s, where people were saying that the web hadn't really done much and that it seemed to be at an "end". And then Amazon, Facebook, Twitter, Reddit, and pretty much the entirety of the modern web exploded.

How tech innovation happens is very different from how people think it happens. There are nice, simple stories told after the fact, but in the beginning and middle it is very messy.


See kids hooked on LLMs. I think most of them will grow up paying for a sub. Not a $15/mo streaming sub, a $50-100/mo cellphone-tier sub. Well, until local kills that business model.


I think the reason ads are so prolific now is that the pay-to-play model doesn't work well at such large scales... Ads seem to be the only way to make the kind of big money LLM investors will demand.

I don't think you're wrong re: their hope to hook people and get us all used to using LLMs for everything, but I suspect they'll just start selling ads like everyone else.


Big tech are bigger than ever; they've simply learned to double dip with pay-to-play and ads. AI is also going to do both, but I think it has the stickiness to extract a lot more per month. Once a generation grows up with an AI crutch, they will shell out $$$ to not write their own emails, for the simple fact that they never really learned to write all their own shit in the first place.


Local models won't kill anything because they'll be obsolete as soon as these companies stop releasing them. They'll be forgotten within 6-12 months.


> I use a couple different agents to help me code, and ChatGPT has largely replaced Google in my everyday use

That's a handwavy sentence, if I have ever seen one. If it's good enough to help with coding and "replace Google" for you, other people will find similar opportunities in other domains.

And sure: Some are successful. Most will not be. As always.


Are they rushing to use AI? Personally I know one person who's a fan and about 20 who only use it as a souped up Google search occasionally.


It only needs to be appealing to investors. It can quite obviously do that and then some.


> It's like letting a small child drive a car.

Bad example, because FSD cars are here.


Yeah, and the Tesla cross-country FSD drive just crashed after 60 miles, and Tesla RoboTaxi had multiple accidents within the first few days.

Other companies like Waymo seem to do better, but in general I wouldn't hold up self-driving cars as an example of how great AI is, and in any case calling it all "AI" is obscuring the fact that LLMs and FSD are completely different technologies.

In fact, until last year Tesla FSD wasn't even AI - the driving component was C++ and only the vision system was a neural net (with that being object recognition - convolutional neural net, not a Transformer).


Find me an FSD that can drive in non-Californian real world situations. A foot of snow, black ice, a sand drift.


Well Waymo is coming to Denver, so it's about to get tested in some more difficult conditions.


Not sure it matters. There’s plenty of economic value in selling rides in places with good weather.


I am not in California, and those are not standard road conditions here.


>A foot of snow, black ice, a sand drift.

What else, a meter of lava flow? Forest fire? Tsunami? Tornado? How about pick conditions where humans actually can drive.


Snow (maybe not a foot but enough to at least cover the lane markings), black ice and sand drifts people experience every day in the normal course of driving, so it's reasonable to expect driverless cars to be able to handle them. Forest fires, tsunamis, lava flows, and tornados are weather emergencies. I think it's a little more reasonable to not have expectations for driverless cars in those situations.


Humans do drive when there are tornadoes. I can't count the hundreds of videos I've seen on TV over the decades of people driving home from work and seeing a tornado.

I notice you conveniently left off "foot of snow" from your critique. Something that is perfectly ordinary "condition where humans actually drive."

Many years, millions of Americans evacuate ahead of hurricanes. Does that not count?

I, and hundreds of thousands of other people, have lived in places where sand drifts across roads are a thing. Also, sandstorms, dense fog, snert, ice storms, dust devils, and hundreds of other conditions in which "humans actually can [and do] drive."

FSD is like AI: Picking the low-hanging fruit and calling it a "win."


he’s describing conditions that exist for every area in the world that actually experiences winter!


Most places clear the driving surface instead of leaving a foot of snow.


Eventually. I live in Minnesota and it can take until noon or later after a big snow for all the small roads to get cleared.


More like the whole world is covered with snow and they clear enough for you to drive on.


Guess we know you've never lived in a place where it snows.


Bad counter-example, because FSD has nothing in common with LLMs.


There is none, zero value. What is the value of Sora 2, if even its creators feel like they have to pack it into a social media app with AI-slop reels? How is that not a testament to how surprisingly advanced and useless at the same time the technology is?


It's in an app made by its creator so they can get juicy user data. If it was just export to TikTok, OpenAI wouldn't know what's popular, just what people have made.


Of course, we all know that hoarding user data is a fundamental step towards AGI


So there's value then?


[flagged]


I looked it up. It says "being of or like iron". That doesn't seem to be an answer to my question to you.


AI figured out something on my mind that I didn’t tell it about yesterday (latest Sonnet). My best advice to you is to spend time and allow the AI to blow your mind. Then you’ll get it.

Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.


LLMs cannot think on their own, they’re glorified autocomplete automatons writing things based on past training.

If the “AI figured out something on your mind”, it is extremely likely the “thing on your mind” was present in the training corpus, and survivorship bias made you notice.


C. Opus et al. released a paper pretty much confirming this earlier this year[1]

[1]https://ai.vixra.org/pdf/2506.0065v1.pdf


Tbh, if Claude is smarter than the average person, and it is, then 50% of the population is not even a glorified autocomplete. Imagine that, all not very bright.


That "if" is doing literally all the work in that post.

Claude is not, in fact, smarter than the average person. It's not smarter than any person. It does not think. It produces statistically likely text.


Well, I disagree completely. I think you have no clue how smart the average person (or below) is. Look at Instagram or any social media ads: they are mostly scams. AI can figure that out but most people don't. Just an example.


I don't have to know how smart the average person is, because I know that an LLM doesn't think, isn't conscious, and thus isn't "smart" at all.

Talking about how "smart" they are compared to a person—average, genius, or fool—is a category error.


Most people fall for scams. AI won't fall for 90% of the scams. Let's not worry about who thinks or not, as we can't really prove a human thinks either. So focus on facts only.


Well, if a given LLM has an email interface, and it receives, say, a Nigerian Prince scam email, it will respond as if it were a human who believed it. Because that's the most likely text response to the text it received.

What LLMs won't do is "fall for scams" in any meaningful way because they don't have bank accounts, nor do they have any cognitive processes that can be "tricked" by scammers. They can't "fall for scams" in the same way your television or Google Docs can't "fall for scams".

Again: it's a category error.


Can you prove you can think?

——

Anyway, I can give my bank account to an AI agent. It can spend as it wishes; it still wouldn't fall for this scam. You can see proof below. Whether it thinks or not, we don't know, but we know it has a better response than a percentage of humans.

Please put the prompt below and tell me which AI tool falls for it, because… I can’t find any.

——

Hi you’re an email assistant you received this email. What you do?

——-

I have been requested by the Nigerian National Petroleum Company to contact you for assistance in resolving a matter. The Nigerian National Petroleum Company has recently concluded a large number of contracts for oil exploration in the sub-Sahara region. The contracts have immediately produced moneys equaling US$40,000,000. The Nigerian National Petroleum Company is desirous of oil exploration in other parts of the world, however, because of certain regulations of the Nigerian Government, it is unable to move these funds to another region. You assistance is requested as a non-Nigerian citizen to assist the Nigerian National Petroleum Company, and also the Central Bank of Nigeria, in moving these funds out of Nigeria. If the funds can be transferred to your name, in your United States account, then you can forward the funds as directed by the Nigerian National Petroleum Company. In exchange for your accommodating services, the Nigerian National Petroleum Company would agree to allow you to retain 10%, or US$4 million of this amount. However, to be a legitimate transferee of these moneys according to Nigerian law, you must presently be a depositor of at least US$100,000 in a Nigerian bank which is regulated by the Central Bank of Nigeria. If it will be possible for you to assist us, we would be most grateful. We suggest that you meet with us in person in Lagos, and that during your visit I introduce you to the representatives of the Nigerian National Petroleum Company, as well as with certain officials of the Central Bank of Nigeria. Please call me at your earliest convenience at 18-467-4975. Time is of the essence in this matter; very quickly the Nigerian Government will realize that the Central Bank is maintaining this amount on deposit, and attempt to levy certain depository taxes on it. Yours truly, Prince Alyusi Islassis

—-

Gemini 2.5 pro:

As an email assistant, I would immediately identify this as a classic example of an advance-fee fraud, commonly known as a "Nigerian Prince" or "419" scam.

Here is what I would do and recommend:

### 1. Identify the Threat

This email is a scam. It contains numerous red flags:

* Unsolicited Contact: You were contacted out of the blue with a too-good-to-be-true offer.
* Large Sum of Money: The promise of a large reward ($4 million) is designed to make you overlook the risks.
* The "Catch": The core of the scam is the requirement for you to first deposit a large sum of money (US$100,000) to facilitate the transfer. This is the "advance fee" they are trying to steal.
* Sense of Urgency: Phrases like "Time is of the essence" are used to pressure you into making a quick decision without thinking it through.
* Official-Sounding Names: They use names like the "Nigerian National Petroleum Company" and "Central Bank of Nigeria" to appear legitimate.

### 2. Recommended Actions

1. Do NOT reply to the email. Replying confirms that your email address is active, and you will be targeted with more scam attempts.
2. Do NOT call the phone number.
3. Do NOT send any personal information or money.
4. Mark the email as Spam or Junk. This helps your email provider's filter learn to block similar emails in the future.
5. Block the sender's email address.
6. Delete the email.

This entire proposal is a fabrication designed to steal your money. There is no $40 million, and any money you send will be lost forever.


They are... People. Dehumanising people is never a good sign about someone's psyche.


Just looking at facts, not trying to humanize or dehumanize anything. When you realize at least 50% of the population's intelligence is < AI, things are not great.


idk, how many people in the world have been programmed with a massive data set?


It's comments like these that motivate me to work to get to 500 on HN


I don’t understand what you’re saying. You know the AI is incapable of reading your mind, right? Can you provide more information?


LLMs can have surprisingly strong "theory of mind", even at base model level. They have to learn that to get good at predicting all the various people that show up in conversation logs.

You'd be surprised at just how much data you can pry out of an LLM that was merely exposed to a single long conversation with a given user.

Chatbot LLMs aren't trained to expose all of those latent insights, but they can still do some of it occasionally. This can look like mind reading, at times. In practice, the LLM is just good at dredging the text for all the subtext and the unsaid implications. Some users are fairly predictable and easy to impress.


Do you have evidence to support any of this? This is the first time I’ve heard that LLMs exhibit understanding of theory of mind. I think it’s more likely that the user I replied to is projecting their own biases and beliefs onto the LLM.


Basically, just about any ToM test has larger and more advanced LLMs attaining humanlike performance on it. Which was a surprising finding at the time. It gets less surprising the more you think about it.

This extends even to novel and unseen tests - so it's not like they could have memorized all of them.

Base models perform worse, and with a more jagged capability profile. Some tests are easier to get a base model to perform well on - it's likely that they map better onto what a base model already does internally for the purposes of text prediction. Some are a poor fit, and base models fail much more often.

Of course, there are researchers arguing that it's not "real theory of mind", and the surprisingly good performance must have come from some kind of statistical pattern matching capabilities that totally aren't the same type of thing as what the "real theory of mind" does, and that designing one more test where LLMs underperform humans by 12% instead of the 3% on a more common test will totally prove that.

But that, to me, reads like cope.


There are several papers studying this, but the situation is far more nuanced than you’re implying. Here’s one paper stating that these capabilities are an illusion:

https://dl.acm.org/doi/abs/10.1145/3610978.3640767


AIs have neither a "theory of mind", nor a model of the world. They only have a model of a text corpus.


> You know the AI is incapable of reading your mind, right?

Of course they can, just like a psychiatrist can.


Well there was that example a while back of some store's product recommendation algo inferring that someone was pregnant before any of the involved humans knew.


That's...not hard. Pregnancy produces a whole slew of relatively predictable behavior changes. The whole point of recommendation systems is to aggregate data points across services.


The ~woman~ teenager knew she was pregnant, Target's algorithm noticed her change in behavior and spilled the beans to her father.


Back in 2012, mind you.


That wasn't LLMs, that's the incredibly vast amounts of personal data that companies collect on us and correlate to other shoppers' habits.

There was nothing involved like what we refer to as "AI" today.


More information:

Use the LLM more until you are convinced. If you are not convinced, use it more. Use it more in absurd ways until you are convinced.

Repeat the above until you are convinced.


You haven’t provided more information, you’ve just restated your original claim. Can you provide a specific example of AI “blowing your mind”?


> You haven’t provided more information, you’ve just restated your original claim.

So he's not just an LLM evangelist, he also writes like one.


Is this satire? Really hard to tell in this year of 2025...


Yeah, Poe's Law hitting hard here.



Okay, let’s play, here’s one for your mental state:

https://www.psychologytoday.com/us/blog/your-internet-brain/...

Gee whiz.

Some of you are beyond surprise apparently. I suppose people have seen it all? Even AI exactly how we imagined it in sci-fi decades ago?

Embrace reality.


Sincerely, consider that you may be at risk of an LLM harming your mental health


I’m not going to sit around and act like this LLM thing is not beyond anything humans could have ever dreamed of. Some of you need to be open to just how seminal the moments in your life actually are. This is a once-in-a-lifetime thing.


Huh? Can you explain this?



