
At some point we have to be willing to call out, at a societal level, that LLMs have been fundamentally oversold. Responding to "it made up defamatory facts" with "you're using it wrong" is only going to fly for so long.

Yes, I understand that this was not the intended use. But at some point if a consumer product can be abused so badly and is so easy to use outside of its intended purposes, it's a problem for the business to solve and not for the consumer.



Maybe someone else actually made the defamatory fact up, and it was just parroted.

But fundamentally, the reason ChatGPT became so popular, as opposed to incumbents like Google or Wikipedia, is that it dispensed with the idea of attributing quotes to sources. Even if 90% of the things it says could be attributed, it's by design that it can say novel stuff.

The other side of the coin is that, for things that are not novel, it attributes the words to itself rather than sharing the credit with its sources, which is what made the thing so popular in the first place, as if it were some kind of magic trick.

These are obviously not fixable; they're part of the design. My theory is that the liabilities will end up equal to, if not greater than, the revenue OpenAI recoups, but the liabilities will just take a lot longer to realize, considering not only the length of trials but the time needed for case law and even new legislation to develop.

In 10 years, Sama will be fighting to make the thing an NFP again and have the government bail it out of all the lawsuits that it will accrue.

Maybe you can't just do things


Businesses can't just wave a magic wand and make the models perfect. It's early days, with many open questions. As these models are a net positive, I think we should focus on mitigating the harms rather than taking a zero-tolerance stance. We shouldn't allow the businesses to be neglectful, but I don't see evidence of that.


> We shouldn't allow the businesses to be neglectful, but I don't see evidence of that.

Calling it "AI", shoving it into many existing workflows as if it's competently answering questions, and generally treating it like an oracle IS being neglectful.


Here on HN we talk about models, and rightfully so. Elsewhere, though, people talk about AI, which carries a different set of assumptions.

It's worth noting too that how we talk about and use AI models is very different from how we talk about other types of models. So maybe it's not surprising people don't understand them as models.


Even if they had a magic wand, they still couldn't make them perfect, because they are, by nature, imperfect statistical machines. That imperfection IS their main feature.


It can't be perfect, right? I mean, the models require some level of entropy?
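
For the intuition: standard decoding samples from a softmax distribution, and any temperature above zero leaves entropy in that distribution, so outputs are stochastic by construction. Here's a minimal sketch in Python/NumPy; the function name and the toy logits are made up for illustration, not anyone's actual implementation:

    import numpy as np

    def sample_next_token(logits, temperature=0.8, rng=None):
        # Scale logits by temperature, apply a numerically stable
        # softmax, then draw one token index from the distribution.
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    # Hypothetical logits over a 4-token vocabulary: the top token is
    # likely but never certain, so repeated draws can and do differ.
    logits = [2.0, 1.0, 0.5, 0.1]
    print([sample_next_token(logits) for _ in range(10)])

At temperature near zero this collapses to greedy argmax, which is deterministic, but the underlying distribution can still assign probability to false statements, so removing the entropy alone doesn't buy correctness.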


Businesses should be able to not lie. In fact, they should be punished for lying and exaggerating far more often than they are: criticised publicly, losing contracts, and facing legal consequences.


> As these models are a net positive

Uhhh… net positive for who exactly?


For the shareholders of a few companies (in the short term).


ChatGPT has 800 million weekly active users. I think it's a net positive for them.


Well, since that's 10x the number of weekly active opioid users, it's at least 10x more positive than fentanyl.

Or am I not following your logic correctly?


You are not arguing in good faith.


You seem to be missing the obvious point: popularity of a product doesn't ensure the benefit of said product. There are tons of wildly popular products which have extremely negative outcomes for the user and society at large.

Let's take a weaker example: some sugary soda. Tons of people drink sugary sodas. Are they truly a net benefit to society, or a net social cost? Pointing out that there are a high number of users doesn't mean the product inherently produces a high amount of positive outcomes. For a lot of those drinkers the outcomes are incredibly negative, and for a large chunk of society the general outcome is slightly worse. I'm not trying to argue sugary sodas deserve to be completely banned, but it's not a given they're beneficial just because a lot of people bothered to buy them. We can't say Coca-Cola is obviously good for people because it's being bought in massive quantities.

Do the same analysis for cigarettes: a product with many hundreds of millions (billions?) of users, using it all day, every day. Couldn't be bad for them, right? People wouldn't buy something that obviously harms them, right?

AI might not be like cigarettes and sodas, sure. I don't think it is. But saying "X has Y number of weekly active users, therefore it must be a net positive" draws a correlation that may or may not exist. If you want to show it's positive for those users, show the positive outcomes, not just a user count.


Net positive, to me, means that the negative aspects are outweighed by the positive aspects.

How confident are you that 800M people know what the negative aspects are to make it a net positive for them?



