Hacker News | sureMan6's comments

A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than to illegally exploit them, since most people aren't criminals.

Exactly. Who even hacks stuff? Most people would rather report the issue to earn XP and level up than actually exploit it.

This article is LLM-written and unsourced, so the entire thing could be a hallucination and there's no reason to trust any claim made in it. Don't even bother reading it.

It's not really incompetence if he hasn't looked at the article.

> Another solution is to make software makers responsible and liable for the output of their products. It's long been a problem that there is little legal responsibility, but we shouldn't just accept it. If Ford makes exploding cars, they are liable. If OpenAI makes software that endangers people, it should be the same.

That kind of thinking is exactly why LLMs are so censored: people think OAI should be liable if someone uses ChatGPT to commit cybercrimes.

How about this: cybercrimes are already illegal, so we just punish whoever uses the new tools to commit crimes instead of holding the toolmaker liable.

This gets complicated if LLMs enable children to commit complex crimes, but that's different from outright restricting the tool for everyone because someone might misuse it.


There's always some wedge issue that means "don't punish the toolmaker" is not politically viable. You can pick anything from guns to legal drugs to illegal drugs to all kinds of emotive things.

And once the wedge is in and the concept of maker responsibility is planted, it expands to people's pet issues, obviously.

The actual line of who gets punished just ends up at some equilibrium in the middle, largely arbitrarily.


I think the classic one is pedophiles and protecting children.

If someone uses ChatGPT to create child porn or, worse, to get help tracking down and meeting children, there is NO way in hell the public will accept "don't punish the toolmaker" as a principle.


"It's just a neutral tool" gets a lot harder to claim once a vendor starts specifically training and marketing the model for its ability to bypass security controls.

Yes, pentesting tools, even automated ones, are often legal. But they commonly do run up against legal restrictions and risks. They're marketed very differently from ChatGPT.


I bought a used Samsung and it started boot-looping almost immediately. All these issues seem very specific to Samsung.


The Seagate of cellphones


I'm not aware of Seagate ever having produced explosives.

>I bought a used Samsung and it started boot looping almost immediately

Maybe that's why the previous owner sold it.


I opt into it on my site. It's just a login option you can ignore if you want to log in another way, but for those who use it, it removes the friction of writing out a password and verifying the email.


It can’t just be ignored, it covers content, and if someone accidentally clicks the wrong thing… poof, they now have that site linked to their Google account.

It’s a cancer on the Internet.


Thanks for sharing! It's not easily ignored for some people (I ignore it the same way I ignored banner ads in the 00s). I'm curious whether you have any metrics on bounce rates with and without the option. The sentiment here on HN appears to be largely negative, but HN does not represent the population at large. I find that many people don't mind, or even like, a lot of stuff that HN tends to hate.


I'm annoyed by it every time, on every site, when I have to dismiss it. I'm probably not the only one, and it probably depends on your type of site and visitors.


I'm sure some number of website owners ran A/B tests and determined that more people signed in when it was present.

I'm also sure that some number of website owners don't know or care that it's annoying to some people.

Personally, I've just learned to ignore it; but if it did annoy me enough, I'd zap it with uBlock.
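For what it's worth, instead of zapping it manually each time you can add a permanent cosmetic rule to uBlock Origin's "My filters". A minimal sketch (the `credential_picker_container` id is an assumption about how Google's One Tap prompt is currently injected; check your own DOM with the element picker before relying on it):

```
! Hide the Google One Tap sign-in prompt everywhere
! (container id is an assumption; verify with uBlock's element picker)
##div#credential_picker_container
! Some sites embed it as an iframe from accounts.google.com instead
##iframe[src^="https://accounts.google.com/gsi/"]
```

The element picker ("zapper" is the one-off version) will generate a site-specific rule for you automatically, which is safer than a global one if a selector ever collides with legitimate page content.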


Checking what the school is exposing the children to is part of parenting; if enough parents demanded parental controls on the iPad, you'd get them. Also, it sounds insane that any school is giving children iPads; if anything, the studies show worse outcomes with iPads.


> Checking what the school is exposing the children to is part of parenting, if enough parents demand parental controls on the iPad you'd get that.

Yes, but oftentimes enough parents DON'T demand that.

Most parents think "iPads are a good thing; children need to learn tech in order to have good jobs." Other parents think "iPads aren't good, but if I complain I'll be that annoying parent no one likes." Only a minority is vocal.


> everyone is using AI for copyediting now aren't they?

If the studies saying that humans prefer AI writing are to be believed, then you'd be a fool not to.


Depends on the type of human you want to attract.


I believe it's already a solved problem, especially with base models (pre-RL), but they still push the LLM voice, either to make it easy to identify or because they think it's likeable. So it's not that OAI, Anthropic, and Google can't get rid of the assistant voice; it's that they don't want to.


No swipe typing and no autocorrect make it DOA.


The site specifically says it is swipe-friendly, and the GitHub site makes clear that it supports autocorrect.

