Hacker News

Is it just me, or do all these efforts seem like they are, by definition, always going to be insufficient?

It's just a matter of time before algorithmically written text actually becomes indistinguishable from some human-written text. At that point, there is no longer any way to tell them apart, no matter how smart the classifier. If the texts are actually written the same way, there's no secret pattern that can be picked up on, and the fight is over.

I think to combat fake news, especially algorithmically generated news, we'll need to innovate around authentication mechanisms that can effectively prove who you are and how much effort you put into writing something. Digital signatures or things like that.



As these generators get better, the number of false positives will increase, eventually rendering the classifier useless.

A Sybil-resistant method of authentication, where each identity is tied to a single human, seems to be the only way. I suppose you could still pay people to publish under their credentials, or steal private keys, but that comes at a cost, and such accounts can be blacklisted.

Also, I don't think it's correct to equate machine-written news with fake news; it need not be the case. Eventually, I think the only way to deter fake news is authentication plus holding people accountable.


>It's just a matter of time for algorithm written text to actually be similar to some human written text.

https://xkcd.com/810/



