
"Safety is actually more of a thing than you might guess if you read a lot from Zvi or Lesswrong. There's a large number of people working to develop safety systems. Given the nature of OpenAI, I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones (intelligence explosion, power-seeking). That's not to say that nobody is working on the latter, there's definitely people focusing on the theoretical risks. But from my viewpoint, it's not the focus."

This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, a misaligned intelligence explosion is exactly the safety risk you have in mind! So readers' guesses are actually right: OpenAI isn't really following Sam Altman's own warning:

"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]

[0] https://blog.samaltman.com/machine-intelligence-part-1
