Hacker News | ndr's comments

Worth checking this post from someone who actually worked on this change:

> I take significant responsibility for this change.

https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...


This guy from Effective Altruism pivoted away from helping the poor to trying to stop AI from becoming a Terminator-type entity, and then pivoted to, ah, it's okay for it to be a Terminator-type entity.

> Holden Karnofsky, who co-founded the EA charity evaluator GiveWell, says that while he used to work on trying to help the poor, he switched to working on artificial intelligence because of the “stakes”:

> “The reason I currently spend so much time planning around speculative future technologies (instead of working on evidence-backed, cost-effective ways of helping low-income people today—which I did for much of my career, and still think is one of the best things to work on) is because I think the stakes are just that high.”

> Karnofsky says that artificial intelligence could produce a future “like in the Terminator movies” and that “AI could defeat all of humanity combined.” Thus stopping artificial intelligence from doing this is a very high priority indeed.

https://www.currentaffairs.org/news/2022/09/defective-altrui...

He is just giving everyone permission to do bad things by wrapping them in a lot of words.


Effective Altruism is such a beautiful term for a pretentious Karen who needs to wrap their selfish actions in moral superiority.

It's that perfect blend of "I'm doing what everyone else is doing" and "I'm better than everyone else."

Chef's kiss.


> then pivoted to, ah, it's okay for it to be a Terminator-type entity.

Isn’t that the opposite of what he’s saying? He’s saying it could become that powerful, and given that possibility it’s incredibly important that we do whatever we can to gain more control over that scenario.


> Isn’t that the opposite of what he’s saying?

The quote was from 2022, about the first pivot to AI, to prevent it from becoming a Terminator-style entity. The latest pivot was not in the quote but is the topic of this current Hacker News post, where he takes credit for dropping the safety pledge:

"That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance."

I expect the next pivot will be that we need to let the US military use Anthropic to kill people, because otherwise they will use a less pure AI to kill people, and our Anthropic is better at only killing the bad guys, thus it is the lesser evil.


I think the poster here has an axe to grind, considering they quoted something that directly contradicted their point and didn't even notice.

The quote covered only the 2022 pivot to AI safety; the 2026 pivot away from AI safety is the topic of this Hacker News post.

Getting SBF vibes from this. "Earn to give" is an inherently flawed philosophy.

Effective altruism came out of the "rationalist" movement.

It was never about helping poor people.

For some reason, the rationalist movement and its offshoots are really pervasive in Silicon Valley. I don't see them much in the other tech cities.


> I generally think it’s bad to create an environment that encourages people to be afraid of making mistakes, afraid of admitting mistakes and reticent to change things that aren’t working

"move fast and break things" ?


"don't hold me liable"

> > I take significant responsibility for this change.

Empty words. I would like to know one single meaningful way he will be held responsible for any negative effects.


Did this guy actually write this?

Incredibly long and verbose. I will stop short of accusing him of using an AI to generate slop, but whatever happened to people's ability to make short, strong, simple arguments?

If you can't communicate the essence of an argument in a short and simple way, you probably don't understand it in great depth, and clearly don't care about actually convincing anybody because Lord knows nobody is going to RTFA when it's that long...

At best, you're just trying to communicate to academics who are used to reading papers... Need to expect better from these people if we want to actually improve the world... Standards need to be higher.


Perhaps they didn’t have the time to write a shorter version.

Or the discipline.

Maybe neither.


This is where people go to post long verbose statements.

You can usually find the short version on Twitter.


This style is in vogue in the LessWrong community.

I genuinely believe that website is responsible for a lot of the worst ideas currently permeating the technology sector.

Pretty much the intellectual equivalent of looksmaxxing.

Been thinking about the nature of this behavior for a long time; you have nailed it so well that no one will be able to take out this nail.

Worth checking out what someone working on it actually has to say: https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

Will they ever have to go public? I imagine there's a way they can buy everything back.

Regulator link:

https://ico.org.uk/about-the-ico/media-centre/news-and-blogs...

But it's too long for HN limits.


Soon indeed. From today:

> Cardiologist wins 3rd place at Anthropic's hackathon.

https://x.com/trajektoriePL/status/2024774752116658539


https://archive.ph/tUUMd as the site randomly 404s


I wonder what they will find. They seem to have acknowledged working on the problem before.

https://x.com/elonmusk/status/2011432649353511350


Have you seen some of the stuff in the Enron or Epstein emails? They can be rather candid, acting as if there is nothing to hide or they will never get caught.

Elites need a reckoning


If you have trouble with toenail trauma (all chipped, for instance), check out heel-lock lacing. It will keep your toes from hitting the front of the shoe.

One example here [0] for running shoes, but it's useful for normal walking too. Ian of course has his own entry about this [1].

[0] https://www.youtube.com/watch?v=OBbc6TackDQ&t=68s

[1] https://www.fieggen.com/shoelace/locklacing.htm


I found this to be incredibly helpful for blister prevention. The only downside is your laces have to be pretty long, since this takes up a couple more inches on each side.


Very.

> Tesla was Norway's top-selling car brand for a fifth consecutive year, with a 19.1% market share, followed by Volkswagen at 13.3% of registrations and Volvo Cars at 7.8%.

https://www.reuters.com/sustainability/climate-energy/norway...


I would say it's almost impossible with typical packaging. What makes it easy?

