No, Sam Altman is the CEO of OpenAI because he built it. He's incredibly smart and driven, the only wealthy person I've worked with who I think is, at his core, a good person who cares.
I don't know the backstory, but they've gone their separate ways since this podcast. Musk left OpenAI after a failed takeover attempt:
> Musk told fellow cofounder Sam Altman in early 2018 that he thought OpenAI, which has since created ChatGPT, was lagging behind Google, people familiar with the matter told Semafor. Musk offered to take charge of OpenAI to lead it himself, but when Altman and other co-founders said no, Musk stepped down from the board and backed out of a huge donation, per Semafor.
Have you considered that maybe his skill is making other people, like you, believe he is a good person who cares? He is, after all, most known for his ability to influence others (particularly billionaires) into doing what he wants. And the whole doomsday prepper thing is pretty weird.
Maybe! But I wasn’t rich or important when I worked with Sam, and he had no reason to charm me. He barely knew me, and everything I saw of him was genuine and kind: not toward me specifically, but in how he interacted with the world.
I’ve known many “important” or “rich” people over the years, and there’s always something in their eye that betrays what they want you to think of them. Sam is likely the only person of that sort I’d ever speak fondly of — he’s a genuinely good person who cares about humans more than the abstract concept of humanity.
I think a lot of people wish this were true as it would justify any envy people have of him.
Unfortunately, I don't think you can be as successful at something like Artificial Intelligence by just sucking up to people with money.
I don't know him, but when a lot of smart, independent voices are saying "this guy's wicked smart", I really think it would be a lot harder to fake it than to actually be wicked smart (and deeply altruistic; the fella seems pretty genuine about progressing humanity instead of MSFT's share price).
Deeply altruistic? Isn’t he the guy behind Worldcoin, creating a privacy nightmare? How is collecting biometric data from millions of people ethical, especially when most of those people likely don’t even understand what they’re giving away in exchange for a little money?
Different views on what's "good" for humanity, and all that. Lots of historical villains have been doing something they thought was good for humanity -- it's just that most of humanity disagreed.
I agree with you, by the way. It's just an area with a lot of subjective gradation.
It seems to me that the really wicked smart people understood long ago that AI research is very dangerous and that founding a new company along the lines of OpenAI would increase the danger: yes, it would provide significant benefits to humanity, but then it would wipe out the human species, either directly or by helping another AI lab to do so.
I'm praying that the reckless actions of people like Altman do not kill me and everyone I care about.
Everyone knows someone who is wicked smart and very ambitious, but has a blind spot such that they cannot apply their intelligence to correctly perceive a situation in which they must lose status (i.e., admit that founding OpenAI was a mistake and shut it down) for the sake of the greater good. Maybe Altman is one of those.
Also, when in human history have the people who've been most successful in scrambling to the top of some sector of the economy been the ones who successfully recognized when that sector has become destructive? When the commercial excesses of America during the Gilded Age needed to be reined in, was it some industrial magnate like Carnegie or Rockefeller who sounded the alarm and led society's corrective action against the excesses? Was it the CEOs and leading scientists of the fossil-fuel companies that led the charge against global warming?
> It seems to me that the really wicked smart people understood long ago that AI research is very dangerous.
Every time I hear Elon or Altman claiming the "Dangers of AI" I don't see them as intelligent.
I see them as pandering to a misinformed public, trying to upsell AI as the Terminator, when it's nothing more than a sophisticated algorithm. What they're actually doing is anti-competitive, attempting to stop new entrants by securing a monopoly by way of government regulation and misinformation. It's not intelligence, it's sleazy.
You don't think people envy people like him?
Multi-millionaire. CEO of the most promising AI company in the world, one that might actually crack it. World leaders are asking him when he's free for a quick chat. How fulfilled do you think he feels?
It would be weird for people not to have some envy of someone like that. It's not super healthy but the only thing productive about envy is that it pushes you to do something better.
> It would be weird for people not to have some envy of someone like that.
I guess that I'm weird, then, because I genuinely have no envy of him.
I made my comment because it's quite common to see people dismiss criticisms of others based on "they're just envious", when in fact envy is rarely the reason for the criticisms.
I know what you mean, and I agree it's an easy reason to reach for, but I felt the original question wasn't a reasoned, thought-out argument; it sounded more bitter than analytical, and bitterness leads to envy, which leads to the dark side.
In a world where our jobs and entire careers are based on "impact" and "fulfilment", I think this is an interesting question to ask about someone who's had the impact he's had; I don't know anyone who hasn't heard of ChatGPT.
Another reminder that Elon Musk routinely lies about timelines. If Mars colonization had followed the timeline laid out here, we'd be three years from having lots of people on Mars.
On a side note, I loved Altman's interview style - he focused on the interviewee instead of soapboxing, which is unfortunately very rare in interviewers.
In 1965 I. J. Good predicted an ultraintelligent machine by 2000, Vinge (1993) predicted greater-than-human intelligence between 2005 and 2030, Yudkowsky (1996) predicted a singularity by 2021, and Kurzweil (2005) predicted human-level artificial intelligence by 2030. Moravec (1988) predicted human-level artificial intelligence in supercomputers by 2010 by extrapolating a past trend on a chart, while Moravec (1998/1999) predicted human-level artificial intelligence by 2040, and intelligence far beyond human by 2050. In a 2017 interview, Kurzweil predicted human-level intelligence by 2029 and a billion-fold intelligence increase and singularity by 2045.
I do not hold people who make unjustifiably confident predictions in high regard. Yud and Kurzweil are the ones from your quote who I am more familiar with and they are constantly wrong and constantly overconfident. They've spent decades making wrong predictions, and they keep doing it without updating their methods. To me, that's not making mistakes, that's lying.
Lying means they knew their prediction was wrong but made it anyway. Are you suggesting Yud and Kurzweil knew they were wrong? And what would "updating their methods" even mean? As if AI progress could be correctly predicted by any model. Don’t you see the contradiction?
We are talking about a Mars colonization prediction. Read first before commenting?
This is the OP:
> Another reminder that Elon Musk routinely lies about timelines. If Mars colonization had followed the timeline laid out here, we'd be three years from having lots of people on Mars.
And none of the ones you mentioned are talking about Mars colonization. I was just highlighting that people merely pontificating about possible futures should not be held to the same level of scrutiny as a person saying "my company will do X in y years/months, give me money in advance".
Depends on the situation, I think. Did the person otherwise behave assuming their prediction is true? Have they told anyone else that they don't really believe it? Etc.
I wasn't claiming that Musk (or anyone in particular) was lying in my comment. I was just pointing out that you can, in fact, lie through making speculative statements.
"This year you will be able to drive coast to coast with no interventions".
"Tesla indicated that Elon is extrapolating on the rates of improvement when speaking about L5 capabilities. Tesla will not say if the rate of improvement would make it to L5 by end of calendar year."
If you had a ton of incredibly smart people working on the project, a ton of secured funding and a track record of actually delivering things that are incredibly complicated and ground breaking, then some people would choose to believe you, and they might even invest in you.
Of course, there will always be people that don't, and that's perfectly fine too.
I mean, the guy said "the rockets will land themselves in 5 years". He might have been off by a couple of years, but he delivered. Same with electric cars. Stands to reason he'll deliver on the Mars promise (though I think someone will beat him to the self-driving punch).
That's true, I'm cherry-picking, but you kind of have to when people are making impossible claims. Most of them won't come true (at least, not yet), but as long as some of them do, I think that's a net positive.
Making impossible claims in investor calls and corporate communications, to be clear, to the point where the company has later had to asterisk said claims as "visionary, not targets".
> But for every one of those there's a cyber truck, a "self driving mode," etc.
Cybertruck is entering mass production soon. Yes, it's late, but it's coming.
Same for self driving. It is clearly getting better all the time, and it's coming.
Again, he is often late delivering on very difficult and ambitious things. Have a look at how the competition is doing with reusable rockets, mass-producing electric vehicles, self-driving, etc.
None of these things are easy, and they take time.
I mean, electric cars were a thing. Not a big thing, but nonetheless. And FSD is, to me, a lot less impressive than implied. And on FSD he has repeatedly, well, failed to deliver: "this year, coast to coast, no interventions" is something he's said every year since 2016. And it still runs into the back of stationary emergency vehicles with their lights on. Until last year it could barely handle roundabouts. It saw level crossings as a weird intersection with a light that was red, then not, then red again, with a collection of trucks crossing it.
Sure, it's hard to predict if Tesla hadn't been formed whether electric cars would be as big as they are now (perhaps another innovator would've jumped in).
I'd agree that he's failed to deliver on FSD so far, but it'll be such a shift when either Tesla or some other group nails it (which, like I said, I think is the most likely outcome) that, when we look back on it in 100 years, we're not going to care how far out their time estimates were.