> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.
Something I've never understood: why do AGI perverts think that a superintelligence is any more likely to "cure cancer" than to "create unstoppable super-cancer"?
AI will do neither of those things because curing or creating cancer requires physical experiments and trials on real people or animals, as does all science outside of computer science (which is often more math than science).
I can see AI being helpful in generating hypotheses, proposing compounds to synthesize, or assisting with literature search, but science is a physical process. You don't generally do science just by sitting there and pondering, despite what the movies suggest.
There are a few fully automated wet labs and many semi-autonomous ones. They are called "Cloud Labs", and they will only become more plentiful. AI can identify and execute the physical experiments after using simulations to filter and score the candidate hypotheses.
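The filter-and-score step described above can be sketched roughly as follows. This is purely illustrative: `simulate`, the compound names, and the scoring thresholds are all hypothetical stand-ins, not any real cloud-lab API.

```python
# Hypothetical sketch: simulation-based triage before physical experiments.
# None of these functions correspond to a real cloud-lab API.

def simulate(hypothesis: str) -> float:
    """Stand-in for an in-silico model that scores a candidate (0.0 to 1.0)."""
    return {"compound-A": 0.91, "compound-B": 0.34, "compound-C": 0.78}.get(hypothesis, 0.0)

def triage(candidates: list[str], threshold: float = 0.5, top_k: int = 2) -> list[str]:
    """Filter candidates by simulated score; keep the top_k for the wet lab."""
    scored = [(simulate(c), c) for c in candidates]
    passing = [(s, c) for s, c in scored if s >= threshold]
    passing.sort(reverse=True)          # best simulated score first
    return [c for _, c in passing[:top_k]]

# Only the survivors would be queued for actual physical experiments.
queue = triage(["compound-A", "compound-B", "compound-C"])
print(queue)  # ['compound-A', 'compound-C']
```

The point of the sketch is that the expensive physical step only runs on whatever survives the cheap simulated one; everything downstream of `triage` still happens in a real lab.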
They're actually right in that there are several attempts to create automated labs to speed up the physical part. But in reality there are only a handful, and they are very, very narrowly scoped.
Yes, in some narrow domains this will potentially be possible, but it still only automates part of the whole process when it comes to drugs. How a drug behaves on a molecular test chip is often very different from how it works in the body.
Exactly: AI allows for intersections between concepts from its training data; it's up to the user to make sense of them. Thanks for stating this (I end up repeating the same thing in every conversation, but it's common sense).
Somehow it never crossed my mind, but human civilization could plausibly end in the next 10 years. Many thought that if it did, the cause would be nuclear war; it turns out it's more like the '90s movie 12 Monkeys. I would love to be proven wrong, yet there is no international regulation on AI.
>I would love to be proven wrong, yet there is no international regulation on AI.
What are the chances of advancing AI regulation before some monumental fuck-up changes public opinion to "yeah, this thing is really dangerous"? Like a Hiroshima or Chernobyl, but for AI.
I'm not sure he's even talking about AGI (which feels unusual for Altman). He might be talking about GPT-5 in agentic workflows, or whatever their next model will be called.
But anyone who gets into a Waymo that smells like dogshit can get out, report it, and wait for the next one. Are you describing a real problem or just one you imagine?
In these scenarios, you do understand that there will be a non-zero number of smelly Waymos; that was my entire point. Also, until someone reports it, that smelly Waymo won't get fixed.
The point of organizing a union is to negotiate a contract, one that includes terms individual workers wouldn't have been able to secure on their own.
A union that's recognized but doesn't have a contract may as well not be a union at all. Basically, the only power it can exercise is to strike.