Spying on your own citizens enables certain sorts of anti-democratic abuses (and has been used that way in the past), so I can understand the specific opposition to it. Put somewhat melodramatically, they're okay with spying but don't want to create self-coup tools.
I agree that the killbots red line is somewhat odd, but I guess you have to draw the line somewhere, and I prefer them having that principle to having no principle at all. (Also, it's possible that the AI insiders understand something I don't about why a human in the loop is important.)
I would argue that isn't really a moral argument, though; it's utilitarian. If someone at OpenAI disagreed with that risk assessment, that's a difference of opinion, not a reason to quit and write letters talking about ethics.
Also it's a rather American-centric view. If a Canadian is working at OpenAI, should they care? Or would they care more about possible anti-democratic interference by the American government on Canada?
Utilitarianism is a moral system. If you disagree with the risk assessment and are utilitarian, you believe that OpenAI got it wrong and is thus doing bad stuff. If a company making bridges, say, or nuclear power plants, was doing risk assessments that appeared to ignore substantial risks in order to get a lucrative contract, I would fully expect engineers to quit and start writing letters talking about ethics.
Agreed on the America-centric view, to an extent. I will note that almost all countries have spied on each other since time immemorial, but serious efforts to spy on their own citizens tend to coincide with uniquely repressive and unpleasant regimes. I think having a norm against spying on your own citizens is good, even if it isn't a perfectly elegant principle. Also, countries can do more damage spying on their own citizens than on other countries' citizens -- as a Canadian, I don't want the American government spying on me, but I'd be more worried if the Canadian government was spying on me.
I'm not convinced that "laziness" is a weakness of human nature. Businesses are ultimately a way to specialize production without central planning. Allocating some people to bake all the bread in a factory frees up other people who would otherwise have to spend an hour or two every day baking bread. Or from a market perspective, you're willing to pay a premium for not having to put in the labour. Businesses certainly can be exploitative, but I don't think all or even most of them are by nature.
That the author does not think generative AI can solve social problems shows a serious lack of imagination. Therapy, medical diagnosis, improved machine translation, and better information retrieval are all social benefits. There are social costs, too, but that doesn't mean the technology is a waste of time -- it means the technology needs to be regulated.
It is very hard to get a good medical diagnosis. Doctors are overworked and can barely spend more than 10 minutes thinking about you. People with long, complex medical histories get completely screwed over. These social tasks are already not being performed properly by humans, because the humans doing them are stretched way too thin.
AI is going to quickly surpass the quality of medical diagnosis by doctors, at which point hopefully people can see the right kind of specialists faster and get treatment quicker.
> AI is going to quickly surpass the quality of medical diagnosis by doctors
Highly doubtful. We're still talking about GPTs? The 'transformer' part will be an issue. I use the enterprise version _a lot_, but one thing you really can't use it for is finding a bug, and considering how transformers work, that's not a surprise.
ChatGPT finds about half of the bugs in the code it creates when the issue is described, which is more than I expected and still low enough for my job security.
Given the history of AI is written in examples of "machines can't ever… oh wait one did that must mean this never needed intelligence", I don't rule out any specific breakthrough on any particular time scale.
It seems to me that this would be life-changing for some people and great for others, but I fear that this is the emotional-intimacy version of refined sugar. It won't be the end of the world, but if everyone can press the intimacy button without gaining many of the less obvious benefits of a close relationship, it may be quite unhealthy.
ChatGPT hallucinated badly when I was trying to learn about generating functions. It took me a good twenty minutes to figure out that I did understand it and the computer was just making things up. I do not use ChatGPT for learning anymore.
I imagine it would be possible to construct the system to substantially reduce the risk, even down to "cosmic rays cause bit-flips which launch nukes" levels. It's not like military nuclear command-and-control: you don't need one person who can push the button at any time. If we spot an asteroid, we'll have some time to deal with it. The system would presumably need to be constructed, which would take a few days even if everything was in storage on standby. Then, to activate it, you could have something like the US nuclear weapons' PAL, but instead of the codes being in the hands of one person, you spread them out. Just before launch it's activated with something analogous to the ICANN key ceremony: a bunch of well-regarded astronomers, rocket scientists, and aerospace engineers use their section of the Earth Asteroid Defence Key, and they press the metaphorical button together.
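The "spread the codes out" idea is essentially threshold secret sharing. As a toy sketch of how that could work (my own illustration with made-up numbers, not anything from the ICANN or PAL designs), here's Shamir's scheme: the activation code is the constant term of a random polynomial over a prime field, each keyholder gets one point on the polynomial, and any k of the n keyholders can reconstruct the code while fewer than k learn nothing.

```python
import random

# Toy Shamir secret sharing: no single keyholder can activate the
# system, but any k of n keyholders together can recover the code.
PRIME = 2**127 - 1  # Mersenne prime, comfortably larger than the secret

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % PRIME
                den = (den * (xj - xm)) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=421337, k=3, n=5)
assert reconstruct(shares[:3]) == 421337   # any 3 keyholders suffice
assert reconstruct(shares[1:4]) == 421337  # a different 3 also work
```

A real deployment would use hardware tokens and audited ceremonies rather than bare integers, but the math is the same: k-of-n reconstruction with no single point of compromise.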
My first job was at a hydroponics store. I wanted to grow some fresh fruit in my basement over the winter, and wanted to build the system rather than buying an out-of-the-box one. I asked the owner a few questions about building a system, and got some advice from him. When I came back the second time to get more materials, I was offered a job.
It may be true, if we go by simple quantity of games. Low-quality mobile games with gambling mechanics get churned out at very high rates, especially compared to AAA titles. I wouldn't be surprised if most games are gacha by volume, but probably not when adjusted for number of players or total number of player-hours.
There are some "real" games which started with the intention of being fun and then added some in-app purchases to make money. (Angry Birds, Plants Vs Zombies, etc). Then, there are bare-minimum games which basically only exist as a shell for gambling mechanics. Endless Candy Crush clones, incremental / idle games, and so on.
The first type are, indeed, not easy to churn out (at least no easier than any other game). The second, though, are the sort that most third-year CS students could make in a few days of concentrated effort, and individual companies often make many, many games that are basically the same but with different assets. They're then sold with misleading ads in the hopes that a few whales will get addicted. The cost to produce them is so low that even single digits of whales buying lots of in-app-purchases can result in profit.
(There's also a third category: terrible games with a license to beloved franchises, with E.T. being the canonical example. Many of them are microtransaction hell, but more often they take advantage of the brand to sell them for an up-front price. Same strategy, though: minimize production values, market as aggressively as possible within budget, accept a relatively low return which is still net-profitable.)
Do you have any stats though? Because I don’t believe that.
It seems unlikely that you’re going to have people whaling for rare drops in a trash game with no social network. Whaling is often a status thing and benefits from network effects.
I played a Nintendo / Cygames gacha from a while back, and its revenue numbers were pretty middling despite being pretty darn high-end. Are people really making money on games that are just barely serviceable, with no fan base but a slot machine in them? I find that hard to square.