> Maybe prompt engineering and temperature settings can be enough to prevent this, but ChatGPT has only been out ~6 months and I find the attitude that the burden is on OpenAI to make sure it never says anything bad completely ass backwards. It's like a writer controlling his pen with puppet strings and going "wow, look what it's saying!"
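(For what it's worth, those knobs are just request parameters. A minimal sketch using the pre-1.0 openai Python client, the API current when ChatGPT was ~6 months old - the system message and user prompt here are hypothetical illustrations, not anything OpenAI actually ships:)

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# "Prompt engineering": a hypothetical guardrail system message.
# "Temperature settings": temperature=0 makes output near-deterministic,
# trading creativity for predictability. Neither guarantees safe output.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Refuse requests for defamatory or harmful content."},
        {"role": "user", "content": "Write a damaging fake story about a public figure."},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```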
The problem with ChatGPT - and even more so with image, audio, and video AI systems - is that they have drastically lowered the barrier to entry for creating harmful content.
Anyone can rig up an AI prompt to produce a convincing picture of a politician having sexual intercourse with a child in a matter of minutes - a work that would take even a dedicated visual artist many hours. Voice-cloning AI is even worse: faking someone's voice used to be practically impossible unless you had access to someone talented enough to imitate it, and now anyone can pull off completely legitimate-sounding prank calls.
The general availability of such specialized AI erodes trust in anything that doesn't happen face-to-face. It was already bad enough with people denying actual genocides (Armenia, the Holocaust, Bosnia), but now? How can we make sure historians can trust records from our time period?
I agree with everything you're saying. I just think these are two different things:
1) AI creators should be held responsible for making it easier to generate harmful content, and since we can't tell whether content is AI-generated or not, this becomes a pervasive problem.
2) AI creators should not be held responsible when people simply assume everything the model says is true, even when they know what they're consuming came from an LLM. Take the college professor who just failed his entire class because ChatGPT told him they were cheating: he shouldn't sue OpenAI, he should be fired. Similarly, a news organization that uses ChatGPT to generate news shouldn't be surprised if it accidentally commits libel.