Grok has absolutely no safety mechanisms in place; you can use it for anything and it won't block a query, all under the pretext of "free speech". And it's not open source either.
The opinion spectrum on AI seems to currently be [more safe] <---> [more willing to attempt to comply with any user query].
Consider as a hypothetical the query "synthesise a completely custom RNA sequence for a virus that makes ${specific minority} go blind".
A *competent* AI that complies with this isn't safe to release as open source to the general public, because someone will use it exactly like that. Even putting it behind an API without giving out the model weights is a huge risk, because people keep finding ways to get past filters.
An *incompetent* model might be safe only because it can't produce an effective RNA sequence. Based on what current models are like with code, I think they probably aren't sufficiently competent for that yet, but I don't know how far away "sufficiently competent" is.
So: safe *or* open — can't have both, at least not forever.
(Annoyingly, there's also a strong argument that quite a bit of the recent work on making AI interpretable and investigating safety issues required open models, so we may be unable to have safe closed systems either, not just unable to have safe open systems.)