Isn't AI safety mostly a marketing thing? Like, we employ these safety people to make sure our chat bot does not turn into Skynet, implying the chat bot could turn into Skynet, i.e. it's powerful and magic, so please give us money.
Maybe the text prediction programs are too familiar to people for the Skynet marketing to bite like it used to.
Or maybe it was not just a marketing thing and the AI bros really did believe we were a few GPUs and some training data away from AGI, but now they no longer believe this.
> we employ these safety people to make sure our chat bot does not turn into Skynet
I think it's mostly about not showing up in some NYT article titled "look what crazy thing I got this AI to say". There were a bunch of those early on, and they really hurt the cause. Microsoft had some famous ones, even prior to ChatGPT, where the AI got pretty testy in the chat.
"Not only were the colours and patterns unusually fine, but the clothes that were made of the stuffs had the peculiar quality of becoming invisible to every person who was not fit for the office he held, or if he was impossibly dull."
Sufficient status entirely changes how the act of asking dumb questions is perceived by others. A person with a small title is seen as asking dumb questions because they are dumb; a person with a big title is seen as asking them because they are smart. Of course it's not just title but also age, gender, race, appearance, etc.
I didn’t know that, but I have some personal biases here: I have had 6 or 7 years of paid pure Common Lisp work, a little over a year of paid Clojure work, and a few months of paid Scheme work.
There are many reasons web applications are more attractive. A web application can't be pirated, since the user never receives the program. Users often don't want desktop applications anyway: it's more convenient to log in to a web application with a Google or Facebook account than to install a native application. And a web application works on any device, which matters when most people only have a smartphone.