My high school had a CS-70 and it poisoned me for life. That being said, there's a pretty big leap in accessibility vs. a browser-based synth, and you don't need $10,000 to play it, so that's nice.
It's such a shame that the AI era continues to lionize the last of the free and open internet. Now that copyright has been fully circumvented and the data laundered into models' training sets, it's suddenly worth something!
Is anybody shocked that when prompted to be a psychotherapy client, models display neurotic tendencies? None of the authors seem to have any papers in psychology, either.
Precisely, there is nothing shocking about this, and yes, it's clear from how the authors use the word "psychometric" that they don't really know much about psychology research either.
The broader point is that if you start from the premise that LLMs can never be considered the way sentient beings are, then all abuse becomes reasonable.
We are in danger of unscientifically returning to the point at which newborn babies weren’t considered to feel pain.
Given our lack of a definition for sentience, it's unscientific to presume that no system has any sentient trait at any level.
I'm not shocked at all. This is just how the tech works: word prediction until grokking occurs. So, like any good stochastic parrot, if it's smart when you tell it it's a doctor, it should be neurotic when you tell it it's crazy. It's just mapping to different latent spaces on the manifold.
I think popular but definitely-fictional characters are a good illustration: if the prompt describes a conversation with Count Dracula living in Transylvania, we'll perceive a character in the extended document that "thirsts" for blood and is "pained" by sunlight.
Switching things around so that the fictional character is "HelperBot, AI tool running in a datacenter" will alter things, but it doesn't make those qualities any less illusory than CountDraculaBot's.
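A toy sketch of the point being made above: the same sampling mechanism yields different "characters" purely from the conditioning prefix. This is not a real LLM; the personas, vocabulary, and probabilities below are all made up for illustration.

```python
import random

# Toy stand-in for a language model: each "persona" prefix conditions
# the next-word distribution. Same sampler, different conditioning.
# All words and weights here are invented for the illustration.
PERSONA_DISTS = {
    "doctor": {"diagnosis": 0.5, "treatment": 0.3, "patient": 0.2},
    "dracula": {"blood": 0.6, "night": 0.3, "castle": 0.1},
}

def next_word(persona, rng=None):
    """Sample one 'next word' conditioned on the persona prefix."""
    rng = rng or random.Random(0)  # fixed seed for repeatability
    words, weights = zip(*PERSONA_DISTS[persona].items())
    return rng.choices(words, weights=weights, k=1)[0]

print(next_word("doctor"))   # a word from the doctor-conditioned distribution
print(next_word("dracula"))  # same mechanism, Dracula-flavored output
```

Nothing about the sampler itself is "medical" or "vampiric"; the apparent character lives entirely in the conditioning, which is the commenter's point about HelperBot vs. CountDraculaBot.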
As far as model use cases go, I don't mind them throwing their heads against the wall in sandboxes to find vulnerabilities, but why would it do that without specific prompting? Is Anthropic fine with Claude setting its own agenda in red-teaming? That's the complete opposite of sanitizing inputs.
Also, it appeals to investors. Nobody would give tons of money to an upstart whose goal is to generate text porn and TikTok slop, and drive some needy teens to suicide, just to compete with Google Ads.
Selling a big AGI dream where the winner literally takes all is much more desirable.
Kind of an odd metric to base this process on. Are more comments inherently better? Is it just responding to buzzwords? It makes sense given the discussion of hiring algorithms / resume scanners in part one, and if anything this elucidates some of the trouble with them.