The frustrating thing about your argument, and several others in this submission, is that there is no rationale or data. All you are saying is "LLMs are not/cannot be good at therapy". The only (fake) rationale is "They are not humans." The whole comment comes across as tautological.
> The frustrating thing about your argument, and several others in this submission, is that there is no rationale or data. All you are saying is "LLMs are not/cannot be good at therapy". The only (fake) rationale is "They are not humans." The whole comment comes across as tautological.
My comment, to which you replied, was a clarification of a specific point I made earlier and was not intended to detail why LLMs are not a viable substitute for human therapists.
As I briefly enumerated here[0], LLMs do not "understand" in any sense relevant to therapeutic contribution, LLMs do not possess a shared human experience that would let them relate to a person, and LLMs do not possess acquired professional experience specific to therapy on which to draw. All of these are key to "be good at therapy", with other attributes relevant as well, I'm sure.
People have the potential to satisfy the above. LLM algorithms simply do not.
The frustrating thing about your argument is that it runs on the pretence that we must prove squares aren't circles.
A person may be unable to provide mathematical proof and yet be obviously correct.
The totally obvious thing you are missing is that most people will not encourage obviously self-destructive behaviour, because they are not psychopaths. And they can get another person to intervene if necessary.
I'm not sure I get the actual point you're making.
To begin with, not all therapy involves people at risk of harming themselves. Easily over 95% of people who can benefit from therapy are at no more risk of harming themselves than the average person. Were a therapy chatbot to suggest something like that to them, the response would be either amusement or annoyance ("why am I wasting time on this?").
Arguments from extremes (outliers) are the stuff of logical fallacies.
As many keep pointing out, there are plenty of cases of licensed therapists causing harm. Most of the time it is unintentional, but there are certainly those who knowingly abused their position and took advantage of their patients. I'd love to see a study comparing the rates of harm to see whether human therapists or LLMs fare worse.
I think most commenters here need to engage with real therapists more, so they can get a reality check on the field.
I know therapists. I've been to some. I took a course from a seasoned therapist who was also a professor and had trained other therapists. You know the whole replication crisis in psychology? Licensed therapy is no different. There's very little real science backing most of it (even the professor admitted it).
Sure, there are some great therapists out there. But the norm is barely better than you or me. Again, no exaggeration.
So if the state of the art improves, and we then have a study showing some LLM therapists are better than the average licensed human one, I for one will not think it a great achievement.
All these threads are full of "yeah but humans are bad too" arguments, as if the nature of the interaction, the accountability, the motivations, or the capabilities of LLMs and humans were in any way equivalent.
There are a lot of things LLMs can do, and many they can't. Therapy is one of the things they could do but shouldn't... not yet, and probably not for a long time or ever.
I'm not referring to the study, but to the comments that are trying to make the case.
The study is about the present, using certain therapy bots and custom instructions to generic LLMs. It doesn't do much to answer "Can they work well?"
> All these threads are full of "yeah but humans are bad too" arguments, as if the nature of the interaction, the accountability, the motivations, or the capabilities of LLMs and humans were in any way equivalent.
They are correctly pointing out that many licensed therapists are bad, and many patients feel their therapy was harmful.
We know human therapists can be good.
We know human therapists can be bad.
We know LLM therapists can be bad ("OK, so just like humans?")
The remaining question is "Can they be good?" It's too early to tell.
I think it's totally fine to be skeptical. I'm not convinced that LLMs can be effective. But having strong convictions that they cannot is leaping into the territory of faith, not science/reason.
> The remaining question is "Can they be good?" It's too early to tell.
You're falling into a rhetorical trap here by assuming that they can be made better. An equally valid question is "Will they become even worse?"
Believing that they can be good is equally a leap of faith. All current evidence points to them being incredibly harmful.
+1. I also wanted to point out that if there are questions about validating the point being made... just look at the post.
And from my perspective this should be common sense, not something requiring a scientific paper. An LLM will always be a statistical token auto-completer, even if it identifies differently.
It is pure insanity to put a human with an already harmed psyche in front of this device and trust in the best.
It's also insanity to pretend this is a matter of "trust". Any intervention is going to have some amount of harm and some amount of benefit, measured along many dimensions. A therapy dog is good at helping many people in many ways, but I wouldn't just bring one into the room and "trust in the best".