>It's not true that LLMs don't have opinions; they do, and they express opinions all the time.
Not at all; there isn't even a "being" there to have those opinions. You give it text and you get text in return. That text might resemble an opinion, but it's not the same thing unless you believe not only that AI can be conscious, but that we are already there.
You're just using a different definition of "opinion", one that is too reductive to be useful in this case. If an LLM outputs a text stream that expresses an opinion, then it has an opinion, regardless of whether it is conscious.