I asked ChatGPT "What is a subset?" Part of the response:
> a subset must contain fewer elements than the larger set it is a part of
I said that the specific sentence was not true. It spewed out some more stuff, including:
> it is possible for a subset to have the same number of elements as the larger set, in which case it is called a proper subset
I told it again that this specific sentence was incorrect. Then it told me:
> there is a special type of subset called a "singleton" that contains only one element, and therefore has the same number of elements as the larger set
Again, incorrect. It seems to have no capacity to accept that it's wrong. Its general response was:
> I apologize if my previous response was not clear. <Some more nonsense>. I apologize if my previous response was unclear or misleading. Is there anything else I can help you with?
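For what it's worth, the definitions ChatGPT kept mangling are easy to check directly. Here's a quick sketch in Python, where `issubset`/`<=` means subset and `<` means proper subset (the set names are just illustrative):

```python
# Standard set definitions, contradicting all three quoted claims:
A = {1, 2, 3}

# Every set is a subset of itself, so a subset CAN have the
# same number of elements as the "larger" set.
assert A.issubset(A)

# A *proper* subset is strictly contained, so for finite sets
# it necessarily has fewer elements -- the opposite of claim 2.
assert {1, 2} < A           # proper subset of A
assert not (A < A)          # a set is never a proper subset of itself

# A singleton is just a one-element set; it's a subset here,
# but it plainly does not have the same size as A (claim 3).
assert {1} < A and len({1}) != len(A)
```

Three short checks, and all three of its statements fail.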
It never accepts that it is wrong or incorrect; it just states that it was unclear. That comes across as condescending and arrogant, given that it is objectively incorrect and has been told so. It strings together statistically related things into grammatically correct sentences. Only the sentence forming is impressive. Everything else is it throwing stuff at the wall, and people posting the hits is survivorship bias, as if it spews out awesome stuff all the time. In my time with ChatGPT, it wasn't even close to a chat. It was more like Google returning results in sentence form. It couldn't respond to anything. I even asked it to please stop apologizing and explaining that it's a model, because I was getting this over and over and over:
> I'm sorry, but I am a large language model trained by OpenAI and do not have the ability to keep track of time or current events. My knowledge is based on the text that I have been trained on, and my responses are based on that information. I do not have the ability to browse the internet or access any information outside of what I have been trained on.
Including when I simply asked it to tell me something that it knew.