It certainly has logic. I had some fun with the "virtual machine" example someone else did, using the "nvidia-smi" command: if I told it the room was hot, the next run of the command showed a higher temperature on the GPU. That is the logical conclusion from a hotter room.
> It certainly has logic. I had some fun with the "virtual machine" example someone else did, using the "nvidia-smi" command: if I told it the room was hot, the next run of the command showed a higher temperature on the GPU. That is the logical conclusion from a hotter room.
Orrrr.... it's drawing references from other texts that were colocated with the word "hot"
It's an inference based on how chatgpt works, which is a more reasonable inference than assuming chatgpt somehow has abstract logical reasoning capabilities.
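For what it's worth, the real nvidia-smi reads that number from the GPU's own sensor via the driver; nothing you type about the room changes it. A minimal comparison, assuming an NVIDIA GPU and driver are installed:

    # Ask the driver for the actual GPU core temperature (a number in °C).
    # The reading comes from a hardware sensor, not from the conversation.
    nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader

ChatGPT has no sensor to query, so the "higher" reading it prints after you mention a hot room is just the continuation that best fits the surrounding text.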
It doesn't have any logic; it's just prediction based on statistics.
There are so many examples already floating around showing that it has no logic, but I will give you a really simple one from my experiments:
I told it to:
> curl somedomain.ext
It replied with a curl error saying that the hostname doesn't exist.
When I gave it the same command again, it replied with a random HTTP response showing that the hostname does exist.
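The real curl, for comparison, answers the same way every time for the same DNS state, roughly like this (assuming the domain genuinely doesn't resolve):

    # A real curl fails deterministically when the hostname has no DNS record.
    curl somedomain.ext
    # curl: (6) Could not resolve host: somedomain.ext

The model isn't resolving anything; it just produces whichever kind of reply looks plausible in context, which is why two runs can contradict each other.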
And that's not logical? ChatGPT doesn't know what is actually there, so it answers logically based on what should be there. Obviously giving 2 different answers makes it less logical, for sure, but I have seen plenty of people make plenty of logic errors in real life too.
It's crazy to me that for an AGI to count as one, it needs to be infallible in logic...
What about doing wget over https://chat.openai.com/chat? I don't believe there were many Google results about that URL when it was trained, yet it was able to logically infer it would be a chat assistant doing exactly what it itself was doing.