Stop confusing ChatGPT with GPT-4. It's the most common rookie mistake. GPT-4 is far stronger at 'solving problems' than ChatGPT. I used to bait ChatGPT with basic logic or conversion problems; I stopped doing that with GPT-4, since it would take too much effort to beat it.


Possibly a rookie mistake?

https://chat.openai.com/chat

What is this? Is this ChatGPT, or GPT-4? I'm talking about my experiences last week with this URL.


Are you paying $20/month and selecting GPT-4 from the drop-down menu?


It's trivially easy to trip it up even with GPT-4.

> Please respond with the number of e's in this sentence.

> There are 8 "e" characters in the sentence "Please respond with the number of e's in this sentence."


Dealing with words on the level of their constituent letters is a known weakness of OpenAI’s current GPT models, due to the kind of input and output encoding they use. The encoding also makes working with numbers represented as strings of digits less straightforward than it might otherwise be.

In the same way that GPT-4 is better at these things than GPT-3.5, future GPT models will likely be even better, even if only by the sheer brute force of their larger neural networks, more compute, and additional training data.

(To see an example of the encoding, you can enter some text at https://platform.openai.com/tokenizer. The input is presented to GPT as a series of integers, one for each colored block.)
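For instance, here is a minimal sketch using the tiktoken package (OpenAI's open-source tokenizer library, assumed installed via pip) that shows the same sentence the way the model receives it: a list of token IDs, mostly covering whole words or word pieces rather than single letters.

    import tiktoken  # pip install tiktoken

    # cl100k_base is the encoding used by the GPT-3.5 and GPT-4 chat models
    enc = tiktoken.get_encoding("cl100k_base")

    sentence = "Please respond with the number of e's in this sentence."
    token_ids = enc.encode(sentence)

    print(token_ids)                             # one integer per token
    print([enc.decode([t]) for t in token_ids])  # the text fragment behind each token
    # Most tokens are whole words or word pieces, so the model never directly
    # "sees" the individual letters it would need in order to count the e's.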


Almost like it has a kind of dyslexia when it comes to "looking inside" tokens.

If you instead ask it to write a Python program to do the same job, it will do it perfectly.
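A sketch of the kind of one-liner it will happily produce, which counts character by character rather than token by token:

    sentence = "Please respond with the number of e's in this sentence."
    print(sentence.count("e"))  # 9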



