Almost certainly not true. Consider pronouns: they reference other words, so they can't be chosen in isolation. Or the simple rules about when to use "an" or "a", which depend on the word that comes next.
Shaping your words to sound correct is very common, both in speech and in writing. Sometimes it's working out how to fit a word you want to use into a sentence; sometimes it's building a rhyme.
You may feel that you go one word at a time, but that really just shows how deeply internalized language is.
Right now we don't know whether LLMs are also doing this or not.
In their calculation of the 'next' word, part of that 'weighting' may be exactly those 'simple rules' for future words that you say humans follow but LLMs can't.
It doesn't seem that LLMs are capable of "understanding" that they don't "know" something, so there are certainly some observable differences. But the intent of my original comment was precisely to highlight that we don't know. It could be that next-token prediction is somehow mathematically equivalent to what we call consciousness. That would certainly be a revelation.
I seem to have two parts to my inner monologue: one comes up with complete concepts, the other puts them into words. When I started to notice this, I tried skipping the "make words" part to save time, since clearly I had not only had the thought but was also aware that I had already had it. This felt wrong in a way I have no words to describe, as there's nothing else like that particular feeling of wrongness, though I can analogise it as being almost, but not quite, entirely unlike annoyance.
Anyway, point is 80% of this comment was already in my head before I started typing; the other 20% was light editing and an H2G2 reference.
Yeah, but do you write that one word based on the past 1,000 or so words that came before it, and on how each of those words relates to all the others, with perfect recollection of every one of them at the same time?
And if you believe you do that subconsciously, then how can you be sure you don’t subconsciously plan a few words ahead?
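To make the contrast concrete, here's a minimal sketch of what the model side of that looks like (assuming the Hugging Face transformers library and GPT-2, chosen purely for illustration): the next-token distribution is computed from attention over every token already in the context, all at once.

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Illustration only: any causal LM would do; GPT-2 is just small and public.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    context = "The quick brown fox jumps over the lazy"
    input_ids = tokenizer(context, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits   # shape: (1, seq_len, vocab_size)

    # Only the last position's logits are used to pick the next token, but they
    # are a function of attention over *all* earlier positions simultaneously.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")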
To be fair, I rarely write more than a sentence or two in serial form, and I have often determined the point of a sentence before I write the first few words.
Occasionally the act of writing out an idea in words clarifies or changes my mind about that idea, causing me to edit or rewrite what I’ve already written.
You may write one word at a time, but the grammar of most languages forces their users to know what they're going to write several words ahead of the current word.
Ok. Why do you think GPT isn't doing that, or is doing something different?
It might be calculating the next word, because you can only write one word at a time, but you can't say that the current next word isn't influenced by a few words ahead. I don't think the current understanding of what LLMs do internally can rule that out.
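For what it's worth, even at decoding time the word emitted 'now' can depend on words that haven't been written yet: with beam search, the decoder only commits to the current token after scoring candidate continuations several tokens ahead. A rough sketch of that decoding-side lookahead (again assuming Hugging Face transformers and GPT-2, purely for illustration, and saying nothing about what the network computes internally):

    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The best way to explain this is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    # Greedy: pick the single best token at each step, no lookahead.
    greedy = model.generate(input_ids, max_new_tokens=8, do_sample=False)
    # Beam search: the token emitted first is the start of whichever whole
    # 8-token continuation scores best, so it is influenced by words ahead.
    beamed = model.generate(input_ids, max_new_tokens=8, num_beams=5, do_sample=False)

    print("greedy:", tokenizer.decode(greedy[0]))
    print("beam:  ", tokenizer.decode(beamed[0]))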