I think it can be a false comfort to think of LLMs as being trained to the center of the bell curve. I think it's closer to true that there's no real "average" (just like there isn't an "average" human) because there are just too many dimensions.
But what an LLM does, in the absence of better instructions, is assume that the user WANTS the most middling, innocuous output. Which is very reasonable! It's not a lack of capability; it's a strong capability to fill in the gaps in its instructions.
The person who has a good intuition for design (visual, narrative, conversational) but can't articulate it as instructions will find themselves stuck. And unsurprisingly this is common, because having a vision and being able to communicate that vision to an LLM is not a well-practiced skill. Instructing an LLM is a little like instructing a person... but only a little. You have to learn it. And I don't think this will magically fix itself as LLMs get better, because it's not exactly an error; there's no "right" answer.
Which is to say: I think applying design to one's work with AI is possible, important, and seldom done.