The problem with using large language models for brainstorming or writing is that their fundamental mechanism is to pick the most probable next token at every step — the most average, middle-of-the-bell-curve thing to say. That's how they maintain the appearance of coherence: by rarely if ever deviating from the happy path. So any ideas you get from one will be pretty unoriginal, and even if you feed it original prompts, it will eventually regress back to the mean, drifting through probability space toward the average at every step. And its writing will always be essentially the average human writing style.
But this isn't true in general: you can fine-tune a local model on a very particular style, and sample at temperatures high enough to allow wild swings away from the average.
If you want a rambling, occasionally brilliant Kerouac or de Montaigne you can make one.
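The temperature knob mentioned above is simple enough to sketch. Here's a toy illustration (not any particular model's API — just the standard temperature-scaled softmax over a made-up five-token vocabulary, using NumPy): dividing the logits by a temperature below 1 sharpens the distribution toward the single most likely token, while a temperature above 1 flattens it so the rare, off-center tokens get picked far more often.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token index from logits scaled by temperature.

    temperature < 1.0 sharpens the distribution (safer, more average picks);
    temperature > 1.0 flattens it (wilder, less probable picks).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    # Softmax, shifted by the max for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary: token 0 is the very likely "average" choice,
# tokens 1-4 are rarer alternatives.
logits = [4.0, 1.0, 1.0, 1.0, 1.0]
rng = np.random.default_rng(0)

low = [sample_with_temperature(logits, 0.5, rng) for _ in range(1000)]
high = [sample_with_temperature(logits, 2.0, rng) for _ in range(1000)]

# At low temperature nearly every draw is the average token;
# at high temperature the rare tokens show up much more often.
print(low.count(0) / 1000, high.count(0) / 1000)
```

Crank the temperature high enough and the samples stop hugging the center of the bell curve — which is exactly the knob you'd turn to get those wild swings out of a locally trained model.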