You'd be surprised. A number of fairly technical people I know who just aren't that familiar with ML got confused by this and believed the models were actually being tuned daily. I don't think that's universally understood at all.
That has real practical implications and isn't just pedantry. People might like some model and avoid better dialog engines like Perplexity, believing they'd have to switch models.
I think you either didn't read my response or missed the point. Whether or not the LLM output is useful, the learning outcome is hugely impacted. Negatively.
It's like copying someone else's homework. It looks like it gets the job done, but the point of a homework assignment isn't the result you deliver; it's the process of producing that result that makes you learn something.