Preferred by anyone who's actually using and modifying the work.
No one retrains an existing model from scratch, even those who have access to all of the data needed to do so. There's just no compelling reason to retrain a model to make a change when you already have the weights: fine-tuning is preferred by everyone.
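To make that concrete, here is a minimal sketch of what "modifying the work" usually looks like in practice: load the released weights and fine-tune on your own data rather than training from scratch. The model name, the example texts, and the hyperparameters are all placeholders for illustration, not a recommendation.

```python
# Minimal fine-tuning sketch: start from released weights, not from scratch.
# "gpt2" stands in for whatever open-weights model is being modified.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # existing weights
tok.pad_token = tok.eos_token

# A handful of hypothetical examples representing the change you want to make.
texts = ["Example of the new behaviour you want.", "Another example of it."]
batch = tok(texts, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a few gradient steps, just to show the shape of the loop
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {out.loss.item():.3f}")

model.save_pretrained("my-fine-tuned-model")  # the modified work
```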
The only people I've seen who've asserted otherwise are random commenters on the internet who don't really understand the tech.
> Preferred by anyone who's actually using and modifying the work.
> ...fine-tuning is preferred by everyone
How do you know this? Did you take a survey? When? What if preferences change or there is no consensus?
> The only people I've seen who've asserted otherwise are random commenters on the internet who don't really understand the tech.
There are lots of things you can do with the training set that don't involve retraining the entire model from scratch. As a random example, I could run a statistical analysis over a portion of the training set and derive a set of direction vectors in the model's activation space that could be used to steer it. Something like this can be done without access to the training data, but would it work better with that access? We don't know, because it hasn't been tried yet.
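As a rough sketch of what such a steering vector could look like, here is one simple version of the idea. Everything in it is an assumption for illustration: `gpt2` stands in for whatever open-weights model is at issue, the layer index and scaling factor are arbitrary, the two contrastive text sets are placeholders, and the "statistical analysis" is reduced to a difference of mean activations.

```python
# Sketch: derive a steering vector from two contrastive sets of text and
# apply it to the residual stream at one layer during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM with exposed hidden states works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

LAYER = 6  # which layer to steer; purely an assumption

def mean_activation(texts):
    """Average hidden state at LAYER over a set of texts."""
    vecs = []
    with torch.no_grad():
        for t in texts:
            ids = tok(t, return_tensors="pt")
            out = model(**ids)
            h = out.hidden_states[LAYER][0]   # (seq_len, d_model)
            vecs.append(h.mean(dim=0))        # average over tokens
    return torch.stack(vecs).mean(dim=0)

# Two contrastive slices of text (hypothetical; could be drawn from the corpus).
positive_texts = ["I love this, it's wonderful.", "What a fantastic result!"]
negative_texts = ["I hate this, it's awful.", "What a terrible result!"]

# The "statistical analysis" here is just a difference of means in activation space.
steering_vector = mean_activation(positive_texts) - mean_activation(negative_texts)

def add_steering(module, inputs, output, alpha=4.0):
    """Forward hook that nudges the layer's output along the steering vector."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * steering_vector
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

hook = model.transformer.h[LAYER].register_forward_hook(add_steering)
ids = tok("The movie was", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
hook.remove()
```

The contrastive texts in a sketch like this could come from anywhere; whether drawing them from the actual training corpus works better is exactly the open question.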
But none of that really matters, because what we're discussing is the philosophy of open source. I think it's a really bad take to say that something is open source because it's in a "preferred" format.