
Firstly, thank you for engaging in a discussion. Secondly, I am not an expert in image processing; rather, my focus is on language. Thus my intuitions will not serve me as well in this domain, although the models do have similarities.

They explore a range of sizes and I do not think it is fair to only highlight the smallest ones. They do explore a 12M subset of LAION in Section 7 for a model that was trained on 2B images. Yes, it is not an ideal experimental setup to use a subset (they admit this) and it is far from LAION-5B, but it is a fair stab at this kind of analysis and is likely to lead to further explorations.

Let us return, though, to your claim, which is what I objected to: “Pretty much none of these systems ‘reconstruct an image in detail’.” I think it is fair to say that this work gives me reason to doubt that none of these systems (even the larger ones) exhibit behaviour that may limit their generalisability or cross the boundary of what is legally considered derivative work.

You may very well be right that once we scale to billions of images this behaviour diminishes (or maybe even disappears), but to the best of my knowledge we do not know if this is the case, and we do not know when, how, and why it occurs if it does occur. I remain a firm believer that these kinds of models are the future, as there is little evidence that we have reached their limits, but I will continue to caution anyone who talks in absolutes until there is solid evidence to support those claims.


