> The Co-Pilot suit is ostensibly being brought in the name of all open source programmers. Yes, that’s right, people crusading in the name of open source–a movement intended to promote freedom to use source code–are now claiming that a neural network, designed to save programmers the onus of re-inventing the wheel when they need code to perform programming tasks, is de facto unlawful. The open source movement is wonderful in many ways, but its tendency to engage in legal maximalism to “protect” open source is sometimes disappointing.
What a disingenuous framing of the open-source software movement.
When I license a work I've created under the GPL, MIT, or a similar license, I do so imagining that some user or company will build something based on my software and contribute their changes back to the community so that we can all benefit. That is standing on the shoulders of giants. Microsoft "using" my source code to build a for-profit programming-as-a-service bot was not what I had in mind. At the very least, the complaint that the model doesn't credit its sources is correct. We may need new licenses that let people who write open source code opt out of this type of usage.
> The Stable Diffusion suit alleges copyright infringement, stating that, “The resulting image is necessarily a derivative work, because it is generated exclusively from a combination of the conditioning data and the latent images, all of which are copies of copyrighted images. It is, in short, a 21st-century collage tool.” That characterization is the essence and conclusion of the lawsuit, and one with which many AI designers would disagree.
AI designers can disagree, but how about they try building an AI model without using anyone else's images? The model isn't even possible without the source images. Show a little gratitude.
If you ask me, AI is not in danger of being swallowed by copyright law. It's in danger of being swallowed by its own sense of entitlement.