> The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.
What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.
Idealism of what? That the government shouldn't use AI for surveillance or the military?
You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.
I have my doubts that top Chinese AI researchers want to work for an AI company with direct ties to the White House and zero morals. Not for any great ethical concerns, mind you. Simply because the US is a geopolitical rival to China.
This is a deceptive line of dismissal. Sound principles need to be figured out before imposing any kind of restriction on art - "things have changed" doesn't cut it.
Figuring things out is exactly what needs to happen. I think it is valid to dismiss arguments of “this is how copyright has always worked” when those rules were written before AI completely changed the game.
> What is known is that the majority believe that if artists were to be copied, they should at least be compensated.
I get the emotional side of this argument - artists going hungry while someone else cashes in on their ideas. But compensation is a dangerous premise, because derivative art is an established type of artistic freedom. Artists routinely mimic styles, or work within the bounds of styles established by masters, but they've never been expected to compensate those styles' pioneers. Imagine it as a precedent:
"Your stuff borrows from Warhol? Guess what buddy, you owe the Warhol estate x% of your sale."
Perhaps you're arguing things change when commercial interests are involved? But again, this has never been the case for advertising companies (with their hired artistic guns) or any kind of graphic design leaning on established artistic styles for effect and making a killing in the process.
In the case of AI, even if it has a commercial master, it seems much closer to the borrowing of an ordinary artist. It's a trained entity, with deep understanding of styles, capable of making new works. On top of that, it works under the instruction of a user with their own ideas, whose guidance is crucial in deciding the work's final state. The user is the artist here - like one of the visionaries who delegate the nitty gritty of production to helpers. In this case the helper is leased from the AI company, which is more like an agency supplying those helpers.
All in all it's hard to see how any compensation model wouldn't end up constricting the artistic freedom most of these artists depend on.
I suggest you look into the history of art, specifically how artists have operated studios or "factories" where apprentices produced replicas of the master's work. Modern artists have numbered prints, where they reproduce their work at scale as a revenue source. Sol LeWitt produced instructions for his art, and people reproduce them as public murals all over the place. See: https://massmoca.org/event/sol-lewitt-a-wall-drawing-retrosp...
The 'machine' existed long before the AI companies - in university art studios, galleries and republication - and the scale came from the graduates or ordinary acolytes borrowing wholesale the ideas and techniques they admired. Scale shouldn't alter the principle. Once a right to compensation for derivation is established, you have to explain why it doesn't apply to the millions of artists making a living from exactly that.
> Why are the pro AI people so obsessed with proving the AI skeptics wrong.
It seems to me the pro-AI types just want to be free to enjoy a transformative tech and discuss the implications of its development and innovations - without being badgered and henpecked or told the results they see are some kind of mass delusion.
The "badgering and henpecking" "problem" was created entirely by AI bros hyping AI to everyone and forcing it in every possible channel and avenue.
You're literally trying to blame the victim. Put a "don't show AI content" option on every major platform and the henpecking will stop, but (aside from the technical annoyances of doing it) that won't happen because companies want to force AI down our throats.
> Put "don't show AI content" on every major platform and the henpecking will stop
Your argument then is: "Ban the subject of AI from your platforms or we're coming at you with pitchforks. And don't say anything to us when we do, because we are the sad ones here." Correct?
Two loaves of bread off the same line are perfect substitutes for each other, and compete to be sold.
Lines of code within the same code base aren't competing to be sold. They either complement each other by adding new features, making the actual product sold more valuable, or one replaces another to make a feature more desirable- look better, work faster, etc.
The market grows if you add new features- your bread now doubles as a floatation device- or you introduce a new line of bread with nuts and berries.
So, the business has to decide- does it fire some workers and pocket the difference until someone else undercuts them, or does it keep the workers and grow the market it can sell to faster?
Read the comment I replied to to see where the bread came from.
But on your point (which seems to hinge on wishful thinking): this infinity of new features you propose for every product still needs those new markets you take for granted to justify their inclusion in the product. However, cornering a new market isn't as straightforward as deploying a new feature - we all wish it was. The tech that makes it trivial for one firm to develop these features makes it trivial for everyone else to build them. This means any new market will be immediately saturated.
Even if the leap of finding new markets were as easy as you think, you still need to explain why this hypothetical company would keep paying millions in avoidable salaries. Whatever new jobs you assign to AI, it will be no less available to take over the work currently done by human labor.
Adding new features doesn't necessarily grow the market. Your bread with nuts and berries competes with the regular bread for the customer's money. Other things also compete for the same money, such as medical, daycare, schooling etc., so increasing features won't necessarily grow the market, because the customer's budget is finite. Even in an optimistic scenario, those features only have a probability of increasing revenue; it's not certain.
OTOH, if you fire those workers, it is a certainty that your bakery gets more cash. You can then use that cash to reward your shareholders (a category that conveniently includes you) via buybacks or dividends.
> Payrolls in the US dropped by 92,000 and the unemployment rate ticked up to 4.4%, according to the latest official figures, surprising analysts who had expected hiring to remain stable.
I'm not in any way suggesting the economy isn't taking a shit, but I'm curious about the actual expectation and reality. I know it says analysts expect hiring to be stable, but hiring isn't the same as job losses.
> You have fallen headfirst into the "Not now, so never" fallacy.
Perhaps. Though we have empirical evidence of how far we can quantize and distill models before they become practically useless. That sets a bar for how large a local model needs to be for general use to compete with the cloud ones. We are talking in the area of 60GB for GPT-OSS/Qwen3.5, which is what enthusiasts are running on 32GB DDR5 + 24GB VRAM RTX 3090.
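For anyone wanting to sanity-check that 60GB figure, the weight footprint is simple arithmetic: parameter count times bits per weight. A minimal sketch (the ~120B parameter count is an assumption for illustration, and this counts weights only - KV cache and activations need additional memory):

```python
# Back-of-envelope memory needed to hold model weights at a given
# quantization level. Weights only; KV cache and activations are extra.
def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    """Return decimal gigabytes needed to store the weights."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Hypothetical ~120B-parameter model (size is an assumption, not a spec):
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(120, bits):.0f} GB")
```

At 4 bits per weight a ~120B model lands around 60 GB, which is why it just barely fits split across 24GB of VRAM plus system RAM.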
> As if consumer hardware won't get more powerful
Now I will let you, with that last fact in hand, plot a chart of how much it's been costing to provision that over the past 2 years and use it to prove me wrong about the affordability of local models.
I don't - idealistic motives seem to be common among leading AI developers and researchers. It's entirely realistic that Anthropic sticking to principle and taking a hit for it will give it an edge recruiting those idealistic types.
That different path was grim. Adobe wrecked or killed a lot of really good software after the buyout. Freehand and Fireworks were incredible for their day.