The thing is, there is next to no software that needs no further changes at all. There is always something; sure, it might be infrequent, and most of the time nothing big or difficult (except when it is), but the point stands: someone needs to step up and do it.
If I see a project with recent activity, ideally from multiple people, that's a strong signal this will happen. If the last commit was a year ago, I have to assume it's completely abandoned, because most of the time it just is. Sometimes it's clearly communicated that this is deliberate because the authors consider the project essentially feature complete; there are examples of this, but honestly not that many.
An LLM is an impressive, yet still imperfect and unpredictable translation machine. The code it outputs can only be as good as your prompt is precise, minus the often blatant mistakes it makes.
Like with most technological change I think there is no need for FOMO. You run into problems if you completely ignore already established and proven tools and practices for years to come but you don't have to jump onto every "this changes everything, trust me bro" hype.
Independent of one's philosophical stance on the broader topic: I find it highly concerning that AI companies, at least right now, seem to be largely exempt from all those rules which apply to everyone else and are often enforced rigorously.
I draw from this that no one should be subject to those rules, and that we should use the AI companies as a wedge to widen that crack. Instead, most people who claim their objection is really only about consistency, not love for IP, spend their time trying to tighten the definitions of fair use, widen the definitions of derivative works, and in general make IP even stronger, which will affect far more than just the AI companies they're going after. That doesn't look to me like the behavior of people who truly only want consistency and don't like IP.
And before you say they're doing it because it's always better to resist massive, evil corporations than to side with them, even when it seems expedient: the people fighting AI companies most fiercely in favor of IP, in the name of "consistency," are themselves siding with Disney, one of the most evil companies operating right now from the perspective of the health of the arts and our culture. So they're already fine with siding with corporations; they just happened to pick the side that's pro-IP.
oh hey, let's have a thought experiment in this world with no IP rules
suppose I write a webnovel that I publish for free on the net, and I solicit donations. Kinda like what's happening today anyway.
Now suppose I'm not good at marketing, but this other guy is. He takes my webnovel, changes some names, and publishes it online under his name. He is good at social media and marketing, and so makes a killing from donations. I don't see a dime. People accuse me of plagiarism. I have no legal recourse.
There are also unfair situations that arise just as often when IP does exist, and in those situations, likewise, those with more money, influence, or charisma will win out.
Also, the idea that that situation is unfair relies entirely on the idea that we own our ideas and have a right to secure (future, hypothetical) profit from them. So you're essentially begging the question.
You're also relying on a premise that, when drawn out, seems fundamentally absurd to me: that you should own not just the money you earn, but the rights to any money you might have earned in the future, had someone not done something that led unrelated third parties never to pay you. Extend that logic, and any kind of competition is wrong!
There are two programmers.
The first is very talented technically but weak at negotiation, so he earns median pay.
The second is average technically but very good at negotiation, and he earns much more.
In China, engineers hold the most power, yet the country prospers. I don't think the problem is giving engineers power; rather, it's a cultural thing. In China there is a general feeling of contributing to society, while in the US everyone is trying to screw each other over, for political or monetary reasons.
Copyright infringement is a tort. “Illegal” is almost always used to refer to breaking of criminal law.
This seems like intentionally conflating them to imply that appropriating code for model training is a criminal offense, when, even in the most anti-AI, pro-IP view, it is plainly not.
> There are four essential elements to a charge of criminal copyright infringement. In order to sustain a conviction under section 506(a), the government must demonstrate: (1) that a valid copyright; (2) was infringed by the defendant; (3) willfully; and (4) for purposes of commercial advantage or private financial gain.
I think it’s very much an open debate whether training a model on publicly available data counts as infringement or not.
> The key phrase here is “requirements analysis” and if you’re not good at it either your software sucks or you’re going to iterate needlessly and waste massive time including on bad architecture. You don’t iterate the foundation of a house.
This depends heavily on the kind of problem you are trying to solve. In many cases requirements are not fixed but evolve over time, either reacting to changes in the real-world environment or because you realize that things which are nice in theory don't work out in practice.
You don’t iterate the foundation of a house because we have built enough of them, and because the environment a house exists in (geography, climate, ...) is usually not expected to change much. If that weren’t the case, we would certainly build houses differently than we usually do.
The key first step here is "problem identification". I have lost count of the times a project was partway through development before it became obvious that even if the specifications were good, they were not solving the right problem.
Users have a habit of describing their problem as the solution they think they would like to have, often being disastrously far from the actual need.
As so often in life, extreme approaches are usually bad. With pure waterfall you risk finding out very late that your plan doesn't work out, whether because of unforeseen technical difficulties in implementing it, requirements that turn out to be wrong or incomplete, or simply misjudging when you have planned enough. With extreme agile you often end up with a poor architecture which, among other things, hurts your future agility, but at least you get a result you can validate against reality. The "oh, I didn't think of that" moment is definitely present in both extremes.