> Considering they trained their model on open-source software, the least they could do is give it to open-source maintainers for free with no time limit.
Why? The resulting code generated by Claude is unfit for training, so any work product produced after the start of the subsidized program should be ignored.
Therefore it makes sense to charge them for the service after 6 months, no? Heh.
What do you mean it's unfit for training? It's a form of reinforcement learning; the end result has been selected based on whether it actually solved the need.
You need to be careful about the ratio of reinforcement learning to continued pretraining, but they already do plenty of other forms of reinforcement learning; I'm sure they have it dialed in.
> One example of this was a malformed authentication function. The AI that vibe-coded the Supabase backend, which uses remote procedure calls, implemented it with flawed access control logic, essentially blocking authenticated users and allowing access to unauthenticated users.
Actually sounds like a typical mistake a human developer would make. Forget a `!` or get confused for a second about whether you want true or false returned, and the logic flips.
The difference is that a human is more likely to actually test the output of the change.
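A minimal sketch of the kind of inverted check being described, where a single misplaced `not` blocks authenticated users and lets unauthenticated ones through (all names here are hypothetical, not from the actual incident):

```python
def is_authenticated(user):
    # Hypothetical check: a user is authenticated if they carry a session token.
    return user is not None and user.get("session_token") is not None

def can_call_rpc_buggy(user):
    # Bug: the condition is negated, so the logic is exactly flipped --
    # unauthenticated callers pass, authenticated callers are rejected.
    return not is_authenticated(user)

def can_call_rpc_fixed(user):
    # Fix: drop the stray negation.
    return is_authenticated(user)
```

The buggy version type-checks, runs, and looks plausible at a glance, which is why a quick manual test against both an authenticated and an unauthenticated caller catches it immediately.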
I also credit Neopets, but it was really the confluence of Neopets, MySpace, Geocities/Tripod, Xanga, etc. that formed the base for so much of my career.