People were violating the terms of the GPL without consequence long before AI. It is very difficult to determine whether binaries were compiled from fragments of GPL code.
The place I have found AI most useful in coding is stripping away layers of abstraction. It is difficult to say as a long-time open source contributor, but libraries often tried to cater to everyone and became slow, monolithic piles of abstraction. All the parts of an open source project that are copyrightable are abstraction. When you take away all the branching and make a script that performs all the side effects that some library would have produced for a specific set of args, you are left with something that is not novel. It’s quite liberating to stop fighting errors deep in some UVC driver, and just pull raw bytes from a USB device without a mountain of indirection from decades of irrelevant edge case handling.
Mermaid is really bad about cutting off text after spaces, so you have to insert <br>s everywhere. I’m guessing this is getting rendered instead of escaped by your interface. Or just lost in translation at the tokenizer.
It seems to be an indirect attempt to promote their GitHub project. They had Claude make them an “agent” using Bayesian modeling and Thompson sampling and now they are convinced they have heralded a new era of AI.
Even if you tuned two strings to ensure that two specific notes on them vibrated at a perfect interval, there are non-multiplicative overtones modulated by resonance with the rest of the instrument. Just intervals are the ideal for minimizing dissonance; practically, the dissonance of 12TET intervals falls below the noise floor of all the other acoustic distortions that give instruments character.
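The gap between a just interval and its 12TET approximation is easy to put a number on. A quick sketch comparing a just fifth (3:2) against the 12TET fifth, measured in cents (a logarithmic pitch unit, 1200 per octave):

```python
import math

def cents(ratio):
    # cents are a logarithmic pitch unit: 1200 per octave
    return 1200 * math.log2(ratio)

just_fifth = 3 / 2           # pure 3:2 frequency ratio
tet_fifth = 2 ** (7 / 12)    # 12TET fifth: 7 equal semitones

deviation = cents(tet_fifth) - cents(just_fifth)
print(f"12TET fifth deviates by {deviation:.2f} cents")  # about -1.96
```

Roughly two cents is well below the pitch differences most listeners can resolve, which is the point: the tempering error hides under the instrument’s own acoustic character.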
> If you truly wish to be helpful, please direct your boundless generative energy toward a repository you personally own and maintain.
This is a habit humans could learn from. Publishing a fork is easier than ever. If you aren’t using your own code in production, you shouldn’t expect anyone else to.
If anyone at GitHub is out there: look at the stats for how many different projects a user opens PRs against per day, on average (projects they aren’t a maintainer of). My analysis of a recent day using gharchive showed 99% at 1, 1% at 2, and 0.1% at 3. There are so few people PRing 5+ repos that I was able to review them manually. They are all bots/scripts. Please rate limit unregistered bots.
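The gharchive event stream is public, so anyone can reproduce this kind of count. A minimal sketch of the counting step, run here on toy events rather than a real gharchive dump (the `PullRequestEvent` type and the `actor`/`repo`/`payload.action` fields match gharchive's JSON-lines format):

```python
import json
from collections import Counter, defaultdict

def pr_repo_spread(event_lines):
    """Histogram of how many distinct repos each user opened PRs against."""
    repos_by_user = defaultdict(set)
    for line in event_lines:
        ev = json.loads(line)
        if ev.get("type") != "PullRequestEvent":
            continue
        if ev.get("payload", {}).get("action") != "opened":
            continue
        repos_by_user[ev["actor"]["login"]].add(ev["repo"]["name"])
    # map: distinct-repo count -> number of users with that count
    return Counter(len(repos) for repos in repos_by_user.values())

# toy events standing in for one day of gharchive data
events = [
    '{"type": "PullRequestEvent", "payload": {"action": "opened"}, "actor": {"login": "alice"}, "repo": {"name": "a/x"}}',
    '{"type": "PullRequestEvent", "payload": {"action": "opened"}, "actor": {"login": "bot"}, "repo": {"name": "a/x"}}',
    '{"type": "PullRequestEvent", "payload": {"action": "opened"}, "actor": {"login": "bot"}, "repo": {"name": "b/y"}}',
]
hist = pr_repo_spread(events)
print(sorted(hist.items()))  # [(1, 1), (2, 1)]
```

On a real day you would feed this the decompressed hourly `.json.gz` files from gharchive and then filter out users whose count sits in the long tail.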
It would be nice to have some kind of forever patch mode on these git forges, where my fork (which, let's say, is a one line change) gets rebased on top of the original repo periodically.
You can ask an LLM to create a GitHub Action for that. The action can fail if the rebase fails, and you can either fix it yourself or ask an LLM to do it for you.
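A sketch of such a workflow, assuming the upstream lives at `ORIGINAL_OWNER/ORIGINAL_REPO` and its default branch is `main` (both are placeholders); the job fails loudly on a conflicting rebase, which is exactly the signal you want:

```yaml
# .github/workflows/rebase.yml -- untested sketch
name: rebase-on-upstream
on:
  schedule:
    - cron: "0 4 * * *"   # daily
  workflow_dispatch:       # allow manual runs too
permissions:
  contents: write
jobs:
  rebase:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, required for rebase
      - name: Rebase onto upstream and push
        run: |
          git config user.name "rebase-bot"
          git config user.email "rebase-bot@users.noreply.github.com"
          git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPO.git
          git fetch upstream
          git rebase upstream/main   # fails the job on conflict
          git push --force-with-lease origin HEAD
```

The `--force-with-lease` push keeps the fork's branch pointed at the freshly rebased history without clobbering anything pushed in the meantime.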
Resale value. You practically have to pay someone to take an open-box Chromebook. The secondary market for Apple products lasts longer than Apple’s software support.
Does this actually matter for multi-agent use cases? Surely people that are using swarms of AI agents to write code are just letting them resolve merge conflicts.
So that you don't feel I'm just biased about my own thing, here's more context that it's not just me: people on Twitter are saying how often merging breaks when you're running production-level code and frequently merging different branches.
Those users all work for companies that sell AI tools. And the first one literally says they let AI fix merge conflicts. The second one is in a thread advocating for 0 code review (which this can’t guarantee) (and also ew). The third is also saying to just have another bot handle merging.
Thanks a lot for the fair criticism, appreciate it! You're right that those links aren't the strongest evidence. The real argument isn't "people are complaining on Twitter"; it's much simpler: when two agents add different functions to the same file, git creates a conflict that doesn't need to exist. Weave just knows they're separate entities and merges cleanly. Whether you let AI resolve the false conflict or avoid it entirely is a design choice; we think avoiding it is better.
It’s your GitHub profile. It looks suspiciously just like the other 10 GitHub users that have been spamming AI generated issues and PRs for the last 2 weeks. They always go quiet eventually. I suspect because they are violating GitHub’s ToS, but maybe they just run out of free tokens.
Thanks again for the criticism; tackling each of your comments:
On GitHub’s ToS, since you suspect we are violating them, let me help you understand what they actually prohibit.
> What violates it:
1. Automated bulk issues/PRs on repos we don't own
2. Fake Stars or Engagement Farming
3. Using Bot Accounts.
We own the repo, there's not even a single fake star, I don't even know how to create a bot account lol.
> The scenario where we run out of free tokens:
OpenAI and Anthropic have been sponsoring my company with credits, because I am trying to architect new software for a post-AGI world, so if I run out I will ask them for more tokens.
And you are opening issues on projects trying to get them to adopt your product. Seems like spam to me. How much are you willing to spend maintaining this project if those free tokens go away?
When you're just a normal guy genuinely trying to build something great and there's nobody who believes in you yet, the only thing you can do is go to projects you admire and ask "would this help you?" Patrick Collison did the same thing early on, literally taking people's laptops to install Stripe.
I'm running agents doing merges right now, and yes and no. They can resolve merges, but it often takes multiple extra rounds. If you can avoid that more often it will definitely save both time and money.
If you appreciate open source maintainers, detect when users are opening pull requests without human review and stop them. Feel free to keep burning their tokens, just stop making pull requests.
Yeah, I think a lot of open-source maintainers would rather have some kind of an anti-slop filter than a six-month trial. All of my GitHub projects are tiny so I haven't had to encounter it, but I've heard that some projects are absolutely swamped in crap.
In the past week (besides the constant slop), there have been models misattributing the copyright of new files to me, and stripping my copyright from existing files. It's sapping time, energy and motivation.
You don’t need CNCs for that; you need a decent laser. The laminates used are super thin and either laser cut or stamped. You can’t reliably machine them anyhow; workholding would be near impossible.
I would be concerned that laser cutting steel degrades the magnetic permeability around the cut. I plan to use a clamping jig designed for PCBs. I could always upgrade to a vacuum table.