> but it hardly conveys what's happening inside the head of the hacker
Mr Robot is another great one at that. It has layers of trippy stuff, but the hacking stuff is both real-ish and pretty well explained by the main character's monologues.
> feeling ignored in a sea of botspam will kill their desire to participate.
The bots are not really that bad; they're (still) pretty easy to spot and not engage with. I'm more perplexed by the negativity-filled comment sections, and I'm pretty sure most posters are real grass-fed certified humans.
I don't get why negative posts get so upvoted and so popular on the front page, and why people still debate with outdated arguments in them. People come in and fight old demons, make straw-man arguments, and in general promote negative stuff like there's no tomorrow. I think you can get so much more signal from positive examples, from "hey I did a thing" type posts, and so on. Even overhyped stuff like the claw-mania can still be useful. Yet the "I did a thing" posts get overwhelmed by negativity, nitpicking and "haha not perfect means DOA" type messages. That makes me want to participate less...
Oh that's just human nature: there's a reason why trashy tabloids continue to exist despite how public sentiment seems to universally agree that they're awful spreaders of rumour and insecurity. More people are Skankhunt42 than we'd like to admit.
Sure, just be aware of what you're up against: if religion teaches us anything it's that even concerted, systematic efforts over millennia to conquer human nature (eg: libido) still fail. But if you want to give it a go, by all means: one can only imagine Sisyphus happy.
That's probably a "layman's terms" translation of a more technical term NET April 1, which would be "Not Earlier Than" and is widely used in the industry.
Interesting model release. It offers ~same capabilities as gpt-oss-120b but with faster inference (should be cheaper as well), and they're also providing (some?) pre-training and (all?) post-training datasets. In some benchmarks it is behind qwen3.5-122b, which was recently released. Focus on "agentic" stuff, and nvfp4 native. Glad to see nvda pushing for more models, more accessible models, and opening up some of their datasets.
For a bit of context, goog already did something like this two generations of models ago, as announced in this blog post[1] from May '25:
> AlphaEvolve is accelerating AI performance and research velocity. By finding smarter ways to divide a large matrix multiplication operation into more manageable subproblems, it sped up this vital kernel in Gemini’s architecture by 23%, leading to a 1% reduction in Gemini's training time.
We are now seeing the same thing "at home", for any model. And with how RL-heavy the new training runs have become, inference speedups will directly translate into faster training as well.
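For a flavour of what "dividing a large matrix multiplication into more manageable subproblems" means, here's a minimal sketch of block tiling in plain Python. This is only illustrative, not Gemini's actual kernel: the point is that parameters like block size and loop order form a search space that automated systems like AlphaEvolve can tune.

```python
def matmul_blocked(a, b, block=2):
    """Multiply square matrices a and b by splitting the work into
    block-sized subproblems; the result equals a plain matmul, but
    tiling changes memory access patterns (the thing tuned for speed)."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, block):
        for j0 in range(0, n, block):
            for k0 in range(0, n, block):
                # Multiply one pair of blocks, accumulate into C's block.
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, n)):
                        s = 0.0
                        for k in range(k0, min(k0 + block, n)):
                            s += a[i][k] * b[k][j]
                        c[i][j] += s
    return c
```

On real hardware the win comes from keeping blocks in cache (or in shared memory on a GPU), which pure Python obviously doesn't capture.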
> I use another little-known feature, "ControlMaster". Multiplexes multiple connections into one, it makes making "new" sessions instant
Is this what SecureCRT used as well? I remember this being all the rage back when I used Windows; it allowed spawning new sessions by reusing the main one.
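For anyone who wants to try it, ControlMaster is set up in `~/.ssh/config`. These are real OpenSSH options (see ssh_config(5)); the paths and timeout here are just example values:

```
# ~/.ssh/config
Host *
    ControlMaster auto               # reuse an existing connection when one exists
    ControlPath ~/.ssh/cm-%r@%h:%p   # socket file identifying the shared connection
    ControlPersist 10m               # keep the master alive after the first session exits
```

With this in place, the first `ssh host` opens the master connection and every subsequent `ssh host` (or `scp`/`rsync` to the same host) rides along on it, skipping the TCP and auth handshake.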
Project maintainers will always have the right to decide how to maintain their projects, and "owe" nothing to anyone.
That being said, outright banning a technology in 2026 on pure "vibes" is not something I'd call reasonable. Others have already commented that it's likely unenforceable, but I'd also say it's unreasonable on utility grounds. It leaves value on the table at a time when projects really can't afford to. Things like documentation tracking, regression tracking, security and feature parity can all be enhanced with carefully orchestrated assistance. To simply ban this is ... a choice, I guess. But it's not reasonable, in my book. It's like saying we won't use CI/CD because it's automated stuff, we're purely manual here.
I think a lot of projects will find ways to adapt. Create good guidelines, help the community to use the best tools for the best tasks, and use automation wherever it makes sense.
At the end of the day, slop is slop. You can always refuse to even look at something if you don't like the presentation. Or if the code is a mess. Or if it doesn't follow conventions. Or if a PR is +203323 lines, and so on. But attaching "LLMs aka AI" to the reasoning only invites drama; if anything, it makes the effort of distinguishing good content from good-looking content even harder. In the long run it won't be viable. If there's a good way to optimise a piece of code, it won't matter where that optimisation came from, as long as it can be proven good.
tl;dr: focus on better verification instead of better identification; prove that a change is good instead of focusing on where it came from; test, learn and adapt. Dogma was never good.
At the moment verification at scale is an unsolved problem, though. As mentioned, I think this will act as a rough filter for now, but probably not work forever - and denying contributions from non-vetted contributors will likely end up being the new default.
Once outside contributions are rejected by default, the maintainers can of course choose whether or not to use LLMs themselves.
I do think it is a misconception that OSS software needs to be "viable". OSS maintainers can have many motivations to build something, and just shipping a product might not be at the top of that list at all; they certainly don't have that obligation. Personally, I use OSS as a way to build and design software with a level of gold plating that is not possible in most work settings, for the feeling that _I_ built something, and for the pure joy of coding. Using LLMs to write code would work directly against those goals. Whether LLMs are essential in more competitive environments is also something there are mixed opinions on, but in those cases being dogmatic is certainly more risky.
> That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd say is reasonable.
To outright accept LLM contributions would be as much "pure vibes" as banning it.
The thing is, those who maintain open source projects have to decide where they want to spend their time. It's open source, they are not being paid for it, so they should and will decide what is acceptable and what is not.
If you dislike it, you are free to fork it and make an "LLMs welcome" fork. If, as you imply, the LLM contributions are invaluable, your fork should eventually become the better choice.
Or you can complain to the void that open source maintainers don't want to deal with low effort vibe coded bullshit PRs.
>Or you can complain to the void that open source maintainers don't want to deal with low effort vibe coded bullshit PRs.
If you look back and think about what you're saying for a minute, it's that low effort PRs are bad.
Using an LLM to assist in development does not instantly make the whole work 'low effort'.
It's also unenforceable and will create AI witch hunts. Someone used an em dash in a 500-line PR? Oh, the horror: that's a reject and a ban from the project.
2000 line PR where the user launched multiple agents going over the PR for 'AI patterns'? Perfectly acceptable, no AI here.
> Using an LLM to assist in development does not instantly make the whole work 'low effort'.
Instantly? No, of course not.
I do use LLMs for development, and I am very careful with how I use them. I thoroughly review the code they generate (unless I am asking for throwaway scripts, because then I only care about the immediate output).
But I am not naive. We both know that a lot of people just vibe code their way through, results be damned.
I am not going to fault people devoting their free time on Open Source for not wanting to deal with bullshit. A blanket ban is perfectly acceptable.
Your reply is based on a 100% bad-faith, intellectually dishonest interpretation of the comment to which you’re replying. You know that.
Nobody claimed that LLM code should be outright accepted.
Also, nobody disputed that open source maintainers have the right to accept or decline based on whichever criteria they choose. To always come back to this point is so…American. It's a cop-out. It's a thought-terminating cliche. If you aren't interested in discussing the merits of the decision, don't bother joining the conversation. The world doesn't need you to explain what consent is.
Most of all, I’m sick of the patronising “don’t forget that you can fork the project!” What’s the point of saying this? We all know. Nobody needs to be reminded. Nobody isn’t aware. You aren’t being clever. You aren’t adding anything to the conversation. You’re being snarky.
> Nobody claimed that LLM code should be outright accepted
Not directly, but that's the implication.
I just did not pretend that was not the implication.
> always come back to this point is so…American
I am not American.
To be frank, this was the most insulting thing someone ever told me online. Congratulations. I feel insulted. You win this one.
> If you aren’t interested in discussing the merits of the decision, don’t bother joining the conversation.
I will join whatever conversation I want, and as far as I'm concerned, I addressed the merits of the discussion perfectly.
You are not the judge here, your opinion is as meaningless as mine.
> Most of all, I’m sick of the patronising “don’t forget that you can fork the project!” What’s the point of saying this?
That sounds like a "you" problem. You will be sick of it until the end of time, because that's the final right answer to any complaints of open source project governance.
> You aren’t adding anything to the conversation. You’re being snarky.
I disagree. In fact, I contributed more than you. I addressed arguments. You went on a whinging session about me.
> Or if the code is a mess. Or if it doesn't follow conventions.
In my experience these things are very easily fixable by ai, I just ask it to follow the patterns found and conventions used in the code and it does that pretty well.
I've recently worked extensively with "prompt coding", and the model we're using is very good at following such instructions early on. However after deep reasoning around problems, it tends to focus more on solving the problem at hand than following established guidelines.
Still haven't found a good way to keep it on course other than "Hey, remember that thing that you're required to do? Still do that please."
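One trick that can help (a sketch under assumptions; the function and message format here are made up for illustration, not any particular vendor's API) is to mechanise that "remember the thing you're required to do" by re-injecting the guidelines near the end of the context on every turn, rather than stating them only once at the start:

```python
# Hypothetical helper: rebuild the message list each turn so the
# project guidelines sit right next to the newest user message,
# where long reasoning chains are less likely to crowd them out.

GUIDELINES = "Follow the patterns and conventions already used in this codebase."

def build_messages(history, user_msg):
    """Assemble chat messages, repeating the guidelines every turn."""
    return (
        [{"role": "system", "content": GUIDELINES}]       # original instructions
        + history                                          # prior conversation turns
        + [{"role": "system", "content": "Reminder: " + GUIDELINES},
           {"role": "user", "content": user_msg}]          # newest request last
    )
```

It's crude, but placing the reminder at the tail of the prompt tends to weigh more than the same text buried thousands of tokens earlier.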
Licensing is dependent on IPR, primarily copyright.
It is very unclear whether the output of an AI tool is subject to copyright.
So if someone uses AI to refactor some code, the refactored code may not be considered a derivative work, which would mean the refactored source is no longer covered by the copyright, or by the license that depends on it.
> It is very unclear whether the output of an AI tool is subject to copyright.
At least for those of us under the jurisdiction of the US Copyright Office, the answer is rather clear. Copyright only applies to the part of a work that was contributed by a human.
For example, on page 3 there (PDF page 11): "In February 2022, the Copyright Office's Review Board issued a final decision affirming the refusal to register a work claimed to be generated with no human involvement. [...] Since [a guidance on the matter] was issued, the Office has registered hundreds of works that incorporate AI-generated material, with the registration covering the human author's contribution to the work."
(I'm not saying that to mean "therefore this is how it works everywhere". Indeed, I'm less familiar with my own country's jurisprudence here in Germany, but the US Copyright Office has been on my radar from reading tech news.)
Your analogy with CI/CD is flawed. Not everyone was convinced of the merits of CI/CD either, but it wasn't a technology built on vast energy use and copyright violation at a scale unseen in all of history, one that has upended the hardware market, shaken the idea of job security for developers to its very foundation, and done so while offering no really obvious benefits to groups wishing to produce really solid software. Maybe that comes eventually, but not at this level of maturity.
But you're right it's probably unenforceable. They will probably end up accepting PRs which were written with LLM assistance, but if they do it will be because it's well-written code that the contributor can explain in a way that doesn't sound to the maintainers like an LLM is answering their questions. And maybe at that point the community as a whole would have less to worry about - if we're still assuming that we're not setting ourselves up for horrible licence violation problems in the future when it turns out an LLM spat out something verbatim from a GPLed project.
> That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd say is reasonable.
The response to a large enough amount of data is always vibes. You cannot analyze it all so you offload it to your intuition.
> It leaves stuff on the table in a time where they really shouldn't. Things like documentation tracking, regression tracking, security, feature parity, etc. can all be enhanced with carefully orchestrated assistance.
What’s stopping the maintainers themselves from doing just that? Nothing.
Producing it through their own pipeline means they don’t have to guess at the intentions of someone else.
Maintainers just doing it themselves is just the logical conclusion. Why go through the process of vetting the contribution of some random person who says that they’ve used AI “a little” to check if it was maybe really 90%, whether they have ulterior motives... just do it yourself.
Hot sauce is pretty easy to make if you're inclined to go that route. You only need a scale and a blender, and some basic kitchen skills. You get to explore a lot and control for flavour / heat with adding stuff to the mix. Plenty of good content on yt you can get inspiration from.
It's also something you can make into a hobby. You can go as low effort as buying fresh peppers from a market when in season, or start growing yourself. Growing can be anywhere from extremely low maintenance (i.e. just water them from time to time and leave them on a window sill) or get into advanced stuff like pruning, soil ph, cross pollination and all that stuff. Some peppers are prolific growers, and you get fresh peppers, pepper paste, chili flakes and sauce from a potentially low effort hobby. And they make some nice gifts as well.
They cost a bit but they are idiot-proof. The company also sells materials at an okay price to grow whatever seeds you want, and you can use standard hydroponic supplements rather than their name-brand ones.
I got a kit with basil and it accidentally grew more basil than I could use. I was ripping off leaves and throwing them into store-brand pasta sauce. I had a single plant in one of the $100 growers.
Their "salsa" kit is cherry tomatoes and Jalapenos so maybe it won't work so well on other peppers, but they have a Banana pepper kit. Pepper growers might have nutrient and growing recommendations to better manage flavor and spice profiles.
Go for the gusto and do some Carolina Reapers. Mine grow truly amazing amounts of the hottest peppers you can imagine. I don’t even like hot food I just grow them for the heck of it and give them away dried. But be careful.
Counter-argument: we're talking about saving a handful of bucks for something that lasts months. Do it if you find it fun - I tried it and didn't like the work nor spice under my fingernails, at all.
My preferences in cooking are like software: high level is fun (cooking dishes), low level is annoying (growing or producing ingredients).
I also like making cocktails. A brief try with homemade coffee liqueurs was disappointing; knowing a couple of good brands, I can buy and enjoy them, no hassle. The closest I get to preparing ingredients is occasionally doing batches of "super juice", where you squeeze a bunch of limes and add some preservatives and enhancers (and water) that increase the yield, flavour and shelf life by a lot. Then it's really practical to just use the juice like a normal ingredient, versus having to keep citrus on hand, squeeze it each time, and clean up more stuff.
I've tried getting started with this but my first attempt a habanero/mango sauce was _horrible_, must've used a slop recipe or something. Do you have a good base to recommend?