Hacker News | theredbeard's comments

It’s not groundbreaking in a technological sense. The codebase is actually a bit of a monstrosity. But it removed guardrails that were artificially put on these LLMs which suddenly gave it an entire new dimension and the timing was right.

I built this because I was curious what Claude sends to the API, how subagents get work delegated, and what the contexts look like. Interesting to see how small a part of the context the user interaction typically is.
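As a rough illustration of the kind of inspection I mean: if you capture a request body in the shape of Anthropic's public Messages API (a `system` prompt, `tools` definitions, and a `messages` array), you can break down where the bytes go. The request below is fabricated, and treat the exact field handling as an assumption, not the tool's actual implementation.

```python
import json

def summarize_request(body: dict) -> dict:
    """Rough size breakdown (in characters) of a captured
    Messages-API-shaped request body."""
    system = body.get("system", "")
    # The system prompt may be a plain string or a list of content blocks.
    if isinstance(system, list):
        system = "".join(block.get("text", "") for block in system)
    return {
        "system_chars": len(system),
        "tools_chars": len(json.dumps(body.get("tools", []))),
        "messages_chars": len(json.dumps(body.get("messages", []))),
    }

# Fabricated request body for illustration only:
req = {
    "model": "claude-sonnet-example",  # placeholder model name
    "system": "You are a coding agent...",
    "tools": [{"name": "bash", "description": "Run a shell command"}],
    "messages": [{"role": "user", "content": "fix the failing test"}],
}
print(summarize_request(req))
```

Pointing the client at a local logging proxy and running captured bodies through something like this is enough to see the breakdown per request.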

Pretty interesting to see how widely token use differs across models for the same task.


Gitlab.com is the obvious rec.

Skipping the investigation phase to jump straight to solutions has killed projects for decades. Requirements docs nobody reads, analysis nobody does, straight to coding because that feels like progress. AI makes this pattern incredibly attractive: you get something that looks like a solution in seconds. Why spend hours understanding the problem when you can have code right now?

The article's point about AI code being "someone else's code" hits different when you realize neither of you built the context. I've been measuring what actually happens inside AI coding sessions; over 60% of what the model sees is file contents and command output, stuff you never look at. Nobody did the work of understanding by building or designing it. You're reviewing code that nobody understood while writing it, and the model is doing the same.
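The measurement itself is simple accounting. Here's a minimal sketch over hypothetical tagged context blocks; the source tags and the numbers are illustrative, not from any real session:

```python
from collections import Counter

def context_shares(blocks):
    """Given (source, text) pairs from a session transcript,
    return each source's share of total context characters."""
    sizes = Counter()
    for source, text in blocks:
        sizes[source] += len(text)
    total = sum(sizes.values())
    return {src: round(n / total, 2) for src, n in sizes.items()}

# Fabricated session: most of the context is file reads and
# command output, not the user's own words.
session = [
    ("user", "please fix the flaky test"),
    ("file_read", "x" * 700),        # stand-in for file contents
    ("command_output", "y" * 200),   # stand-in for a test run's output
    ("assistant", "z" * 100),
]
shares = context_shares(session)
print(shares)
```

With these made-up sizes, file reads plus command output dominate the context, which is the shape of what I keep seeing in real sessions.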

This is why the evaluation problem is so problematic. You skipped building context to save time, but now you need that context to know if the output is any good. The investigation you didn't do upfront is exactly what you need to review the AI's work.


OSS was already brutal for new contributors before AI. You'd spend hours on a good-faith PR and get ignored for months, or get torn apart in review because you didn't know the unwritten conventions. The signal-to-noise ratio sucked but at least maintainers would eventually look at your stuff.

Now with AI-generated spam everywhere, maintainers have even more reason to be suspicious of unknown names. Vouch solves their problem, but think about what it means for someone trying to break in. You need someone to vouch for you before you can contribute, but how do you get someone to vouch for you if you can't contribute?

I get why maintainers need this. But we're formalizing a system that makes OSS even more of an insider's club. The cold start problem doesn't really get any warmer like this.


Good filters make good communities. Back in the good ol' days of the internet, access to the internet in and of itself was a decent filter: you had to want to be online, you needed to be somewhat technical, or at least willing to grapple with technical problems, and you needed to actively seek out communities that aligned with your interests. There was also little financial motivation to do any of that in bad faith. As the barrier to entry to the internet writ large dropped to near zero, communities built around that bygone era's natural filtering suffered. Communities must now establish filters proactively.

Ultimately, you need to choose: does your community prioritize its short-term health, or ease of access? If a community never lets anyone in, then it withers and dies eventually, but in the meantime the community can be extremely high-trust. That's what happened to fraternal orders like the Odd Fellows and the Freemasons post-Vietnam. If the community has zero barrier to entry, you end up with Twitter: a teeming mass of low-trust members screaming into the void.

The happy medium is admitting new members just as fast as you can build trust and community cohesion. University clubs are a good example of this: at a massive turnover rate of 25% per year, they need to form processes not just to recruit that many people, but to integrate that large a chunk of their community without destroying the high-trust environment. That's how you end up with the ritualized "rushing" process.


>Back in the good ol' days of the internet, access to the internet in of itself was a decent filter: you had to want to be online, you needed to be somewhat technical, or at least willing to grapple with technical problems, and you needed to actively seek out communities online which aligned with your interests, and there was little financial motivation to do so in bad faith

And it was horrifically expensive to be online until the mid 90s, or late 90s depending on where you were.


The comment I read about this that I liked was that they want to push the idea of starting with an Issue and a discussion before going straight to a PR. That way you can build reputation by contributing to a discussion first. Maybe you could "earn" a temporary Vouch like this that lets you start submitting. Still open to attack but the attack is at least more difficult.

Agreed. The obvious solution is to lower the barrier to entry for demonstrating good intent, while also lowering the ceiling of effort required to analyze that demonstration.

Mandating participation in discussion prior to creating any PR sounds like a perfectly reasonable requirement.


Maybe it is because I mostly contribute to projects that have corporate backers, but this has not been my experience at all. Usually opening an issue with “I would be willing to fix this” gets good, quick responses from maintainers. Maybe Linux kernel devs are different, but I doubt many of us have to interact with that as part of our day-to-day work.

Building projects, especially larger ones, has never been solely about writing code. I don't see how anything you are saying is a bad thing at all. Drive-by PRs and similar practices are bad. A high barrier is a feature, not a bug.

This makes sense to me. Part of me wonders if this system wouldn't work better in reverse: a blocklist instead of an allowlist. Blocklists can spread via URL, the same way DNS or email blocklists do. Subscribe to the blocklists of people you trust.

I _think_ this removes the motivation for low-quality PRs. Get on a major blocklist and the GitHub account is basically dead. People could make new GitHub accounts, but then you never get an "impressive" GitHub account.
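The mechanics could be trivially simple. A hypothetical sketch, assuming a plain-text format of one username per line with `#` comments (the list format, the account names, and the idea that each list is fetched from a URL are all my invention):

```python
def merge_blocklists(lists):
    """Union several blocklists: one username per line, '#' starts a comment."""
    blocked = set()
    for text in lists:
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()
            if line:
                blocked.add(line.lower())
    return blocked

def is_blocked(username, blocked):
    return username.lower() in blocked

# In practice each list would be fetched from a URL published by a
# maintainer you trust; here two inline strings stand in for the bodies.
maintainer_a = "spam-bot-123\ndriveby-pr-account  # mass AI PRs\n"
maintainer_b = "spam-bot-123\nlow-effort-llc\n"
blocked = merge_blocklists([maintainer_a, maintainer_b])
print(sorted(blocked))
```

Subscription is then just re-fetching and re-merging on a schedule, which is exactly how email DNSBLs have worked for decades.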


let's make it even better: why not set up a donation mechanism to get in the list?

Because I want people to get paid for writing code, not to pay to write code.

my bad, forgot to /s

No worries, the italics did heavy lifting.

What could go wrong?!
