Hacker News: acedTrex's comments

> (2) there's nothing wrong with more people being able to create and share things

There are very clearly many things wrong with this when the things being shared require very little skill or effort.


That is by no means all of these projects. I'm not interested in a circle-the-wagons crackdown because it won't work (see "it's foolish to fight the future" above), and because we should be welcoming and educating new users in how to contribute substantively to HN.

Which users?

The future you're concerned with defending includes bots being a large part of this community, potentially the majority. Those bots will not only submit comments autonomously, but also create these projects and Show HN threads, i.e. there will be no human in the loop.

This is not unique to this forum; it applies to the internet at large. We're drowning in bot-generated content, and now it is fully automated.

So the fundamental question is: do you want to treat bots as human users?

Ignoring the existential issue, whatever answer you choose, it will inevitably alienate a portion of existing (human) users. It's silly I have to say this, but bots don't think, nor "care", and will keep coming regardless.

To me the obvious answer is "no". All web sites that wish to preserve their humanity will have to do a complete block of machine-generated content, or, at the very least, filter and categorize it correctly so that humans who wish to ignore it, can. It's a tough nut to crack, but I reckon YC would know some people capable of tackling this.

It's important to note that this state, where a human drives the machine directly, is only temporary. The people who think these are tools like any other are sorely mistaken. This tool can do their minimal-effort job much more efficiently, more cheaply, and with better results, and it's only a matter of time until the human is completely displaced. This will take longer for more complex work, of course, but creating regurgitated projects on GitHub and posting content on discussion forums is a very low bar.


Taking a good picture requires very little effort once you've found yourself in the right place. You gonna shit on Ansel Adams?

Yep, my day to day is now miserable, LLMs have ruined everything that made this field fun, and greatly enhanced everything that made it suck.

Yes exactly. Of all the use cases for LLMs, "writing code" is easily my least favorite. There are so many other cool things for "stochastic contextual orchestrators" to do.

Yep, as is always the case, it has to break before you can fix it. Band-aiding something along just makes it more painful for longer.

Claude attempted a tree-sitter-to-Go port

Better title


Thank you, I flagged the submission. I doubt this project will have activity in 6 months, but I'd love to be proven wrong.

This was my first thought as well, just from reading the title.

well how did it do?

Hard to say. Claude’s very good at writing READMEs. In fact, Copilot often complains about docs that sound like they describe current capabilities when in fact they’re future plans, or just plain aspirational.

Without downloading and testing out your software, how can we know if it’s any good? Why would we do that if it’s obviously vibed? The dilemma.

I’m not at all against vibe coding. I’m just pointing out that having a nice README is trivial. And the burden of proof is on you.


Shouldn't you be able to answer that?

Yes, and if you clicked the links you would know that I did answer it in the readme.

But how do we know the readme isn't also vibecoded?

> Pure-Go tree-sitter runtime — no CGo, no C toolchain, WASM-ready.

No you didn't. The readme is obvious LLM slop. Em-dash, rule of three, "not x, y". Why should anyone spend effort reading something you couldn't be bothered to write? Why did you post it to HN from a burner account?


I read the README and did not find answers to my questions.

How is OP using Claude relevant?

OK for prototyping. Not OK for prod use if no one actually read it line by line.

I am trying not to take issue with this comment because I'm aware of the huge stigma around AI-generated code.

I needed this project, so I made it for my use case and had to build on top of it. The only way to ensure quality is to read it all line by line.

If you give me code that you yourself have not reviewed, I will not review it for you.


I’m just curious, what would need to happen for you to change your opinion about this? Are you basically of the opinion that it’s not good enough today, never will be good enough in the future, and we should just wind back the clock 3 years and pretend these tools don’t exist?

It feels to me like a lot of this is dogma. If the code is broken or needs more testing, that can be solved. But it’s orthogonal: the LLM can be used to implement the unit testing and fuzz testing that would beat this library into shape, if it’s not already there. It’s not about adding a human touch, it’s about pursuing completeness. And that’s true for all new projects going from zero to one, you have to ask yourself whether the author drove it to completeness or not. That’s always been true.

You want people to hedge their projects with disclaimers that it probably sucks and isn’t production worthy. You want them to fess up to the fact that they cheated, or something. But they’re giving it away for free! You can just not use it if you don’t want to! They owe you nothing, not even a note in the readme. And you don’t deserve more or less hacker points depending on whether you used a tool to generate the code or whether you wrote it by hand, because hacker points don’t exist, because the value of all of this is (and always will be) subjective.

To the extent that the modern tools and models can’t oneshot anything, they’re going to keep improving. And it doesn’t seem to me like there’s any identifiable binary event on the horizon that would make you change your mind about this. You’re just against LLMs, and that’s the way it is, and there’s nothing that anyone can do to change your mind?

I mean this in the nicest way possible: the world is just going to move on without you.


>I’m just curious, what would need to happen for you to change your opinion about this?

Imagine a machine that can calculate using logic circuits and one that uses a lookup table.

LLMs right now are the latter (please don't take this literally; it's just an example). You can argue that the lookup table is so huge that it works most of the time.

But I (and probably the parent commenter) need it to be the former. And that answers your question.

So it does not matter how huge the lookup table will grow in the future so that it will work more often, it is still a lookup table.

So people are divided into two groups right now. One group that goes by appearance, and one that goes by what the thing actually is fundamentally, despite the appearances.


But logic circuits are look up tables.

Every computation/function can be a look up table!
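The point that logic and lookup coincide for any finite function can be made concrete. A minimal sketch in Go, using XOR as an arbitrary example of my own (not anything from the thread): one implementation computes the answer with a boolean expression, the other enumerates the entire truth table, and on the full domain they are observationally identical.

```go
package main

import "fmt"

// xorLogic computes XOR "with logic circuits": a boolean expression.
func xorLogic(a, b int) int {
	return (a | b) &^ (a & b) // (a OR b) AND NOT (a AND b)
}

// xorTable is XOR "by lookup": the entire truth table, enumerated.
var xorTable = [2][2]int{
	{0, 1}, // 0 XOR 0 = 0, 0 XOR 1 = 1
	{1, 0}, // 1 XOR 0 = 1, 1 XOR 1 = 0
}

func xorLookup(a, b int) int {
	return xorTable[a][b]
}

func main() {
	// Over a finite domain, the two implementations agree everywhere.
	for a := 0; a <= 1; a++ {
		for b := 0; b <= 1; b++ {
			fmt.Printf("%d XOR %d: logic=%d lookup=%d\n",
				a, b, xorLogic(a, b), xorLookup(a, b))
		}
	}
}
```

The interesting disagreement in the thread is not about this equivalence but about whether it survives scaling: a truth table for a 64-bit adder has 2^128 rows, so "it's all lookup tables in principle" stops being a practical claim very quickly.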

right, so why are you asking me to imagine one machine that can calculate using logic circuits and another that can calculate using a lookup table when we’re in agreement that they’re the same thing?

I reject the premise of your analogy.


> that they’re the same thing

I said no such thing.


I think you will get a better response to a slightly different analogy. In genetic programming (and in machine learning), we have a concept of "overfitting". Overfitting can be understood as a program memorizing too much of its test/training data (i.e. so it is acting more like an oracle than a computation). This, intuitively, becomes less of a problem the greater the training-dataset becomes, but the problem will always be there. Noticing the problem is like noticing the invisible wall at the edge of the game-world.
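A toy illustration of memorization versus a fitted model, in Go. Everything here is invented for the sketch (the line y = 2x + 1 and both "models" are hypothetical): the memorizer is perfect on its training points and useless off them, while even a trivially fitted model generalizes.

```go
package main

import "fmt"

// A "memorizer": perfect recall of training pairs, no model at all.
type memorizer map[int]int

func (m memorizer) predict(x int) (int, bool) {
	y, seen := m[x]
	return y, seen
}

// A (trivially) "fitted" model: recovers slope and intercept from
// two training points and generalizes to any x.
func fitLine(x1, y1, x2, y2 int) func(int) int {
	slope := (y2 - y1) / (x2 - x1)
	intercept := y1 - slope*x1
	return func(x int) int { return slope*x + intercept }
}

func main() {
	// Training data drawn from y = 2x + 1.
	train := memorizer{0: 1, 1: 3, 2: 5, 3: 7}
	model := fitLine(0, 1, 1, 3)

	for _, x := range []int{2, 100} {
		mem, seen := train.predict(x)
		fmt.Printf("x=%d model=%d memorized=%d (seen=%v)\n",
			x, model(x), mem, seen)
	}
	// On x=2 (in the training set) both agree; on x=100 only the
	// fitted model answers — the memorizer hits the invisible wall.
}
```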

The most insightful thing about LLMs, is just how _useful_ overfitting can be in practice, when applied to the entire internet. In some sense, stack-overflow-driven-development (which was widespread throughout the industry since at least 2012), was an indication that much of a programmer's job was finding specific solutions to recurring problems, that never seem to get permanently fixed (mostly for reasons of culture, conformity, and churn in the ranks).

The more I see the LLM-ification of software unfold (essentially an attempted controlled demolition of our industry and our culture), the more I think about Arthur Whitney (inventor of the K language and others). In this interview[1], he said two interesting things: (1) he likened programming to poetry, and (2) he said that he designed his languages to not have libraries, and everybody builds from the 50 basic operators that come with the language, resulting in very short programs (in terms of both source code size and compiled/runtime code size).

I wonder if our tendency to depend on libraries of functions, counterintuitively results in more source code (and more compiled/runtime code) in the long run -- similarly to how using LLMs for coding tends to be very verbose as well. In principle, libraries are collections of composable domain-verbs that should allow a programmer to solve domain-problems, and yet, it rarely feels that way. I have ripped out general libraries, and replaced them with custom subroutines more times than I can count, because I usually need a subset of functionality, and I need it to be correct (many libraries are complex and buggy because they have some edge-cases [for example, I once used an AVL library that would sometimes walk the tree in reverse instead of from least to greatest -- unfortunately, the ordering mattered, and I wrote a simpler bespoke implementation]).

Arguably, a buggy program or a buggy library or a buggy function, is just an overfit program, or library, or function (it is overfit to the mental-model of the problem-space in the library writer's mind). These overfit libraries, which are often used as blackboxes by someone rushing to meet a deadline, often result in programs that are themselves overfit to the buggy library, creating _less_ modularity instead of more. _Creating_ an abstraction is practically free, but maintaining it and (most disappointingly) _using_ it has real, often permanent long term costs. I have rarely been able to get two computers, that were meant to share data with NFS, to do so reliably, if they were not running the same exact OS (because the NFS client and server of each OS are bug-compatible, are overfit to each other).

In fact the rise of VMWare, and the big cloud companies, and containerization and virtualization technologies is, conceivably, caused by this very tendency to write software that is overfit to other software (the operating system, the standard library [on some OSes emacs has to be forced to link to glibc, because using any other memory allocator causes it to SEGFAULT, and don't get me started on how no two browser-canvases return the same output in different browser _nor_ on the same browser in a different OS]). (Maybe, just as debt keeps the economy from collapsing, technical debt is the only thing that keeps Silicon Valley from collapsing.)

In some ways, coding-LLMs exaggerate this tendency towards overfitting in comical ways, like fun-house mirrors. And now, a single individual, with nothing but a dream, can create technical debt at the same rate as a thousand employee software company could a decade ago. What a time to be alive.

[1]: https://queue.acm.org/detail.cfm?id=1531242


>he likened programming to poetry

This I can definitely relate to.

I don't fully understand the rest, but I'll give it some thought.


You’ve gotta read the code. It doesn’t matter how it got there, but if you don’t fully understand it (which implies reading it), don’t get mad when people object to you pushing slop on them. It’s the equivalent of asking an LLM to write an email for somebody else to read without reading it yourself. It’s basic human trust: of course people get annoyed with you. You’re untrustworthy.

I see this as the same argument as saying GMO label not needed, no need to mention artificial flavours in food, etc.

I mean this in the nicest way possible: the world is just going to insist that AI generated output is marked clearly as AI produced output.

Not sure whether giving a LICENSE even makes sense.


I tried to control LLM output quality by different means, including fuzzing. Had several cases when LLM "cheated" on that too. So, I have my own shades and grades of being sure the code is not BS.

Well, that’s obviously bad.

But once you told it to stop cheating, did it eventually figure it out? I mean, correctly implementing fuzzer support for a project is entirely within the wheelhouse of current models. It’s not rocket science.


This might be true, but we can continue to try and require the communities we have been part of for years to act a certain way regarding disclosures.

If the community majority changes its mind, then so be it. But the fight will continue for quite some time until that is decided.


There never was a cohesive generic open source community. There are no meaningful group norms. This was and always will be a fiction.

I’m tempted to just start putting co-authored-by: Claude in every commit I make, even the ones that I write by hand, just to intentionally alienate people like you.

The best guardrails are linters, autoformatters, type checkers, static analyzers, fuzzers, pre-commit rules, unit tests and coverage requirements, microbenchmarks, etc. If you genuinely care about open source code quality, you should be investing in improving these tools and deploying them in the projects you rely on. If the LLMs are truly writing bad or broken code, it will show up here clearly.

But if you can’t rephrase your criticism of a patch in terms of things flagged by tools like those, and you’re not claiming there’s something architecturally wrong with the way it was designed, you don’t have a criticism at all. You’re just whining.
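As one concrete example of the kind of guardrail listed above, a property-based fuzz target using Go's native fuzzing (Go 1.18+). This is a sketch under assumptions: `reverse` is a hypothetical stand-in for whatever function a patch actually touches, not anything from the project under discussion.

```go
// reverse_fuzz_test.go — run with: go test -fuzz=FuzzReverse
package main

import (
	"testing"
	"unicode/utf8"
)

// reverse returns s with its runes in reverse order (the function
// under test; a stand-in for real patched code).
func reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

// FuzzReverse checks an invariant rather than fixed outputs: reversing
// twice must round-trip any valid UTF-8 input. Code that merely
// memorizes a few examples still has to satisfy the property on every
// input the fuzzer generates.
func FuzzReverse(f *testing.F) {
	f.Add("hello, world")
	f.Fuzz(func(t *testing.T, s string) {
		if !utf8.ValidString(s) {
			t.Skip() // invalid UTF-8 is mangled by the []rune conversion
		}
		if got := reverse(reverse(s)); got != s {
			t.Errorf("round-trip failed: %q -> %q", s, got)
		}
	})
}
```

A failure here is exactly the kind of tool-flagged criticism the comment is asking for: machine-checkable, reproducible, and independent of who (or what) wrote the code.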


> There never was a cohesive generic open source community. There are no meaningful group norms. This was and always will be a fiction.

It's always been a bit splintered, but it was generally composed of 95%+ people who know how to program. That is no longer the case in any sense.

> I’m tempted to just start putting co-authored-by: Claude in every commit I make, even the ones that I write by hand, just to intentionally alienate people like you.

I mean it sounds like you are already using claude for everything so this is probably a bit of a noop lol.

> But if you can’t rephrase your criticism of a patch in terms of things flagged by tools like those, and you’re not claiming there’s something architecturally wrong with the way it was designed, you don’t have a criticism at all. You’re just whining.

No, because doing that requires MORE rigor and work than what an LLM-driven project had put into it. That difference in effort is not tenable: it's shallow work being shown, so it's shallow criticisms thrown at it.

All sense of depth and integrity is gone.


Is that what this was all about? Depth and integrity? Rigor and hard work? Because I thought it was all about writing useful programs for computers.

Yes, it was always about writing useful programs for computers. Which is why people moan about the use of LLMs: because then the writing aspect is gone!

Anyway, this stuff will resolve itself, one way or another.


Pack it in everyone, the “OK for prod use” guy has spoken.

That ship has sailed, man…

No it has not - if it had, there'd be no need to shout down folk who disagree.

Not everyone buys into the inevitabilism. Why should I read code "author" didn't bother to write?


Sorry but these are just not accurate as blanket statements anymore, given how good the models have gotten.

As other similar projects have pointed out, if you have a good test suite and a way for the model to validate its correctness, you can get very good results. And you can continue to iterate, optimize, code review, etc.
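One hedged sketch of what "a good test suite the model can validate against" might look like: a table-driven Go test that pins the expected behavior up front, so each iteration either passes it or doesn't. `slugify` is a hypothetical stand-in for model-generated code, not anything from this project.

```go
// slugify_test.go — the harness the model iterates against: go test ./...
package main

import (
	"strings"
	"testing"
)

// slugify is a stand-in for model-generated code: trim, lowercase,
// and replace spaces with hyphens.
func slugify(s string) string {
	return strings.ReplaceAll(strings.ToLower(strings.TrimSpace(s)), " ", "-")
}

// TestSlugify encodes the spec as data; the model only "passes" when
// every case holds, which is the validation loop described above.
func TestSlugify(t *testing.T) {
	cases := []struct{ in, want string }{
		{"Hello World", "hello-world"},
		{"  Trim Me  ", "trim-me"},
		{"already-slugged", "already-slugged"},
	}
	for _, c := range cases {
		if got := slugify(c.in); got != c.want {
			t.Errorf("slugify(%q) = %q, want %q", c.in, got, c.want)
		}
	}
}
```

The design point is that the table, not the implementation, is the part a human has to get right; the implementation can then be iterated on (by anyone or anything) against it.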


Because OP obviously downplayed this important fact, which typically signals lower-quality, less-tested code.

Because the entire README doesn't even mention it, and it is an important factor in deciding whether it is ready for production use.

I, for one, am definitely not going to use this project for anything serious unless I have thoroughly reviewed the code myself. Prototyping is fine.


[flagged]


Dude, you know you are trolling.

There is a material difference between whether you used VSCode or vim to write the code, and if you personally wrote or reviewed any code at all.


I'm not really trolling. I'm trying to push people to consider that the world is already in a state where "I used AI" is neither binary nor dispositive. I think we're used to a 2023 to mid-2025 framing where outside of some narrow, highly structured cases, the code is garbage.

If that's still true as a binary now, it won't be for long. As the robot likes to say, some of these changes are "in flight".


People should say what models/tools they used and even show the prompts.

Showing the prompts is not feasible when using agentic coding tools. I suppose one could persist all chat logs ever used in the project, but is that even useful?

I think it would be useful. I see lots of comments like "it one-shotted this" and am curious if they just had to write one sentence or many pages of instructions.

"show the prompts"

What would the prompt for this look like?


never mind the fact that the model is constantly reseeding itself against the files it’s reading from your working directory, so the prompts are useless on their own.

AI often produces nonsense that a human wouldn't. If a project was written using AI the chances that it is a useless mess are significantly higher than if it was written by a human.

maintenance burden

I work on a revision control system project, except the merge is CRDT-based. On Feb 22 there was a server break-in (I did not keep unencrypted sources on the client, and server login was YubiKey-only, but that is not a 100% guarantee). I reported the break-in on my Telegram channel that day.

My design docs https://replicated.wiki/blog/partII.html

I used tree-sitter for coarse AST. Some key parts were missing from the server as well, because I expected problems (had lots of adventures in East Asia, evil maids, various other incidents on a regular basis).

When I saw "tree-sitter in go" title, I was very glad at first. Solves some problems for me. Then I saw the full picture.


Wait, are you suggesting that OP broke in to your server and stole code and is republishing it as these repos?

I have questions. Have you reviewed the code here to see if it matches? What, more specifically, do you mean when you say someone broke in? What makes you think that this idea (which is nice but not novel) is worth stealing? If that sounds snarky, it’s not meant to; just trying to understand what’s going on. Why is that more likely than someone using Claude to vibe up some software along the same lines?


1. Just saying, strange coincidence

2. How can we compare Claude's output in a different language?

3. Detecting break-ins and handling evil-maids: unless the trick is already known on the internets, I do not disclose. Odds are not in my favor.

4. Maybe worth, maybe not. I have my adaptations. Trying to make it not worthy of stealing, in fact.


Based on this and your other comments, including the one that’s no longer visible: Please phone a friend. Or find a professional to talk to. I say that with nothing but compassion.

For the people who are downvoting me: I’m being totally sincere. This is not an ad hominem attack. You didn’t see his other comment, it was genuinely concerning.

What comments are strange to you? I'm looking through history and everything is relatively normal.

Also, evil maids, what?

I can't speak for the specificity of parent's "evil maids" phrase but the concept of an "Evil maid" is used in security scenarios.

A maid tends to be an example of a person who's mostly a stranger, but is given unmonitored access to your most private spaces for prolonged periods of time. So they theoretically become a good vector for a malicious actor to say "hey I'll give you $$ if you just plug in this USB drive in his bedroom laptop next time you're cleaning his house" - it's often used in the scenario of "ok what if someone has physical access to your resource for a prolonged period of time without you noticing? what are your protections there?"

I wonder if that's what OP meant? :-)


"Evil maids" (example): I put my laptop into a safe, seal the safe, seal the room, go to breakfast. On return, I see there was cleaning (not the usual time, I know the hotel), the cleaner looks strangely confused, the seal on the safe is detached (that is often done by applying ice; adhesive hardens, seal goes off). This level of paranoia was not my norm. Had to learn these tricks cause problems happened (repeatedly). In fact, I frequented that hotel, knew customs and the staff, so noticed irregularities.

And what does this have to do with the price of tea in China?

Ah right, thanks! But it seems he meant literal evil maids. Which I guess count as the figurative kind too.

LMFAO what is this

I think some people are missing the sarcasm here

at the moment it's impossible to distinguish between AI boosters who really believe that Claude is nearly AGI and jokes about them

Poe's law?

In theory, comments on Hacker News should advance discussion and meet a certain quality bar lest they be downvoted to make room for the ones that meet the criteria. I am not sure if this ever was true in practice, it certainly seems to have waned in the years I have been a reader of this forum (see one of the many pelican on a bike comments on any AI model release thread), but I'd expect some people still try to vote with this in mind.

Being sarcastic doesn't lower the bar a comment has to clear to avoid being downvoted. So before concluding that people missed the sarcasm, first consider whether the comment adds to the discussion.


I only understood it after reading some of co_king_5’s other comments. This is Poe’s law in action. I know several people who converted into AI coding cultists and they say the same things but seriously. Curiously none of them were coders before AI.

[dead]


Graduates of the Zoolander Center for Kids Who Can't Read Good and Who Wanna Learn to Do Other Stuff Good Too?

God, Cloudflare's blog quality has fallen off a fuckin' cliff, eh? It used to be so good; now it's just LLM slop, both the content and the actual writing.

Gotta hand it to 'em though - posting this less than a month after the Matrix boondoggle certainly is, uh, audacious.

I'm actually surprised that this is the only hit I get when I search these comments for "matrix".

This is interesting to me on both a technical level and a social-political level. I wonder what impact "AI-washing" will have on licensing, for example.

The core network products seem to be having a run of downtime issues too. Maybe they should focus on their homework before going out to play with the AI kids.

Yup. This was so jarring to read. Shame.

> Nothing to do with AI, or even the capabilities of AI. The person intentionally didn't put in much effort.

The part to do with AI is that it was not able to deliver a comprehensive and bug-free driver with minimal effort from the human.

That is the point.


Why is that the metric? In my job, I get drafts from junior employees that require major revisions, often rewriting significant parts. It’s still faster to have someone take the first pass. Why can’t AI coding be used the same way? Especially if AIs are capable of following your own style and design choices, as well as testing code against a test suite, why isn’t it easier to start from a kind of working baseline than to build from scratch?

Did you hire juniors just to get drafts? That seems pretty inefficient.

I'm a lawyer, so a bunch of work--factual analysis, legal research, etc.--goes into the draft that isn't just the words on the page. At the same time, the work product is meant to persuade human readers, so a lot of work goes into making the words on the page perfect. (Perhaps past the point of diminishing returns, but companies are willing to pay for that incremental edge when the stakes are high.)

Programming is different in that you don't usually have senior engineers rewrite code written by junior engineers. On the other hand, look at how the Linux kernel is developed. You have Linus at the top, then subsystem maintainers vetting patches. The companies submitting patches presumably have layers of reviewers as well. Why couldn't you automate the lower layers of that process? Instead of having 5 junior people, maybe you have 2 somewhat more senior people leveraging AI.

This is probably not sustainable unless the AI can eventually do the work the more senior people are doing. But that probably doesn't matter in the short term for the market.


Maybe because code is different. Software is a recipe that an autonomous machine can follow (very fast and repeatedly).

But the whole goal of software engineering is not about getting the recipe to the machine. That’s quite easy. It’s about writing the correct recipe so that the output is what’s expected. It’s also about communicating the recipe to fellow developers (sharing knowledge).

But we are not developing recipe that much today. Instead we’ve built enough abstractions that we’re developing recipes of recipes. There’s a lot of indirection between what our recipe says and the final product. While we can be more creative, the failure mode has also increased. But the cost of physically writing a recipe has gone down a lot.

So what matters today is having a good understanding of the tower of abstractions, at least the part that is useful for a project. But you have to be hands-on with it to discern the links between each layer and each concept. Because each little one matters. Or you delegate and choose to trust someone else.

Trusting AI is trusting that it can maintain such consistent models so that it produces the expected output. And we all know that they don’t.


I’m not able to provide a comprehensive bug free driver.

There will absolutely be systems in the future that are entirely LLM-written. Honestly, they will probably be better quality than the standard offshore-team output.

But let's all hope these are not vital systems we end up depending on.


The failure mode, as foretold by James Cameron, is in handing control of the nuclear launch systems over to an AI. Let's all really hope not!

The influx of these sorts of posts has largely pushed me out of all my previous "programmer" online spots.

I have zero interest in seeing something that Claude emitted that the author could never in a million years have written themselves.

It's baffling to me that these people think anyone cares about what their Claude prompt output.


Do you choose your food/car/housing/etc based on virtue signalling rather than utility as well?

Well, I generally look for signals that a place I might eat at knows how to cook. If I walk in and I see rats on the floor and there's no kitchen, I probably won't be interested.

I apply the same logic to software.


Do you choose your software exclusively based on virtue signalling about which editor they used to write it in?
