The author of this article gives a more balanced POV than mine. I think most (maybe overwhelming majority) of publicized vibe coding projects are complete technical virtue signaling.
With agentic loops, you specify what you want and the agent keeps doing stuff until 'it works'. Then you publish. It takes less time and attention, so projects end up less thought out and less tested as well.
In the end, I think it’s not about how a project was created. But how much passion and dedication went into it. It’s just that the bar got lowered.
Every market can, on some level, be analogized to a simpler, more familiar one.
One of the common examples in management books is the signage industry. You can have custom logos molded, extruded, embossed, carved, or at least printed onto a large, professional-looking billboard- or marquee-size sign. You can have a video billboard. You can have a vacuum-formed plastic sign rotating on top of a pole. At the end of the day, though, your barrier to entry is a teenager with a piece of posterboard and some felt-tipped markers.
What has happened is that as the coding part has become easier, the barrier to entry has lowered. There are still parts of the market for the bespoke code running in as little memory and as few CPU cycles as possible, with the QA needed for life-critical reliability. There’s business-critical code. There’s code reliable enough for amusement. But the bottom of the market keeps moving lower. As that happens, people with less skill and less dedication can make something temporary or utilitarian, but it’s not going to compete where people have the budget to do it the higher-quality way.
How much an LLM or any other sort of agent helps at the higher ends of the market is the only open question. The bottom of the market will almost certainly be coded with very little skilled human input.
There's definitely a trend towards flashy projects prioritizing style over substance, which can overshadow more practical applications. It's easy to get caught up in the hype and overlook the real problems that need solving.
I think it's often genuine excitement to share a thing - without quite processing that anybody with the same idea can now build it (for simple- to mid-complexity projects).
The novelty of "new thing! That would have been incredibly hard a decade ago!" hasn't worn off yet.
This isn't the first time something like this has happened.
I would imagine that people had similar thoughts about the first photographs, when previously the only way to capture an image of something was via painting or woodcutting.
When movies first came out, they would film random stuff because it was cool to see a train moving directly at you. The novelty didn't wear off for years.
There was something someone said in a comment here, years and years ago (pre AI), which has stuck with me.
Paraphrased, "There's basically no business in the Western world that wouldn't come out ahead with a competent software engineer working for $15 an hour".
Once agents, or now claws I guess, get another year of development under them they will be everywhere. People will have the novelty of "make me a website. Make it look like this. Make it so the customer gets notifications based on X Y and Z. Use my security cam footage to track the customer's object to give them status updates." And so on.
AI may or may not push the frontier of knowledge, TBD, but what it will absolutely do is pull up the baseline floor for everybody to a higher level of technical implementation.
And the explosion in software produced with AI by lay-people will mean that those with offensive security skills, who can crack and exploit software systems, will have incredible power over others.
I think that when a software system is used by more people and has more eyes on it, it's more likely to have its security flaws be found and fixed. Then all the users will benefit from the fix.
The more that software is fragmented into bespoke applications used by small numbers of people, the less people benefit from security network effects.
I believe the security vulnerability issues will be addressed by companies using cloud-based vibe-coding platforms, or by an AI security-auditor agent that runs through the codebase and flags security issues.
Sure it is. AI software development is here. It's not good enough for everything, but it's good enough for a majority of the changes made by most software engineers.
That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
You can always find some person saying that it'll destroy all jobs in a year, or make us all rich in a year, or whatever, but your cynicism blinds you to the actual advances being made. There is an endless supply of new goalpost positions, they will never all be met, and an endless supply of charlatans claiming unrealistic futures. Don't confuse that with "and therefore results do not exist".
No, it isn't. There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
Mixing the two up is how we get a massive company like Microsoft continually producing such atrocious software updates that destroy hardware or cause BSODs on their flagship operating system.
That's not replacing software development. That's dysfunction masquerading as capability.
And none of what I said is goalpost moving. They are the goalposts constantly made by the AI industry and their hype-men. The very premise of replacing a significant amount of human labor underlies the exorbitant valuation AI has been given in the market.
It appears that your understanding of AI code generation reflects the state of things 1-2 years ago. In that case, of course what people are describing as present reality feels 1-2 years away.
> There is a gigantic chasm of difference between "80% of code they produce could be created by AI" and "80% of commits they produce could be created by AI".
This is exactly the goalpost moving I am talking about. I said 80% of code could be AI-written, you agreed, and followed up with "oh but it doesn't matter because now we're measuring by % of commits".
> That's now. Right now, the tooling exists so that for >80% of software devs, 80% of the code they produce could be created by AI rather than by hand.
Technically 100% of the code they could produce could be created by a ton of very specific AI prompts. At that level of control it would be slower than typing the code out though.
Just throwing out random numbers like this is complete nonsense, since there are about a million factors which determine the effectiveness of an LLM at generating code for a specific use case. And it also depends on what you consider producing by hand versus LLM output. Etc.
Today I fed to Opus 4.6 five screenshots with annotations from the client and told it to implement the changes. Then told it to generate real specs, which it did. I never even looked at the screenshots, I just checked and tested against the generated specs. Client was happy.
I have a similar feeling about people who upload their AI art to sites like danbooru. I guess I can understand making it for yourself, but why do you think others want to see it?
xkcd turned stick figure drawings into an art form. sometimes it is not about how something was created, but about the story being told.
some people build apps to solve a problem. why should they not share how they solved that problem?
i have written a blog post about a one line command that solves an interesting problem for me. for any experienced sysadmin that's just like a finger painting.
do we really need to argue if i should have written that post or not?
Even if status-signaling through this vector loses its lustre, AI slop (agentic or otherwise) will not, and some of that slop will take on the guise of "vibe-coding" projects.
I mean I guess, but you and I are not going to see eye to eye on so-called "virtue signaling" so it isn't the burn you think it is. Virtue signaling is just a fancy term for shaming, right? Shaming serves society by moderating behavior that is unethical or immoral but not illegal. It serves a real purpose to keep us from sliding into a world where we act on our base instincts. Sure people can be a dick about it and be smug and superior, but it is really easy to roll your eyes and move on instead of getting triggered by it.
I spent a couple of weeks working on the line in a restaurant over the break. It was the exact opposite of my tech job and obviously doesn't pay very well, but could be a good thing to try for a while and see how your brain changes
I have a few of these posts I've written coming out over the next few months that I want people to discuss. Would you prefer I add a disclaimer at the top? Easy enough to add
Personally, I'd say you should definitely add a disclaimer; otherwise the discussion that gains traction will likely be about whether the information was intentionally concealed.
I will add them in subsequent posts. The information is all over our site (they are on our portfolio page, and we wrote about them when we invested) but I am happy to reiterate on each post for each company
The resistance to a seemingly obvious correction seems a little shady. Expecting people to dig around a site to find out your position of financial interest is not reasonable.
Such behavior is disappointing, especially from a top HN account holder.
It's too bad, because otherwise there are some interesting ideas, but TFA is lacking in good faith.
You're nitpicking the term "journalism" here, but even in casual conversation if a friend went on and on about how great a company was and then I later found out that they:
* Got paid a referral fee if I signed up
* Owned shares in the company
* Was even roommates with the founders/CEO but failed to mention it
I would trust that person less going forward.
If you want to be perceived as trustworthy, then you shouldn't say things that you have a hidden interest in saying.