Hacker News | raesene9's comments

There's a massive difference between launching a piece of software and launching a successful business.

Over the last couple of months I've seen a load of new "product launches" in my niche, but when you look at them they're largely vibecoded and don't show deep understanding or signs of sustainability, so it's pretty likely you'll never see them become successful businesses.

Looking at some of the related places like /r/sideproject/, there are a lot of releases, and I'd be willing to suggest that most of them are built with LLMs.


Then, respectfully, what is the point? Does the trillions-of-dollars AI industry exist to support a few hobbyists building niche products to scratch their own itch? I thought the promise here is increased productivity, presumably in the economic sense.

There seems to be a lot of hype, and has been for years, but I’m not seeing it materialize as actual economic output. Surely by now there should be lots of businesses springing up to capture all of this value created by vibecoded software.


Whilst I have no special knowledge, my expectation is it'll do both. If you reduce the barriers to coding you'll get more code, both at the hobbyist/one-person level and also at the large corp level.

Whether that translates into more value for those larger corps is the trillion-dollar question :) Writing code is a small part of the process of finding and shipping features that customers want, so it remains to be seen how much of the gain from LLM tools translates into shipped value.

I think it's fairly widely accepted that from a financial standpoint we're in an AI/LLM bubble. There has been more investment than we're likely to see financial return on, but it's impossible to predict to what degree (if you can predict that, and the timing, you can make a lot of money!!)


I don't think the hobbyist interest will go away, but I can see what's happening affecting businesses that use software.

For most businesses, software is just a means to an end; they don't really care how high quality and thoughtful the systems they use are (e.g. look at any piece of "enterprise" software).

What LLMs have done is make it much, much easier for orgs to launch new features and services, both internally and externally, without necessarily understanding the complexity involved.

For me, that's what this post tapped into. Many orgs already have more complexity than they can reasonably handle, and massively accelerating development is not going to make that problem better :)


They may have used ChatGPT or similar to help with the prose but the technical content (as discussed elsewhere on this page) is good, so does it really matter if they did?

The problem with AI slop (to me) is more that the technical content is not good or is entirely the product of the LLM. At that point, there's no point in me reading it, I can just prompt the question if I'm interested.

This is original research which wasn't public before, so the value is still there, and whichever combination of human and LLM generated it didn't do a bad job.


I've just been looking into this, as I've got quite a lot of older hardware lying around that'll be fine for running some websites.

My ISP has a static IP option for £5/month, but I reckon I can save £30+/month on server costs even before any price rises.

Of course it does mean I have to do my own sysadmin work, but a combination of my general knowledge and an LLM should make that relatively easy.
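A quick back-of-the-envelope using the figures above (the £5/month static IP against £30/month of server costs no longer paid out):

```python
# Rough self-hosting saving, using the figures mentioned above.
static_ip_cost = 5.00    # GBP/month for the ISP static IP option
server_savings = 30.00   # GBP/month no longer spent on hosted servers

net_monthly_saving = server_savings - static_ip_cost
print(f"Net saving: £{net_monthly_saving:.2f}/month, "
      f"£{net_monthly_saving * 12:.2f}/year")
```

So roughly £25/month, or £300/year, before accounting for electricity.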


Watch out for the energy usage. What's electricity now, 27p/kWh?


Yeah, it definitely makes sense to use lower-powered kit. Fortunately I've got a little stack of Raspberry Pis lying around from projects I always meant to do but didn't get round to :)
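To put numbers on the energy point: at the 27p/kWh mentioned above, the running cost depends heavily on the hardware's draw. The wattages below are assumptions (a Raspberry Pi idles around 3–7 W; an old desktop can easily pull 60 W or more):

```python
# Rough 24/7 running-cost comparison at 27p/kWh.
# Wattages are assumed, not measured.
TARIFF = 0.27            # GBP per kWh
HOURS_PER_MONTH = 24 * 30

def monthly_cost(watts):
    kwh = watts / 1000 * HOURS_PER_MONTH
    return kwh * TARIFF

print(f"Pi (~5 W):       £{monthly_cost(5):.2f}/month")
print(f"Desktop (~60 W): £{monthly_cost(60):.2f}/month")
```

Under those assumptions a Pi costs about £1/month to run, while an old desktop would eat a large chunk of any hosting savings.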


I'm going to guess the key differentiator here is "major ISPs". I can see the page fine using a Zen Internet connection, but from my phone, which uses EE, it's blocked.


I can access it from both my mobile and fiber connections, different ISPs. I'm with smaller players so maybe that's it.


I think avoiding filling the context with too much pattern information is partly what agent skills are for: each skill has a set of triggers, and the main body of the skill is only loaded into context if a trigger is hit.

You could still overload with too many skills but it helps at least.
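The trigger-gated loading idea can be sketched roughly like this. To be clear, this is an illustration of the concept only, not any tool's actual implementation; the skill names, triggers, and bodies are all made up:

```python
# Sketch of trigger-gated skill loading: only each skill's short trigger
# list is consulted up front; the full body enters context on demand.
skills = {
    "pdf-extraction": {
        "triggers": ["pdf", "extract text"],
        "body": "long instructions for PDF handling",
    },
    "k8s-audit": {
        "triggers": ["kubernetes", "rbac"],
        "body": "long instructions for auditing clusters",
    },
}

def context_for(user_message):
    """Return only the skill bodies whose triggers match the message."""
    msg = user_message.lower()
    return [
        s["body"]
        for s in skills.values()
        if any(t in msg for t in s["triggers"])
    ]

# Only the matching skill's body is loaded into context.
print(context_for("please audit the RBAC rules in my kubernetes cluster"))
```

The failure mode the comment mentions still applies: with enough skills, the trigger descriptions themselves start to take up meaningful context.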


Of course it depends on exactly what you're using Claude Code for, but if your use case involves cloning repos and then running Claude Code on them, I would definitely recommend isolating it (the same goes for other similar tools).

There are loads of ways that a repository owner can get an LLM agent to execute code on users' machines, so it's not a good plan to let them run on your main laptop/desktop.

Personally, my approach has been to put all my agents in a dedicated VM and then provide them with a scratch test server with nothing on it when they need to do something that requires bare metal.


In what situations does it require bare metal?


In my case I was using Claude Code to build a PoC of a Firecracker-backed virtualization solution, so bare metal was needed for nested virtualization support.


Firecracker can solve the kind of problems where you want more isolation than Docker provides, and it's pretty performant.

There's not a tonne of tooling for that use case right now, although it's not too hard to put together; I vibe-coded something that works for my use case fairly quickly (Claude Code + Opus 4.5 seemed to understand what was needed).
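For a feel of why the tooling gap is fillable: Firecracker is driven by a small REST API over a Unix socket, so configuring a microVM is just a handful of PUT requests. A sketch of the payloads involved, with placeholder paths (the endpoints are Firecracker's; the kernel and rootfs paths here are made up):

```python
import json

# Sketch of the per-VM configuration Firecracker takes over its API socket:
# PUT requests to /machine-config, /boot-source, and /drives/{id}, then a
# PUT to /actions to start the instance. Paths on host are placeholders.
vm_config = {
    "/machine-config": {"vcpu_count": 2, "mem_size_mib": 1024},
    "/boot-source": {
        "kernel_image_path": "/var/lib/fc/vmlinux",
        "boot_args": "console=ttyS0 reboot=k panic=1",
    },
    "/drives/rootfs": {
        "drive_id": "rootfs",
        "path_on_host": "/var/lib/fc/rootfs.ext4",
        "is_root_device": True,
        "is_read_only": False,
    },
    "/actions": {"action_type": "InstanceStart"},
}
print(json.dumps(vm_config, indent=2))
```

A thin wrapper that sends these over the socket and manages rootfs images per VM covers a surprising amount of the use case.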


Reading the article, it seemed to me that both the professor and the students were interested in the material being taught and therefore actively wanted to learn it, so using an LLM isn't the best tactic.

My feeling is that for many/most students, getting a great understanding of the course material isn't the primary goal, passing the course so they can get a good job is the primary goal. For this group using LLMs makes a lot of sense.

I know that when I was a student doing a course I was not particularly interested in, because my parents/school told me it was the right thing to do, if LLMs had been around I absolutely would have used them :).


Yeah, they definitely can be true (IME): the quality of the output differs massively depending on how LLMs are used.

For example if you just ask an LLM in a browser with no tool use to "find a vulnerability in this program", it'll likely give you something but it is very likely to be hallucinated or irrelevant.

However if you use the same LLM model via an agent, and provide it with concrete guidance on how to test its success, and the environment needed to prove that success, you are much more likely to get a good result.

It's like with Claude Code: if you don't provide a test environment, it will often make mistakes in the coding and tell you all is well, but if you provide a testing loop it'll iterate until it actually works.
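The testing loop described above boils down to something like the sketch below. The `run_tests` and `attempt_fix` callables are stand-ins for whatever harness and agent you actually use:

```python
# Sketch of an agent feedback loop: run the tests, and on failure hand
# the log back to the agent and retry, up to a cap. The callables are
# placeholders for a real test harness and a real agent call.
def iterate_until_green(run_tests, attempt_fix, max_attempts=5):
    """run_tests() -> (passed, log); attempt_fix(log) edits the code."""
    for attempt in range(1, max_attempts + 1):
        passed, log = run_tests()
        if passed:
            return attempt  # green: report how many attempts it took
        attempt_fix(log)    # feed the failure output back to the agent
    raise RuntimeError(f"still failing after {max_attempts} attempts")

# Toy stand-in: the "tests" pass on the third run.
state = {"runs": 0}
def fake_tests():
    state["runs"] += 1
    return state["runs"] >= 3, "AssertionError in test_foo"

attempts = iterate_until_green(fake_tests, lambda log: None)
print(attempts)  # 3
```

The cap matters in practice: without it, an agent chasing a flaky or impossible test can loop indefinitely.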

