mattbuilds's comments

Got any evidence of this, or is it just vibes-based?


Unsure why the status quo needs evidence but remote doesn't, but which part of my reasoning do you require evidence to believe?


That’s a false equivalence. Sorry that some of us think companies should actually be responsible for the things they produce.


I’m sorry, but the difficult part of making games isn’t the coding; it’s making something that is appealing and enjoyable to play. An LLM isn’t going to help with that at all. How is it going to know if something is fun? That’s the real work.

Also, the idea that a dev who could make a game in 24 hours would create something professional and polished in 3 days is a joke. The answer to “where are all the games?” is simple: LLMs don’t actually make a huge impact on making a real game.


Easy! Ask the LLM to play the game, and if it’s not fun, to try again. Just like when you ask it to compile the code and, if it fails, to try again.

…Joking…. For now


This is almost on the money. Making something fun often requires coding, art, sound, etc. to bring the fun out. So in fact coding is part of the difficult part, along with all the other stuff needed for something to be fun. IMO, tooling like UE Blueprints and visual scripting belongs in the coding bucket.


I’m not saying coding is easy, but when it comes to games it is the easy part. Lots of people can code; very few can make something actually fun. Knowing how to code (or how to use an engine/Blueprints/visual scripting) is just the start. It’s like making films: everyone can record some video on their phone, but it takes much more than that to make something people want to watch.


That analogy is more accurate for LLM vibe coding than real programming, which I think proves my point. Not everyone can code. Actually code. Ideas are bountiful compared to the skill required to bring them into reality.


I don't dismiss or advocate for AI/LLMs; I just go by what I actually see happening, which doesn't appear revolutionary to me. I've spent some time trying to integrate them into my workflow, and while I see some use cases here and there, overall they just haven't made a huge impact for me. Maybe it's a skill issue, but I have always been pretty effective as a dev, and what LLMs solve has never been the difficult or time-consuming part of creating software. Of course I could be wrong and they will change everything, but I want to actually see some evidence of that before declaring this the most impactful technology of the last 100 years. My feeling is that LLMs make the easy stuff easier, the medium stuff slightly more difficult, and the hard stuff impossible. Then again, I feel that way about a lot of the technology that comes along, so I could just be missing the mark.


> I have always been pretty effective as a dev

> LLMs make the easy stuff easier

I think this is the key observation right now. If you're an expert who isn't doing a lot of boilerplate, LLMs don't have much value for you yet. But they can acceptably automate a sizeable number of entry-level jobs, and if those get flushed out, that's an issue, as not everyone is going to be a high-level expert.

Long-term, the issue is we don't know where the ceiling is. Just because OpenAI is faltering doesn't mean we've hit that ceiling yet. People talk about the scaling laws as a theoretical boundary, but they're actually the opposite: they show that the performance curve could just keep going up even with brute force, which has never happened before in the history of statistics. We're in uncharted territory now, so there's good reason to keep an eye on it.


True, but if you are building it for yourself, then you will still have something useful in the end. Chances are you also enjoyed, or took satisfaction in, the process of building it. Also, if it is truly a passion project and not just an attempt to make money, it’s probably more interesting than most of the stuff shared.


Got any evidence on that, or is it just “vibes”? I have my doubts that AI tools are helping good programmers much at all, never mind “running circles” around others.


I don't know about "running circles" but they seem to help with mundane/repetitive tasks. As in, LLMs provide greater than zero benefit, even to experienced programmers.

My success ratio still isn't very high, but for certain easy tasks, I'll let an LLM take a crack at it.


That is not a moral obligation; it is in fact the opposite. It is a lie that people tell themselves and the world to allow themselves to make immoral decisions for their own benefit.

I’m not saying running a company is easy, and I know that many gray areas exist in the decision making. I do think companies can exist, profit, and be a net good for the world. However, we need to remove the notion that the duty to shareholder profits is a moral duty. It’s a coward’s way out of having to make actual difficult choices. It’s one of those things that sounds great exactly because it allows you to do horrible things with no responsibility. It creates a system where you offload the effort and weight of your decisions: as long as you’re acting in the interest of shareholders, you are in the clear. That’s a dangerous concept and the opposite of morality.


It is the fiduciary duty of the CEO to do what’s in the best interest of shareholders.

In a working system, it should be the government’s responsibility to limit what a company can do.


According to your logic, a CEO should attempt to destabilize and influence the government so they can maximize shareholder value. And guess what: that is exactly what happens in reality. You can't just simplify reality into rules like this, because it leads to people using those rules as an excuse to skirt responsibility and avoid making actual difficult decisions.


Correct, and this is why regulatory capture is the phase after market capture: the transition into a legal monopoly.


"In the best interest of the shareholders" might reasonably be interpreted as, say, not destroying the biosphere. Fiduciary duty is certainly not "maximise profits whatever the consequences".


I would recommend reading about the Friedman Doctrine and the time period where it came about. It is only a theory and not necessarily a good one.


Unless Saint Friedman got his "doctrine" from some higher power, it's just the oligarchy's first commandment.

From the first line of the GP's Wikipedia reference:

"The Friedman doctrine, also called shareholder theory, is a normative theory of business ethics advanced by economist Milton Friedman that holds that the social responsibility of business is to increase its profits."


Is it really the best approach, though, if we sink all this capital into it and it can never achieve AGI? It’s wildly expensive, and if it doesn’t deliver on all the lofty promises, it will be a large waste of resources IMO. I do think LLMs have use cases, but when I look at the current AI hype, the spend doesn’t match up with the returns. I think AI could achieve this, but not with a brute-force approach.


There's an even more fundamental question before getting there: how are we defining AGI?

OpenAI defines it based on the economic value of output relative to humans. Historically it had a much less financially oriented definition and general expectation.


You really can't take anything OpenAI says about this kind of thing seriously at this point. It's all self-serving.


It's still important, though; they are the ones many expect to lead the industry (whether that's an accurate expectation is surely up for debate).


The market will sort that out, just like it did with dotcom or tulip madness.

Another big pushback is copyrighted content: without a proper revenue model, how do you pay for that?

It will also restrict what can be "learned". There are already lawsuits and allegations of using pirated books, etc.


I'll be surprised if anything meaningful comes of those issues in the end.

Copyright issues here feel very similar to claims against Microsoft in the 80s and 90s.


Just to further your point: we are in a thread about Kubrick, who did numerous book adaptations, including Lolita, Dr. Strangelove, The Shining, and A Clockwork Orange, and that's just off the top of my head. Tons of directors adapt novels. Bringing the story to the screen is the skill.


As an aside, 2001 is an interesting case as it was produced concurrently with the novel. It's clearly not an adaptation, but I wouldn't say it's clearly original material either.


It’s funny, because in my many years of development I don’t think I’ve ever encountered a “mess of shell scripts” that was difficult to maintain. They were clear, did their job, and if they needed to be replaced it was usually simple and straightforward.

Can’t say the same for whenever the new abstraction of the day comes along. What the OP describes is exactly my experience: the abstractions get picked not because they are best, but because they reduce liability.


Hello. I have found the mess of shell scripts. Please don't do this.

I was able to deal with the weird Skaffold mess by getting rid of it and replacing it with Argo CD. I was able to get rid of Jenkins by migrating to GitHub Actions. I have yet to replace the magic servers with magic bash scripts; they take just enough effort that I can't spend the time.

Use a tool I can Google. If your bash script is really that straightforward, takes you from standard A to standard B, and is in version control, then bash is AMAZING. Please don't shove a random script that does a random thing onto a random server.
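
For what it's worth, a minimal sketch of the kind of script I mean (hypothetical name and field names, but the shape is the point): short, one standard-to-standard transformation, checked into version control.

  #!/usr/bin/env bash
  # json-logs-to-csv.sh: read JSON log lines on stdin, write CSV on stdout
  # (assumes each line is an object with ts/level/msg keys -- hypothetical schema)
  set -euo pipefail   # exit on errors, unset variables, and pipeline failures
  jq -r '[.ts, .level, .msg] | @csv'

If it grows much beyond that, it's probably time for a real language.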


Bash is good but can grow out of control. The problem is solo engineers and managers who push/approve 500+ line bash scripts that do way too much. A good engineer will say it's getting too complicated and reimplement it in Python.


Wasn't there a rule about that?

Something like "in software development the only solution that sticks is the bad one, because the good ones will keep getting replaced until it's so bad, nobody can replace it anymore"


i have encountered messes of shell scripts that were difficult to maintain; in my first sysadmin job in 01996 i inherited a version control system written as a bunch of csh scripts, built on top of rcs

but they were messy not because they lacked 'abstractions' but because they had far too many

i think shell scripts are significantly more bug-prone per line than programs in most other programming languages, but if the choice is hundreds of thousands of lines in an external dependency, or a ten-line or hundred-line shell script, it's easy for the shell script to be safer


If it was in RCS, then you could directly move the archives under a CVSROOT and use them natively.

CVS had been out since Brian Berliner's version of 1989.

I actually moved a PVCS archive into RCS->CVS this way, and I'm still using it.
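
For anyone following along, a minimal sketch of that kind of move (paths are hypothetical; it works because CVS stores its history as ordinary RCS ,v files):

  $ cvs -d /var/cvs init                # create the CVSROOT if it doesn't exist
  $ mkdir /var/cvs/myproject            # a new module is just a directory
  $ cp RCS/*,v /var/cvs/myproject/      # the RCS archives become CVS history as-is
  $ cvs -d /var/cvs checkout myproject  # full history intact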


that version control system provided a number of facilities cvs didn't (locking, and also a certain degree of integration with our build system permitting the various developers to only recompile the part of the system they were working on, which was important because recompiling the whole thing usually took me about a week, once a month), but it had never actually occurred to me that turning an rcs repository into a cvs repository like that was a possibility. also i never realized pvcs used rcs under the covers. thank you very much


PVCS did not use the RCS format, but the RPM distribution included a perl script to convert the archives.

  $ rpm -ql cvs | grep pvcs
  /usr/share/cvs/contrib/pvcs2rcs


ooh. that would have been very useful two jobs later when i got stuck with pvcs


Shell seems great until you're tens of lines in, googling every other line of obscure, error-prone syntax.


… or maybe you are not proficient at shell scripting? I never had this issue, including with large projects written in Tcl, bash, or Perl in the '90s, when doing so was more normal.

The modern answer seems to be some kind of DSL with YAML syntax mixed with Unix (and thus bash) snippets, which are often incredibly verbose and definitely not easier to read than a well written bash script. The only thing I think of when I see those great solutions is: another instance of Greenspun's tenth rule in action.


Bash and other sh-related approaches have a lot of "footguns". Python, PowerShell, or even C++ is often easier to read and follow.

> are often incredibly verbose and definitely not easier to read than a well written bash script

Define "well written" -- now we're getting into No True Scotsman territory.

Bash is fine for what it was and what it did, and I'm glad to know enough sed and awk to be dangerous, but it's a PITA unless we're forced to use it.
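
To make the footgun point concrete, the classic example is unquoted expansion (hypothetical variable, real word-splitting behavior):

  dir="my files"
  rm -rf $dir      # unquoted: word-splits into two arguments, "my" and "files"
  rm -rf "$dir"    # quoted: acts on the single directory "my files"

Quoting rules like this are exactly the kind of thing you end up googling every other line.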

