cyanydeez's comments | Hacker News

It's a billionaire bubble-making machine

GIGOaaS

...is it though? Fundamentally, these are statistical models with harnesses that try to conform them to deterministic expectations via narrow goal massaging.

They're not improving on the underlying technology, just iterating on the massaging and perhaps improving data accuracy, if at all. It's still a mishmash of code and cribbed sci-fi stories. So, of course it's going to hit loops, because it's not fundamentally conscious.
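To make the "harness" framing concrete, here's a minimal sketch of the pattern, with hypothetical `sample` and `validate` callables standing in for the model API and the narrow goal check; the model stays nondeterministic, and the harness just resamples until a deterministic acceptance test passes:

    import json

    def conform(sample, validate, prompt, max_tries=5):
        # sample: callable prompt -> str (stand-in for an LLM API call)
        # validate: callable parsed -> bool (the narrow goal check)
        for _ in range(max_tries):
            raw = sample(prompt)
            try:
                parsed = json.loads(raw)   # demand structured output
            except json.JSONDecodeError:
                continue                   # malformed output: resample
            if validate(parsed):           # deterministic acceptance test
                return parsed
        raise RuntimeError("model never met the deterministic expectation")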


I think what's bewildering is the usual hypemongers promising (threatening) to replace entire categories of workers with this type of dogshit. As another commenter mentioned, most large employers are overstaffed by 2 to 3x, so AI is mostly an excuse for investors not to get too worried about staffing cuts. The idea that Marc is blown away by this type of nonsense is indicative only of the types of people he surrounds himself with.

What's also bewildering is the complete opposite end of the spectrum: calling something "dogshit" when it is quite obviously a very powerful tool. It won't replace workers, but it will make those workers more productive. You don't need to vibe-code to be able to do more work in the same amount of time with the help of an LLM coding agent.

> Fundamentally, these are statistical models

> So, of course it's going to hit loops, because it's not fundamentally conscious.

Wait, I was told that these are superintelligent agents with sophisticated reasoning skills, and that AGI is either here or right around the corner. Are you saying that's wrong?

Surely they can answer a simple question correctly. Just look at their ARC-AGI scores, and all the other benchmarks!


We made these "unbeatable" tests for AI, then told some of the smartest engineering teams on the planet that they could present a solution in a black box, without explaining whether they cheated, but that if they won they'd get amazing headlines and keep their jobs and funding.

Somehow they beat the score in the same year; it's crazy! No one could have seen this coming. And please do not test it at home to see if you get the same results: it gets embarrassed outside of our office space.


The complete lack of skepticism in the AI space is sickening. Are all economic bubbles this annoying?

Security through an obscurely programmed model is a new paradigm, I suppose.

Do you have the email for your auditor? I'd like to know if this is legit.

Good abstractions only get you easy wins for some percentage of the desirable tasks. They never guarantee 100% edge-case coverage unless the problem is trivial.

Choosing wrong means huge tech debt. Choosing right just means most of your code will be the happy path, and a few parts will need escape hatches. Not because of the abstraction, but because the target problem shifts uncontrollably, and because the problems you're solving typically require multiple abstractions that, even in the best case, are going to meet at the edges.
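As a toy illustration of the escape-hatch pattern (names are made up; this is just the shape, not anyone's actual API):

    class KeyValueStore:
        # The abstraction: covers the happy path of get/put by key.
        def __init__(self, backend):
            self._backend = backend      # e.g. a plain dict, a cache client, ...

        def put(self, key, value):
            self._backend[key] = value

        def get(self, key, default=None):
            return self._backend.get(key, default)

        # The escape hatch: hand back the raw backend for the few callers
        # whose needs the abstraction doesn't cover (scans, TTLs, batch ops).
        def unwrap(self):
            return self._backend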


Yeah, the escape-hatch accumulation is the real problem: it starts as like 2 edge cases, then just grows. I think the worst outcome is when you realize 6 months in that the abstraction was kind of wrong for your use case; now every new feature is fighting the grain of the code AND you've got a pile of special cases on top. At some point, maintaining the abstraction layer itself becomes the work.

The base model is content. If it reads too much, it becomes the content.

What you want is a harness that continually inserts file portions until a sufficiently bright light bulb goes off.

When they say agentic AI, IT'S BASICALLY:

    <command><content-chunk-1/></command>

It's the ugliest string-mashing, nondeterministic garbage; the bearded masters would facepalm.
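Roughly what that string mashing looks like in practice (a sketch; `ask` is a hypothetical model call, and the "ANSWER:" sentinel is an invented light-bulb heuristic):

    def agentic_grep(ask, files, question, chunk_size=4000):
        # Keep splicing file chunks into the prompt until the model
        # claims it found an answer. Pure string mashing, no structure.
        context = []
        for path in files:
            with open(path) as f:
                text = f.read()
            for i in range(0, len(text), chunk_size):
                chunk = text[i:i + chunk_size]
                context.append(f"<content-chunk src={path!r}>{chunk}</content-chunk>")
                prompt = f"<command>{question}</command>\n" + "\n".join(context)
                reply = ask(prompt)            # nondeterministic model call
                if "ANSWER:" in reply:         # the "light bulb" heuristic
                    return reply
        return None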


The idea that a large LLM can disarticulate a problem for small LLMs seems to be a fallacy.

Take a look at this. It just made the HN front page:

https://news.ycombinator.com/item?id=47038411


I believe the discs are a product of the manufacturing process, since they have to spin them. The entire disc is not usable, so you wouldn't really call it a wafer. If the entire chip comes from the wafer, it's wafer-scale.

I'm actually very concerned that people have yet to realize they don't need to put truth values on internet content.

Once you default to "doesn't matter if it's true", you end up being a lot more even-keeled.


There's a trust built up over years (in this case, decades) by a news organization; here, Ars Technica. I don't trust some rando on the internet, but I do trust a news organization with a decades-long record of releasing factual information.

Now that Ars Technica has been caught and admitted to using AI-generated material in its stories, I now have to question that trust. A week ago, I wouldn't have had to.

