Hacker News | __0x01's comments

Is "AI code review" a correct term?

A code review requires reasoning and understanding, things that to my knowledge a generative model cannot do.

Surely the most an AI code review ever could be is something that looks like a code review.


Given that we are more interested in the end than in the means, it is a good usage.

Please can you elaborate? We are more interested in "the end" in what sense?

> This is nothing but speculation

Did you read the paper?


It's written in the future tense, so I can safely call it speculation. I've read the abstract, which is all I need to decide the full text is not worth my time.


Cool, then we can safely give your comments exactly the same treatment - since they are completely uninformed speculation about a paper you haven't read.


Is he incorrect that the paper is speculating about future events? I don't think it's completely uninformed either. He said that he's read the abstract, which is supposed to give you an impression of the structure of the argument. Why don't you engage with the criticism?


There is no criticism. He did not read the paper.


I read the entire paper, and his criticism is spot on. I even read through many of the references, which, in my spot checks, don't support the claims in the paper. Very disappointing work, IMHO.


Cool. Perhaps you should have criticized the paper and requested feedback instead of defending someone who did not read the paper!


I did both! I'm not concerned with defending anyone, I'm interested in truth. His criticism was sound, and your comments contribute even less to the discussion than his. Very disappointing.


> Is he incorrect that the paper is speculating about future events? I don't think it's completely uninformed either.

Most people would say this is a defense of the person, or at least a defense of the person's choice to not read the full paper. It is no fun to debate with intellectual dishonesty.


Anyone with experience reading research papers professionally will tell you that one of the responsibilities of a paper's abstract is to meaningfully convey the level of evidence and certainty behind the paper. This paper did very well at that: the abstract indicates it's more of an essay/opinion piece than a scientific one. This is blindingly obvious, and it was a simple observation that everyone for some reason dismissed not on merit, but because the person who said it hadn't read the whole paper, which for a 40-page document is an incredibly high bar, likely not met by 90% of the people commenting here.

Anyway, I'm tired of this now.


And you must have read all 40 pages of it, right? Because if not, you are a hypocrite. I claim that the Bible is the literal truth. Oh, you haven't read every word of the Bible? Your arguments against me are worthless!


I did actually read all 40 pages of it. I frequently read law journal articles, along with lots of other types of journals and papers.

I also used to maintain up-to-date reading lists for various areas (compiler optimization, for example) because I would read so many of the papers.

Let me give you a piece of advice:

First, gather facts, then respond.

Here you start by sarcastically asserting I wouldn't have read it, but it would generally be better to ask whether I read it (fact gathering) and then devise a response based on my answer. Your assertion is simply wrong, which makes the rest of it even sillier.

As for the strawman about the Bible - I'm kinda surprised you are really trying to equate not reading any part of something with not reading every part of something, and really trying to defend what you did here, instead of just owning up to it and moving on.

This says a lot more about you than anything else.

That said -

When you make a claim that everything in a book is the literal truth, you only have to find one part that is not the literal truth to prove the claim wrong. Which may or may not require reading the entire thing (if it turns out your counter-claim is wrong, you at least have to read on and find another).

In the original comment, you'll note your claim was "This is nothing but speculation" - i.e., all of the paper is speculation.

If we are being accurate, this would require you to read the entire thing to be able to say all of it is speculation. How could you know otherwise?

Even if we were being nice and treated your claim colloquially as meaning "most of it is speculation", this would still require reading some of the paper, which you didn't do either.

Perhaps you should just quit while you are behind, and learn that when you screw up, the correct thing to do is say "yeah, I screwed up, I should have read it before saying that", instead of trying to double down on it.

Doubling down like this just makes you look worse.

As an aside - I was always an avid reader, and very bored in synagogue, so I have read every word of a number of books of the Hebrew Bible because it was more interesting than paying attention to the sermons.


His criticism that the paper is speculation is spot on. Many of the references don't support the claims they are cited for. It's fascinating to me that you want to argue the poster's standing to make a criticism more than you want to actually discuss the content of the paper.


It's a particularly weird criticism given that Danny is a lawyer and has experience in the CS research community. He is especially well suited to address the accusation that the authors are trying to trick people into thinking their work is a scientific paper, which is plainly ridiculous.


I'd love some clarity on that.

The linked page says this:

```
How AI Destroys Institutions
77 UC Law Journal (forthcoming 2026)
Boston Univ. School of Law Research Paper No. 5870623
40 Pages | Posted: 8 Dec 2025 | Last revised: 13 Jan 2026
```

What exactly is this document? It reads like a heavily cited op-ed, but it is coming out of a law school, from a professor there, and calls itself a "research paper". Very strange.

EDIT: I looked up UC Law Journal, and I think I was misled because I'm not familiar with the domain. They describe themselves as:

> Since 1949, UC Law Journal, formerly known as Hastings Law Journal, has published scholarly articles, essays, and student Notes on a broad range of legal topics. With roughly 100 members, UCLJ publishes six issues each year, reaching a large domestic and international audience. Each year, one issue is dedicated to essays and commentary from our annual symposium, which features speakers and panel discussions on an area of current interest and development in the law.

So this is congruent with the Journal's normal content (it's an essay), but having the document call itself a "research paper" conjured an inflated expectation about the rigor involved in the analysis, at least for me.


> So this is congruent with the Journal's normal content (it's an essay), but having the document call itself a "research paper" conjured an inflated expectation about the rigor involved in the analysis, at least for me.

Right. And I think it is weird that people immediately leapt to this being some sort of deception by the authors, and I think it is weird that when a lawyer with experience in both domains clarified this, people doubled down.


Yep, I agree that jumping to the "deception" angle would be pretty far down on my list. I always admired the simplicity of HN's guideline to focus on curiosity, since it has far-reaching effects on the nature of the discourse.


> Even if we were being nice and treated your claim colloquially as meaning "most of it is speculation", this would still require reading some of the paper, which you didn't do either.

I did read some of it: the abstract. Which is there for the specific purpose of providing readers a summary to decide whether it is worth their time to read the whole thing.

And, yeah, obviously I didn't mean literally all, because that just isn't how people talk; e.g., the authors' names are not speculation. But the central premise of the paper, "How AI Destroys Institutions", is speculative unless they provide a list of institutions that have been destroyed by AI and prove that they have been. The institutions they list, "the rule of law, universities, and a free press," have not been destroyed by AI; therefore, the central claim of the paper is speculative. And speculation on how new tech breakthroughs will play out is generally useless, the classic example being "I think there is a world market for maybe five computers," attributed to the CEO of IBM.

Furthermore, their claim here:

> The real superpower of institutions is their ability to evolve and adapt within a hierarchy of authority and a framework for roles and rules while maintaining legitimacy in the knowledge produced and the actions taken. Purpose-driven institutions built around transparency, cooperation, and accountability empower individuals to take intellectual risks and challenge the status quo.

This just completely contradicts any experience I have ever had with such institutions. Especially "empower individuals to take intellectual risks and challenge the status quo". Yeah. If you believe that, then I've got a bridge to sell you. These guys are some serious Kool-Aid drinkers. Large institutions are where creativity and risk-taking go to die. So yeah, not reading 40 pages by these guys.

You can tell a lot from a summary, and the entire premise that you have to read a huge paper to criticize it is just bullshit in general.


I also worry about a centralised service having access to confidential and private plaintext files of millions of users.


Heard of Google Drive?


Often when I look closely at LLM-generated code, I see repetition, redundant logic, and deeply hidden bugs.

Notwithstanding the above, to my understanding LLM services are currently being sold below cost.

If all of the above is true, at some point the degradation of quality in codebases that use these tools will be too expensive to ignore.
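
To make that concrete, here's a contrived Python sketch (hypothetical, not taken from any real model output) of the pattern: a redundant guard duplicating the one above it, and an off-by-one hiding in the loop bound.

```
# Contrived sketch of the pattern described above; not real model output.
def moving_average(xs, window):
    if xs is None or len(xs) == 0:
        return []
    if not xs:  # redundant: duplicates the guard above
        return []
    out = []
    # Hidden bug: this silently drops the final window;
    # the bound should be len(xs) - window + 1.
    for i in range(len(xs) - window):
        out.append(sum(xs[i:i + window]) / window)
    return out
```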


These LLM tools appear to have an unprecedented amount of access to the file systems of their users.

Is this correct, and if so, do we need to be concerned about user privacy and security?


We should be absolutely terrified about the amount of access these things have to users' systems. Of course there is advice to use a sandbox, but there are stupid people out there (I'm one of them) who disregard this advice because it's too cumbersome, so Claude is being run in yolo mode on the same machine that has access to bank accounts, insurance, a password manager, and crypto private keys.
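
To make the concern concrete, here is a toy Python check (the paths are illustrative, not exhaustive) of what any unsandboxed process running as your user, a coding agent in yolo mode included, can already read:

```
# Toy check: anything running as your user can read these by default.
# Paths are illustrative; adjust for your OS and setup.
import os
from pathlib import Path

sensitive = ["~/.ssh/id_rsa", "~/.aws/credentials",
             "~/.gnupg", "~/.config/gcloud"]

for s in sensitive:
    p = Path(s).expanduser()
    if os.access(p, os.R_OK):
        print(f"{p}: readable")
```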


My understanding was that atherosclerotic plaques are composed of cholesterol or fatty deposits [1] and that these can lead to CVD.

The fat mechanism I understand, but what is the mechanism for sugar in CVD?

[1] https://www.heart.org/en/health-topics/cholesterol/about-cho...


CVD requires a bunch of events to happen in sequence; I always felt like it was a combination of risk factors and luck that makes a heart attack or aneurysm happen.

1. High blood pressure damages the walls of arteries and veins

2. LDL Cholesterol gets into the damaged walls

3. LDL gets oxidized

4. White blood cells engulf oxidized LDL and form plaques

5. Hardened plaques mostly just sit there; they are bad but not deadly. If a plaque breaks off, though, you are probably dead.

Sugar contributes to 1-3, and of 3 especially it seems way more guilty than fat. The one big thing that opened my eyes was that most of the LDL you have is produced by your own liver. Regulating how the liver produces it is going to have a bigger impact than directly eating less or more of it.

It is kind of a luck thing, though: you could eat like shit and never have all the events occur just due to dumb luck, or you could be a fit 45-year-old who for whatever reason gets a plaque that breaks off, and you have an aneurysm and die.


And the liver produces triglycerides from fructose, which is half of sugar.


Consuming cholesterol doesn't normally change the level of cholesterol in your bloodstream - it simply leads to your body producing less cholesterol. Unless you're consuming gigantic amounts, or have some problem with your cholesterol regulation, dietary cholesterol is completely safe. It's only if your blood work shows elevated cholesterol levels that you need to start paying attention to cholesterol intake. This is in fact very similar to what happens with blood sugar levels.


Pretty much every health authority will tell you that high blood sugar damages blood vessels, thereby enabling the formation of said plaques.


In healthy adults, consuming some dietary sugar doesn't cause persistent high blood sugar, though. That's diabetes.


It's not just sugar: https://en.wikipedia.org/wiki/Glycemic_index#Grouping

All simple carbs are the devil, but we can't possibly feed billions of people actually healthy food (organic vegetables, nuts, and animal products), so come drink your corn syrup.


The sugar industry (topic of this article) can only be blamed for sugar, though -- not all high-GI foods.

And you can replace "sugar" in what I said earlier with "high-GI foods" and it doesn't change a thing. Persistent high blood sugar is diabetes; it isn't dietary.


>Persistent high blood sugar is diabetes; it isn't dietary.

How is it not dietary if consuming most carbs spikes your blood sugar for hours, which, with three meals plus snacks plus Starbucks slurry, means elevated blood sugar 20+ hours a day?
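
A sketch of the arithmetic behind that claim, in Python, with illustrative numbers (the per-spike duration is an assumption, and overlapping spikes would shorten the total):

```
# Illustrative numbers only; not measured data.
eating_occasions = 5   # 3 meals + snacks + a sugary drink
hours_per_spike = 4    # assumed duration of each post-meal glucose spike

print(eating_occasions * hours_per_spike, "hours/day elevated")  # -> 20
```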


It doesn't happen in non-diabetic people. It's different in type 2 diabetics, who will see large swings in blood fat and glucose after meals.


Please can you provide a source for the above?


Sugar causes inflammation, and inflammation damages arteries. It is this damage that then leads to accumulation of fatty deposits, as damaged arteries basically lose their protective layer (think of it as the equivalent of a non-stick coating). But that doesn't mean dietary fat is what actually caused the plaque.


Poor dental health also contributes, and nothing pushes poor dental health like a high-sugar diet.


Engineering Room, panning over a bunch of hot Blackwells

"I can't change the laws of physics!"


The monster babbleth no more, sire.


Is cost a risk here? I'm assuming that sometime in the future the price for vibe coding/engineering will go up significantly.


It already is. I saw a demo of someone spending $30 an hour on Cline to do something that would have been achieved better, faster, and for free with a framework.

That's more than $5,000 a month assuming 40 hours a week.

The end result didn't compile, by the way.
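
For what it's worth, the figure holds up; a quick Python sanity check:

```
# Back-of-the-envelope check of the monthly figure above.
hourly_cost = 30        # dollars per hour, as in the demo
hours_per_week = 40
weeks_per_year = 52

monthly = hourly_cost * hours_per_week * weeks_per_year / 12
print(f"${monthly:,.0f} per month")  # -> $5,200 per month
```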


I have noticed that many of the most used and loved projects use C/C++.

With all of the unpopular press that they get, why has history often proven C/C++ to be the right choice time and time again?


The one thing I really regret is listening to the kind of opinionated people who will tell you how a certain popular language is bad and why you shouldn't use it.

Twenty years ago, when I learned Python, people told me it was a waste of time: sure, it's a nice language, but you'll never get a job doing it.

When I got into PHP ten years ago, people told me it was a horrible language and basically dead already. Still, PHP was key to my career, and I am making good money programming with it in 2025.

I used to be a C++ hater because that was the cool thing to do. Then I tried Unreal Engine, had to use C++, and discovered it is... fine. Really. There is a good reason it is heavily used in the gaming industry. I don't totally love it. The compile times are kind of annoying, but C++ is not the only language suffering from that. If you need the performance and the ecosystem, C++ still isn't the worst choice.


There are only two kinds of languages: the ones people complain about and the ones nobody uses.

-- Bjarne Stroustrup, The C++ Programming Language


The most used desktop projects were started before 2015, around when newer languages became stable enough to consider for app building. Those apps fill their niches well, people don't need to start new projects for those niches, and new developers add to the existing projects instead of replacing them, which is a good thing. The cloud, AI, and mobile (or crypto ..), for example, are another story.


Part of the answer is in the term C/C++ itself. C is a very popular language because of the history around Unix and FFI. C++ builds on that popularity by being sort of backwards compatible with it and by being what Microsoft Visual C++ supported if you wanted more language features beyond ISO C90. Which brings to mind another factor: people used to pay for compilers and IDEs. Hobbyists had to consider price as well as functionality, portability, performance, and popularity in the programming languages they chose. Given that there were several C++ compilers to choose from, and the popularity of Windows, it makes sense that C++ won.

Why has nothing replaced it? That depends on what you mean by replaced. Python and TypeScript/JavaScript have replaced it in many places. It's just low-level programming where C++ has yet to be superseded. For that kind of programming, there have not been many languages that could even approach the space until LLVM-based languages started coming out recently.

Some things come down to the OS itself. We're still using the C FFI and other system elements designed around C, so until something better replaces those, we're still using C on some level.
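
A minimal sketch of that last point, assuming a Unix-like system with a standard libc: even a high-level language like Python ultimately reaches the OS through the C ABI.

```
# Even high-level languages talk to the OS through C's calling convention.
# Assumes a Unix-like system where find_library can locate libc.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
print("pid via C getpid():", libc.getpid())
```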


Long-term stability is pretty important, and C/C++ has a pretty good record of that. A lot of the bad press is also for ideological reasons, not entirely practical ones.

Finally, I think it's because C/C++ manages to stay hidden. You don't need to know that something is written in C/C++ to use it.


Survivorship bias


Don’t trust hype.

"Unpopular press" generally means "non-expert clickbait opinions".

C++ as a language is abominable, but the ecosystem (compiler support, tooling, libraries, established knowledge) is very hard to beat.

