Hacker News | whoknowsidont's comments

>The Generative AI Industry is based on Theft

This is the big thing that needs to be addressed. These models are nothing without that data. Code, art, music, just plain old conversations freely given for the benefit or entertainment of other humans.

Humans are accountable for what they use or "borrow."

These models seemingly are getting a free pass through the legal system.

Another way of looking at this is that humans have to have some type of credential for many professions, and they have to pay to be taught and certified. Out of their own pocket, and time.

Not only will these models copycat your work, but a lot of companies/industries seem very keen on just ignoring the fact that these models have not had to pass any sort of exam.

The software has more rights and privilege than actual humans at this point.


> The software has more rights and privilege than actual humans at this point.

That's been true for some time though, right? For example, if you have a community notice board in front of your store and someone pins illegal content to it, you're held to a different legal standard than if someone posts the same content to a social media platform.

I don’t think that’s right either, but this kind of “tech exceptionalism” has been baked into law for decades. AI is inheriting those privileges more than inventing new ones.


>GP's comment is very surprising since it has been noted that Opus 3 is in fact an exceptionally "well aligned" model

I'm sorry, what? We solved the alignment problem, without much fanfare? And you're aware of it?

Color me shocked.


A lot of people genuinely can't remember what they did an hour ago, but to be very clear you're implying that an LLM can't "remember" something from an hour, or three hours ago, when it's the opposite.

I can restart a conversation with an LLM 15 days later and the state is exactly as it was.

Can't do that with a human.

The idea that humans have a longer, more stable context window than LLMs CAN be, or is even LIKELY to be, true for certain activities, but please let's be honest about this.

If you have a technical conversation with someone for an hour, I would guesstimate that 90% of humans would start to lose track of details within about 10 minutes. So they write things down, or they mentally repeat to themselves the things they know or have recognized they keep forgetting.

I know this because it's happened continually in tech companies decade after decade.

LLMs have already passed the Turing test. They continue to pass it. They fool and outsmart people day after day.

I'm no fan of the hype AI is receiving, especially around overstating its impact in technical domains, but pretending that LLMs can't or don't consistently perform better than most human adults on a variety of different activities is complete nonsense.


The Turing test was passed in the 80s by Eliza; it doesn't mean anything

Why doesn't it mean anything?

Lots of assumptions about humanity and how unique we are constantly get paraded in this conversation. Ironically, the people who tout those perspectives are the least likely to understand why we're really not all that "special" from a very factual and academic perspective.

You'd think it would unlock certain concepts for this class of people, but ironically, they seem unable to digest the information and update their context.


It's not. And if your team is doing this you're not "advanced."

Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.

Which is great! But it's not a +1 for AI, it's a -1 for them.


Part of the issue is that I think you are underestimating the number of people not doing "advanced" programming. If it's around ~80-90%, then that's a lot of +1s for AI

Wrong. 80% of code not being advanced is quite strictly not the same as 80% of people not doing advanced programming.

I completely understand the difference, and I am standing by my statement that 80-90% of programmers are not doing advanced programming at all.

Why do you feel like I'm underestimating the # of people not doing advanced programming?

Theoretically, if AI can do 80-90% of programming jobs (the ones not in the "advanced" group), that would be an unequivocal +1 for AI.

I think you're crossing some threads here.

"It's not. And if your team is doing this you're not "advanced." Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.

Which is great! But it's not a +1 for AI, it's a -1 for them.

" Is you, right?


Yes. You can see my name on the post.

OK, just making sure. Have a blessed day :)

It's true for me. I type in what I want and then the AI system (compiler) generates the code.

Doesn't everyone work that way?


Describing a compiler as "AI" is certainly a take.

I used to hand-roll the assembly, but now I delegate that work to my agent, clang. I occasionally override clang or give it hints, but it gets it right most of the time.

clang doesn't "understand" the hints because it doesn't "understand" anything, but it knows what to do with them! Just like codex.


Given an input, clang will always give the same output; not quite the same for LLMs. Also, nobody ever claimed compilers were intelligent or that they "understood" things.

The determinism depends on the architecture of the model!

Symbolica is working on more deterministic/quicker models: https://www.symbolica.ai

I also wish it was that easy, but compiler determinism is hard, too: https://reproducible-builds.org


An LLM will also give the same output for the same input when the temperature is zero[1]. It only becomes non-deterministic if you choose for it to be. Which is the same for a C compiler. You can choose to add as many random conditionals as you so please.

But there is nothing about a compiler that implies determinism. A compiler is defined by function (taking input on how you want something to work and outputting code), not design. Implementation details are irrelevant. If you use a neural network to compile C source into machine code instead of more traditional approaches, it most definitely remains a compiler. The function is unchanged.

[1] "Faulty" hardware found in the real world can sometimes break this assumption. But a C compiler running on faulty hardware can change the assumption too.


Currently, LLMs from major providers are not deterministic with temp=0; there are startups focusing on this issue (among others): https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

You can test that yourself in 5 seconds and see that even at a temp of 0 you never get the same output

Works perfectly fine for me.

Did you do that stupid HN thing where you failed to read the entire comment and then went off to try it on faulty hardware?


No, I did that HN thing where I went to an LLM, set temp to 0, pasted your comments in, and got wildly different outputs every single time I did so.

"Went" is a curious turn of phrase, but I take it to mean that you used an LLM on someone else's hardware of unknown origin? How are you ensuring that said hardware isn't faulty? It is a known condition. After all, I already warned you of it.

Now try it on deterministic hardware.


Feel free to share your experiments. I cannot reproduce them, but you seem very sure about your stance, so I am convinced you gave it a try, right?

Do you need to reproduce them? You can simply look at how an LLM is built, no? It is not exactly magic.

But what are you asking for, exactly? Do you want me to copy and paste the output (so you can say it isn't real)? Are you asking for access to my hardware? What does sharing mean here?


Was the seed set to the same value every time?


Hm, some things compilers do during optimization would have been labelled AI during the last AI bubble.

It's something that crossed my mind too, honestly. Natural-language-to-code translation.

You can also do search query to code translation by using GitHub or StackOverflow.

Compilers are probably closer to "intelligence" than LLMs.

I understand what you're getting at, but compilers are deterministic. AI isn't just another tool, or just a higher level of program specification.

This is all a bit above my head. But the effects a compiler has on the computer are certainly not deterministic. It might do what you want, or it might hit a weird driver bug or set off a false positive in some security software. And the more complex stacks get, the more this happens.

And so is "AI". Unless you add randomness AKA raise the temperature.

If you and I put the same input into GCC, we will get the same output (counting flags and config as input). The same is not true for an LLM.

> The same is not true for an LLM.

Incorrect. LLMs are designed to be deterministic (when temperature=0). Only if you choose for them to be non-deterministic are they so. Which is no different in the case of GCC. You can add all kinds of random conditionals if you had some reason to want to make it non-deterministic. You never would, but you could.

There are some known flaws in GPUs that can break that assumption in the real world, but in theory (and where you have working, deterministic hardware) LLMs are absolutely deterministic. GCC also stops being deterministic when the hardware breaks down. A cosmic bit flip is all it takes to completely defy your assertion.


> but compilers are deterministic.

Are they, though? Obviously they are in some cases, but it has always been held that a natural language compiler is theoretically possible. But a natural language compiler cannot be deterministic, fundamentally. It is quite apparent that determinism is not what makes a compiler.

In fact, the dictionary defines compiler as: "a program that converts instructions into a machine-code or lower-level form so that they can be read and executed by a computer." Most everyone agrees that it is about function, not design.

> AI isn't just another tool

AI is not a tool, that is true. I don't know, maybe you stopped reading too soon, but it said "AI systems". Nobody was ever talking about AI. If you want to participate in the discussions actually taking place, not just the one you imagined in your head, what kind of system isn't just another tool?


> Nobody was ever talking about AI. If you want to participate in the discussions actually taking place, not just the one you imagined in your head

Wow. No, I actually don't want to participate in a discussion where the default is random hostility and immediate personal attack. Sheesh.


[flagged]


What the hell? You can't comment like this on HN, no matter how right you are or feel you are. The guidelines make it clear we're trying for something better here. These guidelines are particularly relevant:

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please don't fulminate. Please don't sneer, including at the rest of the community.

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

Please don't post shallow dismissals...

Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

HN is only a place where people want to participate because others make the effort to keep the standards up. Please do your part to make this a welcoming place rather than a mean one.

https://news.ycombinator.com/newsguidelines.html


I'm beginning to think most "advanced" programmers are just poor communicators.

It really mostly comes down to being able to concisely and eloquently define what you want done. It is also important to understand what the default tendencies and biases of the model are, so you know where to lean in a little. Occasionally you need to provide reference material.

The capabilities have grown dramatically in the last 6 months.

I have an advantage because I have been building LLM-powered products, so I know mechanically what they are and are not good with. For example: want it to wire up an API with 250+ endpoints with a harness? You better create (or have it create) a way to cluster and audit coverage.

Generally, the failures I hear about often with "advanced" programmers are things like algorithmic complexity, concurrency, etc., and these models can do this stuff given the right motivation/context. You just need to understand what "assumptions" the model is making and know when you need to be explicit.

Actually, one thing most people don't understand is that they try to say "Do (A), Don't do (B)", etc., defining granular behavior, which is fundamentally a brittle way to interact with the models.

Far more effective is defining the persona and motivation for the agent. This creates the baseline behavior profile for the model in that context.

Not "don't make race conditions", more like "You value and appreciate elegant concurrent code."


Some of the best programmers I know are very good at writing and/or speaking and teaching. I struggle to believe that “advanced programmers” are poor communicators.

Genuine reflection question: are these excellent communicators good at using LLMs to write code?

My supposition was: many programmers who say their programming domain was too advanced and LLMs didn't work for their kind of code are simply bad at describing concisely what is required.


Most good programmers that I know personally work, as do I, in aerospace, where LLMs have not been adopted as quickly as some other fields, so I honestly couldn’t say.

> I'm beginning to think most "advanced" programmers are just poor communicators.

This is an interesting take considering that programmers are experts at translating what someone has asked for (however vaguely) into code.

What I think you're referring to is the transition from 'write code that does X', which is very concrete, to 'trick an AI into writing the code I would have written, only faster', which feels like work that's somewhere between an art form and asking a magic box to fix things over and over again until it stops being broken (in obvious ways, at least).

Understandably, people who prefer engineered solutions do not like the idea of working this way very much.


When you oversee a team technically as a tech lead or an architect, you need communication skills.

1. Based on how the engineer just responded to my comment, what is the understanding gap?

2. How do I describe what I want in a concise and intuitive way?

3. How do I tell an engineer what is important in this system and what are the constraints?

4. What assumptions will an engineer likely make that will cause me to have to make a lot of corrections?

Etc. This is all human to human.

These skills are all transferable to working with an LLM.

So I guess if you are not used to technical leadership, you may not have used those skills as much.


The issue here is that LLMs are not human, and so having a human mental model of how to communicate doesn't really work. If I communicate to my engineer to do X, I know all kinds of things about them, like their coding style, strengths and weaknesses, and that they have some familiarity with the code they are working with and won't bring the entirety of Stack Overflow answers to the context we are working in. LLMs are nothing like this even when working with large amounts of context; they fail in extremely unpredictable ways from one prompt to the next. If you disagree, I'd be interested in what stack or prompting you are using that avoids this.

> It really comes mostly down to being able to concisely and eloquently define what you want done.

We had a method for this before LLMs; it was called "Haskell".


One added note. This rigidness of instruction is a real problem that the models themselves will magnify, and you need to be aware of it. For example, if you ask a Claude-family model to write a sub-agent for you in Claude Code, 99% of the time it will define a rigid process with steps and conditions instead of creating a persona with motivations (and, if you need it, suggested courses of action).

You don't need to replace it. Just please stop using it.

If for nothing else than pure human empathy.


/thread.

Nothing more needs to be said. Nothing more needs to be debated. This is evil, pure and simple.


AKA people who are willing to be honest about its faults. .NET has a lot of social commentary on how stable and robust it is, yet for some reason, in every project where I've had the displeasure of brushing up against a .NET solution, it's always "we're updating," and the update process magically takes longer than building the actual feature.

Or something that is a pretty standard feature in other tech stacks needs some bespoke solution that takes 3 dev cycles to implement... and of course there are going to be bugs.

And it's ALWAYS been this way. For some reason .NET has acolytes who have _always_ viewed .NET as the pinnacle of programming frameworks. .NET, .NET Core, .NET Framework, it doesn't matter.

You always get the same comments. For decades at this point.

Except the experience and outcomes don't match the claims.

Just before I get the reply: I've been pretty familiar with .NET since the 2000s.


>.NET was a solid choice for backend builds before Node became so popular

The problem with the community is that this statement has been said for every version in every era despite how untrue it is lol. No matter what ills or blights .NET will put on your solution the developers will always sing its real or imagined praises.

This is the #1 reason I avoid interacting and working with .NET teams because it's still true to this day.

Honesty would go a long way.


>It's fairly clear to me that our civilization is in decline

Because of deregulation, if anything.


What data do you have to suggest that our societies are becoming less regulated? Because from what I can tell, regulation is increasing throughout the western world and has been for at least the past five decades. In the US, for example:

> From 1970 to 1981, restrictions were added at an average rate of about 24,000 per year. From 1981 to 1985, that pace slowed to an average of 620 restrictions per year, before accelerating back to 18,000 restrictions per year from 1985 to 1995. A decrease of 27,000 restrictions occurred from 1995 to 1996—3.2 percent of the 1995 total—and in the 20 years since then, regulation has grown steadily by about 13,000 restrictions per year. These periods do not match up neatly with any president or party; rather, regulatory accumulation seems to be a bipartisan trend—or perhaps a bureaucratic trend independent of elected officials’ ideologies.

https://www.mercatus.org/research/data-visualizations/regula...


I like how the study you linked had to define "restrictions" so loosely in order to make its point.

Do you really think that's an intelligent way to reason about this? Surely you understand the concept of quality vs quantity, which isn't even necessarily _the_ issue with the study but certainly stops the evaluation right in its tracks.

