> if this can be accompanied by an increase in software quality
That’s a huge “if”, and by your own admission not what’s happening now.
> other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc.
What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.
> Then we will definitely start seeing more and more code produced by LLMs.
We’re already there. And there’s a lot of bad code being pumped out. Which will in turn be fed back to the LLMs.
> Don't look at the state of the art now, look at the direction of travel.
That’s what leads to the eternal “in five years” which eventually sinks everyone’s trust.
> What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.
Humans are machines which make errors. Somehow, we got to the moon. The suggestion that errors just mindlessly compound and that there is no way around it is what's stupid.
Even if we accept the premise (seeing humans as machines is literally dehumanising and a favourite argument of those who exploit them), not all machines are created equal. Would you use a bicycle to file your taxes?
> Somehow, we got to the moon
Quite hand-wavy. We didn’t get to the Moon by reading a bunch of text from the era, then probabilistically joining word fragments, passing that around the same funnel a bunch of times, then blindly doing what came out, that’s for sure.
> The suggestion that errors just mindlessly compound and that there is no way around it
Is one that you made up, as that was not my argument.
LLMs are a lot better at a lot of things than a lot of humans.
We got to the moon using a large number of systems to a) avoid errors where possible and b) build in redundancies. Even an LLM knows this and knew what the statement meant.
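As a toy illustration of why redundancy beats naive error compounding (the numbers here are purely hypothetical, and the sketch assumes the errors are independent):

    import random

    random.seed(0)
    ERR = 0.10        # each component is independently wrong 10% of the time
    TRIALS = 100_000

    def component(truth):
        # Report the truth, except when the component errs.
        return truth if random.random() > ERR else not truth

    majority_wrong = sum(
        [component(True) for _ in range(3)].count(True) < 2
        for _ in range(TRIALS)
    )
    # Expected: 3 * 0.1^2 * 0.9 + 0.1^3 = 2.8% wrong, vs 10% for one component.
    print(majority_wrong / TRIALS)

With correlated errors the gain shrinks, which is the honest caveat to this sketch.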
> LLMs are a lot better at a lot of things than a lot of humans.
Sure, I'm a really poor painter; Midjourney is better than me. Are they better than a human trained for that task, on that task? That's the real question.
The real question is whether they can do a good enough job quickly and cheaply to be valuable. I.e., quick and cheap at some level of quality is often "better". Many people are using them in the real world because they can do in 1 minute what might take them hours. I personally save a couple of hours a day using ChatGPT.
Ah, well then, if the LLM said so then it’s surely right. Because as we all know, LLMs are never ever wrong and they can read minds over the internet. If it says something about a human, then surely you can trust it.
You’ve just proven my point. My issue with LLMs is precisely people turning off their brains and blindly taking them at face value, even arduously defending the answers in the face of contrary evidence.
If you’re basing your arguments on those answers then we don’t need to have this conversation. I have access to LLMs like everyone else, I don’t need to come to HN to speak with a robot.
You didn't read the responses from an LLM. You've turned your brain off. You probably think self-driving cars are also a nonsense idea. Can't work. Too complex. Humans are geniuses without equal. AI is all snake oil. None of it works.
You missed the mark entirely. But it does reveal how you latch on to an idea about someone and don’t let it go, completely letting it cloud your judgement and arguments. You are not engaging with the conversation at hand, you’re attacking a straw man you have constructed in your head.
Of course self-driving cars aren’t a nonsense idea. The execution and continued missed promises suck, but that doesn’t affect the idea. Claiming “humans are geniuses without equal” would be pretty dumb too, and is again something you’re making up. And something doesn’t have to be “all snake oil” to deserve specific criticism.
The world has nuance, learn to see it. It’s not all black and white and I’m not your enemy.
Humans are obviously machines. If not, what are humans then? Fairies?
Now once you've recognized that, you're better equipped for the task at hand: augmenting and ultimately automating away every task that humans-as-machines perform, by building an equivalent or better machine that performs said tasks at a fraction of the cost!
People who want to exploit humans are the ones that oppose automation.
There's still a long way to go, but we've finally reached a point where some tasks that were very elusive to automation are starting to show great promise of being automated, or at least greatly augmented.
Profoundly spiritual take. Why is that the task at hand?
The conceit that humans are machines carries with it such a powerful ideology: humans are for something, we are some kind of utility, not just things in themselves, like birds and rocks. How is it anything other than an affirmation of metaphysical/theological purpose particular to humans? Why is it like that? This must be coming from a religious context, right?
I, at least, cannot see how you could believe this while sustaining a rational, scientific mind about nature, cosmology, etc. Which is fine! We can all believe things, just know you can't have your cake and eat it too. Namely, if anybody should believe in fairies around here, it should probably be you!
Because it's boring stuff, and most of us would prefer to be playing golf/tennis/hanging out with friends/painting/etc. If you look at the history of humanity, we've been automating the boring stuff since the start. We don't automate the stuff we like.
Recognizing that humans, just like birds, are self-replicating biological machines is the most level-headed way of looking at it.
It is consistent with observations and there are no (apparent) contradictions.
The spiritual beliefs are the ones with the fairies: the binding of the soul, the special substrate, the appeals to things beyond reason and understanding.
If you have a desire to improve the human condition (not everyone does), then the task at hand naturally arises: eliminate forced labour, aging, disease, suffering, death, etc.
This all naturally leads to automation and transhumanism.
I don’t think LLMs can easily find errors in their output.
There was a recent meme about asking LLMs to draw a wineglass full to the brim with wine.
Most really struggle with that instruction. No matter how much you ask them to correct themselves, they can’t.
I’m sure they’ll get better with more input, but what it reveals is that right now they definitely do not understand their own output.
I’ve seen no evidence that they are better with code than they are with images.
For instance, if the time to complete only scales with the number of tokens and not with the complexity of their contents, then it's probably safe to assume the output isn't being comprehended.
In my experience, if you confuse an LLM by deviating from the "expected", then all the shims of logic seem to disappear, and it goes into hallucination mode.
Tbf that was exactly my point. An adult might use 'inference' and 'reasoning' to ask for clarification, or go with an internal logic of their choosing.
ChatGPT here went with lexicographical ordering in Python for some reason, and then proceeded to make false statements from false observations, while also defying its own internal logic.
"six" > "ten" is true because "six" comes after "ten" alphabetically.
No.
"ten" > "seven" is false because "ten" comes before "seven" alphabetically.
No.
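For the record, here is what Python's lexicographic comparison actually returns, easy to verify in any interpreter:

    # Python compares strings character by character, by Unicode code point.
    print("six" > "ten")    # False: ord('s') == 115 < ord('t') == 116
    print("ten" > "seven")  # True: 't' sorts after 's'

So both of the model's claims, and both of its alphabetical justifications, are wrong.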
From what I understand of LLMs (which, I admit, is not very much), logical reasoning isn't a property of LLMs, unlike information retrieval. I'm sure this problem can be solved at some point, but a good solution would need the development of many more kinds of inference and logic engines than exist today.
Do you believe that the LLM understands what it is saying and is applying the logic that you interpret from its response, or do you think it's simply repeating similar patterns of words it's seen associated with the question you presented it?
If you take the time to build an (S?)LM yourself, you'll realize it's neither of these. "Understands" is an ill-defined term, as is "applying logic".
But an LLM is not "simply" doing anything. It's extremely complex and sophisticated. Once you go from tokens into high-dimensional embeddings... it seems these models (with enough training) figure out how all the concepts go together. I'd suggest reading the word2vec paper first, then thinking about how attention works. You'll come to the conclusion these things are likely to be able to beat humans at almost everything.
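As a toy illustration of what "concepts going together" means in embedding space (the 4-dimensional vectors below are invented for demonstration; real models learn hundreds of dimensions from data):

    import numpy as np

    # Hypothetical word vectors, hand-picked so the classic word2vec
    # analogy works out; real embeddings are learned, not hand-written.
    vec = {
        "king":  np.array([0.9, 0.8, 0.1, 0.2]),
        "man":   np.array([0.1, 0.8, 0.1, 0.1]),
        "woman": np.array([0.1, 0.8, 0.9, 0.1]),
        "queen": np.array([0.9, 0.8, 0.9, 0.2]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # king - man + woman lands nearest to queen.
    target = vec["king"] - vec["man"] + vec["woman"]
    print(max(vec, key=lambda w: cosine(vec[w], target)))  # queen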
I don't like the use of "hallucinate". It implies that LLMs have some kind of model of reality and sometimes get confused. They don't have any kind of model of anything; they cannot "hallucinate", they can only output wrong results.
that "hallucinate" term is a marketing gimmick to make it seem to the gullible that this "AI" (i.e. LLMs) can actually think, which is flat out BS.
as many others have said here on hn, those who stand to benefit a lot from this are the ones promoting this bullcrap idea (that they (LLMs) are intelligent).
greater fool theory.
picks and shovels.
etc.
In detective or murder novels, the cliche is "look for the woman".
in this case, "follow the money" is the translation, i.e. who really benefits (the investors and founders, the few), as opposed to who is grandly proclaimed to be the beneficiary (us, the many).
When it comes to bigness, there's grand and then there's grandiose. Both words can be used to describe something impressive in size, scope, or effect, but while grand may lend its noun a bit of dignity (e.g., “we had a grand time”), grandiose often implies a whiff of pretension.
Machines are intelligently designed for a purpose. Humans are born and grow up, have social lives and a moral status, are conscious, and are ultimately the product of a long line of mindless evolution that has no goals. Biology is not design. It's way messier.