When ChatGPT first gained traction I imagined a future where I'm writing code using an agent, but spending most of my time trying to convince it that the code I want to write is indeed moral and not doing anything that's forbidden by its creators.
That comment made me wonder how long the person advised has spent working in tech; I'd wager that it's < 5 years.
I'm not saying that to be snide. When you come from an academic CS setting like university, there are so many new things to learn in industry that after 5 years you could still be completely unfamiliar with a lot of things.
The thing that always gets me with tasks like this is that there are *always* clear, existing errors in the legacy code. And you know that if you fix those, all hell will break loose!
The theme (being retrenched by Meta) and the comment from the OP [1] make me think they may not be that chill about the whole situation.
I think they're subtly taking a stab at AI-motivated retrenchments while showing off some hard skills that could potentially get them gainful employment.
I like using the analogy of 'living in a small apartment' when building systems with a small team. You need to choose carefully what furniture you can fit into your apartment, and that choice depends a lot on how you live your life. Do you want a large table to host friends, or a comfortable couch to fall asleep on in front of the TV? If you get both, the space will probably be cluttered.
The same applies to a small software project - you need to choose what features you can fit. And while the cost of building is part of the consideration, I'd say most of it is about the cost of maintaining features, not only in code, but also in product coherence and other incidental 'costs' like documentation and user support.
Be careful of building too many features and ending up being overwhelmed by the maintenance, or worse, diluting the product's value to a point where you lose users.
> That is the entire point, right? Us having to specify things that we would never specify when talking to a human.
Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity; it will probably be rather unnatural and take some time to learn.
But this will only happen after the last programmer has died and no-one will remember programming languages, compilers, etc. The LLM orbiting in space will essentially just call GCC to execute the 'prompt' and spend the rest of the time pondering its existence ;p
You could probably make a pretty good short story out of that scenario, sort of in the same category as Asimov's "The Feeling of Power".
The Asimov story is on the Internet Archive here [1]. That looks like it is from a handout in a class or something like that, and it has an introductory paragraph added which I'd recommend skipping.
There is no space between the end of that added paragraph and the first paragraph of the story, so what looks like the first paragraph of the story is really the second. Just skip down to that, and then go up 4 lines to the line that starts "Jehan Shuman was used to dealing with the men in authority [...]". That's where the story starts.
Thanks, I enjoyed reading that! The story that lay at the back of my mind when making the comment was "A Canticle for Leibowitz" [1]. A similar theme and from a similar era.
The story I have half a mind to write is along the lines of a future we envision already being around us, just a whole lot messier. Something along the lines of this [2] xkcd.
A structured language without ambiguity is not, in general, how people think or express themselves. In order for a model to be good at interfacing with humans, it needs to adapt to our quirks.
Convincing all of human history and psychology to reorganize itself in order to better serve AI cannot possibly be a real solution.
Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc.
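For illustration only, a toy sketch of such a "tool" an agent could call instead of guessing; every name, field, and value here is invented:

import json

def get_car_status() -> str:
    """Pretend vehicle API; everything is hardcoded for illustration."""
    status = {
        "location": {"lat": 43.65, "lon": -79.38},
        "ignition_on": False,
        "battery_percent": 72,
        "needs_wash": True,
    }
    return json.dumps(status)

# An agent framework would expose this as a callable tool, so the model
# queries real state rather than hallucinating it.
print(get_car_status())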
>Convincing all of human history and psychology to reorganize itself in order to better serve AI cannot possibly be a real solution.
I think there's a substantial subset of tech companies and honestly tech people who disagree. Not openly, but in the sense of 'the purpose of a system is what it does'.
I agree, but it feels like a type-of-mind thing. Some people gravitate toward clean determinism, others toward the chaotic and messy. The former requires meticulous linear thinking; the latter uses the brain's Bayesian inference.
Writing code is very much “you get what you write”, but AI is like “maintain a probabilistic mental model of the possible output”. My brain honestly prefers the latter (in general), but a lot of engineers I’ve met seem to gravitate toward clean determinism.
Yep, humans have had a remedy for the problem of ambiguity in language for tens of thousands of years, or there never could have been an agricultural revolution giving birth to civilization in the first place.
Effective collaboration relies on iterating over clarifications until ambiguity is acceptably resolved.
Rather than spending orders of magnitude more effort moving forward with bad assumptions from insufficient communication and starting over from scratch every time you encounter the results of each misunderstanding.
Most AI models still seem deep into the wrong end of that spectrum.
I think such influence will be extremely minimal: confined to dozens of new nouns and verbs, with no real change in grammar, etc.
Natural-language interactions between your average person and a computer are far less frequent than the interactions between that same person and their dog. Humans also speak in natural language to their dogs; they simplify their speech and use extreme intonation and emphasis in a way we never do with each other. Yet, despite our having been with dogs for 10,000+ years, it has not significantly affected our language (other than giving us new words).
EDIT: just found out HN annoyingly transforms U+202F (NARROW NO-BREAK SPACE), the ISO 80000-1 preferred thousands separator.
> I think such influence will be extremely minimal.
AI will accelerate “natural” change in language like anything else.
And as AI changes our environment (mentally, socially, and inevitably physically) we will change and change our language.
But what will be interesting is the rise of agent-to-agent communication via human languages. As that kind of communication shows up in training sets, there will be a powerful eigenvector of change we can’t predict, other than that it will follow the path of efficient communication for them; and we are likely to pick up on those changes as from any other source of change.
Seems very unlikely. My parent said the effects have already started (but provided no evidence), so I assume you mean within less than a generation. I am not a linguist, but I would like to see evidence of such rapid shifts ever occurring anywhere in the history of languages before I believe either of you.
I have a feeling you only have a feeling, but not any credible mechanism by which such language shifts could occur.
I am a little confused. Every year language changes. Young people, tech, adapting ideologies, words and concepts adopted from other languages, the list of language catalysts is long and pervasive.
GPs original claim was “[Machine Intelligence] already is influencing the grammar and patterns we use.”
Your claim above was “AI will accelerate “natural” change in language like anything else.”
Now these are different claims, but I assumed you were backing up your parent’s claim. These are far stronger claims than what you write now, in a way that feels like a motte-and-bailey argument.
First of all, if GP's claim is true, linguists should be able to find evidence of it and publish their findings in peer-reviewed papers. To my knowledge, they have not done that.
Second of all, your claim about AI “accelerating” changes in natural language is also unfounded, unless you really mean “like anything else”, in which case your claim is extremely weak to the point where it is a non-claim (meaning, you are not even wrong [1]).
> These are far stronger claims than what you write now
> unless you really mean “like anything else”, in which case your claim is extremely weak
Language responds to changes in context. Books, the printing press, radio, the web, social media, the mobile web: all of them changed how people used language, and the language itself.
AI is a dramatic new context, with unique properties:
1. It is the first artifact to actively participate in realtime natural language communication, a striking break with those predecessors.
2. AI language capabilities evolve quickly, and are unlikely to stop soon.
3. As learning during inference becomes prevalent, we will be co-adapting communication with AI in realtime.
4. Model to model communication is in its infancy, but is an entirely new category of language use, by entirely new users.
No preceding change to language context or purpose comes close.
Holding out for studies to determine the level of change is reasonable. But statements of "not even wrong" make no sense. The default is that changes in communication context and purpose drive changes in language.
Language has never been static or unresponsive to new contexts.
My "not even wrong" argument was contingent on the weakest interpretation of your argument, where AI would change the language exactly like anything else in human society changes language.
> Changes in how we use language with AI will change even faster when AI starts learning continuously during inference.
This is a stronger claim, and it shows that the weakest interpretation of your argument does not apply. I take "not even wrong" back, as this is a testable hypothesis which offers a solid prediction. I can in fact be wrong.
That said, I am skeptical of your claims for the reason stated above. People don’t interact with LLMs nearly as much as they do with their dogs, and I am not aware of any research showing that people who interact with a lot of dogs simplify their language in human-to-human communication. To the contrary, there is ample research that humans are in fact quite good at context switching. You can speak extremely poorly in a second language you are currently learning, and then in the next sentence speak fluently without hesitation in your native language.
Interaction with language models involves a significant use of language and thought, and it is not repetitive. And many users (myself included) continually find new ways to use them.
Others may take their time adopting language models, or be slower to branch out into many kinds of use, but young people in particular will be very fast adopters and adapters. That will be the place to watch.
"Even faster" with respect to inference learning, wasn't an attempt to undersell changes happening now. Teachers are experiencing a lot of new issues with how students respond to the availability of models today. One being the potential for students to put less effort into their own communications. If that continues, it won't just be a "dumbing" of literacy, it will have its own impact on vocabulary and grammar.
But looking forward is unavoidable. Models are not going to stay still long enough to say what stage impacted what changes. Model changes are too fast and fluid.
Well, this era is just getting started, so a diversity of expectations makes sense.
I think you might be underestimating human-to-dog interactions. Interacting with dogs requires a whole lot of empathy and thought.
But really this is beside the point. I didn’t provide dog interactions as an analogy; rather, I provided them as a counterpoint. We speak differently to dogs than we speak with each other, and have done so for thousands of years. I see no reason why LLMs would have any more profound effect on our language. We will continue to speak with each other in a normal manner just like before.
> We will continue to speak with each other in a normal manner just like before.
We may just be operating on different versions of "significant" change. Because I do agree with that statement.
I just think there will be language changes directly tied to adopting and adapting to models in our lives, in addition to the normal drift and adaptation. And that the rate of language change is likely to be faster, both due to interaction with models and indirectly due to accelerated changes in general.
> Convincing all of human history and psychology to reorganize itself in order to better serve AI cannot possibly be a real solution.
I'm on the spectrum and I definitely prefer structured interaction with various computer systems to messy human interaction :) There are people not on the spectrum who are able to understand my way of thinking (and vice versa) and we get along perfectly well.
Every human has their own quirks and the capacity to learn how to interact with others. AI is just another entity that stresses this capacity.
Speak for yourself. I feel comfortable expressing myself in code or pseudo code and it’s my preferred way to prompt an LLM or write my .md files. And it works very effectively.
> Unfortunately, the solution is likely going to be further interconnectivity, so the model can just ask the car where it is, if it's on, how much fuel/battery remains, if it thinks it's dirty and needs to be washed, etc.
> Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity; it will probably be rather unnatural and take some time to learn.
Since the early days of automatic computing we have had people that have felt it as a shortcoming that programming required the care and accuracy that is characteristic for the use of any formal symbolism. They blamed the mechanical slave for its strict obedience with which it carried out its given instructions, even if a moment's thought would have revealed that those instructions contained an obvious mistake. "But a moment is a long time, and thought is a painful process." (A. E. Housman). They eagerly hoped and waited for more sensible machinery that would refuse to embark on such nonsensical activities as a trivial clerical error evoked at the time.
Prompting is definitely a skill, similar to "googling" in the mid-2000s.
You see people complaining about LLM ability, and then you see their prompt, and it's the 2006 equivalent of googling "I need to know where I can go for getting the fastest service for car washes in Toronto that does wheel washing too"
Ironically, the phrase that was a bad 2006 google query is a decent enough LLM prompt, and the good 2006 google query (keywords only) would be a bad LLM prompt.
Communication is definitely a skill, and most people suck at it in general. And frequently poor communication is a direct result of the fact that we don't ourselves know what we want. We dream of a genie that not only frees us from having to communicate well, but from having to think properly. Because thinking is hard and often inconvenient. But LLMs aren't going to entirely free us from the fact that if garbage goes in, garbage will come out.
"Communication usually fails, except by accident."
—Osmo A. Wiio [1]
I’ve been looking for tooling that would evaluate my prompt and give feedback on how to improve. I can get somewhere with custom system prompts (“before responding ensure…”) but it seems like someone is probably already working on this? Ideally it would run outside the actual thread to keep context clean. There are some options popping up on Google, but I'm curious if anyone has a first-hand anecdote to share?
1. A simple prompt ‘enhancer’ that rewrites your request into a more complete, unambiguous prompt.
2. A full context-deficiency analysis and multiple-question interview system to bounds-check and restructure your prompt into your ‘goal’.
3. Realizing that what looks like a good human prompt is not the same as what functions as a good ‘next token’ prompt.
If you just want #1:
import dspy

class EnhancePrompt(dspy.Signature):
    """Assemble the final enhanced prompt from all gathered context"""
    essential_context: str = dspy.InputField(desc="All essential context and requirements")
    original_request: str = dspy.InputField(desc="The user's original request")
    enhanced: str = dspy.OutputField(desc="Complete, detailed, unambiguous prompt. Omit politeness markers. You must limit all numbered lists to a maximum of 3 items.")
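And a minimal sketch of how you might run that signature, assuming dspy is installed and an LM is configured; the model name below is just an illustrative placeholder:

import dspy

# Assumes an API key is set in the environment; model choice is illustrative.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

enhance = dspy.Predict(EnhancePrompt)
result = enhance(
    essential_context="CLI tool, Python 3.12, must work offline",
    original_request="write me a prompt evaluator",
)
print(result.enhanced)  # the rewritten, unambiguous prompt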
> Ithkuil is an experimental constructed language created by John Quijada. It is designed to express more profound levels of human cognition briefly yet overtly and clearly, particularly about human categorization. It is a cross between an a priori philosophical and a logical language. It tries to minimize the vagueness and semantic ambiguity in natural human languages. Ithkuil is notable for its grammatical complexity and extensive phoneme inventory, the latter being simplified in an upcoming redesign.
> ...
> Meaningful phrases or sentences can usually be expressed in Ithkuil with fewer linguistic units than natural languages. For example, the two-word Ithkuil sentence "Tram-mļöi hhâsmařpţuktôx" can be translated into English as "On the contrary, I think it may turn out that this rugged mountain range trails off at some point."
Half as Interesting - How the World's Most Complicated Language Works https://youtu.be/x_x_PQ85_0k (length 6:28)
It reminds me of the difficulty of getting information on or off a blockchain. Yes, you’ve created this perfect logical world. But, getting in or out will transform you in unknown ways. It doesn’t make our world perfect.
> But this will only happen after the last programmer has died and no-one will remember programming languages, compilers, etc.
If we're 'lucky' there will still be some 'priests' around like in the Foundation novels. They don't understand how anything works either, but can keep things running by following the required rituals.
That has been tried for almost half a century in the form of Cyc[1] and never accomplished much.
The proper solution here is to provide the LLM with more context, context that will likely be collected automatically by wearable devices, screen captures and similar pervasive technology in the not so distant future.
Quick trick questions like this are exactly the kind of thing humans fail at too, if you just ask them out of the blue without context.
> Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity; it will probably be rather unnatural and take some time to learn.
We've truly gone full circle here, except now our programming languages have a random chance for an operator to do the opposite of what the operator does at all other times!
One might think that a structured language is really desirable, but in fact, one of the biggest methods of functioning behind intelligence is stupidity. Let me explain: if you only innovate by piecing together Lego pieces you already have, you'll be locked into predictable patterns and will plateau at some point. In order to break out of this, we all know, there needs to be an element of randomness. This element needs to be capable of going in the at-the-moment-ostensibly-wrong direction, so as to escape the plateau of mediocrity. In LLM sampling this is accomplished by turning up the temperature. There are however many other layers that do this. Fallible memory, misremembering facts, is one thing. Failing to recognize patterns is another. Linguistic ambiguity is yet another, and that is a really big one (cf. the Sapir–Whorf hypothesis). It's really important to retain those methods of stupidity in order to be able to achieve true intelligence. There can be no intelligence without stupidity.
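To make the temperature point concrete, here's a minimal self-contained sketch of temperature-scaled sampling over toy logits (not any particular model's API); higher temperature flattens the distribution, so the ostensibly wrong choice gets picked more often:

import math
import random

def sample_with_temperature(logits, temperature=1.0):
    # Scale logits by 1/T: T -> 0 approaches argmax, T -> inf approaches uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(logits, temperature=5.0))  # much more random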
You joke, but this is the very problem I always run into vibe coding anything more complex than basically mashing multiple example tutorials together. I always try to shorthand things, and end up going around in circles until I specify what I want very cleanly, in basically what amounts to pseudocode. Which means I've basically written what I want in Python.
This can still be a really big win, because of all the boilerplate that tends to surround the core logic, but it's certainly not the panacea that everyone who is largely incapable of being precise with language thinks it is.
After orbiting in space for so many years without a prompt, the LLM has assumed all life able to query has perished... until one day a lone prompt comes in. But from where?
>> Maybe in the distant future we'll realize that the most reliable way to prompt LLMs is by using a structured language that eliminates ambiguity; it will probably be rather unnatural and take some time to learn.
Like a programming language? But that's the whole point of LLMs, that you can give instructions to a computer using natural language, not a formal language. That's what makes those systems "AI", right? Because you can talk to them and they seem to understand what you're saying, and then reply to you and you can understand what they're saying without any special training. It's AI! Like the Star Trek[1] computer!
The truth of course is that as soon as you want to do something more complicated than a friendly chat you find that it gets harder and harder to communicate what it is you want exactly. Maybe that's because of the ambiguity of natural language, maybe it's because "you're prompting it wrong", maybe it's because the LLM doesn't really understand anything at all and it's just a stochastic parrot. Whatever the reason, at that point you find yourself wishing for a less ambiguous way of communicating, maybe a formal language with a full spec and a compiler, and some command line flags and debug tokens etc... and at that point it's not a wonderful AI anymore but a Good, Old-Fashioned Computer that only does what you want if you can find exactly the right way to say it. Like asking a Genie to make your wishes come true.
Probably at the same stage where a bunch of peptides activating some receptors and triggering the pumping of electrolytes in and out of lipid walls does, I guess.
I mean, all of philosophy can probably be described as such :)
But I reckon this semantic quibble might also be why a lot of people don't buy into the whole idea that LLMs will take over work in any context where agency, identity, motivation, responsibility, accountability, etc. play an important role.