This is one of my major concerns about people trying to use these tools for 'efficiency'. The only plausible value in somebody writing a huge report and somebody else reading it is information transfer. LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high, and you will be worse off reading the summary than if you skimmed the first and last pages. In fact, you will be worse off than if you did nothing at all.
Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material.
Relatedly, I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!
Yep. The other way it can have no net impact is if it saves thousands of hours of report drafting and reading but misses the one salient fact buried in the observations that could actually save the company money. Whilst completely nailing the fluff.
Hehe, yeah, there are some terms that are just linguistically unintuitive.
"Skill floor" is another one. People generally interpret that one as "must be at least this tall to ride", but it actually means "amount of effort that translates to result". Something that has a high skill floor (if you write "high floor of skill" it makes more sense) means that with very little input you can gain a lot of result. Whereas a low skill floor means something behaves more linearly, where very little input only gains very little result.
Even though it's just the antonym, "skill ceiling" is much more intuitive in that regard.
Are you sure about skill floor? I've only ever heard it used to describe the skill required to get into something, and skill ceiling describes the highest level of mastery. I've never heard your interpretation, and it doesn't make sense to me.
> LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high
I could go either way on the future of this, but if you take the argument that we're still early days, this may not hold. They're notoriously bad at this so far.
We could still be in the PC DOS 3.X era in this timeline. Wait until we hit the Windows 3.1, or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models.
Personally I strongly doubt it. Since the nature of LLMs does not give them access to semantic content or context, I believe they are inherently tools unsuited for this task. As far as I can tell, it's a limitation of the technology itself, not of the amount of power behind it.
Either way, being able to generate or compress loads of text very quickly with no understanding of the contents simply is not the bottleneck of information transfer between human beings.
Yeah, definitely more skeptical for communication pipelines.
But for coding, the latest models are able to read my codebase for context, understand my question, and implement a solution with nuance, using existing structures and paradigms. It hasn't missed since January.
One of them even said: "As an embedded engineer, you will appreciate that ...". I had never told it that was my title; it is nowhere in my soul.md or codebase. It just inferred that I, the user, was one, based on the ARM toolchain and code.
It was a bit creepy, tbh. They can definitely infer context to some degree.
> We could still be in the PC DOS 3.X era in this timeline. Wait until we hit the Windows 3.1, or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models.
While we're speculating, here's mine: we're in the Windows 7 phase of AI.
IOW, everything from this point on might be better tech, but is going to be worse in practice.
Context size helps some things but generally speaking, it just slows everything down. Instead of huge contexts, what we need is actual reasoning.
I predict that in the next two to five years we're going to see a breakthrough in AI that doesn't involve LLMs but makes them 10x more effective at reasoning and completely eliminates the hallucination problem.
We currently have "high thinking" models that double and triple-check their own output and we call that "reasoning" but that's not really what it's doing. It's just passing its own output through itself a few times and hoping that it catches mistakes. It kind of works, but it's very slow and takes a lot more resources.
What we need instead is a reasoning model that can be called upon to perform logic-based tests on LLM output or even better, before the output is generated (if that's even possible—not sure if it is).
My guess is that it'll end up something like a "logic-trained" model instead of a "shitloads of raw data trained" model. Imagine a couple terabytes of truth statements like, "rabbits are mammals" and "mammals have mammary glands." Then, whenever the LLM wants to generate output suggesting someone put rocks on pizza, it fails the internal truth check, "rocks are not edible by humans" or even better, "rocks are not suitable as a pizza topping" which it had placed into the training data set as a result of regression testing.
Over time, such a "logic model" would grow and grow—just like a human mind—until it did a pretty good job at reasoning.
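To make that concrete, here's a deliberately silly sketch of what I mean (everything in it, the fact store, the claim format, the check itself, is invented for illustration; no real system works this way):

    # Toy sketch only: a hand-rolled "truth check" gate over claims extracted
    # from LLM output. The fact sets and the claim format are made up here;
    # this is not how any real model or product works.

    KNOWN_TRUE = {
        ("rabbits", "are", "mammals"),
        ("mammals", "have", "mammary glands"),
    }

    KNOWN_FALSE = {
        ("rocks", "are", "edible by humans"),
        ("rocks", "are", "suitable as a pizza topping"),
    }

    def classify(claim):
        """Classify one (subject, verb, object) claim against the fact store."""
        if claim in KNOWN_TRUE:
            return "supported"
        if claim in KNOWN_FALSE:
            return "contradicted"
        return "unknown"

    def passes_truth_check(claims):
        """Reject a candidate answer if any of its claims contradicts a stored fact."""
        for claim in claims:
            if classify(claim) == "contradicted":
                return False, claim
        return True, None

    # A candidate answer that asserts a known-false statement gets blocked:
    ok, offending = passes_truth_check([
        ("cheese", "is", "suitable as a pizza topping"),
        ("rocks", "are", "suitable as a pizza topping"),
    ])
    print(ok, offending)
    # False ('rocks', 'are', 'suitable as a pizza topping')

The hard part, obviously, is everything I hand-waved: extracting claims from free text and keeping that fact store honest as it grows.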
> I would like to see the day when the context size is in gigabytes or tens of billions of tokens, not RAG or whatever, actual context.
Might not make a difference. I believe we are already at the point of negative returns - doubling context from 800k tokens to 1600k tokens loses a larger percentage of context than halving it from 800k tokens to 400k tokens.
It reminds me of that Apple ad where a guy just rocks up to a meeting completely unprepared and spits out an AI summary to all his coworkers. Great job Apple, thanks for proving Graeber right all along.
> Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!
That is true, but then again the same was true with Google. You can see why some people want to go back to the "read the book" era, when you didn't have Google to query for anything and had to come up with the real questions yourself.
One thing AI should eliminate is the "proof of work" reports. Sometimes the long report is not meant to be read, but used as proof somebody has thoroughly thought through various things (captured by, for instance, required sections).
When AI is doing that, it loses all value as a proof of work (just as it does for a school report).
"My AI writes for your AI to read" is low value. But there is probably still some value in "My AI takes these notes and makes them into a concise readable doc".
> Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material.
i may put this into my email signature with your permission, this is a whip-smart sentence.
and it is true. i used AI to "curate information" for me when i was heads-down deep in learning mode, about sound and music.
there was enough all-important info being omitted that i soon realized i was developing a textbook case of superficial, incomplete knowledge.
i stopped using AI and did it all over again through books and learning by doing. in retrospect, i'm glad to have had that experience because it taught me something about knowledge and learning.
mostly, that something boils down to RTFM. a good manual or technical book written by an expert doesn't have a lot of fluff. what exactly are you expecting the AI to do? zip the rar file? it will do something, and it might look great, but lossless compression it will not be.
P.S. not a prompt skill issue. i was up to date on cutting edge prompting techniques and using multiple frontier models. i was developing an app using local models and audio analysis AI-powered libraries. in other words i was up to my neck immersed in AI.
after i grokked as much as i could, given my limited math knowledge, of the underlying tech from reading the theory, i realized the skill issue invectives don't hold water. if things break exactly in the way they're expected to break as per their design, it's a little too much on the nose. even appealing to your impostor syndrome won't work.
P.P.S. it's interesting how a lot of the slogans of the AI party are weaponizing trauma triggers or appealing to character weaknesses.
"hop on the train, commit fully, or you'll be left behind" > fear of abandonment trigger
"pah, skill issue. my prompts on the other hand...i'm afraid i can't share them as this IP is making me millions of passive income as we speak (i know you won't probe further cause asking a person about their finances is impolite)" > imposter syndrome inducer par excellence, also FOMO -- thinking to yourself "how long can the gold rush last? this person is raking it in!! what am i doing? the miserable sod i am"
1. outlandish claims (Claude writes ALL the code) that no one can seem to reproduce, and indeed everyone non-affiliated is having a very different experience
2. some of the darkest patterns you've seen in marketing are the key tenets of the gospel
3. it's probably a duck.
i've been 100% clear on the grift since October '25. Steve Eisman of the "Big Short" was just hopping onto the hype train back then. i thought...oh. how much analysis does this guru of analysts really make? now Steve sings of AI panic and blood in the streets.
these things really make you think, about what an economy even is. it sure doesn't seem to have a lot to do with supply and demand, products and services, and all those archaisms.
Spiel des Jahres (Game of the Year) is a boardgame award that notably favors family-friendly accessibility. If you're seeing the award on the box, you can be confident that even if the strategy can potentially get really deep, you can still break this out to have a fun time with both grandma and your 12-year-old cousin.
For a nice entry game for a group setting, I recommend Carcassonne. It has simple and engaging basic gameplay with a surprising amount of depth, and it can easily be scaled up or down in complexity depending on your group's preference and experience level by simply adding more pieces/mechanics.
Carcassonne is also really nice with children. You can start them on just the "puzzle" aspect of attaching matching tiles, without scoring.
Our oldest child is now capable of the base game, and I can still make it interesting for me by going for secondary objectives, such as filling difficult gaps ;-)
Very clever to introduce it to kids as purely a puzzle game! I'll keep that in mind.
When my girlfriend and I play, we sometimes give unofficial bonus points as compensation for suboptimal plays that fill out unseemly gaps that would otherwise stay open. Makes for a nice, aesthetic endstate board without handwringing over your score :b
For you mindwarriors out there who like to go on the offence, leave a removable fold over your third eye so you can quickly flip it open and be ready to engage in PSI-combat on the astral plane or realspace as the situation requires.
We are all constantly being gaslit. People have insane amounts of money and prestige riding on this thing paying off in such a comically huge way that it can absolutely not deliver on it in the foreseeable future. Creating a constant pressing sentiment that actually You Are Being Left Behind Get On Now Now Now is the only way they can keep inflating the balloon.
If this stuff was self-evidently as useful as it's being made out to be, there would be no point in constantly trying to pressure, coax and cajole people into it. You don't need to spook people into using things that are useful, they'll do it when it makes sense.
The actual use-case of LLMs is dwarfed by the massive investment bubble it has become, and it's all riding on future gains that are so hugely inflated they will leave a crater that makes the dotcom bubble look like a pothole.
> The problem with this is that after doing this hard work, someone can just easily copy your hard work and UI/UX taste.
Or indeed, somebody might steal and launder your work by scooping it up into a training set for their model and letting it spit out sloppy versions of your thing.
That was certainly my experience. I got rid of the app and only used reddit in browser mode to read without participating. The noticeable quality drop after the migration, as well as all the artificial hurdles they put up to force you back on the app eventually made me stop using it entirely.