10 years from now: "my AI brain implant erased all my childhood memories by mistake." Why would anyone do that? Because running it in no_sandbox mode will give people an intellectual edge over others.
"Technocracy advocates contended that price system-based forms of government and economy are structurally incapable of effective action, and promoted a society headed by technical experts, which they argued would be more rational and productive."
Big corpos have reached the stage where they can hire ex-politburo apparatchiks from the former Soviet Union or China straight into C-suite roles, and nothing will materially change.
That's also a good test for chatbots: give one a picture and ask it to write a Shadertoy demo that turns the picture into a 3D animation. So far the results are meh.
Birds don't need airports or expensive maintenance every N hours of flight. They run on seeds and bugs found wherever they happen to be, instead of the expensive, poisonous fuel that mechanics have to feed to planes; they self-replicate for cheap; and the noises they make are pleasant rather than deafening.
Spoken Query Language? Just like SQL, but for unstructured blobs of text as a database and unstructured language as a query? Also known as Slop Query Language or just Slop Machine for its unpredictable results.
> Spoken Query Language? Just like SQL, but for unstructured blobs of text as a database and unstructured language as a query?
I feel that's more a description of a search engine. Doesn't really give an intuition of why LLMs can do the things they do (beyond retrieval), or where/why they'll fail.
If you want actionable intuition, try "a human with almost zero self-awareness".
"Self-awareness" used in a purely mechanical sense here: having actionable information about itself and its own capabilities.
If you ask an old LLM whether it's able to count the Rs in "strawberry" successfully, it'll say "yes". And then you ask it to do so, and it'll say "2 Rs". It doesn't have the self-awareness to know the practical limits of its knowledge and capabilities. If it did, it would be able to work around the tokenizer and count the Rs successfully.
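To make the tokenizer point concrete, here's a rough Python sketch (assuming the tiktoken package is installed; the exact split depends on the encoding) of what the model actually "sees" in place of letters:

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    pieces = [enc.decode([t]) for t in tokens]
    print(tokens)                             # opaque integer token IDs
    print(pieces)                             # sub-word chunks, not individual letters
    print(sum(p.count("r") for p in pieces))  # counting Rs requires decoding back to text first

The model is trained on the token IDs, so "how many Rs" is a question about characters it never directly observes.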
That's a major pattern in LLM behavior. They have a lot of capabilities and knowledge, but not nearly enough knowledge of how reliable those capabilities are, or meta-knowledge that tells them where the limits of their knowledge lie. So, unreliable reasoning, hallucinations and more.
Agreed, that's a better intuition: pretraining pushes the model toward saying "I don't know" in the kinds of situations where people write that, rather than through any introspection of its own confidence.
There appears to be a degree of "introspection of its own confidence" in modern LLMs. They can identify their own hallucinations, at a rate significantly better than chance. So there must be some sort of "do I recall this?" mechanism built into them. Even if it's not exactly a reliable mechanism.
Anthropic has discovered that this is definitely the case for name recognition, and I suspect that names aren't the only things subject to a process like that.
"Costs $20" really means "one of those poor call center reps got paid $20, barely enough to pay rent." Once you solve the supposed problem, all those people will be on the streets.
Those who work at call centers are already desperate for any job and have zero savings. I'm not sure how much further down they can go. I guess governments will have to pick them up in the end: give them some fictitious jobs and pay the minimum out of taxes from the remaining populace who still have jobs.