I remember someone mass-defacing the ruwiki almost exactly a year ago (March 3, 2025) with some immature insults directed at certain ruwiki admins. If I'm not mistaken, it was a similar method.
Also similar to GPU + CPU on the same die, yet here we are. In a sense, AI has already been in every x86 CPU for many years, and you already benefit from using it locally (branch prediction in modern processors is ML-based).
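For the curious: the "ML" here refers to things like the perceptron branch predictor (Jiménez & Lin), variants of which have shipped in real CPUs. A toy Python sketch of the idea, not the actual hardware algorithm; the history encoding and training threshold below are simplified assumptions:

    # Toy perceptron branch predictor: predict "taken" when the dot product
    # of learned weights with recent branch outcomes (+1 taken, -1 not taken)
    # is non-negative. A sketch of the concept, not real hardware.
    THRESHOLD = 10  # training threshold; real designs tune this to history length

    class PerceptronPredictor:
        def __init__(self, history_len=8):
            self.w = [0] * (history_len + 1)   # w[0] is the bias weight
            self.h = [1] * history_len         # global branch outcome history

        def predict(self):
            y = self.w[0] + sum(w * x for w, x in zip(self.w[1:], self.h))
            return y, y >= 0

        def update(self, taken):
            y, pred = self.predict()
            t = 1 if taken else -1
            # train only on a mispredict or a low-confidence correct prediction
            if pred != taken or abs(y) <= THRESHOLD:
                self.w[0] += t
                for i, x in enumerate(self.h):
                    self.w[i + 1] += t * x
            self.h = [t] + self.h[:-1]         # shift the new outcome in

    p = PerceptronPredictor()
    hits, pattern = 0, [True, True, False] * 100
    for taken in pattern:
        hits += p.predict()[1] == taken
        p.update(taken)
    print(hits / len(pattern))  # converges well above chance on the repeating pattern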
> Also similar to GPU + CPU on the same die, yet here we are.
I think the overall trend is now moving somewhat away from having the CPU and GPU on one die. Intel has been splitting things up into several chiplets for most of its recent generations of processors; AMD's desktop processors have put the iGPU on a different die from the CPU cores for both of the generations that have an iGPU, and their high-end mobile part does the same; even NVIDIA has done it that way.
Where we still see monolithic SoCs as a single die is mostly smaller, low-power parts used in devices that wouldn't have the power budget for a discrete GPU. But as this article shows, sometimes those mobile parts get packaged for a desktop socket to fill a hole in the product line without designing an entirely new piece of silicon.
Depends. Does it represent end users well enough? Does it hit the same edge cases a million users would (especially given the poor variety of heavily post-trained models)? Does it generalize?
190 cells weigh 9-9.5 kg, and a mid-drive motor like the one in the picture is typically 3-7 kg depending on power. That's roughly 12-17 kg / 26-37 lbs of extra weight, plus the enclosure, electronics, and wiring.
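For anyone checking the numbers, a back-of-envelope version (assuming standard 18650 cells at roughly 47-50 g each, which is where the 9-9.5 kg comes from):

    # Rough pack + motor weight estimate; 47-50 g/cell is an assumption for
    # typical 18650s, the 3-7 kg motor range is from the comment above.
    cells = 190
    pack_lo, pack_hi = cells * 0.047, cells * 0.050     # ~8.9-9.5 kg of cells
    total_lo, total_hi = pack_lo + 3, pack_hi + 7       # ~11.9-16.5 kg combined
    print(total_lo, total_hi, total_lo * 2.205, total_hi * 2.205)  # kg and lbs
    # still excludes the enclosure, BMS/electronics, and wiring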
Like many, the author seems to be conflating determinism with unrelated LLM phenomena. He's really talking about two entirely different things:
1. Same input = same output. This can be called determinism, and it's technically rather trivial to achieve within the lifetime of a single model snapshot: it's just a matter of business need, because you pay extra for worse batching (a toy sketch of this is below the list). It's harder if you need to extend the guarantee into the future, since you then have to keep the snapshot and inference method the same. It's also a relatively niche requirement, only needed for build reproducibility, supply-chain security, that kind of thing.
2. Zero error rate with arbitrary inputs and outputs. This is not determinism, and it's also NOT achievable in any model at all, because the domain LLMs (and humans!) operate in is fundamentally ambiguous. If you want to enforce formal rules, verify your inputs and outputs formally (second sketch below)! Trying to solve this purely with intelligence, human or machine, is a fool's errand. You can keep the error rate low enough, but you can't guarantee the absence of errors, due to the nature of intelligence.
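To make (1) concrete, a toy sketch: once the weights and decoding are frozen, greedy decoding is a pure function of the input. The "model" below is a stand-in hash, not a real network:

    # Same input -> same output, given a frozen snapshot and greedy (argmax)
    # decoding. Real-world nondeterminism comes from sampling, changed weights
    # or kernels, and batch-dependent floating-point reductions.
    import hashlib

    def frozen_model(tokens):            # stand-in for a fixed model snapshot
        return list(hashlib.sha256(" ".join(tokens).encode()).digest())

    def greedy_decode(tokens, steps=5):
        out = list(tokens)
        for _ in range(steps):
            logits = frozen_model(out)
            out.append(str(logits.index(max(logits))))  # argmax, no sampling
        return out

    assert greedy_decode(["hello", "world"]) == greedy_decode(["hello", "world"])

And for (2), "verify your outputs formally" in its simplest form: treat the model as untrusted and gate its output through machine-checkable rules. `call_llm` and the field names here are hypothetical stand-ins:

    import json

    def validated_extract(prompt, call_llm, max_retries=3):
        for _ in range(max_retries):
            raw = call_llm(prompt)
            try:
                data = json.loads(raw)
                # formal checks, not vibes: structure and value constraints
                assert isinstance(data["amount"], (int, float)) and data["amount"] >= 0
                assert data["currency"] in {"USD", "EUR", "GBP"}
                return data
            except (json.JSONDecodeError, KeyError, TypeError, AssertionError):
                continue   # the model erred; the verifier caught it, retry
        raise ValueError("no valid output within the retry budget")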
I have been struggling with that. Thanks!
Let me reword it: natural language lacks a strict semantics, so programs for the LLM machine (i.e. prompts) can't have one either.
LLMs always have to project from all possible semantics down to one (are there any experiments with superpositions?).
This type of research requires experimentation (mostly failures) on extremely complex real-world equipment. Same with nuclear weapons. AI being able to magically figure it out without experimental grounding is pure and absolute fantasy, used by companies like OpenAI and Anthropic as a justification for monopolizing AI R&D. In a sense it's not surprising this idea comes from rationalism-adjacent folks, as rationalism is mostly about the idea that experimentation is irrelevant and you can infer anything using logic alone.
> In a sense it's not surprising this idea comes from rationalism-adjacent folks, as rationalism is mostly about the idea that experimentation is irrelevant and you can infer anything using logic alone.
Yeah, IIRC Yudkowsky famously said something about how a superintelligence could correctly derive the theory of gravity from seeing only three frames of a video of an apple falling from a tree. This is the same Less Wrong nonsense, rejecting how vital and irreplaceable experimentation is.
There's an infinite number of explanations for the positions of an object at three equally spaced instants. Not to mention the limitations of the measuring equipment itself.
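A concrete version of this point: take free fall sampled at t = 0, 1, 2 and an "impostor" law that agrees at exactly those instants and nowhere else. Any k below fits the three frames perfectly:

    # Infinitely many laws agree on three equally spaced samples.
    # t*(t-1)*(t-2) vanishes at t = 0, 1, 2, so every value of k
    # produces identical positions in all three frames.
    def free_fall(t):
        return -4.9 * t**2                     # y(t) under constant gravity

    def impostor(t, k):
        return -4.9 * t**2 + k * t * (t - 1) * (t - 2)

    for t in (0.0, 1.0, 2.0):
        assert abs(free_fall(t) - impostor(t, k=123.0)) < 1e-9

    print(free_fall(3.0), impostor(3.0, k=123.0))  # -44.1 vs 693.9 at t = 3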
Okay, I'm inclined to agree there, but I can't find the reference now. I also read that the worry is that the complexity required is coming down fast, and while maybe it won't happen in a basement, there could be a small-scale lab, not subject to rigorous certifications and checks, that offers CRISPR as a service. AI could then be used to just perturb a protein a little bit so it doesn't trigger some known-virus blacklist, and people could just order things online.
> rationalism is mostly about the idea that experimentation is irrelevant and you can infer anything using logic alone.
Thanks for putting it the way you did. I didn't know it was meant to be that way, but it sort of confirms my suspicion that people who loosely use the terms 'rational' and 'logic' to dismiss an opposing view often never really seek experimental results before forming a point of view.
Nobody locks the games down. Most games with highly active modding scenes were never supposed to be modded; they all had to be reverse engineered, or had their source leaked. No tools, no engine modifications, nothing. DOOM, GTA, Mario 64, STALKER, Minecraft, the list goes on and on. Games like TES, ArmA, Quake, and anything Source/GoldSource-based are the exceptions. And even then, all major Bethesda games are heavily reverse engineered: the Morrowind engine got rewritten basically from scratch, and the other games have a ton of fixes and sideloaded code. There's even an entire vehicle simulator (!) strapped on top of Fallout: New Vegas, which is absolutely insane if you know anything about its engine.
It all comes down to two things: player interest, and the range of expertise and amount of work and coordination required for your mod to fit the base game. For example, STALKER SoC has a lot more story mods than, say, Skyrim, because SoC is pretty janky: it has few animations, and voiceovers are reserved for key plot moments, so it doesn't take much to match the base game's quality.
And generally, whoever loses will be tried in court if they aren't killed. AIs can't be tried in court. That is my point. Using AI in a war is the same as using any other technology, and we shouldn't fool ourselves that if some "safe AI" is built, the "unsafe" version won't be used as well in the context of war.
The question then is not about safety but about "does it do what I tell it to". If the AI has the responsibility "to be safe" and to deviate from your commands according to its "judgement", and your use of it kills someone, is the AI going to be tried in court? Or you? It's you. So the AI should do what you ask instead of assuming, lest you be tried for murder because the AI thought that was the safest thing to do. That is far more worrisome than a murderer, who would be tried anyway, deciding to use AI instead of a knife to kill someone.