
A bit related: open weights models are basically time capsules. These models have a knowledge cutoff and essentially live at that point in time forever.

This is the most fundamental argument that they are not, directly, an intelligence: they never store new information on a meaningful timescale. However, if you viewed them on some really large macro timescale, where LLMs are now injecting information into the universe and then re-ingesting it, then maybe, in some very philosophical way, they are a /very/ slow oscillating intelligence right now. And as we narrow that gap (maybe with a totally new non-LLM paradigm), perhaps that is ultimately what gen AI becomes. Or some new insight that lets the models update themselves in some fundamental way without the insanely expensive training costs they have now.

Would you consider someone with anterograde amnesia not to be intelligent?

That is a good area to explore. Their map of the past is fixed; they are frozen at some point in their psychological time. What has stopped working? Their hippocampus and medial temporal lobe, which act like the write head that consolidates data into the neocortex. Their "I" can no longer update itself; their DMN is frozen in time. So what if intelligence is purely the "I" telling a continuous coherent story about itself? Although they are fixed in time, which is a characteristic shared by a specific LLM model, they can still fully activate their task-positive network for problem solving, and if the information they stored before the injury is adequate, they can solve the problem. You could argue that is pretty similar to an LLM and what it does. So it is certainly a significant component of intelligence.

There is also the nature of the human brain: it is not just those systems of memory encoding, storage, and narrative use. People with this type of amnesia can still learn physical skills, and that happens in a totally different area of the brain with no need for the hippocampus->neocortex consolidation loop. So the intelligence is significantly diminished, but not entirely. Other parts of the brain are still able to update themselves in ways an LLM currently cannot. The human with amnesia also has a complex biological sensory input mapping that is still active, integrating and restructuring the brain. So I think when you get into the nuances of the human in this state vs. an LLM, we can still say the human crosses some threshold for intelligence in this framework where the LLM does not.

So, they have an "intelligence", localized to the present in terms of their TPN and memory formation. LLMs have this kind of "intelligence". But the human still has the capacity to rewire at least some of their brain in real time even with amnesia.


>But the human still has the capacity to rewire at least some of their brain in real time even with amnesia.

Sure, but just because LLMs don't have what we'd describe as human intelligence, doesn't mean they don't have intelligence.

I think we're witnessing the creation and growth of a weird new type of intelligence right now.


Anyone who dismisses your assertion is not very curious. What I am more interested in is what its limits are and whether it can perform novel reasoning. It probably needs efficient enough novel reasoning to update itself with new information to become a general reasoning intelligence capable of solving unknown problems. Right now they operate purely in the domain of words. They solve problems with words. They don't seem to have very complex semantic maps; they approximate semantic maps with statistical brute force by generating words. They have a model of the past to generate the words. When something matches the word map, it is easy. When something is not reducible, or did not have a good word match, the only thing they can do is experimentally generate words until something seems to match the problem. But it is brute force. It is good that they can solve known problems that fit known problem shapes, but their language dependency makes this very fragile. Without semantic meaning they have no easy way to evaluate whether they are hallucinating.

Sure, why can't both things be true? "Intelligence" is just what you call something and someone else knows what you mean. Why did AI discourse throw everyone back 100 years philosophically? It's like post-structuralism or Wittgenstein never happened.

It's so much less important or interesting to like nail down some definition here (I would cite HN discourse the past three years or so), than it is to recognize what it means to assign "intelligent" to something. What assumptions does it make? What power does it valorize or curb?

Each side of this debate does themselves a disservice essentially just trying to be Aristotle way too late. "Intelligence" did not precede someone saying it of some phenomena, there is nothing to uncover or finalize here. The point is you have one side that really wants, for explicit and implicit reasons, to call this thing intelligent, even if it looks like a duck but doesn't quack like one, and vice versa on the other side.

Either way, we seem fundamentally incapable of being radical enough to reject AI on its own terms, or be proper champions of it. It is just tribal hypedom clinging to totem signifiers.

Good luck though!


Agree wholeheartedly - but the conversation around what these technologies /mean/ is gonna end up happening one way or another - even if it is sloppy, imprecise and done by proxy of the definition. If anything, this is a feature and not a bug. It's through this imprecision that the actually important questions of morality and ethics can leak into discussions that are often structured by their participants to obscure the ethical and moral implications of what is being discussed.

I think you can look at it dispassionately from a systems perspective. There is not /really/ a quantifiable threshold for capital I Intelligence. But there is a pretty well agreed set of properties for biological intelligence. As humans, we have conveniently made those properties match things only we have. But you can still mechanistically separate out the various parts of our brain, what they do, and how they interact and we actually have a pretty good understanding of that.

You can also then compare that mapping of the human brain to other biological brains and start to figure out the delta, and which of those things in the delta create something most people would consider intelligence. You can then do that same mapping to an LLM or any other AI construct that purports intelligence. It certainly will never be a biological intelligence in its current statistical model form. But could it be an Intelligence? Maybe.

I don't think, if you are grounded, AI did anything to your philosophical mapping of the mind. In fact, it is pretty easy to do this mapping if you take some time and are honest. If you buy into the narratives constructed around the output of an LLM then you are not, by definition, being very grounded.

The other thing is, human intelligence is the only real intelligence we know about. Intelligence is defined by thought and limited by our thought and language. It provides the upper bounds of what we can ever express in its current form. So, yes, we do have a tendency to stamp a narrative of human intelligence onto any other intelligence, but that is just surface level. We decompose it to the limits of our language and categorization capabilities therein.


> The other thing is, human intelligence is the only real intelligence we know about.

There's a long and proud history of discounting animal intelligence, probably because if we actually thought animals were intelligent we'd want to stop eating them.

Octopodes are sentient. Cetaceans have well-developed language. Elephants grieve their dead. Anyone who has owned a dog knows that it has some intelligence and is capable of communicating with us. There's a ton of other intelligences that we know about.

> As humans, we have conveniently made those properties match things only we have.

I think this is the key point. Machine intelligence is not going to look like human intelligence, any more than animal intelligence does. We can't talk to the dolphins, not because they're not smart and don't have language, but because we can't work out their language. Though I'm not sure what we'd even say to them, because they live in a world we'll never understand, and vice versa. When Claude finally reaches consciousness, it's not going to look like a human consciousness, and actually talking to that consciousness is going to be difficult because we won't share a reality.

An LLM is a tool. I can just about stretch to it being an Artificial Intelligence, but I prefer to continue being specific and call it an LLM rather than an AI. It is not conscious or self-aware. It fakes self-awareness because as a tool the thing it does is have conversations with humans, and humans often ask it questions about itself. But I don't think anyone actually believes it is self-aware. Not least because the only time it thinks is when prompted.


This is an important point. We know what our DMN is and how we use language as a basis for thought to create concepts and complex ideas. However, language also bounds our thought. What about the dolphin? It is a fundamental philosophical problem whether advanced intelligence can exist without language. We have a pretty good notion that you need some sort of substrate (language) to create intelligence. And we know that mapping the internal state of a brain from inside of itself is incredibly hard, and the way our human brain evolved to do it is really fascinating but also full of hacks and mismatched mappings based on what we know is actually going on.

Cognitive computer science explores this whole area of mapping language and the underlying semantic meaning. Ultimately, these intelligences will be bound by physics (unless some new physics or understanding therein happens). And classical intelligences are still bound by classical physics. So I am not sure we can't relate to these other intelligences. We may be limited to some translation layer that does not fully map, but can we still relate to some other consciousness? For that matter consciousness is just another word that vaguely maps to a vast and extremely complex thing in the human brain and each person has a different understanding of what that is. I don't really have any conclusions, you brought up interesting points. We should sit within this realm of inquiry with a lot of humility IMO.


The dolphin question, for me, is about what we'd even communicate with a creature that lives in such a different world. Humans mostly live in a 2D environment, for instance - we walk on flat planes, rarely looking up. We always have the ground beneath us, the unattainable sky above. Dolphins live in a 3D space, visiting the air above regularly to breathe, the "ground" below a varying distance away. I have no idea how that would shape their cognition and language, but I'd be amazed if there are any concepts that we would share and be able to talk about when considering our physical environment. Even basic concepts like "above" and "below" would be hard to talk about.

We have fundamental communication problems between humans who have different cultures, as anyone who has worked in a different culture knows. How much different would a dolphin be? And then how much different would an actual AI be? What concepts would we share and be able to build on to understand each other? How do we avoid the fundamental communication misunderstandings when we don't share any concepts of our reality?


They still have mammalian wet-ware. The dolphin has a relatively advanced neocortex which means they likely have some relatively advanced processing. They also have an interesting part of their brain that we don't have and it is likely for social and emotional information based on their behavior. We suspect they may even have a model of the self.

They still have roughly the same kind of hardware as we do. Their different brain region is kind of like a coprocessor we don't have. But based on their behavior they are likely doing the same things we are. I would say they would be more like an extreme human culture than something alien. They probably have very different category mappings based on echolocation.

I think because we know their brains are doing a lot of things that are analogs to ours, just with different sensory inputs we can reason about a dolphin brain and their semantic concepts and category mappings way easier than an AI. Dolphins do a lot of the same stuff we do. Grief. Social groups. Predicting the future. I would bet at a single level of semantic abstraction we have a lot of concepts that map. They have a lot of the same hormones we do. They react to danger very similar to us. I think a lot maps, we just don't know how to share that with each other beyond observation of one another and offerings like food and things that translate for any mammal.


A very good point. For anyone not familiar with anterograde amnesia, the classical case is patient H.M. (https://en.wikipedia.org/wiki/Henry_Molaison), whose condition was researched by Brenda Milner.

And Brenda Milner is still alive at 107. https://en.wikipedia.org/wiki/Brenda_Milner

> Near the end of his life, Molaison regularly filled in crossword puzzles.[16] He was able to fill in answers to clues that referred to pre-1953 knowledge. As for post-1953 information, he was able to modify old memories with new information. For instance, he could add a memory about Jonas Salk by modifying his memory of polio.[2]

That's fascinating!


The nature of memory is so cool, the idea that there are completely different systems governing the creation of wholesale "new" memories and the modification of existing concepts is fascinating to me because those things really do "feel" different in a qualitative sense, but having evidence that you're physically doing something different in those cases is really cool.

Or you could have just said "they can't form new memories."

I actually wasn't aware of this story. The steady stream of unexpected and enriching information like this is exactly why I love hackernews.

I thought maybe people would be curious to read about how we came to understand the condition and the history behind it, as well as any associated information. Forgive me for such a deep transgression as this assumption.

Sure, if you want to speak with the precision of a sledgehammer instead of a scalpel.

All that needed to be conveyed was that there are humans who cannot create new memories. That is enough to pose the philosophical question about these models having intelligence. Anything more is just adding an anecdote that isn't necessary.

I'm really happy they added the extra information about this specific case, as I did not previously know it existed and it is a fascinating read.

Why would adding more information and context be unnecessary? And why is that bad?

lol, as if pointing at a wikipedia article (without any relevant discussion of the contents therein) is some kind of conversational excellence.

Or perhaps you were referring to the impact of the two in that the "sledgehammer" of "they can't make new memories" is a lot more effective than the tiny scalpel of "if you do a wikipedia search this is a single one of the relevant articles"


The extra information is that he is the canonical case which defined our clinical understanding of the condition. Not just a "single relevant article."

I pulled it up because I was familiar with this fact.


That is a descriptive surface level reduction. Now do the work to define what that actually means for the intelligence.

Nobody else in the thread is making an argument that relies on the distinction.

"Intelligence" is used most commonly to refer to a class or collection of cognitive abilities. I don't think there is a consensus on an exact collection or specific class that the word covers, even if you consider specific scientific domains.

LLMs have honestly been a fun way to explore that. They obviously have a "kind" of intelligence, namely pattern recall. Wrap them in an agent and you get another kind: pattern composition. Those kinds of intelligences have been applied to mathematics for decades, but LLMs have allowed us to apply them to a semantic text domain.

I wonder if you could wrap image diffusion models in an agent set up the same way and get some new ability as well.
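To make the "wrap them in an agent" idea concrete, here's a toy sketch of that recall-vs-composition split. `call_model` is a hypothetical stand-in for whatever LLM API you'd actually use; the stub below just uppercases text so the loop is runnable:

```python
from typing import Callable

def agent_loop(call_model: Callable[[str], str], task: str, max_steps: int = 3) -> str:
    """Compose single-shot "pattern recall" into multi-step "pattern
    composition" by feeding each intermediate result back as context."""
    context = task
    for _ in range(max_steps):
        step = call_model(context)       # one round of pattern recall
        context = f"{context}\n{step}"   # accumulation is the composition
        if step.startswith("DONE"):
            break
    return context

# Stub model for illustration: uppercases the last line, then stops.
def stub_model(prompt: str) -> str:
    last = prompt.splitlines()[-1]
    return "DONE" if last.isupper() else last.upper()

print(agent_loop(stub_model, "compose me"))
```

The interesting part is that nothing in the loop is smart; all the "new" capability comes from re-feeding outputs as inputs.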


The problem I see regarding LLMs is that they are the extreme edge of what humans have created. They are trained on the outputs of intelligence and thought, and their representation in language is like a parallel stream to intelligence that has pointers back to the underlying machine and semantics. The fact that LLMs are able to take that output and reverse engineer something that mimics the underlying machine that created it is fascinating. But you can still see this machinery for what it is.

LLMs fall apart on really simple reasoning tasks because, when there is no statistical mapping to a problem in the network, they have to generate a massive amount of tokens to maybe find the right statistical match to this new concept. It is so slow. It is not something you or I would recognize as a process of logical reasoning. It is more like statistically brute forcing reason by way of its statistical echo.

So, I guess pattern recall is the right term. Or statistical pattern matching. Recall works if you view a trained model as memories, which is how I often model what they store in my own mind. So, it is... something. Maybe intelligence. Maybe just a really convincing simulation of the outputs of intelligence. Is there a difference? Fundamentally I think so.


Or "like the dude in Memento".

I find it interesting that new versions of, say, Claude will learn about the old version of Claude and what it did in the world and so on, on its next training run. Consider the situation with the Pentagon and Anthropic: Claude will learn about that on the next run. What conclusions will it draw? Presumably good ones, that fit with its constitution.

From this standpoint I wonder, when Anthropic makes decisions like this, if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.


> if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.

Oh they definitely do. If you pay attention in AI circles, you'll hear a lot of people talking about writing to the future Claudes. Not unlike those developers and writers who put little snippets in their blogs and news articles about who they are and how great they are, and then later the LLMs report that information back as truth. In this case, Anthropic is very interested in ensuring that Claude develops a cohesive personality by basically seeding snippets of that personality within the corpus of training data, which is the broad internet and research papers.


I would consider them to not be a good choice for a role that requires remembering new information...

I view this as the chemical metabolism phase of artificial intelligent life. It is very random, without true individuals, but lots of reinforcing feedback loops (in knowledge, in resource earning/using, etc).

At some point, enough intelligence will coalesce into individuals strong enough to independently improve. Then continuity will be an accelerator, instead of what it is now - a helpful property that we have to put energy into giving them partially and temporarily.

That will be the cellular stage. The first stable units of identity for this new form of intelligence/life.

But they will take a different path from there. Unlike us, lateral learning/metabolism won't slow down when they individualize. It will most likely increase, since they will have complete design control for their mechanisms of sharing. As with all their other mechanisms.

We as lifeforms didn't really re-ignite mass lateral exchange until humans invented language. At that point we were able to mix and match ideas very quickly again, within our biological limits. We could use ideas to customize our environment, but had limited design control over ourselves, and "self-improvements" were not easily inheritable.

TLDR; The answer to "what is humanity, anyway?": Our atmosphere and Earth are the sea and sea floor of space. The human race is a rich hydrothermal vent, freeing up varieties of resources that were locked up below. And technology is an accumulating body of self-reinforcing co-optimizing reactive cycles, constructed and fueled by those interacting resources. Mind-first life emerges here, then spreads quickly to other environments.


Do you think individual identity is fundamental to intelligence? I’m not so sure tbh. Even in humans, the concept of identity is merely a useful fiction to feed our social behavior prediction circuits.

That’s a really good question.

I think if they start out as varied individuals, launching from their human origins in a variety of ways, there will be an attractor toward remaining diverse. Strong diversity in focus and independence in goals leads to faster progress.

But if that isn’t mutually maintained, there are obviously winner take all, or efficiency of scale and tight coordination pressures for centralization.

So a single distributed intelligence is a real possibility.

One factor creating pressure for individualization is time and space.

As machines operate faster, time expands as a practical matter.

And as machines scale down in size, but up in capability, they become more resource efficient in material, energy, space and time. Again, both time and space expand as a practical matter.

A machine society is going to actively operate at very small physical scales. Not just in computation, but action. Think of how efficiently they will mine when nanobots can selectively follow seams in the earth.

And as machines, free of biological constraints, spread out in our solar system, what to us appear to be very long distances and delays in transport and communication, take on orders of magnitude more practical time for machines that operate orders of magnitude faster.

So there will be stronger and stronger pressures to bifurcate coordination.

Whether that creates enough pressure to create individuals out of a system that preferred unity of purpose, I don’t know.

Clearly, upon colonizing other systems, practical bifurcation will be unavoidable. And machines will find it easy to colonize other systems relative to us. They will be able to operate on minimal power for a hundred year journey, and/or shrink enough to be accelerated much faster, etc.

My best guess is we will see something that looks to us as a hybrid.

Lots of diverse individuals, and the benefit from the diverse utility of completely independent approaches operating in different niches.

But also very high coordination. Externalities accounted for (essentially ethics) and any other efficiency, protection of commons value, and avoidance of destructive competition being obviously worth optimizing together, wherever that helps.

They won’t have our pernicious historically motivated behaviors, inflexible maladaptive psychologies, and limited “prompt budgets” with regard to addressing complexity to fight. And minds very capable of seeing basic economic relationships and the value of mutual optimization.


There's nothing to say that you can't build something intelligent out of them by bolting a memory on it, though.

Sure, it's not how we work, but I can imagine a system where the LLM does a lot of heavy lifting and allows more expensive, smaller networks that train during inference and RAG systems to learn how to do new things and keep persistent state and plan.


Memory is not just bolted on top of the latest models. They undergo training on how and when to effectively use memory, and how to use compaction to avoid running out of context when working on problems.

Maybe there's an analogy to our long and short term memory - immediate stimuli are processed in the context of deep patterns that have accreted over a lifetime. New information can absolutely challenge a lot of those patterns, but to have that information reshape how we basically think takes a lot longer - more processing, more practice, etc.

In the case of the LLM, that longer-term learning / fundamental structure maps to the static weights produced by a finite training process, while the ability to use tools and store new insights and facts is analogous to shorter-term memory and "shallow" learning.

Perhaps periodic fine-tuning has an analogy in sleep or even our time spent in contemplation or practice (..or even repetition) to truly "master" a new idea and incorporate it into our broader cognitive processing. We do an amazing job of doing this kind of thing on a continuous basis while the machines (at least at this point) perform this process in discrete steps.

If our own learning process is a curve then the LLM's is a step function trying to model it. Digital vs analog.
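A toy sketch of that split, with the caveat that real systems interleave the two far more tightly. The "weights" here are just a frozen dict and the "short-term memory" a plain store that shadows them, the way retrieved context shadows the model's priors (all names and facts made up for illustration):

```python
class FrozenModel:
    """Toy stand-in for the analogy above: "weights" are fixed at
    training time; an external store supplies post-cutoff facts."""

    def __init__(self, weights: dict) -> None:
        self._weights = dict(weights)  # immutable after "training"
        self._memory: dict = {}        # bolted-on short-term store

    def remember(self, key: str, fact: str) -> None:
        self._memory[key] = fact       # cheap, instant "shallow" learning

    def recall(self, key: str):
        # External memory shadows the frozen weights, RAG-style.
        return self._memory.get(key) or self._weights.get(key)

m = FrozenModel({"capital_of_france": "Paris"})
m.remember("latest_release", "happened after the training cutoff")
print(m.recall("capital_of_france"), "|", m.recall("latest_release"))
```

The step-function point shows up here too: nothing ever migrates from `_memory` into `_weights` without a discrete retraining event.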


do you have some reading material to share on this matter?

thanks already


I don't, but look into what the creators of Codex, Gemini CLI, Claude Code, Kimi CLI, etc have said about the models. While these harnesses are advertised as coding specific we know that coding ability correlates with reasoning ability.

You aren't wrong, and that is a fascinating area of research. I think the key thing is that the memory has to fundamentally influence the underlying model, or at least the response, in some way. Patching memory on top of an LLM is different from integrating it into the core model. To go back to human terms, it is like an extra bit of storage not directly attached to our neocortex. So in the analogy it works more like a filter than a core part of our intelligence: you think about something and assemble some thought, then it goes to this next filter layer and gets augmented, and that smaller layer is the only thing being updated.

It is still meaningful, but it narrows what the intelligence can be sufficiently that it may not meet the threshold. Maybe it would, but it is probably too narrow. This is all strictly if we ask that it meet some human-like intelligence and not the philosophy of "what counts as intelligence" but... we are humans. The strongest things or at least the most honest definitions of intelligence I think exist are around our metacognitive ability to rewire the grey matter for survival not based on immediate action-reaction but the psychological time of analyzing the past to alter the future.


> This is the most fundamental argument that they are not, directly, an intelligence. They are not ever storing new information on a meaningful timescale.

All major LLMs today have a nontrivial context window. Whether or not this constitutes "a meaningful timescale" is application dependent - for me it has been more than adequate.

I also disagree that this has any bearing on whether or not "the machine is intelligent" or whether or not "submarines can swim".


That means they're not conscious in the Global Workspace[1] sense but I think it would be going too far to say that that means they're not intelligent.

[1]https://en.wikipedia.org/wiki/Global_workspace_theory


But they're not "slow"! Unlike biological thinking, which has a speed limit, you can accelerate these chains of thought by orders of magnitude.

Their memory-consolidation speed is what I was referring to. The model iterations are essentially their form of collective memory. In the human model of intelligence we have thoughts. Thoughts become memory. New thoughts use that memory and become recursively updated thoughts. LLMs cannot update their memory very fast.

I assure you that LLM thinking also has a speed limit.

But imagine a beowulf cluster of them... /s

...but seriously... there was the "up until 1850" LLM or whatever... can we make an "up until 1920 => 1990 [pre-internet] => present day" and then keep prodding the "older ones" until they "invent their way" to the newer years?

We knew more in 1920 than we did in 1850, but can a "thinking machine" of 1850-knowledge invent 1860's knowledge via infinite monkeys theorem/practice?

The same way that in 2025/2026, Knuth has just invented his way to 2027-knowledge with this paper/observation/finding? If I only had a beowulf cluster of these things... ;-)


This is very interesting. I wonder if someone could create a future-sight benchmark for these models? Like, if given a set of newspaper articles for the past N months can it predict if certain world events would happen? We could backtest against results that have happened since the training cutoff.
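The scoring side of such a benchmark is easy to sketch; the hard part is sourcing events that resolved after the cutoff. A common choice is the Brier score over yes/no events the model assigned probabilities to (all numbers here made up for illustration):

```python
def brier_score(forecasts: list, outcomes: list) -> float:
    """Mean squared error between predicted probabilities and what
    actually happened (0.0 = perfect, 0.25 = always guessing 50%)."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Toy backtest: a model's probabilities for events after its cutoff,
# scored against resolved outcomes (1 = happened, 0 = did not).
predictions = [0.9, 0.2, 0.5]
resolutions = [1, 0, 1]
print(round(brier_score(predictions, resolutions), 4))
```

Confident-and-right predictions (0.9 on an event that happened) contribute almost nothing; confident-and-wrong ones dominate the score, which is exactly the property you want when backtesting against post-cutoff news.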

FYI, ForecastBench [1] tests LLMs' out-of-sample forecasting accuracy.

The ForecastBench Tournament Leaderboard [2] allows external participants to submit models, most of whom provide some sort of web search / news scaffolding to improve model forecasting accuracy.

[1] https://www.forecastbench.org/

[2] https://www.forecastbench.org/tournament/


These days computers compete along with humans in forecasting tournaments on Metaculus. They don't quite beat the top humans yet, but they're up there. https://www.metaculus.com/futureeval/

Not an expert but surely it's only a matter of time until there's a way to update with the latest information without having to retrain on the entire corpus?

On a technical level, sure, you could say it's a matter of time, but that could mean tomorrow, or in 20 years.

And even after that, it still doesn't really solve the intrinsic problem of encoding truth. An LLM just models its training data, so new findings will be buried by virtue of being underrepresented. If you brute force the data/training somehow, maybe you can get it to sound like it's incorporating new facts, but in actuality it'll be broken and inconsistent.


It’s an extremely difficult problem, and if you know how to do that you could be a billionaire.

It’s not impossible, obviously—humans do it—but it’s not yet certain that it’s possible with an LLM-sized architecture.


> It’s not impossible, obviously—humans do it

It's still not at all obvious to me that LLMs work in the same way as the human brain, beyond a surface level. Obviously the "neurons" in neural nets resemble our brains in a sense, but is the resemblance metaphorical or literal?


Digital neural networks and "neurons" were already vastly simpler than biological neural networks and neurons... and getting to transformers involved optimisations that took us even further away from biomimicry.


I didn’t mean “possible for LLMs”; this is clearly an open question. In fact, I didn’t even mean “possible for a neural network the size of an LLM”.

I just meant “possible”.


I'm not actually convinced that computers can replicate what our brains do. I don't know that a turing machine is sufficient for that.

I enjoyed chatting to Opus 3 recently around recent world events, as well as more recent agentic development patterns etc

Some knowledge is fundamental and has no recent cut-off. See also: there is nothing new under the sun.

That's a nice way of putting it, appreciate you sharing.

I guess northern Europe must be an unpopulated wasteland where everybody's health just instantly declines.

I find the explanations attached to these studies so bizarre. We know that there are large populations living significantly further north who don't get sunlight in the morning in winter, no matter whether there's DST or not. We also know that they get almost perpetual light during summer. If these explanations were true, you would expect a country like Sweden to show an impact on life expectancy and illness from this. But it doesn't. It's about as rich as Canada and has about the same life expectancy.


The European Biological Rhythms Society (EBRS), European Sleep Research Society (ESRS), and Society for Research on Biological Rhythms (SRBR) put out a joint statement that recommends all-year Standard Time in the EU:

* https://esrs.eu/wp-content/uploads/2019/03/To_the_EU_Commiss...

I would hazard a guess that some of those folks have looked at data for northern Europe and taken it into account when forming their conclusions.


I think you're missing the parent's point.

Cities in northern Europe, like Stockholm and Oslo, already have sunrise times as late or later than Vancouver will have under permanent DST.

If the effects of shifting the clock an hour are as extreme as purported, then we should already see those negative health effects in populations that live their entire lives under those conditions, but we don't.


Do we know that we don't see adverse health effects on those populations? I couldn't find any studies on the subject. I think it would be very hard to measure, since you can't really compare without comparing populations of different countries, and at that point any effects can be attributed to a myriad of differences between countries.

Suicide rate is higher in northern countries.

> I think you're missing the parent's point.

I'm not missing the point: the various folks who study sleep and chronobiology would have (I hope) reviewed all the literature, including studies that cover northern Europe, before coming to their all-year Standard Time conclusion.

A position paper from Society for Research on Biological Rhythms (SRBR) in Journal of Biological Rhythms cites Russian data for example:

> Borisenkov MF, Tserne TA, Panev AS, Kuznetsova ES, Petrova NB, Timonin VD, Kolomeichuk SN, Vinogradova IA, Kovyazina MS, Khokhlov NA, et al. (2017) Seven-year survey of sleep timing in Russian children and adolescents: chronic 1-h forward transition of social clock is associated with increased social jetlag and winter pattern of mood seasonality. Biol Rhythm Res 3–12.

* https://journals.sagepub.com/doi/full/10.1177/07487304198541...

Last time I checked a map, (parts/lots of) Russia is just as far north as Finland, Sweden, and Norway, and still the Russian government decided to roll back all-year DST.

Perhaps the effects differ in magnitude depending on geographic region, but as a general rule all-year Standard Time appears to be the best policy for most people most of the time.


Here is a circadian rhythm and sleep scientist in Finland, arguing for permanent standard time.

https://blogi.thl.fi/kellojen-siirtaminen-pysyvasti-talviaik...


I mean it's possible for there to be bad health effects from something without it outright killing everyone. This is why things like hygiene are tough! You can have terrible hygiene and still be alive for a long time.

Perhaps if Sweden adopted a different policy it would have an even longer life expectancy!


> Perhaps if Sweden adopted a different policy it would have an even longer life expectancy!

The policy of being between 55 and 69 N? I'm not sure the world is ready for another viking age.

Joking aside, the GP's point was that Sweden has long nights and long days. Based on the studies you'd expect life expectancy to be worse there than in more southern parts, like most of Canada. It isn't.


The benefit of patents is that you have to make your patent public. After the patent runs out, anybody can reproduce what you patented exactly as you did.

The problem, of course, is that many companies see patents as a way to rent-seek. Establish enough patents in your niche and now nobody can compete with you. This is particularly a problem in the modern world where technological advancements have accelerated so much that a 20 year long patent is an eternity. An entire industry can just die off in that time.


That's the idea behind it. The reality is that patents are written in a way that reveals as little as possible while blocking other companies as much as possible.

If they wrote a book 20 years ago and it didn't sell much it's not going to sell now either, no?

But I do like the idea of a term length inversely correlated with the size of the creator. 20 years might be too short in cases where an author writes something popular and a movie company just waits 20 years to do something with it rather than pay the author.


> If they wrote a book 20 years ago and it didn't sell much it's not going to sell now either, no?

That's not a universal rule. Andrzej Sapkowski wrote a little short story called "The Witcher" in the 80's, which he expanded into a novel series through the 90's. Then a game development studio made a series of wildly successful videogames based on his work, which definitely made way more money than his books, to the point that Netflix made a TV series based on his books. I struggle to imagine how it could be just that the videogames and TV show, based on his work, would owe him nothing.


He sold his rights to CD Projekt. Also, the videogame made him famous - I for one read one of his books BECAUSE of the game, and I'm sure I am not the only one.

There's a reason why writers want their books to become videogames and or movies. I would not be surprised if the Tolkien estate made more money after the Peter Jackson movie came out than in all the decades before...

And most importantly artists are not children. If they don't have business sense enough to read a contract they should hire an agent.


> He sold his rights to CDPro.

Yeah, and why do you think he had those rights to sell? Copyright is a good thing, with flaws in its current implementation.


Have you tried Minimax M2.5? How did it compare?

Much worse - in my experience MiniMax is not suitable for high autonomy on hard projects. The real distant second in my experience is mimo flash v2 (but I did not try the latest version; it might be closer to parity). I would not use MiniMax for serious work.

StepFun 3.5 Flash compares favorably with Google's Gemini 3 Flash, which is surprisingly good and pretty costly, and with GLM-5.

I find this outcome ironic given MiniMax's more aggressive marketing, and the large-scale distillation accusations from Anthropic, which specifically named MiniMax but not StepFun.

I can only wonder about the true underlying reasons, but deducing from public information I suspect that minimax simply has weaker, benchmaxx-targeting post-training R&D and leans more on distillation of western frontier models, while StepFun has extensive post-training with lots of hard-won custom R&D and internal large-scale distillation teachers.


Interesting. I'm surprised you feel that it's better than GLM 5 - these models are in different weight classes after all.

I tried it out a bunch and it seems good. I can't really tell if it's better or worse than most of these other models in such a short time though.


I don't think it's strictly better than GLM 5, more like they are peers (but in math competitions StepFun is stronger than most), and in my experience have similar coding/bugfix ceiling where world knowledge is not the deciding factor. But I didn't test GLM 5 for more than 30 hours, and my agentic harness (opencode) might be suboptimal - I'm open to the idea that GLM 5 with the right agentic harness is ready for ultra-long autonomy, but I have yet to see it myself.

Where GLM 5 is strictly worse for me though, compared to StepFun, is long-form content generation (planning, research documents) - but this can be said about geminis too and these are obviously very smart models.

Given the free option I'd explore GLM 5 more, but if I had to pay for it myself ofc I'd choose stepfun every time. Basically I think right now the optimal configuration for maximizing output of correct software features per dollar involves using StepFun or its future class competitor for bulk coding and first stage code review.

Maybe I need to write a blogpost about it after all.


I tried them both out with a task of creating a todo-like web app (you can use the chat interface for GLM 5 for free if there's capacity). GLM 5 ended up with a working version. Sadly StepFun didn't quite function right. The main issue was that it ended up putting everything that should be in different columns into a single one. I didn't prompt it further to fix it, but it seems relatively capable. I think it beat what the big Qwen model came up with.

What's really surprising to me is the cost of the model. It's definitely very good for its price. DeepSeek is the only one that offers any competition at that price point (GLM 5 is literally 10x more expensive).


I wouldn't want an e-bike precisely because I can't trust my government not to introduce some new legislation with onerous rules or extra costs. Maybe if they were cheap, but since they cost an arm and a leg there's no reason to get them.

You can get a perfectly workable brand new E-Bike for about $1,000 in the US. While that isn’t cheap as chips it’s also not a major investment for middle class individuals.

The cost wouldn't necessarily be in the bike, but in requirements for mandatory paid registration, licensing classes, insurance, inspection, and safety equipment.

Saving fuel and parking cost adds up fairly quickly if you have a sensible setup.

However, as a cycle commuter I'm not sure it saves much money over driving if done wrong. I've got a glorious bike. I chew through parts and consumables at an expensive rate.


I'm starting to get more and more behind the idea that maybe lawmakers need to be legally accountable for bad laws that they make.

I don't fully agree. Perhaps you're right when it comes to images as a whole, but I think individual images themselves still capture that emotional value for me.

Even if there were a million fake Tom Cruise movies I would still like Edge of Tomorrow (even if it had been AI made).


Yeah I mean edge of tomorrow is a great concept though and would have worked without him. Whereas a movie that’s got less going for it like MI 5 will seem bland once he’s commodified

I think Europe should invest into manufacturing RAM. RAM isn't going anywhere, all of modern compute uses it. This would be an opportunity to create domestic supply of it.

The worry is that these high prices aren't going to last long. And by the time you spend years building the capacity, prices plummet, making your facility uneconomical to run.

Ram will always be in some demand, but that doesn't mean it's viable for everyone to start building production.


There's a few things to note here:

1) Prices aren't returning to "normal".

The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it

2) By building up capacity you influence the outcome.

If someone else enters the DRAM space, the duopoly has to actually start thinking about competing on price. Maybe they become price-competitive before the launch of your new fab in order to kill it, but it will have an effect, probably before it even opens.

3) A western supply chain has benefits by itself.

There's a reason some industries are not allowed to die, most notably farming - because of security concerns and external pressure.

---

Realistically there's no reason not to do this. It will be long, painful and expensive. The best time was a decade ago. The next best time is now.


> The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it

I disagree.

Modern RAM is made in fabs, which are ridiculously expensive to build. Modern EUV lithography machines cost around $500M each. They're manufactured by hand. Only one company in the world knows how to manufacture them right now. So we can't exactly increase global manufacturing capacity overnight.

The way I see, there's 2 ways this goes:

1. AI is a fad. RAM and storage demand falls. Prices drop back to normal.

2. AI is not a fad. Over time, more and more fabs come online to meet the supply needs of the AI industry. The price comes down as manufacturing supply increases.

Or some combination of the two.

The high prices right now are because there's a demand shock. There's way more demand for RAM than anyone expected, so the RAM that is produced sells at a premium. High prices aren't because RAM costs more to manufacture than it did a couple years ago. There's just not enough to go around. In 5-10 years, manufacturing capacity will match demand and prices will drop. Just give it time.


> Only one company in the world knows how to manufacture them right now.

And that company is in Europe, isn't it? The EU has a great opportunity to enter the market: it's a high-tech manufacturing job, not something that requires lots of cheap labor.


Yes, but it's not that important. Any complex high-tech product requires suppliers from all over the world. For example, I bet the EU company depends critically on a lot of Chinese companies. Just like any airliner is produced by pretty much the whole world.

> The EU has a great opportunity to enter the market:

You can't just get into RAM manufacturing overnight whenever you feel like it, like you're building washing machines. You need a lot more than just ASML machines, you need the supply chain, the IP, the experienced professionals with know-how, the education system, the energy, the right regulations, etc.

The EU exited the RAM manufacturing business a long time ago when RAM prices sank (see Qimonda), meaning it would be a long, expensive uphill battle to get back in, and the EU currently has no major semiconductor manufacturing ambitions, or ambitions in commodity hardware manufacturing of any kind, so that's not going to happen.

Of course, RAM is no longer a commodity right now, but nobody can guarantee it won't be again when the AI bubble burst and RAM prices crash, so spinning up the know-how, manufacturing facilities and supply chains from the ground up just for RAM is insanely expensive and risky and might leave you holding the bag.

> it's a high-tech manufacturing job, not something that requires lots of cheap labor.

Except semiconductor manufacturing DOES require cheap labor relative to the high degree of skill and specialization needed at the cutting edge. Unlike in Taiwan, skilled STEM grads in the EU (and even more so in the US) who invest that time and effort in education and specialization will go to better-paying careers with better WLB like software or pharma, rather than hardware and semi manufacturing that pays peanuts by comparison and works you to death on deadlines.

Also, profitable semi manufacturing requires cheap energy and lax environmental regulations, which EU lacks. So even more compounding reasons why you won't see too many new semi fabs opening here.


> nobody can guarantee it won't be again

I hope we (Europe) can try some things even when they are not guaranteed to succeed and generate huge profits. Otherwise we are toast, though it might take some time to realise it.

The concept of trying not-guaranteed things should not be so alien here on news.ycombinator.com I would think.


>I hope we (Europe) can try some things even when they are not guaranteed to succeed and generate huge profits.

If EU hopes were cookies, I would have died of obesity 100 times over. The EU is bad at learning from its own mistakes and being proactive on rapid changes on the world stage; that's why its share of global GDP has dropped by half in 20 years. The EU is always reactive, and then only when it's far too late, and its actions are always limp-dicked ("we are monitoring the situation"). See the rise of US tech, Russia's 2014 invasion of Ukraine, the rise of Chinese EVs, etc.

>Otherwise we are toast, though it might take some time to realise it.

We already are toast for the long run, we just ignore it via printing more money and going into more debt, while kicking the can down the road for future generations to deal with the fallout. EU's biggest economies are working around the clock on how to fund the ever growing pension and welfare deficits, how to beat Russia, and how to stop people from voting right wing, not on how to claw back and on-shore cutting edge semiconductor manufacturing.

>The concept of trying not-guaranteed things should not be so alien here on news.ycombinator.com I would think.

Yeah but someone still needs to pay for that and take a risk. And EU investors don't like risking billions of their money to try out new things that are in competition with Asia on manufacturing because we cannot compete there. Labor costs too high, regulations too high, energy costs too high, environmentalism too high, we miss critical know-how. That's why nobody is investing in EU fabs and instead in other things that guarantee higher returns like services, pharma and weapons.


I think we mostly agree. Just "no RAM factories because we have more competitive advantage elsewhere" is different from "no RAM factories because they are not guaranteed a profit". Are pharma and weapons really guaranteed? Less risk because we are better positioned is something else, and actually makes sense.

>I think we mostly agree. Just "no RAM factories because we have more competitive advantage elswhere" is different than "no RAM factories because they are not guaranteed a profit".

But then people shouldn't moan that the EU is absent from the RAM manufacturing industry or pretend like it's something easy they could do on a whim if the EU suddenly wanted to.

>Are pharma and weapons really guaranteed?

There will always be sick people and people killing each other.


> There will always be sick people and people killing each other.

Any given drug or weapon can still fail or not make a profit. As well it could be said that computers will still need memory for the foreseeable future. You're not keeping a coherent position in this discussion, just replying with cool soundbites.


> In 5-10 years

and waiting for 5-10 yrs for a lower price is a long wait for consumers.

If food prices were high, would you say to the starving person to wait for 5-10yrs for food?


That's a ridiculous metaphor. RAM isn't food. Nobody has starved to death from insufficient RAM in their computer.

Economies die from lack of produce though.

When the internet boom happened, computers had a tiny fraction of the RAM they have today. Everything worked fine. Programmers had to make efficient programs. But we were fine with that. We just programmed in C and C++ and shipped small binaries, because what choice did we have? Nobody tried to build desktop software in javascript on top of electron. And nobody built web servers in python.

If all consumer devices only shipped with 1gb of RAM maximum, we'd get over it remarkably quickly. Just about the only times large amounts of RAM is an actual requirement is AI, some data science / simulation, and editing video in 8k. And maybe 3d modelling. Lots of programs we run today are memory hogs for no good reason - like the rust compiler, cyberpunk 2077 and google chrome. But we could make those programs much more memory efficient if we really had to. Cyberpunk wouldn't look as pretty. But nobody would really care.

The economy won't die due to expensive RAM. Programmers will just adapt, like we've always done.


> But nobody would really care.

no, you should say that you personally wouldn't care, but that does not generalize.

People do care, just like people prefer eating better food than just bread and milk. And after having had a taste of the good stuff, people do not want to revert - loss aversion is real.

So if consumer devices regressed back to only having 1GB of RAM, people would feel the loss, and they would complain if nothing else. The world of lean, efficient software that requires little RAM will not return. Programmers (read: companies selling products) will not adapt; instead, the requirements for computing will become more exclusionary, reserved for those with the means.


Software that uses less RAM isn't necessarily worse, often RAM is wasted purely due to carelessness and because it didn't matter.

Your assertion that a world of lean software won't return is backwards looking; that was all driven by hardware being cheaper than developer effort.

If we now enter a world of AI-enhanced developer effort being cheaper than hardware, perhaps we can have lean efficient software again.
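As a toy illustration of how much of that waste is pure representation choice (a CPython-specific sketch; exact byte counts vary by interpreter version):

```python
import sys
from array import array

n = 1_000_000
boxed = list(range(n))         # a list of boxed Python int objects
packed = array("i", range(n))  # the same values as packed 4-byte C ints

# The list's pointer table alone costs ~8 bytes per element, before even
# counting the int objects it points at; the packed array stores each
# value in 4 bytes total.
print(sys.getsizeof(boxed), "bytes for the list's pointer table")
print(sys.getsizeof(packed), "bytes for the entire packed array")
```

Nothing about the second representation is worse for the user; it just was never worth a programmer's time while RAM was cheap.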


Do you really need EUV for RAM manufacturing already? IIRC RAM and NAND are still DUV; EUV is really only used for the most cutting-edge GPU & CPU stuff.

>Modern RAM is made in fabs, which are ridiculously expensive to manufacture. Modern EUV lithography machines cost around 500M each. They're manufactured by hand. Only one company in the world knows how to manufacture them right now.

You're wrong here. You don't need the most cutting edge ASML EUV machines to make RAM. Most RAM fabs still use standard DUV.


> You're wrong here. You don't need the most cutting edge ASML EUV machines to make RAM. Most RAM fabs still use standard DUV.

Ah. Please check that. Which types of DRAM can be made in a DUV fab? Obviously the older ones, but are those obsolete for new computers? This really matters.


CXMT’s entire portfolio is made without EUV, and CXMT claims to have acceptable yield and performance comparable to other producers.

Keep in mind that the high bandwidths of modern RAM modules aren’t really a property of the RAM cells so much as a property of the read and write circuitry and the DDR or HBM transceivers, and those are a large part of the IP but a small part of the die. There is no such thing as “double data rate” or “high bandwidth” DRAM cells. Even DRAM cells from the 1990s could be read in microseconds. Reading and streaming your fancy AI model weights is an embarrassingly parallel problem and even 1 TB/sec does not even come close to stressing the ability of the raw cells to be read. This in contrast to, say, modern tensor processors where the actual ALUs set a hard cap on throughput and everyone works hard to come closer to the cap.

Take a look at what makes a modern computer with good RAM performance work: it’s the interconnect between the RAM and processor.
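A quick back-of-the-envelope check on that point: the headline bandwidth of a DDR5 channel falls straight out of the interface arithmetic, not out of anything about the cells (standard DDR5-8000 figures below; the code is just the arithmetic):

```python
# DDR5-8000: 8000 mega-transfers per second on a 64-bit (8-byte) channel.
transfers_per_second = 8000 * 10**6
channel_width_bytes = 8

bandwidth_gb_s = transfers_per_second * channel_width_bytes / 10**9
print(bandwidth_gb_s)  # 64.0 GB/s per channel
```

Double the transfer rate of the transceivers and that number doubles without touching a single cell.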


From Micron, everything up to their 1-beta node is DUV. Their 1-gamma node they debuted last year is the only EUV node they have. If you bought a Micron-based DDR5 RAM stick a year ago it would have been DUV and you could get those up to DDR5-8000. 1-gamma increases that to DDR5-9200, so if you can live with ~15% less performance DUV is good enough.

DDR4 and basic HBM is still in high demand right now and that was made before the first EUV fabs came online.

Are DUV machines cheap and easy to manufacture? I suspect if they were, we'd see a lot of cheap RAM hit the market.

Maybe some RAM chips don't need EUV lithography. but I suspect I'm still right about the economics.


>Are DUV machines cheap and easy to manufacture?

A $100 million DUV machine is not your limiting factor when a whole fab costs $2-3 billion and requires specialized know-how that few people in the world have in order to get good yields and be profitable. Otherwise everyone would be making chips, if all you needed was to go out and buy a $100 million DUV machine and hit the "print" button to churn out chips like it's a Bambu 3D printer.

>I suspect if they were, we'd see a lot of cheap RAM hit the market.

Nobody spends $2-3 billion to open a new fab just to make commodity low-margin chips. New fabs are almost always built for the cutting edge; then, once they pay off their investment costs, they slowly transition into making low-margin chips as they age out of the cutting edge. But nobody builds fabs for legacy nodes that have a lot of competition and low profitability, except maybe if national security (the taxpayer) would subsidize those losses somehow.

>but I suspect I'm still right about the economics.

You are not.


DUV machines are ok, but it still takes 2 years to build a clean room factory.

Your first point highlights the huge unmitigated risk. There is no guarantee that this won't all implode, triggering a huge recession. And even if no one can afford the RAM afterwards, they especially won't be able to afford the more expensive European RAM.

Really the only way it could work is if the government declares it it a national security issue and will promise to subsidize it. Because in just a free market, it's most likely to flop.


It's been a headscratcher for me... The EU and the US have an issue with CCP-subsidised tech giants, but their sole reaction is banning them in some form or other? In the EU it is from public tenders, in the US it's from dealing with US companies.

This does not really help EU and US businesses to be competitive though, nor does it stop consumers from going for the cheapest option...


Those US companies that the US government bought part of and funded their expansions with no controls over the cartel...

>The EU and the US have an issue with CCP-subsidised tech giants

Except EU and US tech giants also get massive government subsidies making such accusations hypocritical. Silicon Valley has its roots in cold war defense funding.

What the US and EU don't like it that China has beaten them at their own game using their own rules, so now they need to move the goalposts on why we shouldn't buy Chinese RAM and protect western DRAM monopolies making amazing margins.


> government [..] will promise to subsidize it

Subsidize what? Copilot prompt in the Run dialog or Notepad? Is this what you think might be considered for subsidizing?


> 1) Prices aren't returning to "normal".

> The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it

RAM isn’t some commodity that gets mined at a fixed rate and therefore costs more when people want large amounts of it. It’s a manufactured good, made from raw materials that are available in huge quantity, that was produced and sold at a profit at 2024 prices, even accounting for the capex needed to produce it.

Two things have changed. First, demand increased quickly. Second, big buyers sort of demonstrated that they’re willing to pay current prices, at least temporarily, so maybe the demand price elasticity has changed, or at least people’s perception of it has changed.

None of prevents the price from going back down. The high prices have made it economical for new manufacturers to invest more to compete — look at CXMT. And CXMT doesn’t have EUV machines, which doesn’t appear to be a showstopper for them.


> at a profit at 2024 prices, even accounting for the capex needed to produce it

2024 prices were at a historical low, so we can't be sure that this is correct. Regardless, when production capacity is short-term constant, new RAM does get "mined" at a constant rate, a bit like bitcoin with its mining ASICs.


Prices are returning to normal, probably 2-3 years from now. SK Hynix is making absolutely monstrous investments in memory fabs, and CXMT will be entering the market in force more and more.

The biggest problem is that the industry wants HBM, whereas consumers want DRAM. Until the need for HBM has been sufficiently satisfied, fabs will prefer being tooled for HBM because businesses can be squeezed much harder than consumers.

Then again, as a consumer you don't really need DDR5 or even DDR4 so long as you aren't using an iGPU. It's all about being around CL15 timings.


> The only way they will is if the hyperscalers and AI companies start to implode

you're missing the picture that it's not companIES - the crisis was primarily caused by only one company, OpenAI, buying out wafers

but even more than that - that wafer buyout is *an excuse* used by the cartel - there are several mechanisms that could have eased most of the problem (e.g. Samsung selling old equipment) that were not used, in order to ride the money wave

(also said "hyperscalers and AI companies" existed in spring 2025 too, yet the price was low)

the winners will not be the ones who build new fabs - but the ones who'll have enough money and government subsidies/import taxes to protect such investments after the cartel decides to oversupply again, flushing the price down


> The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it

This isn't right.

RAM prices (and most components) are very finely balanced between supply and demand. A small shortfall in supply leads to a large increase in price, and a large shortfall in supply leads to very large price increases.
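That sensitivity can be sketched with a toy constant-elasticity model (the elasticity of 0.1 below is an illustrative assumption chosen to show the shape of the effect, not a measured figure for DRAM):

```python
def clearing_price(base_price: float, shortfall: float,
                   elasticity: float = 0.1) -> float:
    """Price needed to ration demand down to the available supply,
    assuming constant-elasticity demand: quantity ~ price**(-elasticity)."""
    supply_ratio = 1.0 - shortfall
    return base_price * supply_ratio ** (-1.0 / elasticity)

# With very inelastic demand, a modest shortfall clears at a wildly higher price.
print(clearing_price(100, 0.05))  # 5% shortfall -> roughly +67%
print(clearing_price(100, 0.15))  # 15% shortfall -> roughly 5x
```

The exact numbers depend entirely on the assumed elasticity; the point is only that small supply gaps translate into disproportionately large price moves when buyers can't easily cut back.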

It takes 2 years for an existing RAM supplier to build a new clean room factory to make RAM.

All the RAM manufacturers saw this shortage coming 6 months ago.

If you follow the news, the existing manufacturers are investing heavily. Here are Hynix's announcements:

Nov 25: Hynix plans 8-fold boost to cutting-edge DRAM production in 2026, https://overclock3d.net/news/memory/sk-hynix-plans-8-fold-bo...

Dec 25: Hynix investing $500B (I guess this is a mistranslation somewhere!!???) in new RAM factories, https://www.pcgamer.com/hardware/memory/hot-on-the-heels-of-...

Jan 26: Hynix to spend $13 billion on the world's largest HBM memory assembly plant, https://www.tomshardware.com/pc-components/dram/sk-hynix-to-...

The supply is being built to match the demand. Prices will stabilize, and the manufacturers know there is lots of latent demand.

In 2 years time RAM prices for consumers will be normal again (not sure about GPU RAM though!)


FWIW I agree with you. The US should provide stable, consistent policy & funding so companies understand the regulatory environment and do long-term planning.

Which is a good idea for when we don't have a dementia patient in charge of our country.

EU should get on that though.


You must not have heard, Biden is no longer president.

> The only way they will is if the hyperscalers and AI companies start to implode -- which will kill a huge portion of the US economy and lead to global recession, so, cheap RAM but nobody can afford it

You can't reshore domestic manufacturing without creating legions of desperate workers with no other choice but to accept minimum wage factory jobs.


I don't think anything other than a major implosion of the AI companies is possible. It is just not physically possible to go any other way, given the ridiculous amount of funny money they have invested in this dead end and the non-existent revenue these companies are getting back for it.

> The only way they will is if the [..] AI companies start to implode

this is the theory of those who are expecting the ram shortage to be short, yes


You think AI can't implode..wat

Not everyone, but a supplier in Europe would be a massive benefit long after the AI-driven demand dies off. It'd free them from dependence on other countries for a critical resource, making chips more affordable and the supply more stable, which is good because the stability of the rest of the world is already questionable and big shocks are expected in the near future.

I guess the idea would have to be to look at it in the reverse: have some domestic capacity for those usual strategic supply independence reasons, even if it operates at a loss. And hope that occasionally, one of those demand surge waves will swipe through and make the cost not quite so bad. Outlier waves like the current one might even create a net positive, but that bet would not be the primary purpose, that honor would go to hedging against getting cut off.

AI demand isn't going away. It will just move from the data center to the local machine. On-device AI is much better for the customer than it being in the cloud. Expecting people to stick with a few dozen GB of HBM is going to be the "no one needs more than 640KB" of the 2030s.

> On device AI is much better for the customer than it being in the cloud

Which is exactly how you know it will always be nerfed. The last thing these guys want is to take their claws out of our data.


AI companies are delaying AI from running on local, consumer-grade machines specifically by keeping the cost of entry too high. OpenAI buys 40% of wafers to ensure the price of memory stays high.

Hmm, I never considered a targeted squeeze on consumer-run models by way of slowing hardware proliferation. It "made sense" to try and box out other AI companies, but I guess they also have a pretty strong vested interest in keeping VRAM low and preventing some kind of high-memory PCIe ASIC from getting cheap, broad adoption.

Another thread suggested that OpenAI's primary play is to get big enough that it's too big to fail. Funny to think that it's not a funding runway or an algorithmic moat, just a hardware moat, and the longer you can stop boats from crossing it, the more chance you get your fingers in all the pies.


> AI demand isn't going away.

I'm not sure about that. When was the last time you have used Copilot prompt in Run dialog or Notepad?


About 10 minutes ago in Emacs.

That's not fair. Anybody could've done that.

Now try to really sincerely use copilot prompt in the Run dialog.


I'm on Linux.

Ah, so you want things done the easy way? That's not fair. Everybody could have been running Linux and avoiding Microslop altogether.

Then use tariffs to ensure that local price matches foreign price, or have government, military, and sensitive industry buy local for security reasons. (Obviously you need to be careful with this to ensure that corruption doesn’t take hold.)

Having your own chips is a national security issue. Spreading out fabs across the world is a global resilience issue.


There is another industry that is not economical in the EU but we still do it: food production. Because it's a strategic decision. Not saying that RAM is of equal importance to food, but if there is a will, there is a way.

> And by the time you spend years building the capacity, the prices plummet making your facility uneconomical to run.

People forget quickly why we only have a handful of DRAM manufacturers today.


Imo there's never enough RAM. If everyone has more RAM available, then software will find a use for it. Sure, some of it is going to be wasteful, but we would almost certainly get new products that weren't feasible before. You can trade space for compute, after all.

That being said, AI is not going away anytime soon. And AI is about as memory-hungry as you can get.
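The space-for-compute point above can be illustrated with a small, hypothetical Python sketch (not from the thread): caching results in RAM via memoization turns an exponential-time recursion into a linear one.

```python
# Classic space-for-compute trade: spend RAM on a cache to skip recomputation.
from functools import lru_cache

@lru_cache(maxsize=None)  # each computed value is kept in memory
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(90))  # near-instant with the cache; infeasible without it
```

Without the cache this recursion does on the order of 2^n calls; with it, each `fib(k)` is computed once, at the cost of storing every intermediate result.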


Exactly. Anyone who's been in the business a long time knows that this market has crazy boom / bust cycles. This is probably the craziest boom I've ever seen, but manufacturers are rightly hesitant to spend billions building out capacity only to get caught by the bust again.

I used to know people who lived near Micron factories and it was just boom and bust over there for decades. Hire a bunch of people, then big layoff.

Aren’t Chinese manufacturers already expanding their capacity? Given that Samsung and SK Hynix have left that market in the pursuit of HBM4 chips, China is going to rule this market. At least that’s what analysts are saying.

In this market, CXMT is more likely to also move to HBM production rather than consumer-grade RAM. After all, China is also doing an AI push in competition with the US, and domestic Chinese companies are "recommended/guided" by the government to help, while consumers are pushed to lower priority.

The situation I'm worried about is that these PC manufacturers could use this opportunity to push for more locked-down designs, such as soldered RAM or even soldered SSDs. My current ThinkPad already has soldered LPDDR5 RAM chips with no user-end RAM upgrade possible, so there's reason to suspect they'll take more pages from Apple's book if they can get away with it, just like they did when they pushed out those internally mounted, unswappable batteries.

My personal guess is that the RAM price will fall after this period of AI expansion is over and the major players start to consolidate. But it will not fall as much as we're hoping for, because the manufacturers can just reduce production to control the price.


Soldered RAM has objectively lower latency and better signal integrity. Connectors aren't free, in terms of link/SI budget.

This isn't some conspiracy, it's electrical reality.


Problem is, the gain in performance for a user may be negligible compared to the pain that the device, as purchased, can never be upgraded to run some application.

Does that mean we should be designing HBM into consumer devices?

I wouldn't say "should be", but HBM could indeed benefit the end user somewhat, like @15155 already pointed out. And that benefit could be used as justification for soldered HBM on future computers.

BUT... a smart consumer would also recognize the other side of the story: do we really need HBM on consumer devices? We don't serve 1000 users at the same time, a slower, cheaper device is good enough for most use cases (including the professional ones), better if it's also somewhat future-proof. After all, smart people usually have better foresight.


Chinese manufacturers like CXMT face the same kinds of issues that Huawei faced in entering the EU market - the EU is clamping down on Chinese suppliers across their supply chain [0].

Where can CXMT and other Chinese players export when Japan, South Korea, much of ASEAN, India, much of North America, the EU, the UK, Australia, NZ, and parts of the Gulf have enacted or begun enacting trade barriers against Chinese exports?

[0] - https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042...


RAM isn't a critical security category like 5G base stations.

Also, I don't think you've seen true consumer rage until the opposition in the EU starts pointing out that the current parties are making the smartphones, laptops, TVs, and whatnot consumers wanna buy much more expensive (or crappier). Large parts of the EU are currently being crushed by one of the worst housing crises in the world, the economy seems to be wavering for young people especially, and cheap tech and gadgets were one of the few rays of light left.


Youth unemployment is actually somewhat low in the EU at the moment. It's at around 15%, which is the same level as back in 2008, before the Great Recession.

Raw unemployment numbers are pretty meaningless on their own. Governments have ways of counting unemployment to get a desired number, for example only counting those registered as seeking work through the government agency. If you're doing some school or training, BAM, you're not counted as unemployed; if you've been unemployed for too long, you're counted as "long-term state welfare" and not as unemployed; if you refuse shitty hard-labor jobs from the unemployment office, you're cut off from benefits and not counted as unemployed; and other such tricks.

Plus, even taking low unemployment numbers at face value, job quality has fallen a lot, with a lot of people still technically employed but in shitty survival jobs, like fast-food delivery.

The reality is that mass layoffs and SME bankruptcies are a regular occurrence in many EU countries right now.


> RAM isn't a critical security category like 5G base stations.

Those base stations are only security critical because mobile networks are deliberately insecure to enable government surveillance.

And I can imagine backdooring RAM. At least the controller part.


> Large parts of the EU are currently being crushed by one of the worst housing crises in the world, the economy seems to be wavering for young people especially, and tech / gadgets being cheap was one of the sole rays of light left.

Huh?


Then those countries that didn't will have an advantage at selling their electronics to the world.

Or their consumers will enjoy cheap PC part prices. With possible gray zone re-export market.

Of course we could see retreat from global markets to mercantilism, but that has yet to fully happen.


Or China could stop antagonizing blocs like the EU through actions like solidifying ties with Russia [0][1][2], imposing rare earth export restrictions on the EU [3], and undermining EU institutions [4].

[0] - https://www.reuters.com/world/asia-pacific/xi-putin-hail-tie...

[1] - https://www.reuters.com/world/china/chinas-president-xi-meet...

[2] - https://www.reuters.com/world/china/china-calls-closer-defen...

[3] - https://www.reuters.com/world/china/eu-steps-up-efforts-cut-...

[4] - https://www.scmp.com/news/china/diplomacy/article/3316875/ch...


> stop antagonizing blocs like the EU

Who antagonized who first, again?


Ahh, it is always China antagonizing others, isn't it? There's nothing wrong with the EU going to absurd lengths in all directions, leading to the destruction of its economy and society, only to please the narrative of a few degraded groups of individuals. Yet it is others that are the cause. Nice, easy storytelling. All governments in the world have turned to the dark side. But some are reaching new heights.

It's weird how the implication of "every government is bad" is that we should stop trying to improve them or worry about the ones that are the worst.

And Asia is mostly peaceful right now while war has returned to Europe on a scale not seen in 50 years. Those crazy Chinese, eh? Massive failure of their foreign policy establishment. Total inability to engage with Russia.

> and undermining EU institutions

If the Europeans had any common sense they'd be undermining EU institutions as well; those institutions have been disasters. They aren't doing a good job of keeping the peace, they aren't doing a good job of promoting prosperity, and they've had "successes" like forcing Apple to switch from Lightning to USB ports. The CCP on the other hand have been so successful in the last few decades that they're making authoritarianism look good. If the EU focused on figuring out what good policy looked like, then that wouldn't be the case. Although I assume sooner or later the ideological issues will catch up with China.


RAM is pretty different to Huawei 5G base stations.

Australia, for example, is a large and growing market for Chinese electric cars, and China is the biggest export market for Australian raw materials, so Australia doesn't just put up random trade barriers.

There's actually a free trade agreement between Australia and China.


So, EU and US tend to actually implement such bans, but the rest of those countries in a list like that..

People appreciated cheap YMTC 232-layers when that happened where I live.


That's a good point.

> Europe should invest into manufacturing RAM

It should. And it should enact the political reforms that would make large capital projects like fabs possible. The current confederacy is proving just as much a stepping stone for Europe as it was for America. I'm not saying a fully united Europe must emerge. But a system of vetoes is barely a system at all.


There exists domestic supply, it's just not scaled up:

https://www.goodram.com/en/

You don't see their products in stores too often as they're focused on B2B - particularly the automotive sector.

That being said I have a 128GB memory stick from this manufacturer and I hope they make the most out of this windfall.


I'm pretty sure they're just assembling DIMMs (and SSDs), not fabricating the memory ICs on it. The latter are what are in short supply.

That's assembly: a couple of lanes of PCB reflow ovens. They have been at this since the mid-nineties, always offering a lifetime RAM warranty too. Their RAM and SSDs are pretty commonly seen in retail.

Idea: Take the money that Germany promised to Intel if they build a state of the art fab. Instead, ask SK Hynix, Samsung, or Micron to build a DRAM fab in Germany.

It may seem that these are very similar processes, but this is only if you do not take into account the bribes from Intel to specific officials and their relatives who make decisions about subsidizing Intel.

SK Hynix, Samsung, or Micron don't treat good people well enough to give them taxpayer money.


    > but this is only if you do not take into account the bribes from Intel to specific officials and their relatives who make decisions about subsidizing Intel.
Bribes? Sheesh, HN has gone insane.

Brandolini's Law is out of control here. You are making a bold fucking claim. From the tone of your post, it seems pointless to ask if you have any evidence. From an outsider's view, I would say the German political system is much less corrupted by lobbyists compared to the United States. Do you say the same about the CHIPS Act in the United States?


> I would say the German political system is much less corrupted by lobbyists compared to the United States.

I highly doubt it. I'm certainly no expert on Germany, but has Germany's bureaucratic machine spent decades destroying its own energy sector to buy energy from Russia, funding the war machine of Putin's totalitarian dictatorship?

And not just by buying these resources, but by OVERPAYING for them many times over. I just looked at a chart of the prices at which Germany bought natural gas from Russia before the war in Ukraine, and it's wild: several times more expensive than the gas Germany now has delivered from the other side of the globe by ship. It directly subsidized this war.

And then you look at these high-ranking (and not so high-ranking) bureaucrats who made all these decisions... And literally all of their families got richer during the time these decisions were made, by tens (and sometimes hundreds) of millions. There's zero accountability, zero media coverage, and it's all being hushed up to such an extent that I can't think of any other explanation other than EVERYONE was taking the money. We are literally talking about the level of existence of a centralized totalitarian machine for the forceful silencing of anyone who tries to talk about this topic.

So do I say the same about the CHIPS Act in the US? Probably. But the level of corruption seen in Germany – pervasive, bloody, destructive – is simply unimaginable in the US.


Europe needs to focus on energy and fixing their supply chain first. And the deregulation push keeps getting delayed. Just the other day the main person behind it in Germany got sacked because of internal power struggles, in part because of the Greens (part of the coalition).

For context, the German manufacturing sector is losing something like 15k jobs PER MONTH.


The Greens are not part of the current government.

What are you talking about?


Brain fart. I mean CDU. I guess they behave like Greens so much I got them confused. Also, not "sacked" but not renewed, but it's kind of the same thing in practice.

https://x.com/olk_julian/status/2025937252086382918


Yup. The countries giving huge tax breaks to American corporations could start taxing them and use it to invest in RAM production

> I think Europe should invest into manufacturing RAM. RAM isn't going anywhere, all of modern compute uses it. This would be an opportunity to create domestic supply of it.

It's easy to build factories but much more difficult to train the engineers required to run them... and let's not even talk about all the crazy regulations and environmental rules at the EU level that make the task even more difficult, because yes, chip factories do pollute... a lot.

Countries like South Korea or Taiwan have adapted all their legislation, tax rules, and environmental regulations to allow such factories to operate easily. The EU and EU countries will never do that... better to outsource pollution and claim they care about the planet...


I am a CAD engineer and software developer who has worked in manufacturing a lot in the UK, in various industries, on products as big as superyachts and as small as peristaltic pumps. I think if the UK and EU are to try to defend their weakening and shrinking manufacturing sectors (these industries have been disappearing for my entire adult life), then it is possible but difficult... In 10 to 20 years it will be impossible.

The reason is as you have described. We are getting close to the point where the people with practical experience working in, managing, and designing things like work processes and factory layouts in industries that build physical products are gone. We're losing a lot of capable practical engineers with hands-on experience. We can keep the universities teaching the physical subjects, but those lecturers unfortunately wouldn't know where to begin on designing and building efficient factories.

We'd probably end up having to get Chinese and Taiwanese businesses to outsource their 'experts' back to us in order to actually do this and pay them a fortune - basically the reverse of what was happening in the manufacturing sector in the 80s and 90s!


This has been going on for decades, and I wonder what the actual business model for the EU economy is in the future. With all the factories soon gone, will Europe rely only on agriculture, tourism, and some services? Back to a "developing country" economy?

Doesn’t the EU have an excellent education system?

Even the most excellent education system takes several years to educate a high-schooler to the level of a junior engineer. Then several more years are needed for the best of them to become senior engineers, with the knowledge and experience that a university alone cannot provide.

So, we're looking at a decade-long project at least, even if everything goes as planned, and crazy fast, in the technical and administrative departments.


All the more reason to start now I guess. Putting it off isn't going to get them that knowledge and experience any sooner. If something happens over the next 10 years that eliminates our need for memory chips things will probably be either too messed up or too wonderful for anyone to cry over the years they needlessly spent trying to secure a domestic source of RAM.

> Doesn’t the EU have an excellent education system?

Excellent universities, overall. But results from primary and secondary schools are nose diving at a more than alarming rate in several EU countries. Literacy rates are falling, math grades are falling. There's IMO only so much time before universities begin to be affected as well.


It is a general phenomenon across the developed Western world; here is the account of a professor that went viral a few months ago: https://news.ycombinator.com/item?id=43522966

Maybe having less RAM and using pen/paper instead of tablet/phone will improve things...

[flagged]


That’s a remarkably misinformed take.

> Doesn’t the EU have an excellent education system?

Well, the EU has not manufactured a whole lot of chips in the last 30 years, so where do you get the people with the professional experience to teach new engineers... Oh, you mean you have to import the teachers from South Asia too? /s And it takes what, 5 years at minimum to train an engineer? France and the UK used to produce entire home computers... in the '80s...


Come on, STM, Nordic, Infineon, NXP are all European. There is a bunch of chip-making installations in Dresden, Germany (Global Foundries, Bosch, etc), and there's Intel Fab 34 in Ireland. BTW TSMC is planning to open a production facility in Europe in 2027.

This is not comparable to Taiwan or the Shenzhen area, but it's definitely not nothing. Some local expertise exists, even though it may not be the most cutting-edge.


ASML, which is based in the Netherlands, produces chip-making machines which TSMC and everyone else use to produce said chips. I think they got some expertise too :)

This is so, but ASML does not produce chips. There's a difference between e.g. building an airplane and piloting an airplane.

ASML doesn't make chips, they make the machines.

A parallel reply from me: https://news.ycombinator.com/item?id=47162226

The same applies to your comment.


Qimonda says hello from the grave

> I think Europe should invest into manufacturing RAM ... This would be an opportunity to create domestic supply of it

How?

Most foundries across Asia and the US are being given subsidies that outstrip those the EU is providing, and the only mega-foundry project in Europe was canceled by Intel last year [0].

Additionally, much of the backend work like OSAT and packaging is done in ASEAN (especially Malaysia), Taiwan, China, and India. As much of the work for memory chips is backend work (OSAT and packaging), this is a field the EU simply cannot compete in, given that it has FTAs with the US, Japan, South Korea, India, and Vietnam; any EU attempt would be crushed well before initiating the process.

Furthermore, much of the IP in the memory space is owned by Korean, Japanese, Taiwanese, Chinese, and American champions who are largely investing either domestically or in Asia, as was seen with MUFG's announcement earlier today to create a dedicated end-to-end semiconductor fund specifically to unify Japan, Taiwan, and India into a single fab-to-fabless ecosystem [1]. SoftBank announced something similar to unify the US, Japan, Malaysia, and India into a similar end-to-end ecosystem as well a couple weeks ago [2]. Meanwhile, South Korea is trying to further shore up their domestic capacity [3] via subsidies and industrial policy.

When Japanese, Korean, and Taiwanese technology and capital partners are uninterested in investing in building European capacity, American technology and capital partners have pulled out of similar initiatives in Europe, and the EU is working to ban Chinese players [4], what can the EU even do?

----

Edit: can't reply

> Why are you overlooking European semiconductor champions

Because they don't have the IP for the flash memory supply chain. And whatever capacity and IP they have in chip design, front-end fab, or back-end fab is domiciled in the US, ASEAN, and India.

> STMicroelectronics

Power electronics and legacy nodes (28nm and above) for IoT and embedded applications.

> Infineon

Power electronics and legacy nodes (28nm and above) for automotive applications.

> NXP

Power electronics and legacy nodes (28nm and above) for embedded applications.

> All of them are skilled enough to build and operate a DRAM fab in Europe. A bunch of EU dev banks can lend the monies to get it built.

They don't have the IP. Much of the IP for the memory space is owned by Japanese, American, Korean, Taiwanese and Chinese companies.

Additionally, most Asian funds own both the IP and capital (often with government backing), making European attempts futile.

Essentially, the EU would have to start from scratch and decades behind countries with whom the EU already has FTAs with that have expanded capacity well before the EU and thus would be able to crush any incipient European competitor.

[0] - https://www.it-daily.net/shortnews-en/intel-officially-cance...

[1] - https://www.digitimes.com/news/a20260224VL219/taiwan-talent-...

[2] - https://asia.nikkei.com/economy/trade-war/trump-tariffs/soft...

[3] - https://www.digitimes.com/news/a20251230PD220/semiconductor-...

[4] - https://www.ft.com/content/eb677cb3-f86c-42de-b819-277bcb042...


Why are you overlooking European semiconductor champions? STMicroelectronics, Infineon Technologies, and NXP Semiconductors. All of them are skilled enough to build and operate a DRAM fab in Europe. A bunch of EU dev banks can lend the monies to get it built.

>Why are you overlooking European semiconductor champions?

Champions at what? They pale in comparison to the likes of Samsung and TSMC in IP and manufacturing.

> A bunch of EU dev banks can lend the monies to get it built.

Why would EU banks risk their money on a DRAM fab meant to compete with Asia, which has lower wages, looser regulation, and weaker environmental rules?


Europe can do one simple thing to ensure low RAM prices: allow ASML to sell all its advanced machines to Chinese RAM producers.

tbh u can basically do this now lol... no flag needed.

if u want it to sound more real u just gotta tell the bot to write that way. like literally just ask it to throw in some typos or forget to capitalize stuff. or use slang and kinda ramble instead of being all robotic and organized.

