
I've always thought it logical to integrate RAM & CPU.

Look over the course of history: there's often a roughly fixed ratio between CPU performance & RAM bandwidth (+ size), and CPU/memory <-> graphics interconnect speed. If any of those factors gets too far out of whack, you've got a machine with poor bang/$.
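A quick back-of-envelope way to see that balance is to compare a kernel's arithmetic intensity against the machine's bytes-per-FLOP ratio (roofline-style). All numbers below are illustrative placeholders, not specs of any real part:

    # Is a kernel compute-bound or memory-bound on a given balance of resources?
    # Numbers are made up for illustration only.
    peak_flops = 2.0e12          # 2 TFLOP/s of compute
    mem_bandwidth = 100.0e9      # 100 GB/s of DRAM bandwidth

    # Machine balance: bytes the memory system can deliver per FLOP of compute.
    machine_balance = mem_bandwidth / peak_flops   # 0.05 bytes/FLOP

    # A kernel like daxpy (y = a*x + y) does 2 FLOPs per 24 bytes moved.
    kernel_intensity = 2 / 24.0                    # ~0.083 FLOPs/byte

    # Attainable throughput is capped by whichever resource runs out first.
    attainable = min(peak_flops, mem_bandwidth * kernel_intensity)
    print(f"attainable: {attainable / 1e9:.0f} GFLOP/s "
          f"({attainable / peak_flops:.1%} of peak)")   # ~8 GFLOP/s, ~0.4% of peak

When the two ratios drift too far apart, one side of the machine just sits idle, which is exactly the poor bang/$ case.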

GPU integration is old news now. OK, I've read there's a mismatch between the silicon processes used for logic (CPU/GPU etc.) and for memory? But hey, this is 2023. Chiplets? Stacked dies? Or just do the best with the available tech? It's not like there are no ICs that integrate RAM & compute.

With the right architecture, the "memory wall" could be (mostly) a thing of the past. With enough cores & on-chip RAM bandwidth, maybe specialized GPU processing wouldn't even be needed anymore? For example:

https://en.wikipedia.org/wiki/Massively_parallel_processor_a...
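To put a rough number on "enough cores & on-chip RAM bandwidth", here's the same kind of back-of-envelope math for a hypothetical MPPA-style tile array (every figure is invented for illustration, not taken from any real design):

    # Hypothetical tiled design: many small cores, each with a local SRAM bank.
    # None of these numbers describe a real chip; they just show how bandwidth scales.
    tiles = 256                      # cores/tiles on the die
    sram_per_tile = 256 * 1024       # 256 KiB local SRAM each -> 64 MiB total
    bw_per_tile = 50.0e9             # 50 GB/s from each local bank

    aggregate_bw = tiles * bw_per_tile   # 12.8 TB/s if every tile hits local memory
    dram_bw = 100.0e9                    # vs ~100 GB/s for one external DRAM bus

    print(f"total on-chip SRAM: {tiles * sram_per_tile / 2**20:.0f} MiB")
    print(f"aggregate local bandwidth: {aggregate_bw / 1e12:.1f} TB/s "
          f"({aggregate_bw / dram_bw:.0f}x the external bus)")

The catch, of course, is that this only holds while the working set fits in those local banks and the access pattern stays tile-local; spill out to external DRAM and you're right back behind the memory wall.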


