Thanks for the good links. I think we have generally become so accustomed to the scaled-up von Neumann strategy that we don't see how much efficiency and performance we leave on the table by not building much smaller memory hierarchies.
A resource like this is a good place to discuss where the two languages are close and where they diverge. Of course there are going to be styles within each language that differ as much as the languages themselves.
I find it inspiring that we are getting to where we are dealing with models that classify vulnerabilities at a systems level. However, I also think we are kind of barking up the wrong tree. There is, IMHO, something wrong with the current strategy of scaling up the von Neumann architecture. It leads to fragile software partitioning, noisy neighbors, and communication through shared memory that is both slow and sometimes unintended. I’ve tried to lay this out in detail here: https://lnkd.in/dRNSYPWC
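To make the "unintended communication through shared memory" point concrete, here is a minimal C sketch (my own illustration, not taken from the linked article) of false sharing: two threads that never exchange data logically still slow each other down because their counters land on the same cache line. The 64-byte line size and the iteration count are assumptions for the demo.

```c
/*
 * A minimal sketch of "unintended communication through shared memory":
 * the two threads in each run never exchange data logically, yet when
 * their counters sit on the same cache line the cores keep invalidating
 * each other's caches (false sharing, the noisy neighbor in miniature).
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 200000000UL

/* Adjacent counters: almost certainly on the same cache line. */
static struct { volatile unsigned long a, b; } near_pair;

/* The same counters, separated by padding (64-byte line size assumed). */
static struct { volatile unsigned long a; char pad[64]; volatile unsigned long b; } far_pair;

static void *bump(void *p) {
    volatile unsigned long *c = p;
    for (unsigned long i = 0; i < ITERS; i++)
        (*c)++;
    return NULL;
}

/* Run two independent counting threads and return elapsed wall time. */
static double timed_pair(volatile unsigned long *x, volatile unsigned long *y) {
    struct timespec t0, t1;
    pthread_t ta, tb;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&ta, NULL, bump, (void *)x);
    pthread_create(&tb, NULL, bump, (void *)y);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    printf("same cache line:      %.2fs\n", timed_pair(&near_pair.a, &near_pair.b));
    printf("separate cache lines: %.2fs\n", timed_pair(&far_pair.a, &far_pair.b));
    return 0;
}
```

Compile with `cc -O2 -pthread falseshare.c` and compare the two timings; on typical multicore hardware the first run is noticeably slower even though the threads are logically independent of each other.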
> “In the next five to 10 years,” Barham predicts, “there are going to be many varieties of multicore machines. There are going to be a small number of each type of machine, and you won’t be able to afford to spend two years rewriting an operating system to work on each new machine that comes out. Trying to write the OS so it can be installed on a completely new computer it’s never seen before, measure things, and think about the best way to optimize itself on this computer—that’s quite a different approach to making an operating system for a single, specific multiprocessor.” The problem, the researchers say, stems from the use of a shared-memory kernel with data structures protected by locks. The Barrelfish project opts instead for a distributed system in which each unit communicates explicitly.
Mothy Roscoe, the Barrelfish PI, gave a really great talk at ATC 2021 [0]. A lot of OS research is basically "here's a clever way we bypassed Linux to touch hardware directly", but his argument is that the "VAX model" of hardware that Linux still uses has ossified, and CPU manufacturers have to build complexity to support that.
Concretely, there are a lot of things that are getting more "NOC-y" (network-on-chip). I'm not an OS expert, but I deal with a lot of forthcoming features from hardware vendors in my current role. Most are abstracted as some sort of PCI device that speaks a little "mailbox protocol" to get some values (perhaps directly, perhaps read out of memory upon success); see the sketch below. Examples are HSMP from AMD and OOBMSM from Intel. In both, the OS doesn't directly configure a setting; it asks some other chunk of code (provided by the CPU vendor) to configure the setting. Mothy's argument is that this is an architectural failure, and we should create OSes that can deal with this NOC-y heterogeneous architecture.
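For flavor, here is a rough sketch of what that generic mailbox pattern tends to look like from the OS side. Everything in it is hypothetical: the register offsets, names, status codes, and the `mailbox_call` helper are invented for illustration and are not the real HSMP or OOBMSM programming interface. The common shape is simply: write arguments and a command, then poll until the vendor firmware reports completion and read back the result.

```c
/*
 * Hypothetical MMIO layout of a vendor mailbox exposed through a PCI BAR.
 * Offsets, names and status values are made up for illustration only.
 */
#include <stdint.h>

#define MBOX_ARG0     0x00  /* command argument                         */
#define MBOX_CMD      0x04  /* command ID; writing it triggers firmware */
#define MBOX_STATUS   0x08  /* 0 = busy, 1 = done, >1 = error code      */
#define MBOX_RESPONSE 0x0C  /* result value on success                  */

static inline void mmio_write32(volatile uint8_t *base, uint32_t off, uint32_t v) {
    *(volatile uint32_t *)(base + off) = v;
}

static inline uint32_t mmio_read32(volatile uint8_t *base, uint32_t off) {
    return *(volatile uint32_t *)(base + off);
}

/*
 * The OS never touches the underlying setting itself; it asks firmware
 * running elsewhere on the package to do it, then waits for the answer.
 */
int mailbox_call(volatile uint8_t *bar, uint32_t cmd, uint32_t arg, uint32_t *out) {
    mmio_write32(bar, MBOX_ARG0, arg);
    mmio_write32(bar, MBOX_CMD, cmd);          /* "ring the doorbell"        */

    for (int spins = 0; spins < 1000000; spins++) {
        uint32_t status = mmio_read32(bar, MBOX_STATUS);
        if (status == 1) {                     /* firmware finished          */
            if (out)
                *out = mmio_read32(bar, MBOX_RESPONSE);
            return 0;
        }
        if (status > 1)                        /* firmware reported an error */
            return -(int)status;
    }
    return -1;                                 /* timed out                  */
}
```

The important point for Mothy's argument is the shape of this interaction: the OS is no longer the agent that owns the setting, it is one client among several negotiating with firmware over an internal network.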
Even if one disagrees with Mothy's premise, this is a banger of a talk, well worth watching and easy to understand.
He is right. The point of the operating system is to, well, operate the system. Hardware, firmware, and software engineers should work together to make good systems. Political and social barriers are not an excuse for poor products delivered to end users.
In fact, Barrelfish is based on running a microkernel per core, and makes good use of this design to better adapt to hardware diversity.
I understand why Linux develops everything in one place; it makes the project far easier to manage. However, it makes kernels far more difficult to configure and specialize. (I saw a paper showing that core operations in default Linux had gotten slower over the years, requiring reconfiguration to recover.) Or, to badly paraphrase Ingo Molnar, an operating system design can aim for one of two ideals: the one that's easiest for developers to change and maintain, or the one that maximizes performance.
10 years of shipped code for multiple platforms (x86, ARMv7, ARMv8) is not vaporware. Based on software experience with existing platforms, they have created an open-hardware RISC-V core which requires custom software to achieve energy efficiency with improved performance: https://spectrum.ieee.org/snitch-riscv-processor-6x-faster
> Snitch proved to be 3.5 times more energy efficient and up to six times faster than the others... "While we could already demonstrate a very energy-efficient and versatile 8-core Snitch cluster configuration in silicon, there are exciting opportunities ahead in building computing platforms scalable to thousands of Snitch cores, even spreading over multiple chiplets," says Zaruba, noting that his team is currently working towards this goal.
I think your take is interesting, but your article does not go into detail about how to address these problems at the architectural level. Would you like to elaborate?
There is some elaboration in part four of the series. A fifth part on the actor model, gaps, and surfaces is in the works. Part four: https://lnkd.in/dEVabpkN
I think it also really limits the AI to the context of human discourse, which means it's hamstrung by our imagination, interests, and knowledge. This is not where an AGI needs to go; it shouldn't copy and paste what we think. It should think on its own.
But I don't view LLMs as a path to AGI on their own. I think they're really great at being text engines and at human interfacing, but there will need to be other models for the actual thinking. Instead of having just one model (the LLM) doing everything, I think there will be a hive of different, more special-purpose models, and the LLM will be how they communicate with us. That solves so many of the problems we currently create by using LLMs for things they were never meant to do.
There was this "end of history" idea where Europeans, elites or not, believed that democracy would follow globalization and free trade. This turned out to be a very naive but still common belief. Now Europe will have to fend for itself. In the long run I think this will benefit both the US and Europe, but in the short term it will hurt both. Also, I don't think Europe will pivot away from the US; we still understand that at least half of the US shares our values, but the US can't be trusted to stand up for these values anymore. The same goes for Hungary and Turkey. I personally see it as a good opportunity to get some action going in Europe, as I discuss here in the context of building digital infrastructure: https://lnkd.in/dRNSYPWC
Shares aren’t the sole mechanism for influence though. In Russia there are open sixth floor windows one could fall out of. In China you could disappear to a camp for a few months. Shares are kind of soft in comparison.
My biggest fear is that Trump's deterrence of the CRINK countries will cause Xi to miscalculate. Other than that, I think this is manageable. The EU will get a boost as the internal awakening materializes. As a European I had difficulty understanding why Trump was even an alternative, but I have come to realize that the plutocratic nature of the US was causing more suffering for the people than was easily observed from here.
Both are examples of communication by means of frequency-modulated and amplitude-modulated electromagnetic waves, with distortion from a moving tree. Also a good example that a large enough change in quantity is a change in kind. Probably a legit analogy IMHO.
It’s a terrific analogy. OP is arguing that it isn’t an analogy but an identity. For what should be obvious reasons, it isn’t. And in this case, the difference between analogy and our best model of reality is material.
I generally view churn and nice primitives as the essential balancing act for a platform, perhaps for any leading-edge (software) system. We aren’t going to get anything complex perfect the first time, so it is change or mediocrity.
Funny. This is genuinely one of my deepest insights after 17 years in IT, some of that time as lead on infrastructure and platforms. Perhaps not well formulated, OK. And to date it is one of my most downvoted comments on HN. What are the incentives supposed to achieve?
Shameless plug here, where I explore possible gains in efficiency, performance, and security by scaling out rather than up (no subscription): https://anderscj.substack.com/p/liberal-democracies-needs-a-...