He likes C, hates C++; allowing Rust in the kernel is trying something new, maybe to attract new maintainers who don't care too much about C, and as he said, "maybe it works, maybe it doesn't; if it doesn't, we learn".
Maybe there's not much to mourn? We used to write lots of assembler, kernels and such. Now they're in C++. As long as you can get at some intrinsics (special instructions that manipulate certain registers, e.g. system flags or the interrupt condition), you don't really need to mess with assembler any more.
I remember when 'PC Compatible' was absolutely required to be a player in the market. Then laptops came out and it suddenly wasn't. Much of what a PC did couldn't be done on a laptop (no accessory cards, new code needed to use laptop features, etc.), yet the lure of portability overrode any marketing hype.
As markets grow, the decisions we make and the optimal product positions all change. You have to reinvent yourself to stay vital.
Michael Dell wrote about this. Every time Dell doubled in size, he had to reinvent the corporate processes. Things that were manual had to be automated, the job got too big for a single person (support, inventory etc).
And things that used to be automated had to be made bespoke: remember the days when you could 'dial up' a custom Dell computer, choosing every feature? They'd make just the one you wanted, all done on a website that, I imagine, resulted in a ticket in front of an assembly-line worker.
Then they dropped that, probably because there were no more workers on the line; that got automated too. Then you had options, but from a short list, and only the popular options survived.
I'm just surprised that hardware can persist so long. For instance, x86 started in 1978! Most folks using it today weren't even born back then.
We could really, really use more innovation in the hardware space. For the first ten years it was all about putting mainframe features on silicon; not a lot to invent, just the die getting smaller so more would fit: caches, paging, DMA, and multiple buses.
Did anything happen after that? I'm not sure it did. What might have happened? Well, real security, for instance. Keeping the kernel space 'secret' is a botched idea; we all know secrecy isn't security. Witness Spectre, Downfall, and so on.
And why hasn't more of the kernel migrated into silicon? Waiting on a signal, communicating between processes, blocking on an interrupt, mapping memory to user space for I/O, and on and on. All possible in silicon, and all would avoid thousands of machine cycles and blown process caches.
What do we get with each new generation? A little faster, a little less power. More compromising kernel bugs. Sigh.
https://www.techradar.com/computing/macbooks/theyre-just-che...
https://www.computerworld.com/article/1500175/apple-s-steve-...