The Unix philosophy was a reasonably good model decades ago, but I think it is over-romanticized.
Its binary-blob executable design is no good for security, compared to a bytecode design like Forth's. Its user security model was poor and doesn't help with modern devices like phones. Its multiprocess model was ham-fistedly retrofitted into a multithreading model to compete with Windows NT. Its asynchronous I/O model has always been a train wreck, even compared to NT's. Its design creates performance issues, especially in multiprocessor networking code with a needless number of memcpys; now folks are rewriting the networking stack in user space. Its software abstraction layer was a simple scheme from the 70s, which has since fragmented into a crazy number of implementations. Open source developers still complain about how much easier it is to build a package for Windows than for Linux. It was never meant to be a distributed system either: modern enterprise compute cannot scale by treating and managing each individual VM as its own thing, with clusters held together by some sysadmin's batch scripts.
A good paper giving a concrete example of all this is "A fork() in the road", where you can see how a single API like fork(2) has absolutely massive ramifications for the overall design of the system, to the point that "POSIX compliance" forced some substantial perversions of the authors' non-traditional OS design, all of which did nothing but add complexity and failure modes ("oh, but I thought UNIX magically gave you simplicity and made everything easy?"). Unix has also diverged significantly from its "simple" original incarnation on the PDP-11 into a massive, complex beast. So you can add "CreateProcess(), not fork()" to the list of things NT did better, IMO.
And that's just a single system call, albeit a very important one. People simply underestimate how rose-tinted their glasses are and how many devils lurk in the details, until they actually get into the nitty-gritty of it all.
I agree that fork (a Unix implementation detail) creates issues like overcommit and complicates memory management (I took a look at the paper and won't dispute the issues it points out). But I don't agree that farming out in-app functionality into a herd of daemons (D-Bus, desktop portals, pipewire, pipewire-pulse, wireplumber, system and user systemd with daemon-level environment variables) is beneficial for system functionality, or that it avoids added complexity (each daemon's state, which daemons are running or have crashed, reliance on IPC instead of being able to trace each process in isolation) and new failure modes (apps can't find D-Bus to load the desktop portal, and hang instead, if you log in to Wayfire unless you first log in to Xfce without killing systemd --user).
Ah, a bit meh. Haiku OS/BeOS did similar queries with BFS, and yet it can be much faster on SSDs.
OK, no proper permissions/ACLs, but NTFS is on par with ext3 in performance, with some additions.
Linux needs two new filesystems, one for desktops and another for the enterprise. The picks should be F2FS for flash media and bcachefs for professional storage needs.
It looks like you have zero familiarity with NTFS. NTFS has had a far more fine-grained ACL model since version 1. Perhaps Linux caught up several decades later; I am not really sure.
I am also not sure why you insist on arguing about a topic you are not familiar with at all.