Along with web browsers, it's some of the most sophisticated software in the world. Battle-tested, flexible, secure, powerful, polished, honed and revised for decades by some of the world's best developers. Why replace it?
It ultimately depends on your requirements, but there are valid reasons why you'd
want to use an operating system that isn't Linux in a given project.
Linux is, for the most part, a large amount of C code running in kernel mode. Its trusted computing base includes drivers, file systems and the network stack, which makes for a rather large attack surface. Operating systems that run these components in user mode offer a different set of trade-offs between security/reliability and performance.
It's also a Unix-like kernel with a bunch of security features retrofitted onto it (SELinux, cgroups, namespaces...). That's fine if you want to run Unix-flavored software, but it is not a pure capability-based system like you can find on Fuchsia, for example. Unix systems by design grant processes an intrinsic ambient authority (like access to the global filesystem or process table namespaces), and you have to explicitly isolate and confine things, whereas capability-based systems are designed so that you can't manipulate, or even enumerate, objects without holding a handle to them.
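To make the contrast concrete, here's a small sketch in plain POSIX (not taken from any of these systems): open() on an absolute path exercises ambient authority, while openat() relative to a directory fd is the closest everyday analogue to going through a handle. The /srv/app-data path and config.txt name are placeholders.

    /* Ambient authority vs. handle-relative access, sketched with POSIX calls. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ambient authority: nobody handed us anything, yet we can
         * reach into the global filesystem namespace. */
        int f1 = open("/etc/passwd", O_RDONLY);

        /* Capability-ish: access is relative to a handle we hold;
         * without `dir`, the file below would be unreachable. */
        int dir = open("/srv/app-data", O_RDONLY | O_DIRECTORY);
        int f2 = openat(dir, "config.txt", O_RDONLY);

        if (f1 >= 0) close(f1);
        if (f2 >= 0) close(f2);
        if (dir >= 0) close(dir);
        return 0;
    }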
What I do wish is that people would stop putting Unix and POSIX on a pedestal. These are 50-year-old designs that keep accumulating cruft. They work, but that doesn't mean we should keep teaching computer engineering students Unix without criticizing its design at the same time, lest they think process forking is a good idea in the modern era.
There's plenty of literature on the topic, but you can start with "A fork() in the road" [1], which explains why this Unix feature is long past its best-by date. Another good read is "Dot Dot Considered Harmful" [2]. There are other papers on features that have aged badly, signals for example, but I don't have them on hand.
It's interesting, and I've experienced slow forks myself, which led to using a tiny companion process to execute programs (before spawn arrived).
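For illustration, a minimal sketch of launching a child without fork(), using POSIX posix_spawnp() (the "ls -l" command line is just a placeholder):

    /* Spawn a child without fork(): no copy (or COW setup) of the
     * parent's address space is needed, which stays cheap even for
     * processes with a large memory footprint. */
    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };

        int rc = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
        if (rc != 0) {
            fprintf(stderr, "posix_spawnp failed: %d\n", rc);
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }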
I have to say I hate CreateProcess more, for taking a single string rather than an array of string pointers like argv. This always made it extra difficult to escape special characters in arguments correctly.
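A sketch of the problem (Windows-only; child.exe is a placeholder): to pass a single argument containing spaces and quotes through CreateProcessA's one command-line string, you have to do the quoting and escaping yourself.

    #include <windows.h>

    int main(void)
    {
        STARTUPINFOA si;
        PROCESS_INFORMATION pi;
        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(si);

        /* To pass the single argument `say "hi"` we must quote and
         * backslash-escape it by hand; with an argv-style API this
         * would simply be argv[1] = "say \"hi\"". */
        char cmdline[] = "child.exe \"say \\\"hi\\\"\"";

        if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0,
                            NULL, NULL, &si, &pi))
            return 1;

        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
        return 0;
    }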
Another example is the select() API: it's still in use, but its limitations (a fixed-size FD_SETSIZE bitmap, rescanned in full on every call) are no longer adequate for modern workloads.
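For instance, because fd_set is a fixed-size bitmap, a guard like this is needed just to use select() safely on an arbitrary descriptor (a sketch, not from any particular codebase):

    /* FD_SET() on a descriptor >= FD_SETSIZE (typically 1024) is
     * undefined behavior, so high-numbered fds simply can't be
     * watched with select(). */
    #include <stdio.h>
    #include <sys/select.h>

    void add_fd(fd_set *set, int fd)
    {
        if (fd >= FD_SETSIZE) {
            fprintf(stderr, "fd %d exceeds FD_SETSIZE (%d)\n", fd, FD_SETSIZE);
            return;
        }
        FD_SET(fd, set);
    }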
Another example is the ioctl() API for communicating with device drivers. It technically works, but marshaling huge APIs like V4L2 or DRM through a single kernel call is less than ideal: https://lwn.net/Articles/897202/
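As a small illustration of the pattern (assuming a /dev/video0 device exists), here is roughly what one V4L2 call looks like; every other operation in that API is yet another request code and struct pushed through the same ioctl() entry point:

    #include <fcntl.h>
    #include <linux/videodev2.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0)
            return 1;

        /* One request code, one struct, one syscall -- repeated for
         * every operation the driver supports. */
        struct v4l2_capability cap;
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
            printf("driver: %s, card: %s\n", (char *)cap.driver, (char *)cap.card);

        close(fd);
        return 0;
    }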
Speaking of select(), a while ago I got a PR merged into SerenityOS [1] that removed it from the kernel and reimplemented it as a compatibility shim on top of poll() inside the C library.
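The general shape of such a shim (a simplified sketch, not the actual SerenityOS code) is to translate the fd_sets into a pollfd array, call poll(), and translate the results back:

    #include <poll.h>
    #include <stdlib.h>
    #include <sys/select.h>

    int select_shim(int nfds, fd_set *readfds, fd_set *writefds,
                    fd_set *exceptfds, struct timeval *timeout)
    {
        struct pollfd *pfds = calloc(nfds > 0 ? nfds : 1, sizeof(*pfds));
        if (!pfds)
            return -1;

        /* Gather the fds present in any of the three sets. */
        int count = 0;
        for (int fd = 0; fd < nfds; fd++) {
            short events = 0;
            if (readfds && FD_ISSET(fd, readfds))
                events |= POLLIN;
            if (writefds && FD_ISSET(fd, writefds))
                events |= POLLOUT;
            if (exceptfds && FD_ISSET(fd, exceptfds))
                events |= POLLPRI;
            if (events) {
                pfds[count].fd = fd;
                pfds[count].events = events;
                count++;
            }
        }

        /* select() takes a timeval; poll() wants milliseconds. */
        int ms = timeout ? (int)(timeout->tv_sec * 1000 + timeout->tv_usec / 1000) : -1;
        int rc = poll(pfds, count, ms);

        /* Translate revents back, counting set bits the way select() does. */
        if (rc >= 0) {
            if (readfds) FD_ZERO(readfds);
            if (writefds) FD_ZERO(writefds);
            if (exceptfds) FD_ZERO(exceptfds);
            rc = 0;
            for (int i = 0; i < count; i++) {
                if (readfds && (pfds[i].revents & (POLLIN | POLLHUP))) {
                    FD_SET(pfds[i].fd, readfds);
                    rc++;
                }
                if (writefds && (pfds[i].revents & POLLOUT)) {
                    FD_SET(pfds[i].fd, writefds);
                    rc++;
                }
                if (exceptfds && (pfds[i].revents & (POLLPRI | POLLERR))) {
                    FD_SET(pfds[i].fd, exceptfds);
                    rc++;
                }
            }
        }
        free(pfds);
        return rc;
    }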
You can shove some of the minor cruft from Unix out to the side even on a Unix-like system, but you can't get rid of it all this way.
Well, it was the best design implemented inside SerenityOS when I contributed this, as mentioned in the PR. The event loop still used select() at the time, although it was migrated to poll() a couple of months ago [1].
Polling mechanisms that keep track of sets of file descriptors in-kernel are especially useful when there's a large number of them to watch, because with poll() the kernel has to copy the set in from userspace at each invocation. Given that SerenityOS is focused on being a Unix-like workstation operating system rather than a high-performance server operating system, there are usually not many file descriptors to poll at once in that context. It's possible that poll() will adequately serve their needs for a long time.
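For comparison, a rough sketch of the in-kernel-set approach using Linux's epoll: descriptors are registered once with the kernel, and no set is copied in on each wait (the handler body is elided):

    #include <sys/epoll.h>
    #include <unistd.h>

    int watch_with_epoll(const int *fds, int nfds)
    {
        int ep = epoll_create1(0);
        if (ep < 0)
            return -1;

        /* Register each fd once; the kernel remembers the set across waits. */
        for (int i = 0; i < nfds; i++) {
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[i] };
            if (epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev) < 0) {
                close(ep);
                return -1;
            }
        }

        struct epoll_event ready[64];
        for (;;) {
            /* Unlike poll(fds, nfds, ...), no descriptor set crosses
             * the user/kernel boundary here. */
            int n = epoll_wait(ep, ready, 64, -1);
            for (int i = 0; i < n; i++) {
                /* ... handle ready[i].data.fd ... */
            }
        }
    }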
That PR was an exercise of reducing unnecessary code bloat in the kernel. It wasn't a performance optimization.