Sorry, what's the platonic ideal you're suggesting as an alternative? There are threaded Ruby web servers, but, just like with Python, the interpreter is mostly giant-locked. The overwhelming majority of web apps out there are running under Apache, which, just like Mongrel, is preforking and queueing, not running everything simultaneously.
If you have N requests hitting Apache and one of them is slow, that one slow request will run in its own process while the fast requests are sent off to other processes. The fact that each process only handles one request at a time is irrelevant.
Maybe I misunderstood the article -- it sounded to me like requests were being distributed among the Mongrel processes and queued on each individual process, rather than being queued centrally and only handed to a process when one is free (like Apache does).
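Here's a toy Python sketch of the difference, for what it's worth -- it isn't how Apache or Mongrel actually dispatch internally, just the shape of the two queueing models, and the names (serve, drain, REQUESTS) are all made up for illustration:

    import queue
    import threading
    import time

    NUM_WORKERS = 4
    # Request 0 is slow; the rest are fast.
    REQUESTS = [("req-0", 2.0)] + [("req-%d" % i, 0.1) for i in range(1, 8)]

    def serve(name, duration, start):
        """Stand-in for actually handling a request."""
        time.sleep(duration)
        print("%s finished at t=%.2fs" % (name, time.monotonic() - start))

    def drain(q, start):
        """Worker loop: keep pulling requests until the queue is empty."""
        while True:
            try:
                name, duration = q.get_nowait()
            except queue.Empty:
                return
            serve(name, duration, start)

    def central_queue():
        """One shared backlog: whichever worker is free takes the next
        request, so fast requests never wait behind the slow one."""
        q = queue.Queue()
        for req in REQUESTS:
            q.put(req)
        start = time.monotonic()
        workers = [threading.Thread(target=drain, args=(q, start))
                   for _ in range(NUM_WORKERS)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

    def per_worker_queues():
        """Requests assigned round-robin to private per-worker queues on
        arrival: a fast request parked behind the slow one waits, even
        though other workers have gone idle."""
        qs = [queue.Queue() for _ in range(NUM_WORKERS)]
        for i, req in enumerate(REQUESTS):
            qs[i % NUM_WORKERS].put(req)
        start = time.monotonic()
        workers = [threading.Thread(target=drain, args=(q, start)) for q in qs]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

    if __name__ == "__main__":
        print("-- central queue --")
        central_queue()
        print("-- per-worker queues --")
        per_worker_queues()

With the shared backlog, the slow request ties up one worker while everything else drains quickly; with per-worker queues, any fast request that happens to land behind the slow one sits there even while other workers are idle.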
The problem with Mongrel is that you allocate N Mongrel instances at setup time. Apache, on the other hand, can dynamically spawn new processes (up to a limit) to meet increased demand. This is especially important for people like me who host more than one site on a machine and want to handle load up to a certain point without fiddling with config files every time there's a spike in traffic.
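For reference, the knobs that control that dynamic sizing live in Apache's prefork MPM config; the numbers below are purely illustrative, not tuned recommendations:

    # prefork MPM: Apache grows and shrinks the child pool between these bounds.
    # StartServers:        children forked at startup
    # MinSpareServers:     fork more when fewer than this many sit idle
    # MaxSpareServers:     kill extras when more than this many sit idle
    # MaxClients:          hard ceiling on simultaneous children
    # MaxRequestsPerChild: 0 means children are never recycled for request count
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      20
    MaxClients          150
    MaxRequestsPerChild   0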
"Sorry, what's the platonic ideal you're suggesting as an alternative?"
Something like Yaws, built in Erlang. With fine-grained threading that works well, you don't get the one-to-one mapping of OS processes to tasks.
But that certainly comes with its own set of tradeoffs. There are some workloads where it can be a massive win, but committing to any of the still-obscure languages/runtimes that can pull this off with panache also means committing to a less well-developed library ecosystem.
Facebook chat runs on Erlang for a reason... and the rest of Facebook runs on PHP, also for a reason.