I'm a sysadmin, not a programmer, so maybe that's why it seems like the Haskell solution is more complicated to me - but can someone explain why a supposed improvement for something is more complicated, and involves a bunch of stuff that can't be expected to be included on any Unix-like system you sit down in front of?
It's especially confusing, considering that the blogger claims to have changed his opinion, but doesn't bother to clarify what has changed in the new post that he "helpfully" links to. It's also interesting that the author claims McIlroy would approve of his solution, without checking with him. McIlroy's email isn't exactly hidden if you know where to look, and I know he still posts on a few mailing lists regularly, so it's not like he's completely unavailable.
McIlroy's solution works on any POSIX-compatible system. Feel free to check for yourself: http://shellhaters.org/
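For reference, McIlroy's solution was a six-stage pipeline of standard tools printing the k most frequent words of its input. The original used the older tr '[\n*]' replacement syntax; the sketch below uses the modern POSIX spelling, with k hard-coded for illustration (the original took it as a shell parameter):

```shell
# Word-frequency pipeline in the style of McIlroy's solution.
k=10
tr -cs 'A-Za-z' '\n' |   # 1. split input into one word per line
tr 'A-Z' 'a-z' |         # 2. lowercase everything
sort |                   # 3. group identical words together
uniq -c |                # 4. count each group
sort -rn |               # 5. most frequent first
sed "${k}q"              # 6. stop after k lines
```

Every command here is specified by POSIX, which is why it runs unchanged on any conforming system.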
I agree with you, the Haskell solution is worse. But I disagree with some of your reasons.
Forget the ubiquity of Unix. Forget POSIX--McIlroy's text was written before even the first drafts of POSIX.
Part of the premise of the challenge to Knuth was to use his solution to advocate for his programming system: WEB (essentially a variant of Pascal)--look how great it is to program in WEB! So naturally, McIlroy included in his response a comparison to his programming system: UNIX. Knuth had designed WEB to make programming nicer; McIlroy had designed UNIX[1] to make programming nicer. It wasn't just a showdown between word count programs, it was a showdown of WEB vs UNIX.
And to hear some people tell it, the things that led to Unix's victory in that little showdown are the same things that led to its ubiquity today. If people had liked Knuth's solution better, maybe we'd have WEB/Pascal systems everywhere instead of Unix.
[1]: He wasn't the sole designer, but he did invent pipes, which is the big item in using the Unix shell as a programming model.
It's a bit of an unfair comparison, though. The problem and tool set were predefined before Knuth started. Also, it's a problem that's particularly suited to Unix tools. There are many problems where WEB might have resulted in the better solution. As a kid, I saw a program that computed the position of the Moon in the sky given a location and time. That would probably be better solved with WEB than with Unix pipes.
Yes, it is more complicated. However, it is doing more: type checking, which is useful when the program needs to be changed, and it leaves room for future optimizations, as mentioned in the article.
This is neat and all, but I'm not sure I understand how it's better than 'less /var/run/dmesg.boot'. Which is not to say that this doesn't have any merit; I just don't understand it.
A trivial inspection of my FreeBSD 11.1 system's /var/run/dmesg.boot shows that it doesn't contain all the information displayed by lscpu. So what's hard to understand?
lscpu displays CPU information in a predictable, concise format whereas 'less /var/run/dmesg.boot' requires you to read through a log file that contains a subset of the same information that also happens to be interspersed among unrelated log entries?
It'd probably be more fair to discuss the merits of lscpu vs some invocation of grep '<some regex>' /var/run/dmesg.boot that produced similar output.
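Something along these lines, for instance; the patterns are a guess based on typical FreeBSD boot-log line prefixes (CPU model, feature flags, SMP core count) and would need adjusting per system:

```shell
# Extract just the CPU-related lines from the FreeBSD boot log.
# These patterns are a starting point, not a complete set; check
# your own dmesg.boot for the exact prefixes it uses.
grep -E '^(CPU:|  Origin=|  Features=|FreeBSD/SMP:)' /var/run/dmesg.boot
```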
As far as I can see (and comparing with the output on the GitHub page), the only thing missing from dmesg is the amount of cache and byte order within the first 20 or so lines. One depends on CPU purchase, the other depends on the ISA. Either way, they're not something I can do anything about even if I need to know them - which, most of the time, I don't.
I'd argue that the reason you think lscpu is predictable and concise is that you're used to reading it.
> As far as I can see (and comparing with the output on the GitHub page), the only thing missing from dmesg is the amount of cache and byte order within the first 20 or so lines.
Well, yes. A subset of the information and interspersed among unrelated log entries as I stated.
> One depends on CPU purchase, the other depends on the ISA. Either way, they're not something I can do anything about even if I need to know them - which, most of the time, I don't.
Great!
> I'd argue that the reason you think lscpu is predictable and concise is that you're used to reading it.
Actually, I've rarely used that command (probably a handful of times in the past 5 years) so you're about as wrong as you can get when it comes to using the familiarity argument. :)
I actually based my statement on:
lscpu -> gives CPU information
less dmesg.boot -> contains CPU information mixed in with other log entries
lscpu is more concise than the dmesg.boot log file
lscpu -> fields have fixed fieldnames
less dmesg.boot -> unstructured log entries
lscpu has predictable output compared to the dmesg.boot log file
In my world parsing output from command line tools that use predictable identifiers in their output is easier than parsing similar data out of unstructured logs.
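To make that concrete: because lscpu emits "Field: value" pairs, a field can be pulled out by name in one line. ("Model name" is the label used by Linux's lscpu; other versions may label it differently.)

```shell
# Extract a single field from lscpu's "Field: value" output.
# -F': +' splits each line at the colon plus following spaces,
# so $1 is the field name and $2 is its value.
lscpu | awk -F': +' '$1 == "Model name" { print $2 }'
```

Doing the same against an unstructured boot log means guessing at line formats that nothing guarantees will stay stable.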
> lscpu has predictable output compared to the dmesg.boot log file
The CPU info appears in the same place each time, assuming no other changes took place. Even if something did, it would affect only its relative position.
And yes, the extra info provided by lscpu is largely irrelevant for nearly all use cases.
> In my world parsing output from command line tools that use predictable identifiers in their output is easier than parsing similar data out of unstructured logs.
Sure, but if your argument is that you can't easily do this without the reinvention of the wheel, I disagree.
> The CPU info appears in the same place each time, assuming no other changes took place. Even if something did, it would affect only its relative position.
The initial blob of CPU info, sure. I never really argued otherwise, other than to say the log file is unstructured data and the relevant CPU information is not grouped together.
I don't have a ton of other FreeBSD machines handy to check. Is the number of CPUs always on line 21 of the dmesg.boot log? I mean, sure, if it is, there's some consistency there, but that information isn't grouped with the rest.
> And yes, the extra info provided by lscpu is largely irrelevant for nearly all use cases.
Maybe for your use cases? Seems a little presumptuous to declare what information is irrelevant for other people's use cases.
> Sure, but if your argument is that you can't easily do this without the reinvention of the wheel, I disagree.
Ultimately, even if you ignore FreeBSD's and OpenBSD's history of cooperation (or lack thereof, which I won't get into), the projects have different values that don't necessarily mix - so I'm not sure it's possible. It's okay to have different values; we don't all need to think a certain way.
Personally, I'd also start worrying about monocultures and a lack of competition if they were forced together.
In the marketplace of ideas, it's good to have multiple approaches to a single problem, because it lets you shop around, evaluate, and pick whichever solution fits you and your requirements best.
https://github.com/dspinellis/unix-history-repo is a project that attempts to document Unix from its very first line all the way up to FreeBSD's HEAD (at least whenever it's imported, which might only be once a year). There's even a Gource video showing the evolution.
My guess would be that it's because jails are exclusive to FreeBSD, and not that many people (compared to Linux, that is) run FreeBSD. Jails were also devised as a tool for the sysadmin's toolbox, whereas Docker is a tool for the developer's toolbox - and each has its own strengths and weaknesses.
Finally, jails do lack a bit of the functionality that lets Docker do some things - but that isn't something that can't exist, and in fact there are signs that such instrumentation might be in the process of being written: https://twitter.com/FiLiS/status/894651614002393088.
FreeBSD jails can't just be easily ported to any platform, as they're not designed for portability - kernel features being portable wasn't really a thing back in the late 1990s when jails were developed. They're designed to contain software (in fact, the title of the original paper is literally "confining the omnipotent root"), which is why they're the first actual type of container (chroot's original purpose isn't known to anyone but Bill Joy, and while he hasn't said much on the subject, its first documented use that I know of was building BSD in a clean environment).