Chile. At least from the second decade onward, Chilean growth significantly outpaced South America generally.

Taiwan. South Korea. Many others. Generally, right-wing governments almost by definition tend to be more free market oriented relative to leftist governments, while leftist governments tend to be more populist. You can get a lot of graft and corruption either way, but the path to growth and out of poverty, if you can get there at all, is generally more right-wing, certainly at least for developing economies.

In poor countries, left-wing and right-wing, the rich hoard wealth, and they generally see the competition for wealth as a zero sum game. Leftism tends toward always seeing a zero-sum game, i.e. class struggle over a fixed pie. It's only certain strains of right-leaning governments that figure out you can grow the pie so rich and poor alike become wealthier. (Second-order inequality, i.e. growing wealth gap despite everybody becoming wealthier, is a thornier problem, but relatively recent in historical terms, and I'm not sure the old left/right dichotomy of political economy schools is useful here.)

But relative to historical exemplars, I'm not sure any advanced economy can truly be called leftist, rhetoric notwithstanding. Full-throated leftist governments end up like Venezuela. New Zealand is hardly leftist by comparison.


> It's only certain strains of right-leaning governments that figure out you can grow the pie so rich and poor alike become wealthier.

Credit to a few. Roger Douglas in New Zealand. His contemporary Peter Walsh in Australia, the Hawke finance minister, also got it. Keating somewhat got it, and put his neck on the line for politically-difficult but structurally-easy growth-pie macro reforms as treasurer, but did not follow through on the politically-difficult and structurally-hard reforms, like wholesale sales tax, and then became a fixed-pie prime minister. Walsh was gone by then.


> leftist governments tend to be more populist.

This might have been true once; it's not true now.


This reads like word games. The article basically argues for optimizing the calling discipline on a per-function basis, using the function body to guide the optimization. That's not a calling convention and definitely not a standard ABI. What they're arguing for is a kind of static optimization mid-way between targeting a calling convention and inlining. That's not a bad idea on its face, but has nothing at all to do with the C ABI. As to whether it would actually improve anything, frankly, I'm half-surprised compilers don't already do this, i.e. for functions where it's deemed too costly to inline, but which aren't externally visible, and the fact that they don't suggests that maybe there's not much to gain here.

I've yet to read an article criticizing the so-called C ABI that doesn't end up effectively changing the problem statement (in this case, into something utterly incomparable), as opposed to providing a better solution to the same problem. Changing the problem statement is often how you arrive at better solutions overall, but don't try to sell it as something it isn't, insinuating that the pre-existing solution is stupid.


  > for functions where it's deemed too costly to inline, but which aren't externally visible
In LTO mode, GCC does.

> I remain unclear what authority the federal government has over such a matter

It's actually an enumerated power under Article I, Section 8, Clause 5:

> [The Congress shall have Power...] To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures; ...

https://constitution.congress.gov/browse/essay/artI-S8-C5-1/...


I'm surprised that would be interpreted to include time zones. Units of time, arguably (measures), but time zones? Time zones are not a measure of anything. Time zones do not follow on from definitions of units of time, any more than road speed limits follow on from the definition of a mile.

I would be less surprised if it were the commerce power used to uphold time zone coordination - for the promotion and regularity of interstate commerce etc etc. Tenuous, but consistent with a lot of the other nonsense that's been hung from the commerce power over the years.

Then there's the actual enforcement angle - time zones are just a social convention whereby people in a given area pretend that the time is slightly different than it 'really' is (local solar time). There's no reason local / state government and businesses can't post / operate on different hours, and leave federal bodies to operate on whatever 'federal time' they want. This already happens in parts of the world where the official time is locally inappropriate, such as Eucla in Australia or Xinjiang in China.

Obviously the optimal solution here is to coordinate a time change at all levels of government, but failing that there are other options.


I don't think there've been many court cases exploring the meaning of "[to] fix the Standard of Weights and Measures", and probably none as it regards time. And in the modern era the Commerce Power would probably be sufficient on its own. SCOTUS has suggested that under existing precedent Congress would have the power to grant copyrights and patents under the Commerce Clause, and so the Copyright and Patent clauses today act more like restrictions on Congressional power.

But I don't see a problem relating time zones to measurement. Part of the authority to standardize measurement is the ability to dictate the manner and means of determining a quantitative value. Under the Weights and Measures clause I think Congress can regulate things like scales, including their precision and accuracy, at least in so far as they claim to provide a measurement of a Federally standardized unit. You might intuitively think the only reasonable end to such power is using it to improve and mandate ever greater precision and accuracy. But sometimes too much precision and accuracy is a bad thing--it can create transactional friction. Case in point, when 12PM noon varied from town to town, it became increasingly problematic as the speed of long-distance transportation improved, i.e. the railroads. So the solution was to mandate worse accuracy.

Relatedly, there's a whole separate question of what time means. Most HN readers understand time in the scientific sense, and think of time in the sense of the SI second. But civil time used for general daily life has a slightly more nuanced meaning. That said, UTC/TAI time is very much like time zones in the sense of fudging accuracy. Modern clocks and gravimeters, even the kind regular people can buy for a few hundred or thousand dollars, are precise enough to be able to distinguish local time dilation. So the time passing in your living room is actually different from UTC/TAI. But think of how complex and, for the most part, useless it would be to try to "solve" that discrepancy by trying to integrate that reality into the general definition of civil time.

Also, AFAIU the authority to standardize measurement, and time specifically, operates more as a prohibition on states imposing their own mandates. See, generally, the Legal Tender Cases for the push and pull between various powers allocated between the federal government and the states.


They measure the number of minutes since sunrise.

Which changes every day.

GCC supports specifying endianness of structs and unions: https://gcc.gnu.org/onlinedocs/gcc-15.2.0/gcc/Common-Type-At...

I'm not sure how useful it is, though it was only added 10 years ago with GCC 6.1 (recent'ish in the world of arcane features like this, and only just about now something you could reasonably rely upon existing in all enterprise environments), so it seems some people thought it would still be useful.
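For the curious, here's a minimal sketch of what using it looks like (a hedged example, assuming GCC 6+; the struct tag and field names are made up). The scalar_storage_order type attribute makes GCC emit byte swaps whenever the struct's scalar members are read or written, so the in-memory layout stays big-endian even on a little-endian host:

    /* Sketch: a wire-format header stored big-endian regardless of host
       endianness. GCC swaps bytes on each scalar member access, so this
       prints "ca fe ba be" even on x86. Requires GCC 6 or later. */
    #include <stdint.h>
    #include <stdio.h>

    struct wire_hdr {
        uint32_t magic;
        uint16_t version;
        uint16_t length;
    } __attribute__((packed, scalar_storage_order("big-endian")));

    int main(void) {
        struct wire_hdr h;
        h.magic = 0xcafebabe;   /* the store is byte-swapped on a LE host */
        h.version = 1;
        h.length = 64;

        const unsigned char *raw = (const unsigned char *)&h;
        printf("%02x %02x %02x %02x\n", raw[0], raw[1], raw[2], raw[3]);
        return 0;
    }
If I remember the docs correctly, taking the address of individual scalar members of such a struct is restricted, so the attribute fits serialization-boundary structs better than general-purpose data structures.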


> But it turns out that files commonly do fit in memory

The difference between slurping a file into malloc'd memory and just mmap'ing it is that the latter doesn't use up anonymous memory. Under memory pressure, the mmap'd file can just be evicted and transparently reloaded later, whereas if it was copied into anonymous memory it either needs to be copied out to swap or, if there's not enough swap (e.g. if swap is disabled), the OOM killer will be invoked to shoot down some (often innocent) process.

If you need an entire file loaded into your address space, and you don't have to worry about the file being modified (e.g. have to deal with SIGBUS if the file is truncated), then mmap'ing the file is being a good citizen in terms of wisely using system resources. On a system like Linux that aggressively buffers file data, there likely won't be a performance difference if your system memory usage assumptions are correct, though you can use madvise & friends to hint to the kernel. If your assumptions are wrong, then you get graceful performance degradation (back pressure, effectively) rather than breaking things.
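To make that concrete, here's a minimal sketch (assuming Linux/POSIX, error handling kept terse) of mapping a whole file read-only rather than slurping it into malloc'd memory, with an madvise hint for sequential access:

    /* Sketch: map a file read-only and scan it. The pages stay backed by
       the file, so under memory pressure the kernel can drop them and
       refault later, instead of swapping or shooting down processes. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd == -1) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }
        if (st.st_size == 0) { puts("0 lines"); return 0; }  /* zero-length mmap fails */

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);                                /* the mapping keeps the data reachable */

        madvise(p, st.st_size, MADV_SEQUENTIAL);  /* hint: front-to-back read */

        size_t lines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            lines += (p[i] == '\n');
        printf("%zu lines\n", lines);

        munmap(p, st.st_size);
        return 0;
    }
If the file can be truncated out from under you, you'd additionally want the SIGBUS handling mentioned above, or to work on a private copy.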

Are you tired of bloated software slowing your systems to a crawl because most developers and application processes think they're special snowflakes that will have a machine all to themselves? Be part of the solution, not part of the problem.


I'm not sure about the most recent successful charge, but this is one of the charges used against James Comey: https://en.wikipedia.org/wiki/Prosecution_of_James_Comey

Here's a 2018 article listing some prosecutions: https://www.nbcnews.com/politics/congress/5-people-who-lied-...

Because it's the DoJ prosecuting, I think it's uncommon for the government to prosecute administration witnesses, even from previous administrations. Once upon a time Congress would prosecute and impose punishment itself: https://en.wikipedia.org/wiki/Contempt_of_Congress#Inherent_...


In the case where you're using the top of the stack as a, well, stack, I don't see the problem. It would only work if you're not interleaving processing of dynamically-sized objects and function codegen works out. It's similar to TCO in the sense of maintaining certain invariants across calls (e.g. no temporaries need be preserved), and actually in languages with TCO, like Lua, you can hack an application-level stack data structure using tail recursion (and coroutines/threads if you need more than one) that can sometimes be more performant or more convenient than using a native data structure.

There's been at least one experiment (posted a few years ago to HN) where someone benchmarked a stackful coroutine implementation with hundreds of thousands (millions?) of stacks that could grow contiguously on-demand up to, e.g., 2MB, but were initially minimally sized and didn't reserve the maximum stack size upfront. The bottleneck was the VMA bookkeeping--the syscalls, exploding the page table, TLB flushing, etc. In principle it could work well and be even more performant than existing solutions, and it might work better today given Linux 6.13's lightweight guard page feature, MADV_GUARD_INSTALL, but we probably still need more architectural support from the system (kernel, if not hardware) to make it performant and competitive with language-level solutions like goroutines, Rust async, etc.
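For reference, and not the experiment described above but the conventional baseline it would be compared against: a sketch (assuming Linux, 4 KiB pages) of a fixed-reservation coroutine stack with an explicit guard page. Anonymous pages are only populated when first touched, but the full reservation and its VMAs are paid for up front:

    /* Sketch: reserve a 2 MiB stack with a PROT_NONE guard page at the
       low end. Physical pages fault in lazily as the stack grows down;
       overflowing hits the guard instead of an adjacent mapping. */
    #include <stdio.h>
    #include <sys/mman.h>

    #define STACK_SIZE (2u << 20)   /* 2 MiB reservation */
    #define GUARD_SIZE 4096u        /* assumes 4 KiB pages; use sysconf() in real code */

    static void *alloc_stack(void) {
        void *base = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return NULL;
        if (mprotect(base, GUARD_SIZE, PROT_NONE) != 0) {  /* lowest page becomes the guard */
            munmap(base, STACK_SIZE);
            return NULL;
        }
        return base;  /* the stack pointer starts at base + STACK_SIZE and grows down */
    }

    int main(void) {
        void *stk = alloc_stack();
        if (stk == NULL) { perror("alloc_stack"); return 1; }
        printf("reserved stack at %p\n", stk);
        munmap(stk, STACK_SIZE);
        return 0;
    }
Doing this for hundreds of thousands of stacks means a couple of VMAs per stack (the mprotect splits the mapping), which is exactly the bookkeeping cost described above; as I understand it, reducing that is part of the point of lightweight guards like MADV_GUARD_INSTALL.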


Recent article on the younger Bundy, "Ammon Bundy Is All Alone. The anti-government militia leader can’t make sense of his allies’ support for ICE violence." https://www.theatlantic.com/ideas/2026/02/ammon-bundy-trump-...

Ammon Bundy has held relatively libertarian opinions on immigration for a long long time. Since at least the days of the standoffs. His political ideals are closer to the old time westy classical liberalism (something like founding era anti-federalists with a view of the law that essentially mirrors Bastiat) than they are to neo-conservatism.

I find it easier to understand in terms of the Unix syscall API. `2>&1` literally translates as `dup2(1, 2)`, and indeed that's exactly how it works. In the classic unix shells that's all that happens; in more modern shells there may be some additional internal bookkeeping to remember state. Understanding it as dup2 means it's easier to understand how successive redirections work, though you also have to know that redirection operators are executed left-to-right, and traditionally each operator was executed immediately as it was parsed, left-to-right. The pipe operator works similarly, though it's a combination of fork and dup'ing, with the command being forked off from the shell as a child before processing the remainder of the line.

Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(2, 1) as 2<1, as in make fd 2 a duplicate of fd 1, but in terms of abstract I/O semantics that would be misleading.
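To make the correspondence concrete, here's a minimal sketch (assuming POSIX; "cmd" is a placeholder) of what the shell effectively does in the forked child for `cmd >out.txt 2>&1`, applying the redirections left to right and then exec'ing:

    /* Sketch: the child-side work for `cmd >out.txt 2>&1`.
       Redirections are applied left to right, then the child execs. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* >out.txt : make fd 1 refer to the file */
        int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) { perror("open"); return 1; }
        dup2(fd, 1);
        close(fd);

        /* 2>&1 : make fd 2 a duplicate of whatever fd 1 now refers to */
        dup2(1, 2);

        execlp("cmd", "cmd", (char *)NULL);  /* placeholder command name */
        perror("execlp");
        return 127;
    }
Swap the order and you get the classic gotcha: `2>&1 >out.txt` first duplicates stderr onto the old (terminal) stdout, and only then repoints stdout at the file.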


Another fun consequence of this is that you can initialize otherwise-unset file descriptors this way:

    $ cat foo.sh
    #!/usr/bin/env bash

    >&1 echo "will print on stdout"
    >&2 echo "will print on stderr"
    >&3 echo "will print on fd 3"

    $ ./foo.sh 3>&1 1>/dev/null 2>/dev/null
    will print on fd 3
It's a trick you can use if you've got a super chatty script or set of scripts, you want to silence or slurp up all of their output, but you still want to allow some mechanism for printing directly to the terminal.

The danger is that if you don't open it before running the script, you'll get an error:

    $ ./foo.sh
    will print on stdout
    will print on stderr
    ./foo.sh: line 5: 3: Bad file descriptor

With exec you can open file descriptors of your current process.

  if [[ ! -e /proc/$$/fd/3 ]]; then
      # check if fd 3 already open and if not open, open it to /dev/null
      exec 3>/dev/null
  fi
  >&3 echo "will print on fd 3"
This will fix the error you are describing while keeping the functionality intact.

Now with that exec trick the fun only gets started. Because you can redirect to subshells and subshells inherit their redirection of the parent:

  set -x # when debugging, print all commands ran prefixed with CMD:
  PID=$$
  BASH_XTRACEFD=7
  LOG_FILE=/some/place/to/your/log/or/just/stdout
  exec 3> >(gawk '!/^RUN \+ echo/{ print strftime("[%Y-%m-%d %H:%M:%S] <PID:'$PID'> "), $0; fflush() }' >> $LOG_FILE)
  exec > >(sed -u 's/^/INFO:  /' >&3)
  exec 2> >(sed -u 's/^/ERROR: /' >&3)
  exec 7> >(sed -u 's/^/CMD:   /' >&3)
  exec 8>&1 #normal stdout with >&8
  exec 9>&2 #normal stderr with >&9
And now your bash script will have a nice log with stdout and stderr prefixed with INFO and ERROR and has timestamps with the PID.

Now the disclaimer is that you will not have guarantees that the order of stdout and stderr will be correct unfortunately, even though we run it unbuffered (-u and fflush).


Nice! Not really sure of the point, since AI can bang out a much more maintainable (and sync'd) wrapper in Go in about 0.3 seconds.

(if runners have sh then they might as well have a real compiler; scratch > debian > alpine, "don't debug in prod")


If you just want to print to the terminal even if normal stdout/stderr is disabled you can also use >/dev/tty, but obviously that is less flexible.

Interesting. Is this just literally “fun”, or do you see real world use cases?

The aws cli has a set of porcelain for s3 access (aws s3) and plumbing commands for lower level access to advanced controls (aws s3api). The plumbing command aws s3api get-object doesn't support stdout natively, so if you need it and want to use it in a pipeline (e.g. pv), you would naively do something like

  $ aws s3api get-object --bucket foo --key bar /dev/stdout | pv ...
Unfortunately, aws s3api already prints the API response to stdout, and error messages to stderr, so if you do the above you'll clobber your pipeline with noise, and using /dev/stderr has the same effect on error.

You can, though, do the following:

  $ aws s3api get-object --bucket foo --key bar /dev/fd/3 3>&1 >/dev/null | pv ...
This will pipe only the object contents to stdout, and the API response to /dev/null.

Would be nice if `curl` had something to dump headers to a third file descriptor while outputting the response on stdout.

This should work?

  curl --dump-header /dev/fd/xxx https://google.com
or

  mkfifo headers.out
  curl --dump-header headers.out https://google.com
unless I'm misunderstanding you.

Ah yeah, `/dev/fd/xxx` works :) somehow thought that was Linux only.

(Principal Skinner voice) Ah, it's a Bash expression!

I have used this in the past when building shell scripts and Makefiles to orchestrate an existing build system:

https://github.com/jez/symbol/blob/master/scaffold/symbol#L1...

The existing build system I did not have control over, and would produce output on stdout/stderr. I wanted my build scripts to be able to only show the output from the build system if building failed (and there might have been multiple build system invocations leading to that failure). I also wanted the second level to be able to log progress messages that were shown to the user immediately on stdout.

    Level 1: create fd=3, capture fd 1/2 (done in one place at the top-level)
    Level 2: log progress messages to fd=3 so the user knows what's happening
    Level 3: original build system, will log to fd 1/2, but will be captured
It was janky and it's not a project I have a need for anymore, but it was technically a real world use case.

One of my use-cases previously has been enforcing ultimate or full trust of a gpg signature.

    tmpfifo="$(mktemp -u -t gpgverifyXXXXXXXXX)"
    gpg --status-fd 3 --verify checksums.txt.sig checksums.txt 3>$tmpfifo
    grep -Eq '^\[GNUPG:] TRUST_(ULTIMATE|FULLY)' $tmpfifo
It was a while ago since I implemented this, but iirc the reason for that was to validate that the key that has signed this is actually trusted, and the signature isn't just cryptographically valid.

You can also redirect specific file descriptors into other commands:

    gpg --status-fd 3 --verify checksums.txt.sig checksums.txt 3> >(grep -Eq '^\[GNUPG:] TRUST_(ULTIMATE|FULLY)')

This is often used by shell scripts to wrap another program, so that the wrapped program's input and output can be controlled. E.g. Autoconf uses this to invoke the compiler and also to control nested log output.

Red Hat and other RPM-based distributions' recommended kickstart scripts use tty3 via a similar method.

Multiple levels of logging, all of which you want to capture but not all in the same place.

Wasn't the idiomatic way the `-v` flag (repeated for verbosity)? And then stderr for errors (maybe warnings too).

It is, and all logs should ideally go to stderr. But that doesn’t let you pipe them to different places.

Yes, but sometimes you want just important non-error logs to go to the console or journal, and then those plus verbose logs to go to a file that gets rotated, and then also stderr on top of that.

This is probably one of the reasons why many find POSIX shell languages to be unpleasant. There are too many syntactical sugars that abstract too much of the underlying mechanisms away, to the level that we don't get it unless someone explains it. Compare this with Lisps, for example. There may be only one branching construct or a looping construct. Yet, they provide more options than regular programming languages using macros. And this fact is not hidden from us. You know that all of them ultimately expand to the limited number of special forms.

The shell syntactical sugars also have some weird gotchas. The &2>&1 question and its answer are a good example of that. You're just trading one complexity (low level knowledge) for another (the long list of syntax rules). Shell languages break the rule of not letting abstractions get in the way of insight and intuitiveness.

I know that people will argue that shell languages are not programming languages, and that terseness is important for the former. And yet, we still have people complaining about it. This is the programmer ego and the sysadmin ego of people clashing with each other. After all, nobody is purely just one of those two.


There must be a law of system design about this, because this happens all the time. Every abstraction creates a class of users who are powerful but fragile.

People who build a system or at least know how it works internally want to simplify their life by building abstractions.

As people come later to use the system with the embedded abstractions, they only know the abstractions but have no idea of the underlying implementations. Those abstractions used to make perfect sense for those with prior knowledge but can also carry subtle bias which makes their use error-prone for uninitiated users.


> Those abstractions used to make perfect sense for those with prior knowledge but can also carry subtle bias which makes their use error-prone for uninitiated users.

I don't think 2>&1 ever made any sense.

I think shell language is simply awful.


> I don't think 2>&1 ever made any sense.

It's not that hard. Consider the following:

  $ command &2>&1
The shell thinks that you're trying to run the portion before the & (command) in the background and the portion after the & (2>&1) in the foreground. There is just one problem. The second part (2>&1) means that you're redirecting stderr/fd2 to stdout/fd1 for a command that is to follow (similar to how you set environment variables for a command invocation). However, you haven't specified the command that follows. The second part just freezes waiting for the command. Try it and see for yourself.

  $ command 2>1
Here the shell redirects the output of stderr/fd2 to a file named 1. It doesn't know that you're talking about a file descriptor instead of a filename. So you need to use &1 to indicate your intention. The same confusion doesn't happen for the left side (fd2) because that will always be a file descriptor. Hence the correct form is:

  $ command 2>&1
> I think shell language is simply awful.

Honestly, I wish I could ask the person who designed it, why they made such decisions.


> Honestly, I wish I could ask the person who designed it, why they made such decisions.

https://web.archive.org/web/20250115051355/https://www.bell-...

https://www.cs.dartmouth.edu/~doug/sieve/sieve.pdf

https://www.in-ulm.de/~mascheck/bourne/index.html#origins

They also include citations to papers by Thompson, Bourne, and others.


I like abstractions when they hide complexity I don't need to see nor understand to get my job done. But if abstractions misdirect and confuse me, they are not syntactical sugar to me, but rather poison.

(But I won't claim that I am always able to strike the right balance here)


Seems related to the Law of Leaky Abstractions?

It's not necessarily a leaky abstraction. But a lack of _knowledge in the world_.

The abstraction may be great, the problem is the lack of intuitive understanding you can get from super terse, symbol heavy syntax.


make 2>&1 | tee m.log is in my muscle memory, like adding a & at the end of a command to launch a job, or ctrl+z bg when I forget it, or tar cfz (without the minus so that the order is not important). Without this terseness, people would build myriads of personal aliases.

This redirection relies on foundational concepts (file descriptors, stdin 0, stdout 1, stderr 2) that need to be well understood when using unix. IMO, this helps to build insight and intuitiveness. A pipe is not magic, it is just a simple operation on file descriptors. Complexity exists (buffering, zombies), but not there.


Are you sure you understood the comment you replied to?

I agree that 2>&1 is not complex. But I think I speak for many Bash users when I say that this idiom looks bad, is hard to Google, hard to read and hard to memorize.


It’s not like someone woke up one morning and decided to design a confusing language full of shortcuts to make your life harder. Bash is the sum of decades of decisions made, some with poor planning, many contradictory, by hundreds of individuals working all over the world in different decades, to add features to solve and work around real world problems, keep backwards compatibility with decades of working programs, and attempt to have a shared glue language usable across many platforms. Most of the special syntax was developed long before Google existed.

So, sure, there are practical issues with details like this. And yet, it is simple. And there are simple methods for learning and retaining little tidbits like this over time if you care to do so. Bash and its cousins aren’t going away, so take notes, make a cheat sheet, or work on a better replacement (you’ll fail and make the problem worse, but go ahead).


Yeah, seriously. It's as if people want to playact as illiterate programmers.

The "Redirections" section of the manual [0] is just seven US Letter pages. This guy's cheat sheet [1] that took me ten seconds to find is a single printed page.

[0] <https://www.gnu.org/software/bash/manual/html_node/Redirecti...>

[1] <https://catonmat.net/ftp/bash-redirections-cheat-sheet.pdf>


> The "Redirections" section of the manual [0] is just seven US Letter pages.

"Just" seven US Letter pages? You're talking about redirections alone, right? How many such features exist in Bash? I find Python, Perl and even Lisps easier to understand. Some of those languages wouldn't have been even conceived if shell languages were good enough.

There is another shell language called 'execline' (to be precise, it's a replacement for a shell). The redirections in its commands are done using a program named 'fdmove' [1]. It doesn't leave any confusion as to what it's actually doing. fdmove doesn't mention the fact that it resorts to FD inheritance to achieve this. However, the entire 'shell' is based on chain loading of programs (fork, exec, FD inheritance, environment inheritance, etc). So fdmove's behavior doesn't really create any confusion to begin with. Despite execline needing some clever thinking from the coder, I find it easier to understand what it's actually doing, compared to bash. This is where bash and other POSIX shell languages went wrong with abstractions. They got carried away with them.

[1] https://www.skarnet.org/software/execline/fdmove.html


> "Just" seven US Letter pages?

Yes. It's the syntax alongside prose explaining the behavior in detail. Go give it a read.

If you want documentation that's done up in the "modern" style, then you'll prefer that one-page cheat sheet that that guy made. I find that "modern" documentation tends to leave it up to each reader to discover the non-obvious parts of the behavior for themselves.

> I find Python ... easier to understand.

Have you read the [0] docs for Python's 'subprocess' library? The [1] docs for Python's 'multiprocessing' library? Or many of the other libraries in the Python standard library that deal with nontrivial process and I/O management? Unless you want to underdocument and leave important parts of the behavior for users to incorrectly guess, such documentation is going to be much larger than a cheat sheet would be.

[0] ...twenty-five pages of...

[1] ...fifty-nine pages of...


> Yes. It's the syntax alongside prose explaining the behavior in detail. Go give it a read.

Bold of you to assume that I or the others didn't. I made my statement in spite of reading it. Not because I didn't read it. So my opinion is unchanged here.

The point here is simple. Documentation is a very important addition. But you can't paper over other deficiencies with documentation, especially if you find yourself referring the same documentation again and again. It's an indication that you're dealing with an abstraction that can't easily be internalized. Throwing the book at everyone isn't a good solution to every problem.

> Have you read the [0] docs for Python's 'subprocess' library? The ...

Yes, I have! All of those. Their difference with bash documentation is that you get the idea in a single glance. I spend much less time wondering how to make sense of it all. Python's abstractions are well thought out, carefully selected, consistently and orthogonally implemented, and stay out of the way - something I can hardly say about bash. If that's not enough for you, Python has something that bash lacks - PEPs. The documents that neatly outline the rationale behind their decisions. That's what a lot of programmers want to know and every programmer should know.

Fun fact: The Epstein files contain a copy of the bash manual! Of course they weren't involved in his crimes. It was just one of the documents found on his system. A sysadmin is believed to have downloaded it for reference. But it's telling that it wasn't the Python manual, or the Perl manual, or something else. Meanwhile, I don't really think that Epstein was running Linux on his system.

> Unless you want to underdocument and leave important parts of the behavior for users to incorrectly guess, such documentation is going to be much larger than a cheat sheet would be.

If properly designed, such expansive documentation would be unnecessary, as they would be obvious even with the abstractions. For example when you use a buffer abstraction in modern languages, you have a fairly good idea what it does and why you need it, even though you may not care about its exact implementation details. That's the sort of quality where bash and other POSIX shells fail on several counts. In fact, check how many other shells break POSIX compatibility to solve this problem. Fish and nushell, for example.

"The developer is too lazy to read the documentation" isn't the appropriate stance to assume when so many are expressing their frustration and displeasure at it. At some point, you have to concede that there are genuine problems that cannot be blamed on the developer alone.


> But you can't paper over other deficiencies with documentation, especially if you find yourself referring the same documentation again and again. It's an indication that you're dealing with an abstraction that can't easily be internalized.

> Their difference with bash documentation is that you get the idea in a single glance.

> If properly designed, such expansive documentation would be unnecessary, as they would be obvious even with the abstractions.

What is it the kids say? "Tell me you don't make use of 'multiprocessing', 'subprocess', and other such inherently-complicated modules without telling me that you don't..."? Well, it's either that, or that you often use them and rarely use bash I/O redirections... because, man, the docs for just the 'subprocess.Popen' constructor are massive and full of caveats and warnings.


You're resorting to non sequiturs, nitpicking and vague assertions to just skirt around the point here. Python syntax rarely confuses people as much as bash does. Look at this entire discussion list for example.

subprocess module isn't a reasonable example to the contrary, because it isn't Python's syntactical sugar that makes it confusing. And even in case of modules that aren't well designed, the language developers and the community strive to provide a more ergonomic alternative.

But instead of addressing the point, you decided to make it about me and my development patterns based on some wild reasoning. But that's not surprising because this started with you asserting that it's the developers' fault that bash appears so confusing to them. Just some worthless condescension instead of staying on topic. What a disgrace!


Shell is optimized for the minimal number of keystrokes (just like Vim, Amadeus and the Bloomberg Terminal are optimized for the minimum number of keystrokes). Programming languages are primarily optimized for future code readability, with terseness and intuitiveness being second or third (depending on language).

  ? (defun even (num) (= (mod num 2) 0))
  ? (remove-if-not #'even '(6 4 3 5 2))
I'm zero Lisp expert and I don't feel comfortable at all reading this snippet.

This:

> I'm zero Lisp expert

and this:

> I don't feel comfortable at all reading this snippet

are related. The comfort in reading Lisp comes from how few syntactic/semantic rules there are. There's a standard form and a few special forms. Compare that to C - possibly one of the smallest popular languages around. How many syntactical and semantic rules do you need to know to be a half decent C programmer?

If you look at the Lisp code, it has just 2 main features - a tree in the form of nested lists and some operations in prefix notation. It needs some getting used to for regular programmers. But it's said that programming newbies learn Lisps faster than regular programming languages, due to the fewer rules they have to remember.


The initial discussion was about bash syntax. I do understand that exceptions to rules are what make a language more complicated (either human or computer language, it doesn't matter), but a language's barrier to entry is also a very important factor in how complicated a language is.

Yep, there's a strong unifying feel between the Unix API, C, the shell, and also, say, Perl.

Which is lost when using more modern languages, or languages foreign to Unix.


Python too, under the hood; a lot of its core is still from how it started as a quick way to do unixy/C things.

And just like dup2 allows you to duplicate into a brand new file descriptor, shells also allow you to specify bigger numbers so you aren’t restricted to 1 and 2. This can be useful for things like communication between different parts of the same shell script.
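The same convention also works across the exec boundary. As a sketch in C rather than shell (assuming POSIX; the program and file names are made up), a program can emit a machine-readable side channel on a higher descriptor only if the caller opened one, which is the same idea as gpg's --status-fd mentioned elsewhere in the thread:

    /* Sketch: normal output on stdout; if the caller opened fd 3 for us
       (e.g. `./prog 3>status.txt`), also write a status line there.
       dprintf is POSIX.1-2008. */
    #include <fcntl.h>
    #include <stdio.h>

    int main(void) {
        printf("doing the actual work...\n");

        /* fcntl(F_GETFD) fails with EBADF if fd 3 was never opened for us */
        if (fcntl(3, F_GETFD) != -1)
            dprintf(3, "STATUS: ok\n");

        return 0;
    }
Invoked as `./prog 3>status.txt` the status line lands in the file; invoked plainly, the write is simply skipped.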

> The pipe operator works similarly, though it's a combination of fork and dup'ing

Any time the shell executes a program it forks, not just for redirections. Redirections will use dup before exec on the child process. Piping will be two forks and obviously the `pipe` syscall, with one process having its stdout dup'd to the input end of the pipe, and the other process having its stdin dup'd to the output end.
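Roughly, and with error handling elided, a sketch (assuming POSIX) of the plumbing behind something like `ls | wc -l`:

    /* Sketch: how a shell wires up `ls | wc -l` with pipe/fork/dup2/exec.
       Error handling is elided for brevity. */
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int p[2];
        pipe(p);                   /* p[0] = read end, p[1] = write end */

        if (fork() == 0) {         /* left side: ls */
            dup2(p[1], 1);         /* stdout -> write end of the pipe */
            close(p[0]);
            close(p[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(127);
        }

        if (fork() == 0) {         /* right side: wc -l */
            dup2(p[0], 0);         /* stdin <- read end of the pipe */
            close(p[0]);
            close(p[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            _exit(127);
        }

        close(p[0]);               /* the parent must close both ends... */
        close(p[1]);
        while (wait(NULL) > 0)     /* ...and then reap both children */
            ;
        return 0;
    }
If the parent forgets to close its copy of the write end, `wc` never sees EOF and the pipeline hangs.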

Honestly, I find the Bash manual to be excellently written, and it's probably available on your system even without an internet connection. I'd always go there rather than rely on Stack Overflow or an LLM.

https://www.gnu.org/software/bash/manual/bash.html#Redirecti...


Haha, I'm even more confused now. I have no idea what dup is...

There are a couple of ways to figure that out.

open a terminal (OSX/Linux) and type:

    man dup
open a browser window and search for:

    man dup
Both will bring up the man page for the function call.

To get recursive, you can try:

    man man unix
(the unix is important, otherwise it gives you manly men)

> otherwise it gives you manly men

That's only just after midnight [1][2]

[1] - https://www.youtube.com/watch?v=XEjLoHdbVeE

[2] - https://unix.stackexchange.com/questions/405783/why-does-man...


I love that this situation occured.

you may also consider gnu info

  info dup

I did a Google search on “dup2(2, 1)” in a fresh private tab on my iPhone Safari and this thread came up second, between

https://man7.org/linux/man-pages/man2/dup.2.html

and

https://man.archlinux.org/man/dup2.2.en

A lot of bots are reading this. Amazing.


> Though, understanding it this way makes the direction of the angled bracket a little odd; at least for me it's more natural to understand dup2(2, 1) as 2<1, as in make fd 2 a duplicate of fd 1, but in terms of abstract I/O semantics that would be misleading.

Since they're both just `dup2(1, 2)`, `2>&1` and `2<&1` are the same. However, yes, `2<&1` would be misleading because it looks like you're treating stderr like an input.


I find it very intuitive as is

Respectfully, what was the purpose of this comment, really?

And I also disagree, your suggestion is not easier. The & operator is quite intuitive as it is, and conveys the intention.


Perhaps it is intuitive for you based on how you learned it. But their explanation is more intuitive for anyone dealing with low level stuff like POSIX-style embedded programming, low level unix-y C programming, etc, since it ties into what they already know. There is also a limit to how much you can learn about the underlying system and its unseen potential by learning from the abstractions alone.

> Respectfully, what was the purpose of this comment, really?

Judging by its replies alone, not everyone considers it purposeless. And even though I know enough to use shell redirections correctly, I still found that comment insightful. This is why I still prefer human explanations over AI. It often contains information you didn't think you needed. HN is one of the sources of the gradually dwindling supply of such information. That comment is still on-topic. Please don't discourage such habits.


Several years ago I put up an XMPP server (Prosody) to chat with some friends in a private group, and it's still going strong. Years ago we used to have a mailing-list to keep in touch, but that eventually went quiet. I had run an XMPP server ~10 years ago (ejabberd), but that was before mobile chat utterly displaced everything else, it never took hold among my friends, and I stopped using it when Google Talk stopped federating, cutting me off from most of the people I had been occasionally chatting with.

The modern chat experience is all about the clients, and the mobile clients in particular, especially push notification support and seamless setup, so direct comparisons of XMPP to Matrix, et al, kinda miss the point, IMO. Conversations is a really amazing Android client. Our one iPhone member is content with Monal (I use the desktop version sometimes, but it's clearly designed for the iPhone). A new member uses, I think, Gajim on Windows; they don't want the distraction of chatting on their phone.

I host a bunch of other services on OpenBSD, all using the integrated base daemons--httpd, OpenSMTPD, NSD, etc--that sandbox themselves. I was hesitant to run a daemon like Prosody that didn't integrate OpenBSD security features, so I wrote my own module (mod_unveil) that uses pledge and unveil to sandbox Prosody: https://github.com/wahern/prosody-openbsd Most Prosody users host on Linux, and many of them seem to use Docker containers, which presumably offers some isolation, but I can't really speak to it.

The only non-private, large group chat I've joined recently has been the Prosody support MUC. I'm not a chat power user, but it seems to work just as well as any large chatroom. I was content with ntalk and IRC back in the day, so I don't really get all the chat protocol bike shedding. In any event, I expect XMPP momentum (such as it is) to outlast all the interest in Discord, Matrix, etc.

