If it's an offsite backup, treat it like you would any DR site: either plan for yet another backup, or accept the risk on the assumption that the primary and the replica are unlikely to go down simultaneously.
I don't have the data to demonstrate that this is incorrect, because we lack a fundamental model of how brains operate. Brains probably compute under a sufficiently expansive definition of "computer," but the claim that the brain is a classical computer is sorely underdetermined by the evidence.
I disagree a bit. Coding can remain an artistic passion for you indefinitely; it's just that your ability to demand that every line of code be crafted artisanally won't be subsidized by your employer for much longer. There will probably always be demand for handcrafted code, just a heavily diminished one.
Interesting article! I think I get what the concern is, but your view of human cognition is going to heavily color what you think the limitations of an LLM are. For example, if you think that, broadly speaking, the human learning process (concept acquisition, inference, concept revision, etc.) is modeled by modern LLM training, then these limitations might apply to humans in the same sense they do to LLMs.
Systematicity is a really powerful concept that can drive a wedge between classical cognition folks and connectionists/associationists, but it has never been clear to me that we actually instantiate systematic cognitive patterns — only that, in some arenas, we should.