Life extension research isn’t going to make anyone immortal - it can’t prevent deaths from accidents or foul play, and after a few thousand years the odds that you will succumb to one or the other become quite high. Suicide is likely to be another major factor, including active suicide (possibly styled as euthanasia), the passive suicide of choosing to stop all this life extension wizardry, and intentional recklessness soon resulting in accidental death. Finally, for all we know there is a long tail of obscure disease processes that only kick in after lifespans no one has yet reached - and even though that too might eventually be solved, if it takes you a thousand years to find the first case of such a disease, how many will die from it before you find a cure?
> it can’t prevent deaths from accidents or foul play
Cory Doctorow's wonderful sci-fi book "Down and Out in the Magic Kingdom" explored exactly this in interesting ways. In the book people in the future can live essentially forever by transferring their consciousness into new bodies. They can also back up the contents of their consciousness, something most people do nightly but certainly before doing some dangerous extreme sport. Doing dangerous things without backing yourself up is considered tantamount to suicide since you lose all the memories and personal growth, essentially the person you became since your last backup.
People do get bored and will sometimes choose to "deadhead" for hundreds of years at a time, which is putting yourself into stasis and skipping those centuries. The book is full of provocative ideas about how practical immortality might actually work on a personal and societal level.
Life extension to make people live several times longer? Seems plausible we’ll get there eventually, simply by extrapolating current trends in science and biotechnology, and observing what is possible in other species.
Mind uploading? That’s a whole other level of sci-fi. It isn’t extrapolating what we already have, it is waving your hand and declaring “as far as we know it isn’t physically impossible so why wouldn’t we get there eventually?”
Plus it raises all these difficult questions about the philosophy of mind and theory of personal identity - is the backup actually you? Or do you die, and you are survived by someone else who isn’t you but thinks they are?
> Plus it raises all these difficult questions about the philosophy of mind and theory of personal identity - is the backup actually you? Or do you die, and you are survived by someone else who isn’t you but thinks they are?
You don’t need sci-fi mind backups for that. How certain are you that when you go to sleep tonight, the person who wakes up tomorrow will be “actually you”? How certain are you that all your memories were lived by “actually you”?
The answer, I suppose, is that we don’t know what “actually you” even means, how consciousness works, or why you’ve even got a (seemingly) continuous internal experience.
I think there's no "you", just an illusion that there's this uninterrupted "you"-ness from birth to death. It's a very useful illusion for the most part.
I view life (in the philosophical sense; consciousness) as the stream of subjective experiences (qualia) that arise out of life (in the biological sense; neurons and such). Right now my life consists of a collection of sustained interest in this discussion, a little hunger, the qualia of seeing the screen and the realization that I'm sitting a bit uncomfortably. In a few moments "I" will be a collection of other ephemeral qualia.
There's no "real" continuation between one experience and then next, just like there's no real continuation between my past "self" and my future "self", but they're both extremely useful illusions. I'll eat to subside that hunger that was registered a moment ago or change my position to get comfortable. I'll be responsible for "my" previous actions, as well. I'll basically be able to function as a temporally continuous being.
On the topic of immortality, I'd like to be virtually immortal so I can pursue my goals indefinitely. If I stop having goals or feel like I've had enough, I could always kill myself. My goals arise from my ethics, my biological needs and probably many other things. Why would I be OK with biology and death preventing me from achieving my goals at some arbitrary age?
So for me "immortality" is both being able to continue the illusions of self indefinitely (which I admit, feels good intrinsically), and being to continue the pursuit of my goals indefinitely. The goals seem to actually have more "real" continuity than "I" do.
The most troubling thing with immortality is the biological imperative to live that makes suicide so hard. But I think after a few centuries many people will reach that point. It's not a bad thing, it's just a personal choice.
We know other species have different maximum lifespans-some shorter, some longer. Obviously this is determined by genetics-as our knowledge of that subject continues to improve, why wouldn’t we eventually work out how to alter it?
We can already change the maximum lifespan of some other species. Why shouldn’t we expect to gradually be able to do it for more species? And then what makes humans special that we couldn’t eventually do the same for humans?
You're making a huge leap there based on zero scientific evidence. No one has ever demonstrated maximum lifespan extension for mammals living in the wild. Experiments have been limited to animals living in nice, safe, sterile cages. There's no free lunch in genetics, and modifications that increase maximum lifespan are likely to result in other undesirable changes. For example, suppressing immune response can be helpful for longevity, but that comes with a huge obvious downside if you're ever exposed to random pathogens. Outside of some very limited genetic defects it's usually impossible to alter a single trait in isolation.
That no one has ever done it is not in itself evidence it can’t be done, only that it is hard.
And my point was: no matter how hard human life extension is, mind uploading is many orders of magnitude harder. For the first, it seems likely that we could achieve it in principle if only we knew the right genetic changes to make - you may be right that in a thousand years we still won’t have worked out in practice exactly what they are - but human life extension has a kind of in-principle theoretical feasibility which could well coexist with practical infeasibility; mind uploading lacks even that level of theoretical feasibility.
To even consider "immortal" as possible suggests someone hasn't had a lot of formal math training. Infinity is rather large. In an infinite amount of time, any possible conjunction of circumstances that could cause an immortality system to fail will happen. Talking in thousands, millions or even billions of years doesn't even need to be rounded to be basically zero when compared to eternity.
Death is a certainty. No amount of technology can change that even theoretically. We don't even have reason to be confident that the universe itself is eternal, let alone any component of it.
I don't think I am. What would the path be to thinking it is possible? In the best case scenario where everything we know about physics turns out to be wrong and the universe miraculously allows complex eternal patterns to form, it'd still eventually end up as some entity that thought a completely different way, had a completely different form, and had a very limited understanding of the concept of "what I am", because it'd have to keep changing parts of itself due to unexpected circumstances. It'd be a ship of Theseus to the point where there wasn't even a memory of what a ship was any more. A severe Alzheimer's patient would be the same person they always have been compared to what an eternity of change would bring.
If that is immortality then we may as well call it a tautology and say we're already immortal. None of the things that make people who they are need to be preserved to achieve it so we're realistically already there.
Living an absurdly long time I can get behind. Billions of years, trillions of years, unimaginable numbers of years, sure. That could happen. But immortality isn't an option, everything eventually dies off unless we play semantic games where there aren't any properties of the thing that need to be preserved. And maybe even reality has an expiration date for all we know, which would render the whole project moot.
If we look at afterlife beliefs - and their secular substitutes such as life extension, cryonics, mind uploading, simulationism, quantum immortality - I don’t think they all have the same motivation; two people may adopt the same belief with different psychological motivations.
For some people, the idea that their present conscious moment might eventually be left permanently without any future extension is terrifying-but provided that doesn’t happen, they might be neutral (or even positive) about the prospect of the contents of that consciousness eventually becoming so radically transformed that it becomes a completely different person, or even something which transcends human notions of personhood, albeit ultimately still continuous with the person they are now. For other people, that prospect is terrifying. It really depends on what one is most attached to - the mere continuation of one’s own consciousness, or the distinctive contents that make you you.
Maybe. There's plenty of science fiction that addresses this. For example the "meths" (short for Methuselah) in Altered Carbon, who achieve immortality by making backups of their brains that can be spawned into cloned bodies. You could recover from accidents, or roll back to before the obscure disease kicked in.
It might look like immortality to outside observers, but I don't see how it is the same thing
Any process that could theoretically allow me and my future self to exist at the same time means that future self clearly isn't me anymore
So any kind of "mind backup" is a copy. A clone with a copy of my memories absolutely cannot be me. We would somehow need to be able to transfer my consciousness into a clone body
It seems very likely to me that consciousness is actually a side effect of a physical network in the brain and cannot actually be separated from the biological brain to move to an artificial brain
> One problem that I have with trying to understand "time" is that we can't measure how quickly it "flows" or at least how quickly we travel through it.
Our experience of time passing is heavily influenced by the temporal granularity of our subjective experience - at the upper end, “now” lasts 2-3 seconds; at the lower end, our temporal discrimination goes down to tens of milliseconds for visual and tactile stimuli, and reaches down to microseconds for certain types of auditory stimuli. But one supposes that other species, with different neurology, would have these durations be shorter or longer, which would make time pass more slowly or faster for them in subjective terms.
> Existing for zero time would imply it never existed.
In the mathematics of infinity, something can have exactly zero probability yet still happen, and exactly one probability yet nonetheless fail to happen (hence the standard term “almost surely”).
If something can have literally zero probability of existing yet still exist, why can’t it exist for literally zero time yet still exist?
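A standard textbook illustration of that first point (my example, not from the parent comment): let X be a real number drawn uniformly at random from [0,1].

```latex
% X ~ Uniform[0,1]. Hitting any one fixed point has probability zero,
% yet every draw does land on some particular point:
P(X = x) = 0 \quad \text{for every fixed } x \in [0,1],
\qquad \text{yet each draw yields some particular value of } X.
% Conversely, missing a fixed point has probability one ("almost surely"),
% yet that event fails for whichever value was actually drawn:
P(X \neq x) = 1 \quad \text{for every fixed } x,
\qquad \text{yet } X \neq x \text{ is false when } x \text{ is the drawn value.}
```

So "probability zero" and "impossible" come apart as soon as there are infinitely many outcomes, which is the gap the analogy is leaning on.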
B2C businesses need consumers. If AIs take all the jobs, then most of the population-minus the small minority who are independently wealthy and can live off their investments-go broke, and can’t afford to buy anything any more. Then all the B2C businesses go broke. Then all the B2B businesses lose all their B2C business customers and go broke. Then the stock market crashes and the independently wealthy lose all their investments and go broke. Then nobody can afford to pay the AI power bills any more, so the AIs get turned off.
And that’s why across-the-board AI-induced job losses aren’t going to happen-nobody wants the economic house of cards to collapse. Corporate leaders aren’t stupid enough to blow everything up because they don’t want to be blown up in the process. And if they actually are stupid enough, politicians will intervene with human protectionism measures like regulations mandating humans in the loop of major business processes.
The horse comparison ultimately doesn’t work because horses don’t vote.
Businesses need consumers when those consumers are necessary to provide something in return (e.g. labor). If I want beef and only have grass, my grass business needs people with cattle wanting my grass so that we can trade grass for beef, certainly. But if technology can provide me beef (and anything else I desire) without involving any other people, I don't need a business anymore. A business is just a tool to facilitate trade. No need for trade, no need for business.
This is the optimistic take, too. There are plenty of countries which don’t care about votes, indeed there are dictators that don’t care about their subjects, they only care about outcomes for themselves. The economic argument only works in capitalism and rule of law - and that’s assuming money is worth anything anymore.
The Chinese Communist Party is obsessed with social stability. Do you think they’ll allow AI to take all the jobs, destroying China’s domestic economy in the process? Or will they enact human protectionism regulations? What Would Xi Jinping Do?
> Do you think they’ll allow AI to take all the jobs, destroying China’s domestic economy in the process?
If AI can take all the jobs (IMO at least a decade away for the robotics, and that's a minimum not a best-guess), the economy hasn't been destroyed, it's just doing whatever mega-projects the owners (presumably in this case the Chinese government) want it to do.
That can be all the social stability stuff they want. Which may be anything from "none at all" to whatever the Chinese equivalent is of the American traditional family in a big detached house with a white picket fence, everyone going to the local church every Sunday, people supporting whichever sports teams they prefer, etc.
I don't know Chinese culture at all (well, not beyond OSP and, e.g., their retelling of Journey to the West), so I don't know what their equivalents to any of those things would be.
Look at what China does to protect its citizens against social media. You see China enacting many of the social media protections that many HN enthusiasts demand, yet Sinophobia makes them reframe it as a negative. "Children shouldn't have access to social media, except when China does it then it's bad!"
Can the process be similar to the sudden collapse of the USSR's economic system? The leaders weren't stupid and tried to keep it afloat, but with underlying systemic issues everything just cratered.
Can the process be modelled using game theory where the actors are greedy corporate leaders and hungry populace?
> The idea in the article would refute the inductive step.
No it doesn't. The article describes a proof that it is impossible for a computer to simulate this physical universe with perfect accuracy; but, that's not actually a problem for Nick Bostrom's simulation argument. For the simulation argument to work, you don't need to simulate the universe with perfect accuracy – just with sufficient accuracy that your simulated people can't distinguish it from a real one. And this proof isn't about "ability to simulate a universe to the point the simulated people can't tell that it is a simulation", it is about "ability to simulate a universe with perfect accuracy". So the proof isn't actually relevant to that argument at all.
Please explain how to simulate a universe which is indistinguishable from a real one but which is not accurate according to the rules of the article.
Does the article propose anything empirically testable?
I mean, suppose we are actually in a computer simulation-what observations could we perform, which according to the rules of this article, would show that we were in one, and not the “real” world?
Addendum: from what I understand, the article’s proof relies on computational quantum gravity having a Gödel sentence. Now, quantum gravity is in practice, as far as we know, experimentally untestable - the distinctive phenomena it predicts only occur at scales far beyond our present technological ability to explore - and who can say if that will ever change. So, is it possible for a computer to simulate humanity as it currently exists, such that the simulated humans couldn’t detect they were simulated? I don’t know; but what I can confidently say is that this research has nothing useful to say about that question, because this is theoretical quantum gravity research, and I’m not aware of any good reason to believe quantum gravity has any relevance to answering that specific question. This research claims to show computers are incapable of simulating aspects of reality which are empirically unavailable to us; even if the research is right, it makes zero difference to the question of whether the actual empirical experiences we do have are simulated or not.
The article claims to prove no computer could accurately simulate quantum gravity. Suppose they are right, and as a result our simulators are forced to make quantum gravity experiments (if that were a thing) give “incorrect” answers, because the real ones are uncomputable. Would that be proof we live in a simulation? Or would it be taken as proof that quantum gravity (whether loop quantum gravity or M theory or whatever) had finally been empirically refuted?
That said, if they really wanted to give us the “correct” answer - why would they bother, when we could never know that a wrong answer was wrong? - why couldn’t they just suspend the simulation, run the experiment themselves, then resume the simulation with the result fed back in?
Someone I know is a high school English teacher (being vague because I don’t want to cause them trouble or embarrassment). They told me they were asking ChatGPT to tell them whether their students’ creative writing assignments were AI-generated or not-I pointed out that LLMs such as ChatGPT have poor reliability at this; classifier models trained specifically for this task perform somewhat better, yet also have their limitations. In any event, if the student has access to whatever model the teacher is using to test for AI-generation (or even comparable models), they can always respond adversarially by tinkering with an AI-generated story until it is no longer classified as AI-generated
A New York lawyer used ChatGPT to write a filing with references to fake cases. After a human told him they were hallucinated, he asked ChatGPT if that was true (it said they were real cases). He then screenshotted that answer and submitted it to the judge with the explanation "ChatGPT ... assured the reliability of its content." https://www.courtlistener.com/docket/63107798/54/mata-v-avia... (pages 19, 41-43)
Reminds me of a Reddit story that made the rounds about a professor asking ChatGPT if it wrote papers, to which it frequently responded affirmatively. He sent an angry email about it, and a student responded by showing a response from ChatGPT claiming it had written the professor's email.
Yes, I missed the student using the teacher's trust in those tools to make them even more angry and neuter their angry email that they (probably) actually wrote themselves. Well-played.
I realize you might have failed to comprehend the level of my argument. It wasn't even about LLMs in particular, but rather about having someone/something else do your work for you. I read it as the student criticizing the teacher for not writing his own emails, since the teacher criticizes the students for not writing their own classwork. Whether it's an LLM or them hiring someone else to do the writing, this is what my rebuttal applied to. I saw what I thought was flawed reasoning and wanted to correct it. I hope it's clear why a student using an LLM (or another person) to write classwork is far more than a quality issue, whereas someone not being tested/graded using an LLM to prepare written material is "merely" a quality issue (and the personal choice to atrophy their mental fitness).
I don't think I was arguing for LLMs. I wish nobody used them. But the argument against a student using it for assignments is significantly different than that against people in general using them. It's similar to using a calculator or asking someone else for the answer: fine normally but not if the goal is to demonstrate that you learned/know something.
I admit I missed the joke. I read it as the usual "you hypocrite teacher, you don't want us using tools but you use them" argument I see. There's no need to be condescending towards me for that. I see now that the "joke" was about the unreliability of AI checkers and making the teacher really angry by suggesting that their impassioned email wasn't even their writing, bolstered by their insistence that checkers are reliable.
Two posts from you addressing a one-line reply? May be time to put down the coffee and take a drag from the mood-altering-substance of your preference.
Students (and some of my coworkers) are now learning new content by reading AI generated text. Of course when tested on this, they are going to respond in the style of AI.
> It seems in the DSM 5 the definition was narrowed
I think it may have been narrowed in theory, but often not in practice.
Here in Australia, making DSM-5 ASD a shortcut to getting funded by our national disability insurance scheme (NDIS) caused a lot of pressure to broaden the diagnosis in practice - if clinicians have to stretch the diagnosis to get someone the support they need, they feel ethically obliged to engage in that stretching, since it is in the best interests of their client (who are experiencing real challenges, even if those challenges map poorly to the official diagnostic criteria).
And Australia is not unique in providing funding pressures for ASD diagnosis, although NDIS is arguably a global outlier in the scale of that pressure. Apart from funding, growing popular and clinical mindshare of the diagnosis creates independent pressure to broaden its definition.
So a theoretical narrowing coexists with a practical broadening - and the latter is arguably what really counts
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
I don’t think this is an issue inherent to the technology. Duplicate code detectors have been around for ages. Give an AI agent a tool which calls one, ask it to reduce duplication, and it will start refactoring.
Of course, there is a risk of going too far in the other direction - refactorings which technically reduce duplication but which have unacceptable costs (you can be too DRY). But some possible solutions: (a) ask it to judge if the refactoring is worth it or not - if it judges no, just ignore the duplication and move on; (b) get a human to review the decision in (a); (c) if the AI repeatedly makes the wrong decision (according to the human), adjust it with prompt engineering, or maybe even just some hardcoded heuristics.
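To make the "tool which calls one" idea concrete, here's a minimal sketch of the kind of duplication detector an agent could be handed - purely illustrative, not any existing product's API (in practice you'd more likely wrap something like PMD's CPD or jscpd):

```python
# Hypothetical "find_duplicates" tool: hashes sliding windows of normalized
# source lines and reports any block that appears in more than one place.
from collections import defaultdict
from pathlib import Path

def find_duplicates(paths, window=6):
    """Return {block_text: [(file, line_no), ...]} for blocks seen 2+ times."""
    seen = defaultdict(list)
    for path in paths:
        lines = [l.strip() for l in Path(path).read_text().splitlines()]
        for i in range(len(lines) - window + 1):
            block = "\n".join(lines[i:i + window])
            if block.strip():                 # ignore windows that are all blank
                seen[block].append((str(path), i + 1))
    return {block: locs for block, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    import sys
    for block, locs in find_duplicates(sys.argv[1:]).items():
        print(f"duplicated {len(locs)}x, locations (file, line): {locs}")
```

The agent's loop would then just be: call the tool, propose a refactor, re-run the tool (plus the test suite) to confirm the duplication actually went down without breaking anything.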
It actually is somewhat a limit of the technology. LLMs can't go back and modify their own output, later tokens are always dependent on earlier tokens and they can't do anything out of order. "Thinking" helps somewhat by allowing some iteration before they give the user actual output, but that requires them to write it the long way and THEN refactor it without being asked, which is both very expensive and something they have to recognize the user wants.
Coding agents can edit their own output - because their output is tool calls to read and write files, and so they can write a file, run some check on it, modify the file to try to make it pass, run the check again, etc.
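A minimal, runnable sketch of that write-check-rewrite loop, with the model stubbed out (any real agent harness differs in the details; `fake_model` here just stands in for the LLM call):

```python
# Agent-style loop: write a file, run an external check on it, feed the
# failure back, and let the "model" rewrite the file until it passes.
import pathlib
import subprocess
import tempfile

def check(path: pathlib.Path) -> tuple[bool, str]:
    """Run a real external check - here, just a Python syntax check."""
    proc = subprocess.run(["python", "-m", "py_compile", str(path)],
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr

def fake_model(source: str, feedback: str) -> str:
    """Stand-in for the LLM; a real agent would send source + feedback to a model."""
    return source.replace("def main()\n", "def main():\n")   # "fixes" the missing colon

path = pathlib.Path(tempfile.mkdtemp()) / "snippet.py"
path.write_text("def main()\n    print('hello')\n")          # deliberately broken first draft

for _ in range(3):                                            # bounded retry loop
    ok, feedback = check(path)
    if ok:
        break
    path.write_text(fake_model(path.read_text(), feedback))   # later output revises earlier output

print("final check passed:", check(path)[0])
```

The point is just that the "output" lives in files rather than in the token stream, so the model can revisit it as many times as the harness allows.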
Sorry, but from where I sit, this only marginally closes the gap from AI to truly senior engineers.
Basically human junior engineers start by writing code in a very procedural and literal style with duplicate logic all over the place because that's the first step in adapting human intelligence to learning how to program. Then the programmer realizes this leads to things becoming unmaintainable and so they start to learn the abstraction techniques of functions, etc. An LLM doesn't have to learn any of that, because they already know all languages and mechanical technique in their corpus, so this beginning journey never applies.
But what the junior programmer has that the LLM doesn't is an innate common-sense understanding of the human goals driving the creation of the code to begin with, and that serves them through their entire progression from junior to senior. As you point out, code can be "too DRY", but why? Senior engineers understand that DRYing up code is not a style issue; it's more about maintainability, understanding what is likely to change, and what the apparent effects will be on the human stakeholders who depend on the software. Basically: do these things map to concepts that are the same for human users and unlikely to diverge in the future? This is also a surprisingly deep question, as perhaps every human stakeholder will swear up and down that they are the same, but nevertheless 6 months from now a problem arises that requires them to diverge. At that point there is a cognitive overhead and dissonance in explaining that divergence to the users who were heretofore perfectly satisfied with one domain concept.
Ultimately the value function for success of a specific code factoring style depends on a lot of implicit context and assumptions that are baked into the heads of various stakeholders for the specific use case and can change based on myriad outside factors that are not visible to an LLM. Senior engineers understand the map is not the territory, for LLMs there is no territory.
I’m not suggesting AIs can replace senior engineers (I don’t want to be replaced!)
But, senior engineers can supervise the AI, notice when it makes suboptimal decisions, intervene to address that somehow (by editing prompts or providing new tools)… and the idea is gradually the AI will do better.
Rather than replacing engineers with AIs, engineers can use AIs to deliver more in the same amount of time
Which I think points out the biggest issue with current AI - knowledge workers in any profession, at any skill level, tend to get the impression that AI is very impressive but prone to fail at real-world tasks unpredictably; thus the mental model of a 'junior engineer', or of any human who does simple tasks reliably on their own, is wrong.
AI operating at all levels needs to be constantly supervised.
Which would still make AI a worthwhile technology, as a tool, as many have remarked before me.
The problem is, companies are pushing for agentic AI instead of one that can do repetitive, short-horizon tasks in a fast and reliable manner.
Sure. My point was AI was already 25% of the way there even with their verbose messy style. I think with your suggestions (style guidance, human in the loop, etc) we get at most 30% of the way there.
> In 1977, Apple, a young fledgling company on the West Coast, invents the Apple II, the first personal computer as we know it today. IBM dismisses the personal computer as too small to do serious computing and unimportant to their business.
IBM released the 5100 in September 1975 [0] which was essentially a personal computer in feature set. The biggest problem with it was the price tag - the entry model cost US$8975, compared to US$1298 for the entry Apple II released in June 1977 (close to two years later). The IBM PC was released in August 1981 for US$1565 for the most basic system (which almost no one bought, so in practice they cost more). And the original IBM PC had model number 5150, officially positioning it as a successor to the 5100.
IBM’s big problem wasn’t that they were uninterested in the category - it was that they initially insisted on using expensive IBM-proprietary parts (often shared technology with their mainframe/midrange/minicomputer systems and peripherals), which resulted in a price that made the machine unaffordable for everyone except large businesses, governments, and universities (and even those customers often balked at the price tag). The secret of the IBM PC’s success is that they told the design team to use commercial off-the-shelf chips from vendors such as Intel and Motorola instead of IBM’s own silicon.
Google has been working on Fuchsia, a new open source OS which in theory can replace Linux as the base of ChromeOS and Android
But it is unclear how committed they still are to this. Some suggest it was just a “keep our options open” project or a “stop smart people from doing it for our competitors” project. They are actually using it in anger on their Nest Hub devices, but we don’t know if they still plan to take it any further than that