Hacker News | thelastquestion's comments

I don't think this will happen in the near future, but "ever"? Almost certainly unless humanity stops working on AI systems for some reason. The current approaches to AI have poor reasoning ability. This could possibly be due to a mechanistic flaw or simply be a matter of iteration, i.e., repeatedly trying things and getting feedback about why they are wrong, possibly internally. It's not clear that when we reason, our brains aren't implicitly and explicitly trying all sorts of things and discounting things that seem wrong (lots of neurochemical mechanisms are unclear here). Naive LLM inference isn't doing this, but can iterate within a system; it could be that a proof of a complex theorem requires an extreme version of that kind of iteration. Humans don't tend to spit out nontrivial proofs in one-shot either, so there's currently a significant asymmetry between the amount of human brain computation that goes into a proof of a complex theorem and silicon computation that goes into responding to a prompt.


General infrastructure that benefits humanity in a goal agnostic way, e.g., related to energy or compute seems like a meaningful thing to get behind. This could be novel technology or logistics and efficiency improvements, but these are things that can augment everyone's ability to do the things they want.

Socially, there are a lot of things that seem "silly" (euphemistically), and it's a blight on civilization that they are tolerated. That people in my country (USA, but can apply to many countries in the world) are homeless or starving is silly; if you were running a country, ensuring the citizenry have food and shelter might be an obvious top priority. The world can seem really complex at times and our systems become so convoluted that people rationalize why the things that seem obviously silly are too difficult to solve or worthwhile tradeoffs. I think it's generally a good heuristic to avoid doing things that seem obviously silly and fix the things that are (that is, it's often better to be naive about it!).

This portion of the comment is not a direct answer to the question but a related thought others may have further insights about. Practically, a situation in which you don't need money rarely materializes instantaneously; it usually arises from circumstances that have constituted a great deal of your life and identity. As a consequence of this, I think ego can become a real challenge that prevents people from pursuing possible "ideals". If you've been in a certain kind of position for a long time, there can be psychological barriers to pursuing something in a way in which, e.g., you are a true beginner or have less control. This tends to be something that can dissipate with age but can be especially difficult for people who've achieved financial success well before standard retirement age.


You’re absolutely right in some respects on your last point. You need to seize the day. So many people delay retirement too long for fear of not being financially protected to an age they probably won’t reach, while their physical and mental capacities are typically deteriorating in the meantime.


I don't think the US's strategic goal is to solve homelessness. Between maintaining world dominance and solving homelessness, the latter is far less important to it. Nations choose their sacrifices based on their priorities, same as individuals.


Let’s assume that world dominance is the root priority. Your statement implies that solving homelessness wouldn’t be in service of world domination. That might be true, but there’s clearly a minimum effort here: an entirely homeless and starving population would disrupt production, military, research, global economic power in a way that would be detrimental to the goal of world domination. Further, if the population gets too unhappy, revolts will occur that would hinder world domination efforts of those currently in power. So where’s the inflection point and why? To say the inflection point is the status quo feels intuitively wrong since what are the odds we happen to be in the optimal place with respect to homelessness, food insecurity, and civilian unrest? I tend to believe that it’s likely further investment into the population will produce a citizenry base that would aid in sustained world domination efforts. If you let the citizenry fall behind or become too unhappy, you’ll be overtaken by other countries. Curious what you see as the opportunity cost of that investment.


(Largely, but almost certainly not completely) solving homelessness would also require solving a massive amount of mental health issues. But I still think that fits your argument (one I happen to agree with). If you want your country to be stronger and more productive it seems like basic common sense to (attempt to) solve hunger, education, health, etc. issues, so your citizenry is more prosperous and productive. I can't think of any way in which the 'rising tide lifts all ships' doesn't at least somewhat benefit most citizens of a country, including those already in power.

But of course that requires a significant level of investment, which is generally the limiting factor, as people who build fortunes are generally disinclined to part with any portion of them. Even for those so inclined, it is hard to get people to agree on what the 'greater good' really is, much more so these days when vast amounts of information are thrown at us, often seemingly only in service of dividing us.


For the crowd that tends to frequent places like HN, I still believe college to very much be worth it. There are many viable paths to almost any outcome, but college is a rare time when you are constantly surrounded by opportunity with minimal effort: these opportunities include being exposed to new ideas, meeting new people as peers, meeting new people as mentors, and jobs that can make you money or prove exceptionally meaningful. Not to mention that a degree is a generically good tool for passing a baseline in many circumstances (job applications, cold reach-outs, meeting the parents — in any circumstance where you are a stranger, it signals that you are more likely to be responsible than someone whose education is unknown).

Of course, it’s possible to eschew these opportunities, but if you ever have a moment of clarity to try and live life a little better, the opportunities are there within reach. At any later point in life, these things can be hard to come by serendipitously, and they tend to require a relatively steep active effort.

In general, upfront life investment is exceptionally valuable, and the (all-encompassing) human gestation period easily extends through college age in modern society. I think the question of its worth typically arises for people who were underserved in their grade-school years, which is probably a decently large percentage of the country.


Neuralink has previously discussed their implantations in monkeys, in which they claim implantations lasting > 1 year with increasing performance (e.g., in their deep dive on YouTube [1]). That would seem to indicate that this degradation wasn't seen in animal studies (or perhaps only to a lesser degree). The figure in the article (Figure 04) suggests a steady state may have been reached with respect to some factors, but I suppose they could continue to improve things on the model side while still experiencing degradation, if the improvements outpace the effect of the degradation. Since this doesn't seem to impact safety, a follow-up surgery wouldn't appear necessary as long as the device continues to be useful; if it essentially became a brick, the patient would probably want it removed (which requires another surgery). They've never talked about any surgeries for "maintenance" before, and given the size of the threads, I doubt the same previously implanted ones could be reinserted.

1. Neuralink Show and Tell, Fall 2022 (https://www.youtube.com/watch?v=YreDYmXTYi4)


The version of the device described on Neuralink’s website for the current trial only reads signals from the brain to control external software. So this is just for providing functionality and daily use to individuals with quadriplegia, not augmenting the brain itself.


You might be happy to hear that while it was low-key, Neuralink did have some presence at SfN and had a joint poster there with University College London.


They can always be reduced, but sometimes with an unacceptably large (e.g., superpolynomial) increase in time complexity.

These are the complexity classes Function P (FP) and Function NP (FNP), the function-problem extensions of the corresponding decision-problem classes: they require producing the value, not just answering yes or no.

A simple example of a decision problem in P but whose search problem is not known to be in FP: For a given integer x, “does there exist a non-trivial prime factor of x?” vs. “find a non-trivial prime factor of x”.
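To make the asymmetry concrete, here is a small Python sketch (function names are my own, for illustration): the decision problem "does x have a non-trivial prime factor?" is just compositeness testing, which can be answered in polynomial time (here via Miller–Rabin), while the search problem "find a non-trivial prime factor of x" has no known polynomial-time algorithm — naive trial division below stands in for the hard search.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin primality test: polynomial in the bit-length of n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness that n is composite
    return True

def has_nontrivial_factor(x):
    """Decision problem: answerable quickly — x is composite iff it has one."""
    return x > 3 and not is_probable_prime(x)

def find_nontrivial_factor(x):
    """Search problem: no known fast algorithm; trial division is exponential
    in the bit-length of x."""
    d = 2
    while d * d <= x:
        if x % d == 0:
            return d
        d += 1
    return None
```

For small inputs both run instantly, but for a 2048-bit x the decision version remains fast while the search version (and every known alternative) does not — which is exactly the gap between P and FP that the example above points at.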


As of right now, there is a lot more safety data in humans supporting the use of stents in vasculature, and because the device sits inside the vessels rather than the brain tissue itself, the blood-brain barrier is never broken. There is also a lot of safety data on putting devices in the fleshy parts of your chest/abdomen.

So I think it’s fair to say it’s less invasive than opening the skull and breaking the blood-brain barrier. That may or may not translate to riskier/safer, but with the data that exists right now I’d say it does carry less risk than a more invasive technology. At the end of the day, it will come down to risk vs. benefit, where invasive technologies have demonstrated far greater efficacy thus far.


Yes, they have IDE approval in the US for a clinical trial, after first conducting a trial in Australia.


There are a lot of factors going into an approval — a couple that may be relevant here:

(i) Synchron had clinical (in human) results from Australia before submitting to the FDA.

(ii) Synchron’s underlying mechanical device is a stent. A million Americans per year get a stent installed, so there is a lot of existing data demonstrating the safety of stents.

(iii) The procedure is basically a typical stent installation and less risky in many ways than open brain surgery. Notably, Neuralink built a surgical robot to aid in the installation of their implant, but that is yet another product that needs to be approved. Thus, I’d imagine Neuralink’s submission is vastly more complicated.

