
Would it be feasible to use the "brainlike" model for practical hardware, and reap the benefits of avoiding the von Neumann bottleneck?

My intuition is that such a system would be much harder to reason about -- and therefore harder for compilers to emit efficient machine code for -- but I'm assuming someone here knows the topic pretty well?

(Before you give me "the lecture", yes, I'm aware that in general it's not a good idea to simply mimic the kludges that evolution came up with.)
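To make concrete what I mean by the bottleneck, here's a back-of-the-envelope roofline sketch in Python (both peak numbers are invented for illustration, not measurements of any real chip):

    # Roofline model: attainable throughput is the lesser of the compute
    # roof and what the memory bus can feed at a given arithmetic intensity.
    PEAK_FLOPS = 1e12  # pretend 1 TFLOP/s of compute
    PEAK_BW = 50e9     # pretend 50 GB/s of memory bandwidth

    def attainable_flops(flops_per_byte):
        return min(PEAK_FLOPS, PEAK_BW * flops_per_byte)

    # A float32 dot product does 2 FLOPs per 8 bytes loaded, so a von
    # Neumann machine with these numbers runs it far below peak:
    print(attainable_flops(2 / 8) / PEAK_FLOPS)  # 0.0125, i.e. 1.25% of peak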



Unless we've made some big advances I'm not aware of, we really don't know much about how the brain works. I doubt anyone can answer your question.


My question doesn't depend on knowing exactly how the brain works; it's a question about mixing memory and CPU, which happens to be similar to how (we believe) the brain works.
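To illustrate the programming-model difference I have in mind, here's a toy processing-in-memory sketch (the PimBank class is hypothetical; real PIM hardware differs in many details):

    # Each "bank" holds data plus a tiny local ALU, so reductions happen
    # where the data lives instead of streaming every element to a CPU.
    class PimBank:
        def __init__(self, data):
            self.data = data  # lives inside the memory bank

        def local_sum(self):
            # Computed by the bank itself; no element crosses the bus.
            return sum(self.data)

    def pim_sum(banks):
        # Only one partial result per bank crosses the interconnect.
        return sum(bank.local_sum() for bank in banks)

    print(pim_sum([PimBank([1.0, 2.0]), PimBank([3.0, 4.0])]))  # 10.0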


> which happens to be similar to how (we believe) the brain works.

Can you link to sources supporting that?


The claim you quoted is actually a null hypothesis: the negation (that the brain separates memory and processing) would require us to posit new types of as-yet-undiscovered neurochemical interactions (which may well exist, but we can't say!), so the burden of evidence would fall on proving that claim. Put the other way around: for the claim as quoted, the burden of evidence is on disproving it.


If there is nothing supporting the claim, though, there is nothing to disprove. I'm genuinely curious about sources, because I've never heard of that.

As far as I know, any comparison between a computer and a brain is flawed from the get-go.

edit: and to be clear, my initial comment was just hinting that we can't develop a "brainlike" computer architecture because we simply don't know how the brain works at all.


I don't think you're wrong per se. But people aren't "claiming" the brain works this way. People are simply saying that maybe the brain functions like this, and that we could adopt similar designs based on our perceived impressions of how it may function.

We could totally botch it, though, and end up with an even worse computer that turns out to work nothing like the brain. Who knows. But right now many people "believe" the brain may function in the manner described.


To be clear, what I'm saying here is that this claim maps into our model of neurology as the equivalent of a claim like "the Earth isn't a triangle." Whether or not it's supported by any evidence, it is really a positively-phrased restatement of part of our set of axiomatic priors.

We know that the Earth can only be one shape; we know there are an infinite variety of shapes that something can be; and so our priors contain a set of all the claims like "the Earth is potato-shaped" and "the Earth is a doughnut" all with extremely low probability, before we encounter any evidence at all, just because the probability-mass has to get spread out among all those infinitely-many claims.

Assuming a continued lack of evidence either way, a claim like "the Earth is not [one particular shape]" doesn't require argumentative support to be taken as a default assumption (as you might provide in, e.g., the opening of a journal paper). The probability of it being any particular shape started very low, and we've never encountered any evidence to raise that probability, so it's stayed very low.

(Yes, that even applies to the specific claim that "the Earth is not an oblate spheroid." If we never encountered any evidence to suggest that claim, then it'd have just as low a default confidence as any of the other claims it competes with.)
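To put toy numbers on that (the count is arbitrary; only the shape of the argument matters):

    # Spread unit probability mass across many mutually exclusive shape
    # hypotheses; 10,000 stands in for "infinitely many".
    N_SHAPES = 10_000
    p_is_x = 1 / N_SHAPES      # P(the Earth is one particular shape)
    p_is_not_x = 1 - p_is_x    # P(the Earth is NOT that shape)
    print(p_is_x, p_is_not_x)  # 0.0001 0.9999 -- the default assumption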

For claims with no evidence either for or against them, the analytical priors derived from the classes of claims to which the claim belongs determine where the burden of proof lies. Low-confidence priors? Burden to prove. High-confidence priors? Burden to disprove.

In this case, we already know that neurons can do several things, and AFAIK we've never encountered any evidence of neurons having specialized functionality, or any evidence that neurons don't have specialized functionality. Our tools just aren't up to telling us whether they do or not. But, because one claim actually factors out to several claims (neuron specialization → lots of different ways neurons could specialize) while the other claim doesn't (neuron generality → just neuron generality), the probability-mass ends up on the neuron-generality side. (This is another way to state Occam's Razor.)
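Again with invented numbers, the factoring argument looks like this:

    # Give the two coarse claims equal shares of prior mass, then split
    # the specialization share across the many concrete schemes it
    # factors into (1,000 is an invented count):
    N_SCHEMES = 1_000
    p_generality = 0.5                  # one claim, keeps its whole share
    p_one_scheme = 0.5 / N_SCHEMES      # split across N_SCHEMES sub-claims
    print(p_generality / p_one_scheme)  # 1000.0x default edge for generality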

Mind you, this might be entirely down to our inability to study neuronal dynamics in vivo in fine-enough detail. A lack of evidence doesn't imply a lack of facts to be found; it just determines what our model should be in the absence of such evidence, until such time as we can gather evidence that does directly prove or disprove the specific hypothesis.

Or, to put that another way: if humans had only ever studied bees from a distance, the default hypothesis should be that all bees do all bee jobs. The burden of proof is on the claim that bees specialize. Later, when we got up close to a beehive, we'd learn that bees do specialize. But that doesn't mean we were incorrect to believe the opposite before. Both our belief before the evidence and our belief after the evidence were the "correct" belief given our knowledge.


I was just going off the claims at the top of the thread:

https://news.ycombinator.com/item?id=21641721

I have no special expertise otherwise; if you want more substantiation or wish to dispute that point, you could post a reply where the claim was originally introduced.



