What is a reasoning machine, though? And why is there an assumption that one can exist without flaws? None of the natural examples exist that way. How would you even navigate the real world without the flexibility to make mistakes? I'm not saying people shouldn't try, but you need to be practical. I'll take the general intelligence with flaws over the fictional one without, any day.
>Having deeply flawed machines in the sense that they perform their tasks regularly poorly seems like an odd choice to pursue.
State-of-the-art ANNs are mostly right, though. Even LLMs are mostly right; that's why hallucinations are particularly annoying.
That's not my experience using LLMs. But that aside, a poorly performing general intelligence might just not be very valuable compared to a highly performing narrow intelligence, or even zero intelligence.
Well, LLMs are very useful and valuable to me and many others today, so it's not really a hypothetical future. I'm very glad they exist, and there's no narrow intelligence available that is a sufficient substitute.
I see. Well, as far as I'm concerned, they already reason by the standards we apply to ourselves.
People do seem to have higher standards for machines, but you can't eat your cake and have it too. You can't call what you do reasoning and then turn around and call the same thing something else because of preconceived notions of what "true" reasoning should be.