In his books he is rather humble about his knowledge of the subject. Yes, he proposes this hypothesis and supports it, but he stops well short of "I know more than the specialists". It's much more like "from what I know, it seems very probable".
I also think he's wrong, but at least he's not infatuated with the idea.
edit: I love his books. Well worth the read, and for those concerned he puts his speculations in chapters very clearly marked "speculations". From his books I first learned the details about Turing machines and lambda calculus, and a lot more.
The core of his argument is that humans can know truths that cannot be discovered computationally. If that is true then it implies two things 1. Conventional AI will never reach human levels of intelligence. 2. To understand how human intelligence works we need to discover new physics - most likely in the area of quantum mechanics.
It all hangs, not on quantum mechanics, but on his initial assertion. I've read his explanation in Shadows of the Mind, but the lightbulb didn't go on for me. For McCarthy to dismiss the question offhand was a little disappointing; I'd expect him to have a deeper insight.
We as a species have a long history of explaining the unknown with magic. So we should tread carefully around explanations of this kind, simply because we should be aware of our strong bias towards them.
What the quantum-intelligence hypothesis does is take one unresolved problem (how do we think?) and replace it with another (we think via quantum computation, but we don't know exactly how). Its only effect is to swap the unknown for an incomplete explanation.
I also don't think Penrose would come to this conclusion today. Cognitive psychology has taken enormous strides in recent years, and it's already pretty clear it's on the right track.
Take a look, for example, at "The Emotion Machine" by Marvin Minsky (who, by the way, is on par with McCarthy in AI but chose cognitive psychology as his main field: http://en.wikipedia.org/wiki/Marvin_Minsky).
I admire McCarthy a lot, but I'm with you there in being a bit disappointed. As for humans being able to know truths that cannot be discovered computationally, there is some evidence [1] supporting the hypothesis that animal brains do what they do using analogue information processing. So it may be that there are "truths" which can only be "known" using analogue processes, in which case, as digital reasoners, computers would always remain at a disadvantage when compared against humans in terms of "intelligence".
[1] Spivey, M., Grosjean, M. & Knoblich, G. (2005). Continuous attraction toward phonological competitors. Proceedings of the National Academy of Sciences, 102(29), 10393-10398.
Hypothetically speaking, if we develop a sufficient understanding of analogue information processing, couldn't we build either (a) some kind of analog co-processor that operates in this manner and interfaces with the computer, or even (b) a sufficiently precise digital simulation of a system that can use analogue processes?
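To make option (b) concrete, here's a minimal sketch of what digitally approximating a continuous "settling" process might look like: just Euler integration of a toy winner-take-all dynamic. This is my own illustrative example, not Spivey's actual model; the `settle` function and all its parameter values are hypothetical.

```python
def settle(inputs, steps=500, dt=0.05):
    """Evolve normalized activations until one candidate dominates.

    A toy replicator-style competition: each candidate's activation
    grows in proportion to how much its input beats the
    activation-weighted average, approximating a continuous
    attraction toward a single interpretation.
    """
    n = len(inputs)
    act = [1.0 / n] * n  # start unbiased, activations sum to 1
    for _ in range(steps):
        avg = sum(a * x for a, x in zip(act, inputs))
        # discrete Euler step of da_i/dt = a_i * (x_i - avg)
        act = [a + dt * a * (x - avg) for a, x in zip(act, inputs)]
    return act

print(settle([0.6, 0.4]))  # first candidate ends up near 1.0
```

The point is that the digital simulation only ever samples the continuous trajectory at discrete steps; whether that discretization loses anything essential is exactly the open question.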
Maybe we can. I hope we'll invest in finding out, and soon. Then again, John McCarthy doesn't seem to agree, as this answer suggests:
Q. Is there anything in principle that would prevent a computer from thinking as a human would?
A. No
IOW, there's still no recognition today, on the side of the purveyors of "classical" AI, that anything except digital processing might be needed for a computer to think "as a human would". So the big money is likely to continue being thrown at attempts to emulate animal brains using purely digital means. And I suspect that these funds might largely be better spent elsewhere.
I tried to find a free PDF version of that paper, but no such luck. However, I found an earlier one by Michael Spivey and Rick Dale, "On the Continuity of Mind: Toward a Dynamical Account of Cognition" (59 pages, <http://www.cogstud.cornell.edu/spiveylab/PLM.pdf>).