One can only express so much in a short sentence, but McCarthy's reply does sum up one version of a traditional cognitivist understanding of perception: the idea that knowledge is raw data and that thought is processing. It seems to me that much of traditional AI is unfairly dismissed, just as behaviorism was largely unfairly dismissed by the AI people. Such is the peril of a science ruled by trends in the absence of strong findings (unlike, say, physics, the paradigm case of a mature science).
But I also think there are some very important ideas in the more recent work that began with connectionism and led to embodied/enactivist approaches. That work suggests a definition along these lines: knowing is an organism's ability to interact effectively with its environment (which involves being able to correctly predict the results of actions performed on an object). This implies that both the organism and the environment are involved in the knowledge.
The camera has no knowledge of the table because it has never lifted the table and felt its weight, never become aware of its ability to throw it (and how far), to set it down (and how its weight affects how quickly it hits the floor), or to use its surface as a stable place for setting other objects. All of these interactions build the perceptual skills necessary to know and understand the table, which is to say, to have a trained neural network that controls and plans behavior and categorizes experience according to those trained expectations.
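To make the "knowing as predicting the results of actions" idea concrete, here is a minimal toy sketch, not any real system: an agent refines a forward model of a single interaction (lifting an object) through repeated experience, so its expectation comes to match what the environment actually delivers. All names and numbers are illustrative assumptions.

```python
# Toy forward model: the agent learns to predict the felt effort of
# lifting an object from its weight, via simple delta-rule updates.
# "Knowledge" here lives in the agent-environment interaction loop.

class ForwardModel:
    def __init__(self):
        self.w = 0.0  # learned scaling from weight to predicted effort

    def predict(self, weight):
        # The agent's expectation about the outcome of the action.
        return self.w * weight

    def interact(self, weight, observed_effort, lr=0.002):
        # Prediction error from actually performing the action drives
        # learning (delta rule: w += lr * error * input).
        error = observed_effort - self.predict(weight)
        self.w += lr * error * weight


def true_effort(weight):
    # Stand-in for the environment's side of the interaction.
    return 2.0 * weight


model = ForwardModel()
table_weight = 20.0
for _ in range(50):  # repeated lifting experiences
    model.interact(table_weight, true_effort(table_weight))

# After enough experience, the agent's prediction converges toward
# the environment's actual response (about 40.0 for this weight).
```

The point of the sketch is only that the trained expectation exists because of the history of interaction; a camera that never performs the action never acquires the model.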
(That seems to imply some guiding principles for implementing AI: (1) basic locomotion and physical interaction with the world is an important and non-trivial problem; (2) there needs to be a linking theory that extends basic-level knowledge to novel, abstract categories of knowledge grounded in the earlier type. For (1), a lot of neuroscience work is relevant, including constructivist/modeling approaches and the behavior-based AI paradigm; for (2), one major linking paradigm is the one that started with Rosch, Lakoff, Fauconnier, Gibbs, etc., currently under the headings of conceptual metaphor and blending theory, cognitive linguistics, and embodied cognitive science.)