Computers are not introspective

Thought requires the capacity for introspective episodic memories. Such memories of life episodes play a significant role in human thinking but are completely lacking in computer thinking.

Endel Tulving (1983).

Even if the machine passes the Turing test, we should be cautious about ascribing intelligence to it. Humanlike thinking requires an introspective perspective. Humans have "episodic memories", which require a point of view that computers lack. Such memories of "life episodes" play a highly significant role in human thinking but are completely lacking in computer thinking.

The Tulving argument

Joseph Rychlak summarizes Tulving's argument very well:

"Tulving (1983) notes that human beings have 'episodic' memories, by which he means the more introspectively conceived, highly personal events and interpretations of 'life episodes' that play such a significant role in the human experience. Since computers never live according to the introspective perspective within which such life episodes are framed, it follows that there is a significant impasse between what human and machine cognition involves. In discussing this issue, Tulving reminds me of Searle. Thus he notes that if we gave computers pseudo-episodic memories they would, in recalling such events, simply be 'manipulating certain symbols according to certain rules, they would be talking only about words, rather about original experiences organized temporally in their personal past and related to their sense of personal identity, or continuity in subjective time' (p. 53)" (Rychlak, 1991, p.5).

Tulving himself writes:

"Although some theorists are optimistic about the prospects of endowing computers with episodic memories (e.g., Schank and Kolodner 1979) I am skeptical. I rather doubt that computers' memories can ever even approximate people's rememberings of personal events. Consider, for instance, Schank and Kolodner's wish to converse with the computer about its visits to a museum and meeting an important person there. We can certainly tell computers a great deal about visiting museums, and meeting important people, , and we can even tell them that they themselves have done both. Eventually we may be able to do it so well that the computer could hold an intelligent conversation with people about the topics. Nevertheless, we know that computers would only be manipulating certain symbols according to certain rules, they would be talking only about words, rather than about original experiences organized temporally in their personal past and related to their sense of personal identity, or continuity in subjective time" (Tulving, 1983, p. 53).

References

Rychlak, Joseph. 1991. Artificial Intelligence and Human Reason. New York: Columbia University Press.

Tulving, Endel. 1983. Elements of Episodic Memory. New York: Oxford University Press.
