The Chinese Room Argument [4]

Instantiation of a formal program isn't enough to produce semantic understanding or intentionality. A man who doesn't understand Chinese can answer written Chinese questions using an English rulebook that tells him how to manipulate Chinese symbols.

Robert Horn Map 4: Can Chinese Rooms Think?

The Chinese Room Argument was proposed by John Searle (1980, 1990):

Imagine that a man who does not speak Chinese sits in a room and is passed Chinese symbols through a slot in the door. To him, the symbols are just so many squiggles and squoggles. But he reads an English-language rule book that tells him how to manipulate the symbols and which ones to send back out.

To the Chinese speakers outside, whoever (or whatever) is in the room is carrying on an intelligent conversation. But the man in the Chinese room does not understand Chinese; he is merely manipulating symbols according to a rulebook. He is instantiating a formal program, one that passes the Turing test for intelligence, and yet he does not understand Chinese. This shows that the instantiation of a formal program is not enough to produce semantic understanding or intentionality.
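To make the purely syntactic character of the rulebook concrete, here is a minimal sketch in Python. The rule table, symbols, and replies are invented for illustration (they are not from Searle's paper): the program maps input symbol strings to output symbol strings by lookup alone, with no representation anywhere of what any symbol means.

```python
# A toy "Chinese Room": purely syntactic symbol manipulation.
# The rule table below is an illustrative assumption; it pairs input
# symbol strings with output symbol strings and encodes no meanings.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，当然。",  # "Do you speak Chinese?" -> "Yes, of course."
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook dictates for the input.

    The function never interprets the symbols; it only matches their
    shapes ("squiggles and squoggles") against entries in the table.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # prints a fluent-looking Chinese reply
```

To observers outside, the replies look fluent, yet nothing in the program refers to, or is about, anything; that gap between syntactic lookup and semantic understanding is precisely what the argument turns on.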

Note: for more on Turing tests, see Map 2. For more on formal programs and instantiation, see the "Is the brain a computer?" arguments on Map 1, the "Can functional states generate consciousness?" arguments on Map 6, and "Formal systems: an overview" on Map 7.

Intentionality: The property (in reference to a mental state) of being directed at a state of affairs in the world. For example, the belief that Sally is in front of me is directed at a person, Sally, in the world.

Although there are important and subtle distinctions among these terms, intentionality is sometimes treated in this debate as synonymous with representation, understanding, consciousness, meaning, and semantics.
RELATED ARTICLES
Artificial Intelligence
Can computers think? [1]
Yes: physical symbol systems can think [3]
The Chinese Room Argument [4]
The Syntax-Semantics Barrier
Only minds are intrinsically intentional
Understanding arises from right causal powers
Can't process symbols predicationally or oppositionally
Chinese Room refutes strong AI not weak AI
The Combination Reply
The Systems Reply
Robot reply: Robots can think
The Brain Simulator Reply
The Many Mansions Reply
The Pseudorealisation Fallacy
Searle's Chinese Room is trapped in a dilemma
Chinese Room more than a simulation
Man in Chinese Room doesn't instantiate a program
Chinese-speaking too limited a counterexample
The Chinese Room makes a modularity assumption
Man in Room understands some Chinese questions
The Chinese Room argument is circular
There are questions the Chinese Room can't answer
Yes: because a brain is a computer
Conscious agents can instantiate computer programs
John Searle
Can Chinese Rooms think? [4] 
Overt behavior doesn't demonstrate understanding
Thermostats can have beliefs
Humans learn by adding symbolic data to knowledge base
SOAR (an implemented model)
The Biological Assumption
The Disembodied Mind Assumption
The Heuristic Search Assumption
The Knowledge Base Assumption
The language of thought
The Representationalist Assumption
The Rule-Following Assumption
The Symbolic Data Assumption
The Universal Conceptual Framework Assumption
AI programs are brittle
The critique of artificial reason
The Lighthill Report
Symbol systems can't think dialectically