John Searle
Arguments advanced by John Searle.
RELATED ARTICLES
Artificial Intelligence: A collaboratively editable version of Robert Horn's brilliant and pioneering debate map, Can Computers Think?, exploring 50 years of philosophical argument about the possibility of computer thought.
Protagonists: The contributions of over 300 protagonists can be explored via a surname search, or using the growing list developing here.
John Searle: Arguments advanced by John Searle.
The Connection Principle: There's a necessary connection between consciousness and mentality. All thinking—as a form of mentality—is at least in principle accessible to consciousness. Even unconscious thoughts involve consciousness, as they can potentially become conscious.
Mental states have aspectual shape: A thought about water differs in aspectual shape from a thought about H2O, even though the two thoughts are about the same thing. The most we can say is that brain states have the causal capacity to produce aspectually shaped experiences.
Green Bay Packers Backup Quarterback: Two beliefs—one unconscious, one conscious—are identical if they possess the same mode and content, and are based on the same causal mechanisms.
Consciousness-pill unhelpful: A consciousness pill doesn't act as a means of differentiating the mental from the non-mental—an essential benefit of the connection principle. Hence, the pill isn't an extreme version of the connection principle and isn't helpful.
Water-thought must be accessible to consciousness: The purported counterexample to the connection principle requires the connection principle.
Unconscious intentional zombies are impossible: An unconscious zombie's thoughts can't become conscious, so can't have a perspective—aspectual shape—and hence lack intentionality. So there can't be an unconscious zombie that is also capable of having a thought about something—an intentional state.
Normal ontological reductionism doesn't work: In explanations of consciousness, the subjective, first-person components can't be reduced away as nothing but physiological activity, because then the essential feature of consciousness would be left out.
Causal reductionism: Mental states can at the same time be physical states. Physical brain states cause mental states in the same way that a lattice structure of molecules causes liquidity. Mental states are causally—but not ontologically—reducible to physical states.
Nothing is intrinsically a digital computer: The syntactic structures that define computers are realisable in any physical system. Hence, the question "is a brain a digital computer?" is ill-defined, because the relevant syntax can be ascribed to any sufficiently complex system.
Connectionist networks are formal systems: Arguments used against the formal character of symbol manipulators apply equally well to connectionist networks (CNs). Functions computed on a CN can also be computed on a serial machine, and CNs can implement classical serial processing.
The Chinese Gym Argument: The Chinese Gym Argument (see detailed text) shows that instantiating a connectionist network is not enough to produce an understanding of Chinese.
The Chinese Room Argument [4]: Instantiation of a formal program isn't enough to produce semantic understanding or intentionality. A man who doesn't understand Chinese can answer written Chinese questions using an English rulebook telling him how to manipulate Chinese symbols.
Chinese Room refutes strong AI, not weak AI: The Chinese Room argument refutes strong AI by showing that running a program is not enough to generate any real understanding or intentionality. But it doesn't refute weak AI, which simply uses computers as tools to study the mind.
Read what they've written: There are lots of cognitive scientists who are adherents of strong AI—just read what people have written. They have identified themselves in their published commentaries on the Chinese Room argument.
Only minds are intrinsically intentional: Mental states alone are intrinsically intentional. Computational systems are only intentional relative to some observer who treats them as if they had intentional states.
3rd axiom is a logical not an empirical truth: That syntax is not sufficient for semantics is a logical truth, not an empirical question. To see this, notice that its converse raises inconsistencies (described in the detailed text).
Luminous Room isn't the same as Chinese Room: The luminous room argument exploits a problematic analogy between syntax and electromagnetism. The problem with this analogy is set out in the detailed text.
Understanding arises from right causal powers: Systems capable of semantic understanding and intentionality must have at least the same causal powers as brains. Brains have sufficient causal powers to produce understanding: it's an open empirical question whether other materials (e.g. silicon) do.
Biological Naturalism: Consciousness and intentionality are caused by and realised in the brain. The brain has the right causal powers to produce intentionality.
Intentionality is both abstract and biological: Jacquette fails to recognise that intentionality can be both biological and abstract. Jacquette's position is a result of a lingering dualism that disembodies the abstract features of intentionality.
Chinese Room doesn't assume locus of control: Searle gives examples of programs—beer-can systems, water pipes, etc.—that don't need a central locus of control. His only Chinese Room assumptions are that programs are syntactical, that syntax is not sufficient for semantics, and that minds have semantics.
No level of formal program can produce semantics: Searle accepts that a program could be a mind if implemented in a medium with the right causal powers. The key point, though, is that no program, no matter how low-level it is, could ever produce semantics only by virtue of syntactic symbol manipulations.
Chinese Water Pipe Brain Simulator: The Chinese Room man is given a new rulebook noting which valves to switch on/off in a vast system of water pipes in response to Chinese inputs. The pipes simulate the neural properties of a Chinese speaker's brain, but the man doesn't understand Chinese.
Causal powers maintained if simulation low-level: The person whose neurotransmitters have been replaced still understands Chinese because she still has the right causal powers; she just needs some help from the demon.
Knowing how the robot works: If a robot looked and behaved like a human, it would be rational to explain its actions in terms of intentionality. But if we knew it acted on the basis of formal symbol manipulations, we'd no longer appeal to intentionality to explain its behaviour.
Conscious agents can instantiate computer programs: Fodor's strengthened notion of instantiation doesn't rule out the possibility of a conscious agent performing the symbol manipulations. The causal connections Fodor asks for are supplied by the man in the room.
The many mansions reply trivialises strong AI: It's entirely possible that someday some device will possess intentionality, but that's irrelevant because the Chinese Room argument is meant to refute the more specific thesis that formal computational processes can produce intentionality.
Chinese Room can be placed inside the robot: Merely putting a computer program in a robot is inadequate, because we can put the Chinese Room inside a robot and apply the same thought experiment as before.
Symbols and causal connections insufficient: Symbols and causal connections aren't sufficient to produce meaning (or intentionality). If there's meaning in a system, it can only be because there's more to the system than just the fact of a symbol and the fact of a causal connection.
The Internalisation Reply: Suppose the man in the Room memorizes the rule book and all the symbols and does the matching in his head, not on paper. He incorporates the entire Room system, but still doesn't understand Chinese. So the whole system doesn't understand Chinese.
The Chinese Room passes the test: The Chinese Room thought experiment involves a system that passes the Turing test by speaking Chinese but fails to understand Chinese.
Simulations are not duplications: A simulation of a fire will not burn. A simulation of metabolism in digestion will not nourish. A simulation of an automobile engine will not get anywhere. Similarly, a simulation of understanding is not actual understanding.
Overt behavior doesn't demonstrate understanding: The Chinese Room argument shows that a system can engage in Chinese-speaking behavior yet fail to understand Chinese. The Chinese Room is a system that could pass the Turing Test without thinking in Chinese.
Alan Turing: Arguments advanced by Alan Turing.
Daniel Dennett: Arguments advanced by Daniel Dennett.
David Chalmers: Distinguished Professor of Philosophy and director of the Centre for Consciousness at ANU, and Professor of Philosophy and co-director of the Center for Mind, Brain, and Consciousness at NYU.
David Cole: Arguments advanced by David Cole.
David Rumelhart: Arguments advanced by David Rumelhart.
Douglas Hofstadter: Arguments advanced by Douglas Hofstadter.
George Lakoff: Arguments advanced by George Lakoff.
Georges Rey: Arguments advanced by Georges Rey.
Herbert Simon: Arguments advanced by Herbert Simon.
Hilary Putnam: Arguments advanced by Hilary Putnam.
Hubert Dreyfus: Arguments advanced by Hubert Dreyfus.
Hugh Loebner: Arguments advanced by Hugh Loebner.
Jack Copeland: Arguments advanced by Jack Copeland.
James McClelland: Arguments advanced by James McClelland.
James Moor: Arguments advanced by James Moor.
Jerry Fodor: Arguments advanced by Jerry Fodor.
John Lucas: Arguments advanced by John Lucas.
Joseph F. Rychlak: Arguments advanced by Joseph F. Rychlak.
Keith Gunderson: Arguments advanced by Keith Gunderson.
L.J. Landau
Ned Block: Arguments advanced by Ned Block.
Robert French: Arguments advanced by Robert French.
Roger Penrose: Arguments advanced by Roger Penrose.
Selmer Bringsjord: Arguments advanced by Selmer Bringsjord.
Stephen Kosslyn: Arguments advanced by Stephen Kosslyn.
Zenon Pylyshyn: Arguments advanced by Zenon Pylyshyn.
About
Entered by: David Price
NodeID: #2769
Node type: Protagonist
Entry date (GMT): 7/20/2007 6:03:00 PM
Last edit date (GMT): 7/20/2007 6:03:00 PM
Incoming cross-relations: 0
Outgoing cross-relations: 34
Average rating: 0 by 0 users