Searle assumes a central locus of control

Searle presupposes that any simulation of a native Chinese speaker will involve a central locus of control that manipulates symbols without understanding Chinese. But he has not shown that a model lacking such a central locus of control would fail to understand Chinese.

Jacquette cites a computational system that simulates the brain's microlevel functional structure as an example of a model that would lack any central locus of control.


Dale Jacquette, 1989.
RELATED ARTICLES
Artificial Intelligence
Can computers think? [1]
Yes: physical symbol systems can think [3]
The Chinese Room Argument [4]
Understanding arises from right causal powers
Brain's causal powers reproduced by a computer
Searle assumes a central locus of control
Chinese Room doesn't assume locus of control
Brain has a von Neumann architecture
No level of formal program can produce semantics