Searle assumes a central locus of control
Searle presupposes that any simulation of a native Chinese speaker must involve a central locus of control that manipulates symbols without understanding Chinese. But he has not shown that a model lacking such a central locus of control would fail to understand Chinese.
Jacquette cites a computational system that simulates the brain's microlevel functional structure as an example of a model that would lack any central locus of control.


Dale Jacquette, 1989.
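To make the "no central locus of control" idea concrete, here is a minimal sketch (purely illustrative, not Jacquette's actual model): a tiny network whose units each update from their local connections only, so no single component ever reads, interprets, or controls the system's global state.

```python
# Hypothetical sketch: a network of units updating in parallel from
# local inputs only. No unit or routine has global access to "the
# symbols being manipulated" -- there is no central locus of control.
# All names and dynamics here are illustrative assumptions.
import random

random.seed(0)

N = 8  # number of units
# random connection strengths between distinct units (local "synapses")
weights = {(i, j): random.uniform(-1, 1)
           for i in range(N) for j in range(N) if i != j}
state = [random.uniform(0, 1) for _ in range(N)]

def step(state):
    """Each unit computes its next value from its neighbours alone."""
    new = []
    for j in range(N):
        total = sum(weights[(i, j)] * state[i] for i in range(N) if i != j)
        # blend old value with local input, clamped to [0, 1]
        new.append(max(0.0, min(1.0, 0.5 * state[j] + 0.5 * total)))
    return new

for _ in range(10):
    state = step(state)

# Every unit's final value arose from local interactions only.
print([round(x, 2) for x in state])
```

The point of the sketch is architectural: whatever the system as a whole does, there is no subcomponent playing the role of Searle's rule-following person in the room.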
Immediately related elements
- Artificial Intelligence
- Can computers think? [1]
- Yes: physical symbol systems can think [3]
- The Chinese Room Argument [4]
- Understanding arises from right causal powers
- Brain's causal powers reproduced by a computer
- Searle assumes a central locus of control
- Chinese Room doesn't assume locus of control
- Brain has a von Neumann architecture