Neurons can't represent rules of ordinary language
AI assumes that rules are ultimately represented in the brain. But neurons do not provide a symbolic medium in which rules can be inspected and modified, so they are not an appropriate medium for formulating the rules of ordinary language.
We don't use neurons like we use rules.

Graham Button, Jeff Coulter, John R. E. Lee, and Wes Sharrock, Computers, Minds and Conduct (1995).

Note: See also on this map:

  • the "Do humans use rules as physical symbol systems do?" arguments,
  • and the sidebar "Postulates of Ordinary Language".