Neurons can't represent rules of ordinary language
AI assumes that rules are ultimately represented in the brain. But neurons do not provide a symbolic medium in which rules can be inspected and modified, so they are not an appropriate medium for formulating the rules of ordinary language.
We do not use neurons the way we use rules.

Graham Button, Jeff Coulter, John R. E. Lee, and Wes Sharrock (1995).

Note: See also on this map:

  • the "Do humans use rules as physical symbol systems do?" arguments,
  • and the sidebar "Postulates of Ordinary Language".
Immediately related elements:

  • Artificial Intelligence
  • Can computers think? [1]
  • Yes: physical symbol systems can think [3]
  • The Biological Assumption
  • Neurons can't represent rules of ordinary language
  • We do not know if neurons do or do not "provide a symbolic medium."