Connectionist networks can’t elaborate capacities quickly

Humans and AI systems are "elaboration tolerant": they can quickly extend their abilities to take new phenomena into account. Connectionist networks are not, and cannot.

Whereas humans (and AI systems) can quickly elaborate their capacities, connectionist networks can’t.

For example, when an English-speaking human is given a simple rule of Chinese pronunciation (e.g. say "ch" when you see "q"), he can immediately use the rule to speak differently. This kind of learning couldn't occur in a connectionist network, which would have to adjust thousands of connection weights all at once.
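The contrast can be sketched in code. A symbolic rule-interpreter "elaborates" by adding a single rule to an explicit rule store, and the change takes effect on the very next input; a connectionist network would instead need to retrain many weights. The sketch below is illustrative only (the rule table and function are not from McCarthy's argument):

```python
# A symbolic system: pronunciation rules are explicit data.
# Adding one rule changes behavior immediately, with no retraining.
rules = {}

def pronounce(text):
    # Apply known letter-substitution rules; unknown letters pass through.
    return "".join(rules.get(ch, ch) for ch in text)

print(pronounce("qi"))   # before the rule: "qi"

# Tell the system the rule once: say "ch" when you see "q".
rules["q"] = "ch"

print(pronounce("qi"))   # immediately afterwards: "chi"
```

The point of the sketch is that the new capacity arrives as one discrete update to an explicit representation, whereas a network encodes the same change diffusely across thousands of weights.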

John McCarthy, 1988.