AI programs are brittle

Symbolic AI programs rely on rigid rules and data structures, so they cannot adapt fluidly to changing environments or ambiguous circumstances. Symbol structures are brittle: they break apart under the pressure of a novel or ambiguous situation.
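
A minimal sketch of the kind of brittleness being claimed (not from the original map; the rules, features, and names below are hypothetical): a toy rule-based classifier handles only the symbol combinations its programmer anticipated, and has no graceful response to anything else.

```python
# Toy symbolic rule system: explicit if-then rules over a fixed symbol vocabulary.
# Illustrative sketch only; all rules and feature names are invented for the example.

RULES = {
    frozenset({"has_feathers", "flies"}): "bird",
    frozenset({"has_fur", "barks"}): "dog",
    frozenset({"has_scales", "swims"}): "fish",
}

def classify(features):
    """Return a category only if the feature set exactly matches a stored rule."""
    key = frozenset(features)
    for conditions, category in RULES.items():
        if key == conditions:
            return category
    # A novel or ambiguous combination falls through: the system does not
    # degrade gracefully, it simply fails to respond.
    return None

print(classify({"has_feathers", "flies"}))   # "bird"  -- an anticipated case
print(classify({"has_feathers", "swims"}))   # None    -- a penguin breaks the rules
```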

Note: Versions of this claim are widely discussed in the literature and on these maps. For example, brittleness is discussed by Hofstadter (see "The front-end assumption is dubious", Map 1, Box 74), Brooks (see sidebar, "Postulates of subsumption architecture", on this map), Dreyfus (see sidebar "Postulates of Dreideggereanism" on this map), and the connectionists (see Map 5). It is also raised in the context of fuzzy logic.
Related Articles
Artificial Intelligence
Can computers think? [1]
Yes: physical symbol systems can think [3]
AI programs are brittle
Thermostats can have beliefs
Humans learn by adding symbolic data to knowledge base
SOAR (an implemented model)
The Biological Assumption
The Disembodied Mind Assumption
The Heuristic Search Assumption
The Knowledge Base Assumption
The language of thought
The Representationalist Assumption
The Rule-Following Assumption
The Symbolic Data Assumption
The Universal Conceptual Framework Assumption
The Chinese Room Argument [4]
The critique of artificial reason
The Lighthill Report
Symbol systems can't think dialectically