Connectionist machines possess compositional semantics

The claim that no connectionist model could have a compositional semantics is false: connectionist implementations of classical machines have one. Fodor and Pylyshyn have not fully grasped the difference between local and distributed representations.

David Chalmers (1993).

Constituent Structure of Mental Representations

Classical symbolic theories postulate a language of thought—see "The language of thought", Map 3, Box 68—according to which complex mental representations are built up out of simpler representations. Complex representations can themselves be combined to form higher-order representations.

When a complex representation explicitly contains its parts, those parts are said to be tokened in the complex representation. The parts of a complex representation are usually referred to as constituents.

Classical theories claim that mental processes of inference, transformation, composition, and so forth are structure sensitive, which means they operate directly on the constituent structure of representations.
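The idea can be sketched in a few lines of Python. This is a toy illustration (all names are hypothetical), not a claim about any particular classical model: conjunction elimination applies to any representation of the right form, because the rule inspects only the constituent structure, never the content.

```python
# Classical representations modeled as nested tuples whose parts are
# explicitly tokened; inference operates directly on that structure.

def conjuncts(rep):
    """Return the constituents of a conjunction like ('AND', 'P', 'Q')."""
    if isinstance(rep, tuple) and rep[0] == "AND":
        return list(rep[1:])
    return [rep]

def simplify(rep):
    """Conjunction elimination: infer each conjunct from 'P AND Q'.

    Structure sensitivity: the rule fires for *any* conjunction,
    whatever its content, because it matches only the form.
    """
    return conjuncts(rep)

belief = ("AND", "it-is-raining", "the-street-is-wet")
print(simplify(belief))  # ['it-is-raining', 'the-street-is-wet']
```

The constituents are literally parts of the complex representation, which is exactly what "tokening" amounts to in the classical picture.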

Other ways of referring to constituent structure include:

  • compositional structure;
  • compositional semantics;
  • composition;
  • syntactic structure;
  • combinatorial syntax and semantics; and
  • language of thought.

Connectionist Representations

Like AI researchers, connectionists are concerned with the issue of how mental states represent the world. Several kinds of connectionist representation are commonly distinguished:

Local representations: In a local representational scheme, each node in a network represents some concept.

Distributed representations: In a distributed representation, a pattern of activity over the whole set of nodes represents a concept.

Feature representations: Another form of representational scheme is microfeatural: each node in a network represents some low-level feature of a high-level concept (see the "Coffee Story").

Patterns of activity: It is usually assumed that connectionist representations correspond to patterns of activity—individual activity values in local representations, distributed activity values in distributed representations. As a result, connectionist representations are constantly changing while a network runs, and are thus highly context sensitive. These are sometimes called active representations.

Weight representations: Sometimes the weights in a connectionist network are taken to represent the world. Weight representations come to reflect aspects of the world as the network learns. They change much more slowly than patterns of activity do, and are sometimes called passive representations (see "Weight representations avoid the regress").
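The contrast between local and distributed schemes can be made concrete with a toy four-node network. The concepts, node assignments, and "microfeature" labels below are invented for illustration; the point is only that distributed patterns can overlap, while local codes cannot.

```python
# Local scheme: one node per concept, so "coffee" is node 0 alone.
local_coffee = [1.0, 0.0, 0.0, 0.0]
local_tea    = [0.0, 1.0, 0.0, 0.0]

# Distributed scheme: each concept is a pattern over all four nodes.
# Hypothetical microfeatures: hot-liquid, bitter, solid, leafy.
dist_coffee = [0.9, 0.7, 0.1, 0.3]
dist_tea    = [0.8, 0.2, 0.1, 0.6]

def overlap(a, b):
    """Graded similarity between two activity patterns (dot product)."""
    return sum(x * y for x, y in zip(a, b))

# Distributed patterns for related concepts share active nodes, so
# their overlap is positive; local codes for distinct concepts never
# share a node, so their overlap is exactly zero.
print(overlap(dist_coffee, dist_tea))
print(overlap(local_coffee, local_tea))  # 0.0
```

This graded overlap is what Chalmers has in mind when he accuses Fodor and Pylyshyn of missing the local/distributed difference: arguments that go through against one-node-per-concept codes need not touch distributed ones.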