Connectionist machines possess compositional semantics
The claim that no connectionist model could have a compositional semantics is false, as connectionist implementations of classical machines do. Fodor and Pylyshyn haven't fully grasped the difference between local and distributed representations.
David Chalmers (1993).
Constituent Structure of Mental Representations
Classical symbolic theories postulate a language of thought—see "The language of thought," Map 3, Box 68—according to which complex mental representations are built up out of simpler representations. Complex representations can themselves be combined to form higher-order representations.
When a complex representation explicitly contains its parts, those parts are said to be tokened in the complex representation. The parts of a complex representation are usually referred to as constituents.
Classical theories claim that mental processes of inference, transformation, composition, and so forth are structure sensitive, which means they operate directly on the constituent structure of representations.
Other ways of referring to constituent structure include:
- compositional structure;
- compositional semantics;
- composition;
- syntactic structure;
- combinatorial syntax and semantics; and
- language of thought.
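The classical picture above can be made concrete with a small sketch. This is a hypothetical illustration, not from the source: complex representations literally contain their constituents as tokened parts, and a process such as conjunction elimination operates only on that syntactic structure.

```python
# Hypothetical sketch of a classical, language-of-thought style scheme.
# Complex representations are tuples whose elements are their constituents,
# so the constituents are literally tokened inside them.
JOHN, LOVES, MARY, AND = "John", "loves", "Mary", "and"

complex_rep = (JOHN, LOVES, MARY)            # constituents are tokened parts

# A higher-order representation built out of complex representations:
higher = (complex_rep, AND, (MARY, LOVES, JOHN))

def eliminate_conjunction(rep):
    """A structure-sensitive process: from the syntactic form
    (X and Y), infer X—regardless of what X and Y are about."""
    if len(rep) == 3 and rep[1] == AND:
        return rep[0]
    return rep

assert eliminate_conjunction(higher) == complex_rep
assert JOHN in complex_rep  # the constituent is an explicit part
```

The point of the sketch is that the inference rule inspects only the form `(X and Y)`; any representation with that constituent structure would be transformed the same way.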
Connectionist Representations
Like AI researchers, connectionists are concerned with the issue of how mental states represent the world. Several kinds of connectionist representation are commonly distinguished:
Local representations: In a local representational scheme, each node in a network represents some concept.
Distributed representations: In a distributed representation, a pattern of activity over the whole set of nodes represents a concept.
Feature representations: Another form of representational scheme is microfeatural: each node in a network represents some low-level feature of the high-level concept (see the "Coffee Story").
Patterns of activity: It is usually assumed that connectionist representations correspond to patterns of activity—individual activity values in local representations, distributed activity values in distributed representations. As a result, connectionist representations are constantly changing while a network runs, and are thus highly context sensitive. These are sometimes called active representations.
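The contrast between local and distributed activity patterns can be sketched as follows. This is a minimal illustration under assumed encodings (one-hot vectors for the local scheme, arbitrary dense vectors for the distributed one); the concept names are invented for the example.

```python
import numpy as np

# Three concepts to be represented by a small network of nodes.
concepts = ["dog", "cat", "bird"]

# Local scheme: one dedicated node per concept; the activity of a
# single node carries the whole representation.
local = {c: np.eye(len(concepts))[i] for i, c in enumerate(concepts)}

# Distributed scheme: each concept is a pattern of activity over the
# whole set of nodes; no single node means anything on its own.
rng = np.random.default_rng(0)
distributed = {c: rng.uniform(0.1, 1.0, size=5) for c in concepts}

# In the local scheme exactly one node is active per concept;
# in the distributed scheme activity is spread across every node.
assert all(v.sum() == 1.0 and v.max() == 1.0 for v in local.values())
assert all((v > 0).all() for v in distributed.values())
```

One design consequence, noted in the surrounding text: because these are activity patterns, both kinds of representation change from moment to moment as the network runs.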
Weight representations: Sometimes the weights in a connectionist network are taken to represent the world. Weight representations come to reflect aspects of the world as the network learns. Weight representations change much more slowly than patterns of activity do, and are sometimes called passive representations (see "Weight representations avoid the regress").
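The active/passive distinction can be illustrated with a toy learning rule. This is an assumed setup (a single linear unit trained by a delta-rule update; the target regularity and learning rate are invented for the example): activations change on every input presentation, while the weights drift slowly toward a stable encoding of the environment's regularity.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)                         # weight representation (passive, slow)
target_w = np.array([0.5, -0.2, 0.8])   # regularity in the "world" to learn

for _ in range(500):
    x = rng.uniform(-1, 1, size=3)      # input pattern
    activity = w @ x                    # active representation: new every step
    error = (target_w @ x) - activity
    w += 0.1 * error * x                # small update: weights change slowly

# After training, the weights have come to reflect the regularity.
assert np.allclose(w, target_w, atol=0.1)
```

Here the pattern of activity (`activity`) is different on every presentation and is gone as soon as the next input arrives, while `w` persists between inputs and only gradually comes to mirror the world—matching the text's contrast between active and passive representations.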