Zenon Pylyshyn
Arguments advanced by Zenon Pylyshyn.
RELATED ARTICLES
Artificial Intelligence
A collaboratively editable version of Robert Horn's brilliant and pioneering debate map "Can Computers Think?", exploring 50 years of philosophical argument about the possibility of computer thought.

Protagonists
The contributions of over 300 protagonists can be explored via a surname search, or using the growing list developing here.
Implementable in functional system
Properly organized functional states generate consciousness. Such organization exists in the brain and can be built into computers as well.

Analogue images can't encode knowledge
The analogue interpretation of images makes them too specific to encode knowledge. Knowledge has a generality that can only be captured by propositions.

Images are secondary to propositions
Knowledge is encoded in an unconscious propositional medium that lies beneath both language and imagery. Imagery, by itself, is not of interest to cognitive science, because it can't explain human knowledge.

Dual codes are too indeterminate
Dual codes are inadequate, as they're too ambiguous to provide for correspondences between pictures and words. Moving from pictures to words and vice versa requires an underlying intermediary code, or interlingua, to mediate the two realms.

Images can't encode knowledge
Knowledge consists of information that applies to a range of possible situations. Analogue images, however, only carry information about the situations from which they arose; in themselves, they lack generality of application.

Images aren't primitive explanatory concepts
To be explanatory, images must play a role in causal explanations. Experience suggests they do not. Images in themselves (prior to interpretation) are epiphenomena that ride above a causal substrate of propositions, like foam rides atop a wave.

Images are cognitively penetrable
Images are cognitively penetrable, in that they can be altered in various ways by what a subject thinks. They can't be explanatory primitives, as they break down into simpler parts depending on context.

The definition of image is too vague
The notion of an image has no clear meaning except by association with the commonsense notion of a picture, which misleads the study of mind as it implies a spatial geometric figure is somehow actually present in the brain when we perceive an image.

Picture-in-the-head metaphor influences covertly
Even if no one takes the picture-in-the-head metaphor seriously, the metaphor is implicitly drawn on when image theorists appeal to the spatial properties of images.

Brain-style modelling can be misleading
Basing psychological theories on facts about the brain can be misleading. Neural inspiration seems useful, but it's led to a revival of such weak psychological theories as associationism, microfeature analysis, and statistically based learning.

Facts about the brain may be irrelevant
Structures at different levels of organisation are often dissimilar. Thinking may have little in common with the neural structures it's implemented in. Basing a theory of cognitive architecture on a theory about the brain requires care.

The Connectionist Dilemma
The connectionist approach to cognitive science is impaled on the horns of a dilemma: it is either inadequate as a theory of mind, or else it is an implementation of the classical architecture (see detailed text).

Connectionism is associationism
Processing units in a connectionist network are connected by associative links. Associationist theories can't account for systematicity and related phenomena; symbolic AI can, via structured representations and structure-sensitive thought processes.

Yes: physical symbol systems can think [3]
Thinking is a rule-governed manipulation of symbolic representational structures. In humans, symbol systems are instantiated in the brain, but the same symbol systems can also be instantiated in a computer.

Analogue systems can't represent general concepts
Analogue devices only capture particular sensory patterns. They cannot (by themselves) be used to recognize and process universal concepts.

Analogue machines lack flexibility of digital machines
Analogue devices can't make contingent if-then branches; i.e., analogue devices can't do one thing under one set of circumstances and a completely different thing under a discretely different set of circumstances.

Symbol structures can be distributed
A classical symbol processor can be physically distributed in memory, and can thereby exhibit graceful degradation. So distributed systems like connectionist networks don't have any principled advantage over physical symbol systems.

The 100-step constraint
Algorithms modelling cognitive processes must meet the 100-step constraint for performing complex tasks imposed by the brain's timescale. Classical sequential algorithms, which now run in millions of time-steps, seem unlikely to meet the constraint.

Symbol processing can take place in parallel
A classical system can be implemented in a parallel architecture, e.g. by executing multiple symbolic processes at the same time. So parallel processing systems, like connectionist networks, have no principled advantage over classical symbol systems.

Body isn't essential to intelligence in its final form
The body is important to the development of intelligence (as Jean Piaget has shown) but not to its ultimate form. By the time a human reaches adulthood, the body is no longer essential; an adult quadriplegic is intelligent.

Best heuristics aren't just trial and error
Heuristic searches don't necessarily rely on trial and error. Programs can be structured so that a heuristic method moves the search ever closer to a problem solution without the need for redundant backtracking.

Affordances are trivial
Affordances are just another name for whatever it is in the environment that makes an organism respond as it does. Such a notion can't provide a substantial explanation of perception, and adds nothing new to our knowledge of the mechanisms of perception.

Classicists aren't committed to explicit rules
The possibility of implicit rules doesn't argue against the classical symbolic framework, because there's a body of work within the classicist camp that shows how implicit rules can be modeled.
Other protagonists (each entry collects the arguments advanced by that thinker):
Alan Turing
Daniel Dennett
David Chalmers (Distinguished Professor of Philosophy and director of the Centre for Consciousness at ANU, and Professor of Philosophy and co-director of the Center for Mind, Brain, and Consciousness at NYU)
David Cole
David Rumelhart
Douglas Hofstadter
George Lakoff
Georges Rey
Herbert Simon
Hilary Putnam
Hubert Dreyfus
Hugh Loebner
Jack Copeland
James McClelland
James Moor
Jerry Fodor
John Lucas
John Searle
Joseph F. Rychlak
Keith Gunderson
L.J. Landau
Ned Block
Robert French
Roger Penrose
Selmer Bringsjord
Stephen Kosslyn
About this entry
Entered by: David Price
NodeID: #2773
Node type: Protagonist
Entry date (GMT): 7/20/2007 6:07:00 PM
Last edit date (GMT): 7/20/2007 6:07:00 PM
Incoming cross-relations: 1
Outgoing cross-relations: 23
Average rating: 0 by 0 users