Internal semantics in syntactic networks that learn
A network of appropriately connected syntactic symbols, one that can learn by deriving consequences from its inputs, possesses internal semantics and can therefore be said to understand, because human understanding appears to result from the same kind of internal semantics.
William Rapaport, 1988.
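As a rough illustration of the mechanism this claim appeals to, the sketch below builds a tiny network of uninterpreted symbols that derives new links from its inputs by forward chaining. It is a minimal, hypothetical example written for this summary, not any system Rapaport describes; the class, method, and symbol names are invented for illustration.

```python
# Minimal, hypothetical sketch of a network of uninterpreted syntactic symbols
# that "learns" by deriving consequences from its inputs (here, transitive links).
from collections import defaultdict


class SymbolNetwork:
    """Nodes are bare symbols; a symbol's only 'meaning' is its links to other symbols."""

    def __init__(self):
        self.links = defaultdict(set)  # symbol -> set of symbols it is linked to

    def assert_link(self, a, b):
        """Accept an input assertion linking symbol a to symbol b, then derive consequences."""
        self.links[a].add(b)
        self._derive()

    def _derive(self):
        """Forward-chain: whenever a->b and b->c hold, add a->c, until nothing new follows."""
        changed = True
        while changed:
            changed = False
            for a in list(self.links):
                for b in list(self.links[a]):
                    for c in list(self.links.get(b, ())):
                        if c not in self.links[a]:
                            self.links[a].add(c)
                            changed = True

    def entails(self, a, b):
        """True if the network was given, or has derived, a link from a to b."""
        return b in self.links[a]


net = SymbolNetwork()
net.assert_link("Fido", "dog")        # input: Fido is linked to dog
net.assert_link("dog", "animal")      # input: dog is linked to animal
print(net.entails("Fido", "animal"))  # True: a consequence derived, never given as input
```

In this sketch each symbol's "meaning" consists entirely of its links to the other symbols, which is the sense of internal semantics the claim invokes.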
Context:
Artificial Intelligence
Can computers think? [1]
Yes: physical symbol systems can think [3]
The Chinese Room Argument [4]
The Syntax-Semantics Barrier
Programs that learn can overcome the barrier
Internal semantics in syntactic networks that learn
The Korean Room thought experiment
Human understanding isn't reducible to internal semantics