AI rules can't explain ordinary language
Ordinary language: 1) is too open-ended to be captured by a set of explicit AI rules; 2) is embedded in, and continually adapts to, its context of use; 3) gets its meaning from practical language games, not from parsable grammatical structures.
Graham Button, Jeff Coulter, John R.E. Lee, & Wes Sharrock (1995).

The project of accounting for ordinary human language in terms of explicit AI rules runs into the following problems:

  • Our linguistic practices are too open-ended to be captured by an explicit set of rules.
  • Human language is essentially embedded in a context of use, and it continually adapts to that context.
  • Ordinary language gets its meaning from practical involvement in "language games," not from its parsable grammatical structures.

Postulates of Ordinary Language

1. The purpose of philosophy is the resolution of conceptual confusion through the analysis of ordinary language.

2. Ordinary language is our primary medium of thought.

3. Problems of philosophy arise through the misuse or incomplete understanding of ordinary language.

4. The proper method for analysing language is through the explication of the commonsense, everyday use of language.

5. Language can be studied in terms of "language games" in which rules of usage are analysed as if they were the rules of the game.

6. Language can only be understood semantically from within ordinary language and language games. This means there is no formal metalanguage relevant to philosophy.

7. The rules of ordinary language are adapted to the various practical purposes of human conduct, and as such they must be open to examination and qualification.

8. Rules for the use of language are like rules for playing games; they are not like the parsing rules posited by AI researchers. Ordinary language rules provide guides and signposts for the use of language rather than a description of how language is generated.

Proponents include: Ludwig Wittgenstein; Graham Button, Jeff Coulter, John R.E. Lee, and Wes Sharrock; Charles Karelis (Map 2); Hans Obermeier (Map 4); and Gilbert Ryle (Maps 2 & 6). Other notable proponents include: John L. Austin, John Wisdom, and Stanley Cavell.
