No: computers can't be creative (Position, #142): Computers can never be creative. They only do what they are programmed to do. They have no originality or creative powers.
Note: Similar debates play out in the "free will" arguments.
Citation (1):
[1] Computing Machinery and Intelligence
Author: Alan Turing. Publication info: 1950. Cited by: Price, David, 10:52 AM 18 November 2007 GMT.
Related nodes (5):
- 160 No: the implications too hard to face: The consequences of machine thought are too dreadful to accept, therefore we should 'stick our heads in the sand' and hope that machines will never be able to think or have souls.
- 162 No: God gave souls to humans not machines: The theological objection, anticipated by Turing, that only entities with immortal souls can think. God has given souls to humans, but not to machines. Therefore, humans can think and computers can't.
- 207 No: computers are inherently disabled?: Machines can never do X, where X is any variety of abilities that are regarded as distinctly human, e.g. being friendly, having a sense of humour, making mistakes, or thinking about oneself.
- 214 Computers can be subject of own thoughts: When a computer solves equations, the equations can be said to be the object of its thought. Similarly, when a computer is used to predict its own behaviour or to modify its own program, we can say that it is the object of its own thoughts.
- 265 No: ESP would confound the test: Extrasensory Perception could invalidate the Turing Test in a variety of ways, e.g. if a competitor with ESP could "listen in" on the judges and gain an unfair advantage, or a judge with ESP could easily discern humans from machines by clairvoyance.
Excerpt - (6) Lady Lovelace's Objection
Our most detailed information of Babbage's Analytical Engine comes from a memoir by Lady Lovelace (1842). In it she states, "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform" (her italics). This statement is quoted by Hartree (1949) who adds: "This does not imply that it may not be possible to construct electronic equipment which will 'think for itself,' or in which, in biological terms, one could set up a conditioned reflex, which would serve as a basis for 'learning.' Whether this is possible in principle or not is a stimulating and exciting question, suggested by some of these recent developments. But it did not seem that the machines constructed or projected at the time had this property."
I am in thorough agreement with Hartree over this. It will be noticed that he does not assert that the machines in question had not got the property, but rather that the evidence available to Lady Lovelace did not encourage her to believe that they had it. It is quite possible that the machines in question had in a sense got this property. For suppose that some discrete-state machine has the property. The Analytical Engine was a universal digital computer, so that, if its storage capacity and speed were adequate, it could by suitable programming be made to mimic the machine in question. Probably this argument did not occur to the Countess or to Babbage. In any case there was no obligation on them to claim all that could be claimed.
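[Editor's illustration, not part of Turing's text.] The universality claim above can be made concrete with a minimal, hypothetical sketch: a single general routine that, given the transition table of any particular discrete-state machine as its "programme", reproduces that machine's behaviour step by step. The two-state machine and its table below are invented purely for illustration.

```python
# Hypothetical illustration of the universality argument: one general
# interpreter can mimic any particular discrete-state machine, provided it
# is given a suitable "programme" (here, a transition table).

def run_discrete_state_machine(table, state, inputs):
    """Simulate a discrete-state machine described by `table`.

    `table` maps (state, input) -> (next_state, output).
    """
    outputs = []
    for symbol in inputs:
        state, out = table[(state, symbol)]
        outputs.append(out)
    return state, outputs

# An invented two-state machine, standing in for "the machine in question".
toy_machine = {
    ("A", 0): ("A", "rest"),
    ("A", 1): ("B", "advance"),
    ("B", 0): ("A", "retreat"),
    ("B", 1): ("B", "advance"),
}

final_state, trace = run_discrete_state_machine(toy_machine, "A", [1, 1, 0, 1])
print(final_state, trace)  # B ['advance', 'advance', 'retreat', 'advance']
```

The point of the sketch is only that the interpreter itself is fixed; every new machine to be mimicked is supplied as data, which is the sense in which the Analytical Engine, as a universal digital computer, could in principle be programmed to behave like any other discrete-state machine.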
This whole question will be considered again under the heading of learning machines.
A variant of Lady Lovelace's objection states that a machine can "never do anything really new." This may be parried for a moment with the saw, "There is nothing new under the sun." Who can be certain that "original work" that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles. A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly. Machines take me by surprise with great frequency. This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks. Perhaps I say to myself, "I suppose the voltage here ought to be the same as there: anyway let's assume it is." Naturally I am often wrong, and the result is a surprise for me for by the time the experiment is done these assumptions have been forgotten. These admissions lay me open to lectures on the subject of my vicious ways, but do not throw any doubt on my credibility when I testify to the surprises I experience.
I do not expect this reply to silence my critic. He will probably say that such surprises are due to some creative mental act on my part, and reflect no credit on the machine. This leads us back to the argument from consciousness, and far from the idea of surprise. It is a line of argument we must consider closed, but it is perhaps worth remarking that the appreciation of something as surprising requires as much of a "creative mental act" whether the surprising event originates from a man, a book, a machine or anything else.
The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles.
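[Editor's illustration, not part of Turing's text.] A small, invented example of the point: even a very short rule, applied repeatedly, has consequences that do not "spring into the mind" together with the rule itself; they have to be worked out, and the amount of work needed can itself be surprising. The Collatz-style iteration below is chosen purely for illustration.

```python
# Invented illustration: the consequences of a simple rule are not obvious
# from the rule itself; they must be worked out step by step.

def collatz_steps(n):
    """Count iterations of the 3n+1 rule until n reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Neighbouring starting points can require very different amounts of work,
# which is hard to foresee without actually carrying out the calculation.
print(collatz_steps(26))  # 10
print(collatz_steps(27))  # 111
```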