A block in the cog
Computationalism contradicts itself even if we imagine two machines of tremendous complexity running the consciousness program (see detailed text).
Imagine in this case that both machines allow all the proper counterfactuals. The second machine, however, has an additional component: a block suspended in mid-air in one of the (never-activated) "counterfactual gears".

This block prevents counterfactual states, and so the second machine violates the non-triviality condition and is therefore not conscious.

So we have the same contradiction as before: only one machine is conscious, though by the supervenience thesis both should be.

What's more, it seems odd to claim that suspending or not suspending a block in mid-air in a never-activated part of the machine should make the difference between a nonconscious and a conscious machine.

Tim Maudlin (1989).

Computationalism still violates the supervenience thesis when we imagine two machines of tremendous complexity, each running a consciousness program and each allowing all the proper counterfactuals. The second machine has an additional component, a block suspended in mid-air in one of the (never-activated) "counterfactual gears," which blocks any counterfactual states. The two systems manifest identical physical activity, yet the computationalist must declare the first machine conscious and the second (blocked) machine not conscious, because it violates the non-triviality condition.

The Maudlin argument

Maudlin writes:

"Because of the immense quantity of machinery involved, one might misgive that its removal would necessitate some considerable change in the physical happenings associated with the machine. To alleviate such doubts, here are two cases in which the support can be neutralized by changes that can hardly be construed as altering the physical activities present."

Maudlin's first case:

"An Argument by Addition: Suppose we run Olympia, fully connected, on θ so that (according to the supervenience thesis) the conscious state φ occurs. Now we reset her (and the tape) to run again, but we add a secondary block to each of the copies of Klara. The second block might be a thin piece of metal suspended between the frozen gear teeth. It need not be in physical contact with any part of the machinery."

Maudlin continues:

"Now, however, were the first block to be removed (which will not, of course, happen when we run Olympia on θ from S₀ to Sₙ), the gears would contact the second block and jam. The copies of Klara no longer support the right counterfactuals, so on the second run Olympia is not conscious. But, given that the second blocks in fact never even touch any part of the machinery, exerting no influence or force at all, how could the physical activity taking place in Olympia during the first run be said to differ from that in the second? Speaking loosely, how could the rest of the system know that the blocks are even there?" (T. Maudlin, 1989, p. 425).

References

Maudlin, Tim. 1989. "Computation and Consciousness." The Journal of Philosophy 86 (8): 407–432.