Randomness and responsibility can be compatible
An agent that chooses randomly in a morally nil-preference situation is still responsible for its actions. For example, a gunman may randomly choose one of five hostages to shoot, but he is still responsible for the death of the one chosen.

Random choice revokes responsibility only if the choice is between alternatives of differing ethical value.

Agents that choose randomly from equally preferred options are responsible for their ensuing actions.

The Copeland Argument

According to Copeland:

"...It [Ayer's argument] trades illegitimately on the connotations of phrases like 'matter of pure chance' and 'done purely by accident'...Imagine that a mysterious beam of energy passes through your arm while you are drinking a cup of coffee. The curious effect of this is to make it a matter of pure chance whether you continue drinking normally or jerk the contents of the cup into my lap. In these circumstances you would obviously not be responsible if you ended up tipping your coffee over me -- any more than you would be if a third party bumped into your arm. So Ayer's argument fits this sort of case perfectly. But now suppose that the randomizer bean passes through the head of a hijacker, Pernod, who is just about to shoot one or other of his hostages, Kirsch and Campari, and does not much care which. The effect of the beam is to make it a matter of pure chance which of Kirsch or Campari is selected. In fact it is Kirsch who gets the bullet. Would you want to say that Pernod is not responsible for killing him? In my view, Pernod is clearly responsible for the death of Kirsch, even though it was 'a matter of pure chance' that the option shoot Kirsch got selected in preference to the option shoot Campari. After all, Pernod had already decided to shoot one or the other of the two and the fact that the final choice between the two was made at random seems neither here nor there. The moral of the tale is that the considerations presented in Ayer's argument do not apply to a random choice made under the conditions that obtain in a nil preference situation" (J. Copeland, 1993, pp. 146-147).

References

Copeland, Jack. 1993. Artificial Intelligence: A Philosophical Introduction. Blackwell Publishers.
