The fading qualia thought experiment
Chalmers imagines an experiment in which a person's neurons are slowly replaced, one by one, with silicon chips, and argues that the person's qualia (the subjective feelings that accompany perceptions and sensations) remain the same (see the detailed text below).
 
Imagine that your neurons are slowly replaced, one by one, by silicon chips. There are three alternatives:
  1. Qualitative conscious experiences (qualia) will disappear suddenly when some particular neuron is replaced.
  2. Qualia will gradually fade away.
  3. Qualia will remain the same.
In the first instance, it seems impossible that a single neuron could make such a difference that qualia would suddenly (and completely) disappear, so the first alternative is ruled out.

In the second instance, as your qualia fade, you will make statements about your conscious experiences that are incorrect. For example, you might claim that sounds are loud even though they are actually barely audible. But given what we know about consciousness, it is unlikely that such misguided beliefs could occur, so the second alternative is ruled out.

So, it must be the case that your qualia will remain the same.

David Chalmers (1996).

Qualia: the subjective feelings that accompany perceptions and sensations. Examples include the smell of a rose, the feeling of anger, and the sensation of an itch. It is controversial whether more subtle states, such as beliefs, are always accompanied by qualia.

Note: compare to the "Can computers have the right causal powers?" arguments on Map 4.

The Chalmers argument

David Chalmers describes his thought experiment as follows:
 
"The argument takes the form of a reductio ad absurdum. Assume that absent qualia are naturally possible. Then there could be a system with the same functional organization as a conscious system (say, me), but which lacks conscious experience entirely. Without loss of generality, assume that this is because the system is made of silicon chips instead of neurons. I will show how the argument can be extended to other kinds of isomorphs later. Call this functional isomorph Robot. The causal patterns in Robot's cognitive system are the same as they are in mine, but he has the consciousness of a zombie."

With these premises set up, Chalmers continues by slicing the situation into smaller and smaller intermediate cases:
 
"Given this situation, we can construct a series of cases intermediate between me and the Robot such that there is only a very small change at each step and such that functional organization is preserved throughout. We can imagine, for instance, replacing a certain number of my neurons by silicon chips. In the first such case, only a single neuron is replaced. Its replacement is a silicon ship that performs precisely the same local function as the neuron. Where the neurons connected to other neurons, the chip is connected to inputs and chemical signals, the silicon ship is sensitive to the same. We might imagine that is comes equipped with tiny transducers that take in electrical signals and chemical ions, relaying a digital signal to the rest of the chip..." 

This is just the beginning. Chalmers then describes a progressive series of replacements:
 
"In the second case, we replace two neurons with silicon chips... Later cases proceed in the obvious fashion. In each succeeding case a larger group of neighboring neurons has been replaced by silicon chips.... Between me and Robot, there will be many intermediate cases. Question: What is it like to be them? What, if anything, are they experiencing? As we move along the spectrum, how does conscious experience vary? Presumably the very early cases have experiences much like mine, and the very late cases have little or no experience, but what of the intermediate cases?"

The problem, then, is what happens to consciousness in these intermediate cases. Chalmers suggests:
 
"Given that the system at the other end of the spectrum (Robot) is not conscious, it seems that one of two things must happen along the way. Either (1) consciousness gradually fades over the series of cases, before eventually disappearing, or (2) somewhere along the way consciousness suddenly blinks out, although the preceding case had rich conscious experiences. Call the first possibility fading qualia and the second suddenly disappearing qualia. 

But the "suddenly disappearing qualia" fork seems implausible to Chalmers:
 
"It is not difficult to rule out suddenly disappearing qualia. On this hypothesis, the replacement of a single neuron (leaving everything else constant) could be responsible for the vanishing of an entire field of conscious experience. This seems extremely implausible, if not entirely bizarre...

Chalmers then turns his attention to fading qualia:
 
"If suddenly disappearing qualia are ruled out, we are left with fading qualia. To get a fix on this scenario, consider a system halfway along the spectrum between me and Robot, after consciousness has degraded considerably but before it has gone altogether. Call this system Joe. What is it like to be Joe? ... The crucial feature here is that Joe is systematically wrong about everything that he is experiencing... In short, Joe is utterly out of touch with his conscious experience, and is incapable of getting in touch.
 
He then draws his conclusion about fading qualia:
 
"This seems to be quite implausible. Here we have a being whose rational processes are functioning and who is in fact conscious, but who is utterly wrong about his own conscious experiences. In every case with which we are familiar, conscious beings are generally capable of forming accurate judgments about their experience, in the absence of distraction and irrationality. We have little reason to believe that consciousness is such an ill-behaved phenomenon, and good reason to believe otherwise... A much more reasonable hypothesis is therefore that when neurons are replaced, qualia do not fade at all. A system like Joe, in practice, will have conscious experiences just as rich as mine. If so, then our original assumption was wrong, and the original isomorph, Robot, has conscious experiences" (D. Chalmers, 1996, pp. 253-259).
 
References

Chalmers, David. 1996. The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.