Artificial consciousness
I tend to think of Artificial Intelligence as artificial subjectivity, not just artificial reasoning. So, I wouldn't mind building a machine that would have a minimal degree of subjective feelings. What would it be like to be such a machine?
The essential point about conscious experiences is that they are subjective.
I like Thomas Nagel's definition of consciousness, and his question "What is it like to be a bat?":
"Fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism."
Bats are the sort of things to which we ascribe some "internal life", unlike thermostats, calculators and computers (even when they beat us at chess). Even if we have a hard time imagining what it is like to be a bat, we believe that, for a bat, it must somehow feel like something to be a bat. And we believe it would somehow be impossible that, for a computer, it would feel like something to be a computer.
Is it consistent to hold both that
1) we, as humans, together with other living entities, have evolved naturally from non-living and non-conscious things
and
2) we are (in principle) incapable, and will forever remain incapable, of building subjectivity into an artefact?
I find this extremely puzzling...