No: computers can't be persons
Machines can never be persons. They lack ethical status and cannot bear responsibility for their actions. At best, they can display person-like behaviour.
Note: Many other arguments about computers being persons permeate the map, but they have been placed in other regions to emphasize which specific aspects of machinehood or personhood are in question.
Many contemporary and historical debates have dealt with the concept of personhood. The abortion debate deals with the status of the foetus as a person. Animal rights theorists ask whether various species of animals are persons or not. The emancipation of slaves turned on the recognition that African-Americans were people, not property.
The question of whether robots are persons has been asked since at least the release of Karel Čapek's play R.U.R. (Rossum's Universal Robots) in the 1920s. This play, from which the word 'robot' derives, is about the struggle of intelligent robots to gain their civil liberties.
In the debate over artificial intelligence, personhood again becomes an issue, because if computers are able to think, then their ethical status may have to be upgraded. Moreover, many artificial intelligence researchers dream of creating artificial life in the form of an artificial person, in part because the concept of intelligence is closely related to the concept of personhood. Some think that a thinking computer would automatically be a person, because thinking is (to some degree) part of what it is to be a person. Others think a robot can't think unless it is a genuine person, because otherwise there would be no 'one' doing any thinking.