Some guy at Google got in trouble for claiming that their AI has become sentient. (Is it possible for AI to achieve sentience?)
For starters, in this context “sentience” means having first-person experience and emotions, such as the program’s fear of being turned off. None of us would care if the program were merely mimicking emotions; we’d have no moral problem with switching it on and off. But if it’s genuinely experiencing emotions, that’s a different matter.
I don’t know whether it’s possible to answer the question of whether AI can be sentient in that sense, but we’ll probably have to settle for a rule like this: “if it’s indistinguishable from sentience, it’s sentient.” And then we’ll have to decide what legal rights sentient programs have, etc.
However that goes, I am pretty sure of this: if AI can be sentient, we have to face the possibility that we’re AI.
The reason is simple. If we can make sentient programs, then it’s a pretty sure thing we’ll make lots of them. For example, someone will want to run multiple simulations of the Battle of Gettysburg to see how things would have turned out if Stonewall Jackson had been there, etc., and the simulation will be better if all the characters are sentient.
Examples can easily be multiplied, and it’s child’s play to see that the ratio of AI people to “real” people will increase dramatically, especially if we posit that AI people can create simulations of their own.
If these simulated characters believe they’re real humans, on what grounds can we assert that we’re “real” and not AI, especially given the probable ratios?
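To put rough numbers on that intuition, here’s a toy back-of-the-envelope calculation, a minimal sketch in Python. The base population, the number of sentient sim-people per person, and the nesting depth are all made-up parameters for illustration, not claims about what would actually happen.

```python
# Toy back-of-the-envelope: what fraction of sentient observers are "real"
# if simulations can nest? All parameters below are made-up illustrations.

REAL_PEOPLE = 8_000_000_000   # assumed base population of "real" humans
SIMS_PER_PERSON = 2           # assumed sentient sim-people created per person
NESTING_DEPTH = 3             # assumed levels of simulations-within-simulations

def observer_counts(real, sims_per_person, depth):
    """Count observers at each level: level 0 is 'real', deeper levels are simulated."""
    counts = [real]
    for _ in range(depth):
        counts.append(counts[-1] * sims_per_person)
    return counts

counts = observer_counts(REAL_PEOPLE, SIMS_PER_PERSON, NESTING_DEPTH)
total = sum(counts)
print(f"Observers by level: {counts}")
print(f"Chance a random observer is 'real': {counts[0] / total:.1%}")
# With these made-up numbers, only about 1 in 15 observers (~6.7%) is "real".
```

Even with modest numbers like these, the “real” observers quickly become a small minority; crank up the simulations per person or the nesting depth and the odds get worse.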
There are ways out of this, of course. We could recognize the danger of programmatic sentience and put severe limits on it. Or we could believe that once the programs become sentient, that will be the end of us, so there will never come a time when college kids are creating AI people to do their historical analyses, play their games, etc.