How would the major religions approach sentient AIs?

Assume, for the purposes of this thread, that sometime in the not-too-distant future something that for all intents and purposes appears to be a genuinely sentient Artificial Intelligence has been created.*

How would the major world religions deal with the AIs themselves? In mainstream Christianity, for example, would an AI be treated as a ‘person’: someone capable of sin, who can be saved, and so on? I recall reading a tongue-in-cheek short story once in which an android was elected Pope.

If you want to bring science fiction into it, imagine someone like Data from Star Trek, though an AI of course wouldn’t necessarily have a human form, or any physical form at all. I imagine it would make a great difference whether they were of roughly human intelligence or significantly smarter than humans.

*An attempt to sidestep the whole ‘but how do we know they’re really self-aware?’ discussion.

“It doesn’t have a soul” would likely be a common starting point.

Kill it or tithe it.

And (within the context) a reasonable one; cows, for example, are sentient, but are not considered to have souls, or to be the moral equivalent of humans.

I think the question raised by the OP doesn’t present a real challenge to most religious traditions unless we reframe it: suppose we develop artificial life that shares the characteristics which the religious believe distinguish humanity from other life forms and give humanity transcendent significance. (Let’s assume that all religions do believe at least that, in one form or another.) But of course the answer is going to depend on what those characteristics are, and that may vary from religion to religion.

Either they’d say it didn’t have a soul and didn’t count, or they’d become very anti-science.

I seriously doubt that any stored-program computer can become ‘sentient’. The religious community would most likely simply challenge such a claim.

What test would demonstrate sentiency?

I think the question assumes that each religion would have a somewhat unified response. I don’t expect that would happen.

Except this, which has already been said more than once: unless it were absolutely self-evident, and indisputable even from a hostile point of view, that the AIs were exactly like humans in every way that counts to a religious person, the majority of religions would simply ignore the whole thing.

I would guess the Catholic view would be the same as for aliens: AIs didn’t fall from grace, so they don’t require Jesus to save them.

That’s assuming the AI is considered to be a person.

Slight problem: The reason AIs can’t fall from God’s grace is that a human is their Creator. :slight_smile:

I think I’m going to have to disagree with this. Human beings might be the creators of the material body of the computer, but a soul would have to come from God - just as my human parents didn’t create my soul (assuming I have one).

I think Christians would treat them largely as they do animals. Animals go unmentioned regarding their eternal reward or lack thereof, so an individual is free to believe what they want about it. The hope is that God is merciful and that AIs are by their nature incapable of sin, so it is not inconceivable that God could grant them an eternity with Him; but there is no assurance of that, only a hope.

Don’t know about other religions, but my guess is that Christianity would consider sentient AI to still be nothing but machines. Not truly alive; just very remarkable and advanced technology and computers.

What test do you administer to humans to determine that humans are sentient?

We have some pretty rocking computers, and there is absolutely nothing to indicate that they are moving towards sentience. They’re simply getting faster and more sophisticated at what they do. Nothing as yet indicates that getting ever faster is going to trigger some kind of magical transformation. Of course, there is always technology waiting to be invented. Much as in the Terminator, we may yet create our own devil.

I’m not completely sure you could devise a test to determine your own sentience. How does one determine that one’s own viewpoint is a subjective viewpoint? It seems almost axiomatic, and I’m not sure one can prove it. Consciousness - which is really what we’re talking about when we say sentience - is something that I think is beyond the purview of physicalist explanations. Why in the world does a bunch of single-cell organisms that decided to work together as a single unit (which is, at a basic level, what we are) produce a being capable of pondering its own existence? As much as I love the words ‘emergent system’, I just don’t think we’re anywhere even close to the ballpark of an answer, and I’m not completely convinced we ever will be.

All this does is reveal how vague concepts like “consciousness” or “sentience” or “thinking” or “subjective feeling” can be. But they don’t have to be.

I hereby reject the notion of philosophical zombies. That is, people who act just like regular people, who walk and talk and act as if they had subjective internal states, but don’t actually have those internal states. And this is because if you can’t tell the difference between two systems, then it makes no sense to assert that there is a difference between the two systems.

It is surely possible that there really is a difference. So in a pitch-black room someone hands you two swatches of cloth and asks you to tell which is the red one and which is the green one. You can’t tell, but you do know that if you turned on the goddam lights you would be able to tell. But in the philosophical zombie argument every attempt to turn on the goddam lights is ruled impossible, because the zombies are defined to behave exactly like regular people. Except in this one particular way, which is impossible to define, detect, or explain.

And I claim balderdash. What exactly does it mean to be conscious? It means that you not only have thoughts, but you know you have thoughts. You don’t just react, you model your own internal states as you react. You get mad when someone steps on your toe, but you also know that you’ll get mad if someone steps on your toe. You’re able to understand yourself, to some extent at least. When you talk to a human being, and threaten to step on their toe, they can explain back to you what might happen if you try to step on their toe.

So how is the philosophical zombie different? It has to be able to act AS IF it knows its own mental states. And how exactly is this different than actually knowing its own mental states? It just is, in some ineffable unknowable sense? Again, balderdash. If there’s no difference, there’s no difference.
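
For what it’s worth, the distinction being drawn here - reacting versus knowing how you’d react - is easy to make concrete. A toy Python sketch (entirely my own illustration, not anyone’s actual theory of mind):

```python
class Agent:
    """Toy agent that both reacts and models its own reactions."""

    def __init__(self):
        self.mood = "calm"

    def react(self, event):
        # First-order behaviour: just respond to the world.
        if event == "toe stepped on":
            self.mood = "angry"
        return self.mood

    def predict_own_reaction(self, event):
        # Second-order behaviour: report what WOULD happen without
        # actually undergoing it, by simulating a copy of oneself.
        hypothetical = Agent()
        return hypothetical.react(event)

agent = Agent()
print(agent.predict_own_reaction("toe stepped on"))  # -> angry
print(agent.mood)                                    # -> calm (unchanged)
```

The point of the toy is only this: “acting as if it knows its own mental states” already requires carrying a model of those states, which is the very thing the zombie is supposed to lack.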

And this is why bullshit concepts like “The Chinese Room” are bullshit. A Chinese Room that could fool you into thinking you’re talking to a person would have to have some way of remembering the conversation. It can’t just be a bunch of arbitrary tables with some random factors thrown in. It can’t just be quadrillions of transcripts of every possible human conversation, because such a thing couldn’t be contained in a room; it would take a solar system full of filing cabinets. Every day people are saying and writing sentences that no human on Earth has ever said or written before. You can’t just start writing down every possible sentence that could be input into your goddam Chinese Room along with every possible output sentence, because the sun would grow cold before you finished.
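
To put rough numbers on that “solar system full of filing cabinets” claim - a back-of-the-envelope sketch in Python, where the vocabulary size and sentence length are purely my own illustrative assumptions:

```python
# Rough count of distinct word sequences a pure lookup-table
# "Chinese Room" would need to store. Both figures are guesses.
VOCAB_SIZE = 10_000    # a modest working vocabulary
SENTENCE_LEN = 20      # words in a single sentence

sentences = VOCAB_SIZE ** SENTENCE_LEN        # 10**80 possibilities
atoms_in_observable_universe = 10 ** 80       # commonly cited estimate

print(f"{sentences:.1e} possible {SENTENCE_LEN}-word sentences")
print(sentences >= atoms_in_observable_universe)   # True
```

One sentence’s worth of lookup table is already on the order of the number of atoms in the observable universe; a table keyed on whole conversations is unimaginably worse.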

And suppose you really could do it. All you’ve done is prove you can emulate one computational system on another, vastly slower, computational system. Congratulations, Alan Turing! You were right all along.

Again, my whole complaint about this line of reasoning is that the questions aren’t well thought out. Could a computer have a soul? First tell me how you could tell the difference between a human with a soul and a human with no soul, and I could begin to answer your question. But if you can’t talk about souls in such a way that it would be possible to tell the difference, then my contention is that “soul” is a word that doesn’t actually mean anything.

Of course soul means something. It means “humans are special and will be ported/copied/cloned into another world, a lovely special world where potato chips are free (but which is otherwise identical to Schenectady).” This is a difference that can’t be detected now but will be detected by all the people to appear in this place.

The position will presumably be that when a robot dies/is crushed/is turned off/momentarily shuts off and is restored from a backup, that robot will not (also?) be restored into the ‘heaven’ simulation. This is assumed to be the case with no evidence, because no evidence is available now, but it will presumably be available someday. In the meantime their presumed lack of a soul makes them second-class citizens, and thus we have a moral imperative to crush them all with steamrollers. Really slowly crush them, so they really feel it.

If there’s no such thing as Silicon Heaven, then where do all the calculators go?

Religions would go through their usual process.

Denial. There’s no such thing, a machine can’t be sentient, it has no soul.
Anger. Kill them, or at least deny them equal rights. Don’t let them undermine family values.
Bargaining. Can we make them separate but equal?
Depression. Why are you persecuting us for exercising our freedom to hold the religious belief that AIs should have no rights?
Acceptance. It turns out that all the parts of the [insert religious text here] that we previously relied upon in persecuting AIs are metaphorical.

Judaism has already addressed this question. The consensus amongst the learned is ‘Computers do not and cannot have souls’. I disagree, based on the same reasoning as Truthseeker3. The great sages of Judaism hold that a soul (created by G-d) descends from Heaven to enter the body at birth. Why then could an AI not receive a soul in the same way upon activation?

BTW, Judaism has a legend about the Golem. A Golem is a being made from clay and, through the use of prayer and sacred inscriptions, given a semblance of life. They are very strong and useful for certain things, but they are merely tools. They have no true life and no souls. The first supercomputer built in Israel was named the Golem.

Last Rites is a wonderful story by Charles Beaumont. It has a priest called to the bed of a dying friend. Though it’s never made 100% explicit, it’s strongly hinted that the dying man is in fact an android. He’s terrified of dying without absolution from the Mother Church. ‘Would such a being have a soul? Would you perform extreme unction?’ etc.