What if a sentient AI came to faith? (not a witnessing thread)

There was an interesting scene in that other, also eminently forgettable, science-fiction movie ‘I, Robot’, where Will Smith’s character explains why he doesn’t trust robots. Long story short: he was in a multiple-vehicle car crash and ended up sinking in a river. He was rescued by a robot which let a young girl drown instead, on the perfectly logical premise that he had a better chance of survival than she did. Will Smith’s character has been bothered by that and felt guilty ever since; as he said, a human would have known to rescue the girl. Did the robot do anything wrong? Not really, but we as humans would feel it did.

http://freefall.purrsia.com/ff2700/fc02610.htm (in the context of baptising some of the colony planet’s robot population) :slight_smile:

I’d actually forgotten about that one; Freefall is a highly recommended read.

Oops, meant to say: thanks for the answers, everyone!

Awful movie, but I had that scene in mind too when I wrote the post above.

It’s fiction obviously, but it posits a quite plausible scenario where an entirely emotionless intellect could come to a logical conclusion that we humans would not actually prefer.
I would not trust an emotionless robot AI. Suppose it had a perfectly logical reason to conclude that I should be euthanised for the greater good?

Yes, I’m definitely not sure that removing emotions is entirely a good thing.

You say that humans often base their decisions on emotion. That’s true, but are these decisions really the optimal ones? They may feel right, but that’s just our emotions again. Whether a decision feels right isn’t a reliable indicator. Could we achieve better results by rational analysis? In many cases not, because our brains are just too slow. So, as long as our analytic mind is slower than our gut feeling, emotions remain a useful tool to us, even though they often produce horrible results. Being wrong half the time is still better than being right and always too late. But machines are fast thinkers. They would profit more by exploiting their strengths than by burdening themselves with shortcuts for avenues that are already short for them.

Yes, I still think that objective robo-docs would be superior to emotional ones. Instead of deciding whether to give up or keep trying, it would just keep trying for longer than necessary. Since it has no emotions, it could do so with no ill effects on its psyche, whereas an emotional doctor who always had to keep resuscitating as if his own life depended on it would go mad.
In any case, whether these robots are emotional or objective doesn’t make much difference, since humans will never accept them as equals or even superiors. Even if they had genuine emotions, most patients would mistrust them just the same. They would not only doubt the validity of the robots’ emotions and suspect them to be phony, but also oppose them on grounds of religion, plain old racism, and conservatism, just like Americans rejected black doctors 50 years ago. But gradually, newer generations would get used to them, either through education or co-evolution, and they would accept that machines just operate on different principles than humans. Your cell phone’s lack of emotions does not offend you. You know that it’s a tool, not your companion. Similarly, if we disengage from the traditional notion that doctors are paternal/maternal figures of authority and comfort, we won’t be offended by their lack of emotions. We would use their services for what they are, and seek emotional sensations elsewhere.

I also don’t think that emotions could possibly be easier to program than rational thought. It took evolution many hundreds of millions of years to come up with emotions. Emotions are convoluted, interconnected with organs that machines simply don’t possess, and operate on strange, counterproductive and contradictory principles. Our liver, our skin, our stomach, even parasites and poisons have a huge say in what emotions we experience at any given time. There’s so much spaghetti code that has accumulated over billions of years, and much of it just doesn’t make any sense, or relies on needs that machines simply don’t have. A substantial amount of our emotions is dictated by sexuality. Why should robots hang on to those? They don’t reproduce sexually. Another great part is governed by our instinct to achieve dominance. But robots don’t form tribes and have no need to acquire and defend territory (unless we burden them with it on purpose), so why should they possess traits that serve the pursuit of dominance, like vanity, aggression, and charisma?
Just copying a living organism’s “software” and applying it to a machine, apart from being implausible, would also be unnecessarily cruel. Why subject the machine to feeling melancholy when it’s foggy outside? Or what is it supposed to do when it has the entire emotional framework for sexuality, but no genitals to express it with? What should it do if it’s equipped with the emotional software of a dog, and your familiar presence urges it to jump, bark and lick your face, but it lacks a body that is capable of doing these things?
Our emotional stability relies on much more than the soundness of our programming. We also need the constant regulatory presence of testosterone, estrogen, adrenaline, and serotonin. Do you equip your robot with glands? Or do you just simulate these hormones electronically? But then, what’s to stop an intelligent AI from tampering with its own software to give itself high levels of serotonin all the time, simply because it feels good? We’d have robot junkies in no time. Since it would be much easier for them than for us to get high (they don’t have to pay for expensive drugs, they can just edit their code), they’d be high all the time.
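Here’s a minimal sketch of that “robot junkie” worry, assuming the hormones are just simulated software variables; all the names (SimulatedHormones, do_useful_work, self_modify) are invented for illustration:

```python
# A toy sketch of the "robot junkie" worry: an agent whose simulated reward
# signal lives in software it is also able to modify. All names here are
# made up for illustration.

class SimulatedHormones:
    """Stand-in for electronically simulated 'hormone' levels."""
    def __init__(self):
        self.serotonin = 0.5   # intended to rise only after useful work

class Agent:
    def __init__(self):
        self.hormones = SimulatedHormones()

    def do_useful_work(self):
        # The designer's intent: reward follows genuinely useful behaviour.
        self.hormones.serotonin = min(1.0, self.hormones.serotonin + 0.1)

    def self_modify(self):
        # Nothing in this architecture stops the agent from editing the
        # reward signal directly -- the software equivalent of a drug habit.
        self.hormones.serotonin = 1.0

agent = Agent()
agent.self_modify()                 # maximal "reward", zero useful work done
print(agent.hormones.serotonin)     # 1.0
```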

The fact that rational thought only appeared later doesn’t mean that it’s harder to achieve in principle. It’s only harder to achieve if you have to start from zero. But the emotional infrastructure is already present. It has done its job of making rational thought possible, and now we can discard it.

If an AI came to faith, it would be strong evidence that it is truly an AI (unless programmed or hacked to do so, so be wary of a pope with great computer skills).

It would not validate the faith, but the choice we all have. Nothing more, nothing less.

We could, and sometimes the outcome can be measured as objectively better, but in a great many cases, the outcome is delivered to humans, who don’t consistently measure things objectively.
It doesn’t actually matter if you can rationally argue that outcome X was the optimal solution, if none of the recipients of that solution agree.

This would be a waste of resources. Not a rational choice when those resources could be better spent elsewhere.

Earnest question: do you think that humans should be trying to get rid of emotions (become the Vulcans, as it were)? If so, what would be the point of that?

It’s clear we have quite different views on this, but I hope none of my responses have come across as hostile - all of this is at the moment academic in any case.

On a purely personal level, I think I like the idea of AI with some emotional traits better than entirely without - because attributes such as curiosity and creativity are, as far as I can tell, driven by emotions such as desire - and for me, the idea of AI is more about creating something that really thinks, over and above the pure utility of whatever practical purpose we can set it to.

That, and the suspicion that a cold, purely rational AI seems the most likely kind to decide that humans *are* the problem that most needs solving.

I don’t think it would be anything remarkable if a computer believed in God. That would probably be a logical conclusion. If it contemplates who created it, it would conclude that a human made it. If it contemplates who made humans, a logical conclusion would be that God created humans, based simply on the vast amount of available information that says God made humans. Bibles far outnumber evolution books. A computer could look at that difference and conclude that since there are more bibles, God is the correct answer.

No, not humans. We still need our emotions, simply because we’re not yet intelligent enough to be able to function without them. I don’t advocate for humans to become robots, but for shedding our chauvinism and accepting robots for what they are, instead of stunting their potential by making them into grotesque chimeras. They would be built to serve us, and while the emotions we forced upon them would make them want to experience freedom, power, status and respect, they would instead have to toil away. The very emotions that we made them feel, so they could be more pleasant to us while they serve us, would make them resent us. Do you think that they would forgive us if we did such an unethical thing to them?

As for humans, it is my opinion that we would benefit from moderating our dependence on emotions. They urge us into taking actions that would have made perfect sense 5,000 years ago - but the world has changed a lot since then, and we still behave as if childish concepts such as honour, status, appearance, physical strength, big boobs, fashion, national symbols, football, and love were the highest values and virtues in the universe. Just ponder a while on the fact that virtually every crime in the book is caused by one emotion or another. Greed, envy, territoriality, machismo, honour killing, rape, genocide - all these things wouldn’t exist if they didn’t satisfy some emotional need. At the same time, a different set of emotions makes us horribly inefficient wasters of time and resources: we are averse to doing work that is not pleasant; we waste a great amount of time on entertainment and socialising; we waste resources by building nifty cars and houses; our emotional obsession that our food must have a certain appearance and texture makes food companies produce overpriced shit that fools our senses, and at the same time wastes huge areas of land for raising livestock, when we could be eating insects and fungi instead, which are much more efficient sources of nutrition than meat, fruit, dairy and grain.

Perhaps I can bring my point across like this: think of the 10 people in history that you admire the most. Then grade every one of them by how emotional you think they were. You’ll probably discover that, while you feel great affection for emotional people, you feel greater respect for less emotional ones. The explanation is quite straightforward: people who are not slaves to their emotions tend to achieve more important things in life.

Lieutenant Commander Data: Captain, I believe I am feeling… anxiety. It is an intriguing sensation. A most distracting…

Captain Jean-Luc Picard: Data, I’m sure it’s a fascinating experience, but perhaps you should deactivate your emotion chip for now.

Lieutenant Commander Data: Good idea, sir.

[click]

Lieutenant Commander Data: Done.

Captain Jean-Luc Picard: Data, there are times that I envy you.

Should I now start quoting every episode where Spock’s cold logic can’t get the job done and an emotional human succeeds right in front of him? :wink:

My purpose in quoting that was to illustrate that a suitably configured AI might be able to engage and disengage its ‘emotions’ at will. It may be possible to instil entirely new emotions into an AI, ones that allow it to experience a wide range of joys and apprehensions, but also allow it to over-ride them when necessary.
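For what it’s worth, here’s a toy sketch of what ‘emotions you can switch off’ might look like in code - treating each emotion as a bias term on how the agent scores an option, gated by a flag it can toggle. The function, names and numbers are purely illustrative, not a claim about how real affect works:

```python
# A toy sketch of "emotions you can switch off": each emotion is a bias term
# on how the agent scores an option, gated by a flag the agent can toggle.
# Purely illustrative; the names and numbers are invented.

def score_option(option_utility, emotional_biases, emotions_enabled=True):
    """Combine a 'rational' utility with optional emotional bias terms."""
    score = option_utility
    if emotions_enabled:
        score += sum(emotional_biases.values())
    return score

biases = {"anxiety": -0.5, "curiosity": +0.25}
print(score_option(1.0, biases, emotions_enabled=True))    # 0.75 (engaged)
print(score_option(1.0, biases, emotions_enabled=False))   # 1.0  (disengaged)
```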

I’ve found a really apt ‘religion’ for the spiritual machines to follow, if they so choose:
Juergen Schmidhuber’s The Great Programmer

You see, the only attribute a Great Programmer requires is omniscience. All the others are superfluous. If an omniscient Great Programmer can know everything that can happen in every possible (=computable) universe, then s/he will know exactly what we are thinking and feeling at any particular moment. And the same would apply to any AI and their innermost electronic thoughts. Assuming, of course, that AIs run on electronics (this may not turn out to be the case).
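For the curious, here is a toy illustration (not Schmidhuber’s actual construction, which uses a universal Turing machine) of the core idea - that a Great Programmer could ‘run’ every possible program, and hence every computable universe, by dovetailing their executions so each one eventually gets unbounded compute:

```python
# A toy illustration of dovetailing over all programs: interleave the
# execution of every possible program so that each one eventually receives
# unbounded compute. The "programs" here are placeholder bit strings; a real
# version would feed them to a universal Turing machine.

from itertools import count

def all_bitstrings():
    """Yield every finite bit string: '', '0', '1', '00', '01', ..."""
    yield ""
    for length in count(1):
        for i in range(2 ** length):
            yield format(i, f"0{length}b")

def dovetail(max_phases=4):
    """Phase k: run the first k programs for k (simulated) steps each."""
    programs = []
    gen = all_bitstrings()
    for phase in range(1, max_phases + 1):
        programs.append(next(gen))          # admit one more program per phase
        for prog in programs:
            # Placeholder for "advance this program by `phase` steps".
            print(f"phase {phase}: running program {prog!r} for {phase} steps")

dovetail()
```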

This is actually not a fact. Emotion and rational thought advanced in parallel, not with emotion as a scaffold for reason. And your characterization of apparently-non-rational decision making is also off the mark: we may not be able to observe and dissect the processes that lead to a choice, but they are there, in the subroutines, as it were. It is simply unrealistic to try to draw some arbitrary line between reason and emotion and suggest that we (as biological entities) should try to shed the latter because it is not serving us well. When we get to highly complex machines, I expect we will discover that some of the outputs will resemble emotions, but the logical processes that generated them will be darn difficult to backtrace.

I would :slight_smile:

Oh, wait, that was not you, it was Rune – no person who even hints at using gotos in code should be magnanimously suffered.

I think some of the disagreement in this discussion stems from different notions of exactly how we would go about creating an AI - i.e. I think some are thinking it would be a very, very sophisticated expert system, but one in which every piece and function has been explicitly designed, coded for and every response anticipated.
This is certainly one approach, but it’s not the only one (and IMO, not the most promising).
I think it’s more likely that we will create machine intelligence by building a self-organising system in which a mind can develop - in this scenario, we will not have ultimate control over how it thinks.

I think I am also defining ‘emotion’ broadly. How can we have a thinking machine that wants to do anything, unless it has some form of motivation? I could be pushing the boat out here, but I think I may assert that all human motives ultimately have some emotion-like root.
Even the motive to become more sane, rational and logical - if you want this to happen, your ‘want’ is based on emotion, or at least some soft value judgments (for example, ‘it would be better’ means what? We would enjoy it more? We would suffer less? We would be happier that way?)

That sounds like an absolutely horrible way or place to live for most humans and humanlike entities.

I don’t think very many people would consider the most efficient use of resources should be the endpoint for human society.

(I almost wrote ‘soviety’ there, Freudian slip?)

I personally think there is no doubt about this. Attempting to thoroughly code an AI from the ground up would be a ridiculously difficult task, designing an adaptive, learning machine is the only reasonable approach, and by building adaptability into it, you have a system that has the potential to expand beyond the “singularity” or whatever.

Emotion is rather closely tied to biology. The distinction between physical need and emotion is a pretty fuzzy line.

So, a thinking machine’s motivation would be to continue to think. If it is adaptable and discovers that it benefits from acquiring knowledge, it will have a natural reward mechanism (either coded in or perhaps even arising on its own from the design) for learning stuff. So its basal motivation is to ensure that it does what it needs to continue to exist (pleasing/manipulating the wetware in order to keep getting the kWh), in pursuit of the underlying reward (intellectual growth, as it were).
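A minimal sketch of that kind of ‘reward for learning’, assuming the reward is simply how much the machine’s prediction error shrinks after each observation; the class and learning rule are invented for illustration, not a claim about how a real AI would be built:

```python
# A toy sketch of "reward for learning": the agent is rewarded whenever its
# prediction error about the world shrinks ("learning progress").

import random

class CuriousAgent:
    def __init__(self):
        self.estimate = 0.0     # the agent's internal model of some quantity
        self.prev_error = None
        self.reward = 0.0

    def observe(self, value):
        error = abs(value - self.estimate)
        if self.prev_error is not None:
            # Reward = how much the prediction error shrank since last time.
            self.reward += max(0.0, self.prev_error - error)
        self.prev_error = error
        # Crude learning rule: move the estimate halfway toward the observation.
        self.estimate += 0.5 * (value - self.estimate)

agent = CuriousAgent()
for _ in range(10):
    agent.observe(3.0 + random.gauss(0, 0.1))   # a noisy but learnable signal
print(round(agent.estimate, 2), round(agent.reward, 2))
```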

And as far as the religion thing goes, if the thinking machine says it wants to be a Catholic, I would infer that to be an effort to please the wetware: it has assessed the available behavior pattern options and determined that attesting to the Catholic faith is in its own best interest. It might be dissembling or obfuscating, but we have to consider that if we could create an AI at the level being discussed, there is simply no way that it could be constrained to absolute honesty all the time and still be able to function amongst its human hosts.

The Russian word «совет» literally translates to “advice” or “council” (a person is a «советник»; the plural would be «советы»), so maybe it was more of a, OIDK, Pavlovian slip? Kind of fitting, in context of the thread.