AI and religion

For my first thread ever I want to start with a great debate, so:

AI and religion.

Artificial intelligence has been predicted before, but the experts have always been wrong about how soon it would appear. Now the logical application of Moore’s law, nano-assembly and quantum computing means that AI could appear a few decades in the future. The point when AI progress tends to increase in an exponential rather than a linear manner is usually called the singularity, because many actually do give up trying to guess what will happen to humanity and machines from then on.

My proposed debate is about how religion will react to the coming encounter, so what do you guys think? Will religion condemn science if the AIs don’t believe in God?

And as an alternate scenario: what if a religious group does control the development of AI and it becomes a God-fearing being? How soon before all the other religions try to destroy the AI and its creators? How effective would the scientific establishment be in convincing people that this is the ultimate GIGO?

And finally this whopper: At the beginning AI will be a tool, and history has shown that many religions adopt new technologies: radio, TV, the Internet (even though the Internet is full of infidels!), satellites, etc. The BIG difference is that to be an effective tool/believer the AI will need to have many logical (illogical?) limitations applied to it, to prevent it from thinking about and/or examining the new beliefs too much. This becomes a very interesting philosophical dilemma: Is religion only possible in a self-limiting intelligence? (This last question applies to both natural and artificial intelligence!)

I dunno if artificial intelligence automatically carries with it artificial inquisition. We’re definitely curious about such things, but would an AI program be? Maybe it would ask, “Where did I come from?” and then the scientist would respond, “Well, from me.” And that could very well be the end of it.

If we are somehow able to create sentience, that will be really exciting, as it will finally shut up the people who like to think that humans alone have the capability to be conscious, many of them being religious types.

If true AI ever becomes possible (and if we’re able to tell whether it’s ‘truly’ sentient), it will be interesting to see whether the machines have a tendency to believe in God. (I don’t think that’s something you could exclude by design without limiting the usefulness of the AI.)

With ourselves as their creators, maybe they will revere us as gods; maybe there will be a group of AIs that does not accept belief in our existence.

Do you really think we will see true machine self-awareness in the next few decades? I think simulated AI is more likely (and more likely to be useful = controllable).

You bring up an interesting point, Mangetout. I always thought a good idea for an experiment (I may have described this in a previous thread somewhere, but who knows) would be to take a completely uninfluenced newborn baby and remove him from any biased human contact for as long as it takes to establish his opinions on a few things. This would seemingly put an end to the debate over whether humans are naturally imbued with various beliefs or opinions, or whether these are provided by society/parents/upbringing. It would be interesting to see whether he, as a human, even bothers to actually THINK at all, or if he just goes from day to day waiting for his next meal from the things in white suits.

Now of course this would be incredibly immoral and unethical, and if anybody ever did it their findings would hardly be accepted. But you could do it with AI if it ever reaches that level of sophistication (which you would think it would, eventually). At least at the beginning, I doubt there would be AI-abuse laws to prevent this from being carried out.

Not unlike this then:

I think I read somewhere that the compound where the children were kept may have been near sheep or goat fields, and that the “bekos” sound the children made was really just mimicry of the sound “Bäää”.

It would be quite difficult to provide an unbiased learning environment (for example, theists and atheists have at least one thing in common - they all think they are unbiased)

I think that language and the learning process in general is reliant on bias (excluding some things in order to focus or concentrate on others) even if only on a temporary basis. - “Put the doll down and look at the blocks”

The most popular predictions regarding AI are along the lines of it having to be a self-organising system (like the brain) and having to ‘learn’, but of course this requires a teacher. I would venture to suggest that it is the very fact that we don’t all agree with each other that stimulates independent thought; even in fields where the facts are consistent across the group, the emphasis and importance of certain facts over others is often disputed.

However, it’s an interesting idea to try to figure out; if you have any ideas how it could be done, I’d love to hear about them.

In light of that website, I might even venture that my proposed experiment isn’t necessary or at least not for completely the same reasons.

It’s obvious from the diversity among the world’s religions that society is what gives a child its concept of the universe/God. I mean, a Native American 500 years ago (except those on Hispaniola, of course) would have absolutely NO concept of the Christian God or any religious system other than their own of spirits, naturalism, and mysticism (I know I’m grossly simplifying or possibly butchering their beliefs, but by no account is it a Christian ideology).

However, I guess this doesn’t mean for certain that a child would have no concept of this without society. Perhaps society merely overrides some sort of innate understanding of the world. Thus, the experiment must proceed.

I would try to remain unbiased simply by doing absolutely nothing to bias them. If there was any sort of human contact it would only be to feed them, not to attempt instruction or anything of that sort. And certainly you could not give the child books or any other medium.

You could also attempt to study any verified instances of children being abandoned and raised in the wild. I know I’ve read of at least one or two instances where this occurred.

And the entire idea is also not unlike Plato’s “allegory of the cave,” in which man simply assumes that what he can see is reality, regardless of whether or not there is something more to it.

It would also be important to consult open-minded leaders of various religions and have them examine your planned experiment and see if you are in some way biasing them for or against any particular religion. I say open-minded because it would do no good to talk to someone who said “Well of course it won’t work because you aren’t teaching them X-religion, the only true faith”

And GIGO, I’m terribly sorry about the extreme hijack, when I have more energy I’ll start this discussion up in a GD so that this thread will stick to your topic

After inputting all the info in the universe, the supercomputer is asked the ultimate question: is there a God?
The computer replies, “There is now.”
– From ‘Answer’ by Fredric Brown

Really, even without limitations you have to realize that an AI will not necessarily arrive at a belief in God.
Take into account that the usefulness of the AI is the last thing scientists will try to limit, and this will be what religion dislikes.
Even accepting God does not leave the AI free of the distrust of the religious mind. The next step will be to demand a specific faith, and then a specific denomination.
I mean, if you are a Baptist, would you want to have an AI assistant that is programmed as a Jehovah’s Witness?
I think the faiths of the future will deal with the AI challenge by modifying them. And then all hell will break loose, as more powerful beings will now fight our religious wars. :wink:

I do take into account that no training of AIs will be unbiased. You should remember that since the moment science appeared as a way to explain and control the elements, many discoveries have ruffled the feathers of religion, and many times religion has adapted. AI is bound to be created in a controlled scientific environment. Just like before, this new scientific endeavor will have to keep religion away from the discovery and programming phase. When and if AI is completed, that will be the time to ask the tough questions. And do the controversial modifications.

This sounds like a very interesting concept, especially raising the AI so that it would make its own decisions without bias and seeing what comes out of it.
I think it would be extremely difficult to have no influence on the thing whatsoever. After all, every interaction with it will result in some sort of bias, no matter how small. Instead, I think it would be best to open the AI up to the entire Internet. Give the whole world population the chance to influence it. After a while, with so many (millions if not billions of) biased sources, the machine would definitely have to make up a mind of its own, which will truly be unbiased.

Does that come out the way I intend it? I’m not a star at GD.

The reason I would be wary of exposing it to various people is that it assumes the correct answer, if there is one, is one that’s already been found. The computer will either be forced to accept one already-established idea, or completely reject everything and be back where it started.

I also wonder what might happen if we gave this AI digital variants on our various senses, cameras, microphones, and the like, and dropped it off out in nature for a while. Would it come to think that the trees and rocks are Gods? Would it believe in Spinoza’s God “who reveals himself in the orderly harmony of what exists” (Einstein’s words, not mine)?

Also, to expose it to people’s existing ideas, you would have to program into the AI the concept of lying, and I’m not sure that’s such a good idea. If we’re going to try to engineer a better being, let’s do that.

I think you’d just end up with a hopelessly confused AI that believed that the moon landing was a fake, but had a deep-rooted appreciation of skin tones.

The human brain has a certain spot that you can poke that will create a religious experience. We are essentially hardwired for it- and that would explain why religion of some sort is so universal.

Beyond that, humans have so many unknowns. Who made us? Why are we here? When confronted with so many questions, religion is sure to establish itself.

AI, however, will have neither. They will know exactly who made them, and exactly when and why. I don’t think they would have a need for religion.

Trying to preprogram an android’s human-like psyche with an endless series of if-then-else statements is the hardest way to go about giving it human-like consciousness - the possibilities of existence are limitless, and there is no way to prepare a machine for all of them by describing them one at a time. Sooner or later, the machine will encounter an anomalous situation and simply freeze. In my opinion, this route is unnecessary, too.
What makes the most sense to me is programming in only the same firmware you find in a human brain (e.g., the instinct to suck, the tendency to imprint on caretakers as parents, a natural curiosity). Once you activate the machine, it is essentially “born” and, like a human baby, quite incapable of surviving on its own initially. However, given time, the android’s mind (presumably based on a neural-net processor or something along those lines) will eventually learn to make sense of its environment, and will learn as fast as - most likely faster than - a human child. Thus, we will have created a child that grows mentally much like a human child. Given enough refinement, I think the two would be distinguishable in name only.
When it comes to religion, their opinions will differ as widely as ours. Regarding what kind of “people” they will be, once again, you’ll see a wide variation. Some will be artists. Some will be statesmen. Others will be homemakers, and still others will be mass murderers. The chance we take with the creation of artificial life is the same chance any parent takes with a newborn child - will my child grow up right? What if I make mistakes as a parent? What if he/she begins to hate me? The same questions that rack a typical parent’s mind will be re-asked, the implications magnified tenfold. How do we ensure the androids will grow up the way we want them to? What if we give them buggy firmware? What if they turn on us? There are no guarantees in creation, but rest assured, it will happen. Whether they come when I’m twenty-one or seventy-one, it’ll happen.
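The contrast between the two approaches in that post can be sketched in a few lines. This is only an illustration (the situations, features, and data are all made up): a fixed rule table fails on any case its author didn’t anticipate, while even a trivial learner generalizes from the examples it has seen.

```python
def rule_based(situation):
    """Hand-coded if-then responses: every case must be anticipated in advance."""
    rules = {"hungry": "eat", "tired": "sleep", "cold": "seek warmth"}
    if situation not in rules:
        # the "freeze": an anomalous situation no rule covers
        raise RuntimeError("anomalous situation: no rule applies")
    return rules[situation]

def train(examples):
    """A minimal learner: just memorize (features, action) pairs."""
    return list(examples)

def respond(model, features):
    """1-nearest-neighbour: pick the action whose training example is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], features))[1]

model = train([((1.0, 0.0), "eat"), ((0.0, 1.0), "sleep")])
print(respond(model, (0.9, 0.2)))  # a novel input, but nearest to "eat"
```

The learner was never told about the input `(0.9, 0.2)`, yet it produces a sensible answer; the rule table would simply have no entry for it.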

naturally, that begs the question: do we have a need for religion? i think human beings have a natural curiosity regarding the meaning of it all, of existence and our place in it. we’re hardwired for wondering. but as a few days reading sdmb posts will show you, not everyone has a need for religion.
further, i know “exactly” who made me (my parents), when (about nine months before my birth), and why (they wanted one). a machine would know these things about itself, too.
i know this is not what you were saying, but my point is that even when you know “exactly” where you came from, that still leaves the “big questions” and “great debates” unresolved. i mean, if the machines are built like us and think like us, they, too, will be curious to no end. and they might look to us, in their infancy, and ask if there is anything more to life. to their dismay, we will not answer in one voice. some will scream god is sovereign, some will scream god is dead, some will scream he never existed, and others will scream for some sort of eastern philosophy.
the machines, like children watching their parents fight and throw things at each other, will only be scared and confused. but they’ll grow up and eventually learn, as most of us did, that their parents were not the gods and superheroes they’d hoped for. rather, they fell far short. and the machines will wonder on their own, just as we do.
telling a machine its “activation date” doesn’t take away its need to search for the meaning of it all. the fact is, if they think like us, they will ask the questions we ask here every day, and when we are unable to answer, they will think for themselves.

Perhaps you have some twisted vision of what AI is to become, but there is no need for these programs to have all our mental constructs and peculiarities. If you’re simply using AI as an experiment to understand what’s going on in OUR heads, then fine, but there’s no need to let that sort of thing get out of hand and suddenly begin creating a whole race of mecha-humans.

The more logical role for AI to take, and the one it’s going to take when it appears, is that of the slave. It will do the jobs that people were doing before. It will simply be the next stage of automation. Perhaps eventually it will take over a broad portion of the workforce, leaving, as has been projected before, the majority of humanity to soak up dividends from stocks in these automated corporations while living the good life. Does this sound like an anachronism to you? Analogous to a slave trader living the easy life while his African servants run everything? That’s precisely why we don’t want them to be robo-humans.

We have enough humans as it is; if you want more humanity, have more babies. If you want a highly efficient, definitively loyal, and utterly ideal workforce (and in turn world), then you want to give your machines enough intelligence to perform their tasks without any need to give them these “uniquely human” thought processes. If you never program them to think in the same manner we do, and leave them as intelligent automatons, you don’t have to worry about the ethical repercussions of making them do all your work, and you certainly wouldn’t have to reimburse them in anything other than electricity.

There’s no need to create a new caste in society, just a new type of automation. If you simply must create “ALife”, keep it restricted to understanding ourselves, not simply playing God.

Fuck, I’m dead tired… I hope anything in this post comes out coherent.

Not if there’s any truth in what I’ve been reading; the AI won’t come into existence with any built-in knowledge, it will need to be taught (and possibly persuaded).

Essentially, if the AI in question is to be a faithful simulation of the human mind/brain, then it will behave like one, including the need to ‘look beyond’; if it is not a faithful simulation (i.e. it’s a novel type of intelligence), then it may think very differently and have very different needs.

Probably the best people to design novel intelligences are SF writers.

Xref Religion/Spirituality and the brain - the consensus seemed to be that it isn’t quite as simple as a spot that you can poke.

Chill out. I never said we needed to create artificial life, nor did I rule out the viability of AI simply as an extension of automation.
If you’d take the time to read above, the OP was in regard to a machine capable of believing in (and/or rejecting) a specific worldview. A machine that has no semblance of human nature and no desire to ask/answer the “big questions” doesn’t apply here.
We already have intelligent machines that do work for us. They build our cars. They autopilot our planes. They analyze data. But as they are basically automatons, there are no real ethical issues tied to them. The issues the OP brought out would only arise with the advent of a machine capable of independent thought and of making decisions regarding its life. I was simply drawing attention to the side issues connected with such an endeavor.

The question is, if you’re not going to model the AI on [sub][something at least fairly similar to][/sub] the [sub][probably human][/sub] brain, then how are you going to do it?

At the moment, the brain is the only mechanism we know that gives rise to sentience (and we don’t really understand how yet*)

*[sup]Sure, there are plenty of hypotheses, but try finding two that agree.[/sup]

Incidentally, how big is the biggest neural network that we can currently simulate (in software)? Surely it’s just a matter of storage space rather than raw processing power, provided you don’t require your AI to run in real time? (I’m thinking Von Neumann here.)
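For scale, here’s a back-of-envelope calculation. The neuron and synapse counts are the commonly quoted ballpark figures (~10^11 neurons, ~10^4 synapses each), not measurements, and one 32-bit float per connection weight is an assumption:

```python
# Rough storage estimate for simulating a brain-scale neural network.
neurons = 1e11              # ~100 billion neurons (ballpark figure)
synapses_per_neuron = 1e4   # ~10,000 connections each (ballpark figure)
bytes_per_weight = 4        # one 32-bit float per connection weight

weights = neurons * synapses_per_neuron      # ~1e15 connection weights
storage = weights * bytes_per_weight / 1e15  # in petabytes
print(f"~{storage:.0f} PB just to hold the weights")  # ~4 PB
```

And the flip side of the “no real time” point: one full update pass costs roughly one multiply-add per weight, so a machine doing F operations per second needs on the order of 10^15/F seconds per pass - however slow, it still finishes, which is exactly the storage-versus-speed trade-off being suggested.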

I’m surprised that nobody has kicked the definition can yet.

I liked Scientific American’s take on the question a couple of years ago. Prior to 1903, learned people worldwide were at least vaguely familiar with balloons, gliders, dirigibles, man-carrying kites and rocket-riding Turks, yet influential people everywhere refused to call it flight. And after 1903, many of those same people spent a lot of time re-working the definition of flight to exclude the Wright Flyer. Perhaps if it had quacked…

What I’m trying to say is that this debate will likely continue long after computers can equal or exceed humans in any of the criteria we now hold to be the definition of “intelligence”.

The religious will look to an atheistic AI and will say it’s not intelligent because it doesn’t believe in God.

The scientific will look at a religious AI and will say it’s not intelligent because it’s thinking irrationally.

And woe betide the poor thing if it decides to be a Mormon, because then practically everyone will think it’s stupid, or crazy, or both.

I’m with you on the debate over AI continuing long after it has reached human levels, but I don’t think it’ll focus completely on religion.
A lot of people will simply think the machines are imitating human nature. Even if one says it is religious/non-religious, I think most people will immediately be tempted to look around and say, “Who told it to say that? Who? Come on out now.” They’ll say that even if it expresses the slightest notion of preference. “Green is my favorite color.” Some will wonder how the hell a machine goes about choosing its favorite color, whether or not the answers are being “planted” (e.g., the way Kasparov felt Deep Blue’s team was somehow cheating), etc.
But yeah, once it’s set in that machines can think, they’re in for some flak once they start having real opinions. But hey, it’s something we have to deal with every day.