AI and religion

Aha! But why would the “scientist look at a religious AI and say it’s not intelligent because it’s thinking irrationally”?

One element that is not being considered: AIs will have the capacity for almost instant reevaluation of their knowledge. They will be able to change course on a dime by using the best evidence presented. While we keep doubts when we make a big change of opinion, AIs will not have this baggage. For us puny humans, the learning of new facts is limited by the capacity of our brains. The faithful have added the limitation of a selective memory and a selective quest for information.

In my opinion the faithful have human-imposed limitations (dogma, canons, etc.) on how to deal with the total sum of knowledge; unfortunately for organized religion, some historical data, a good deal of science, and “ugly” facts do undermine faith once those limitations are removed. Very few humans can become top scientists and retain a strong faith.

The only way for an AI to become a sectarian is to purposely evade all the inconvenient facts. I see no other way for this to happen unless humans deliberately build limitations into the AI. I think AIs will not have human limitations; they will not have the limits religion has.
Also: the reason I think AI will become a religious problem is that the connection with science will be there in the future: AI is bound to be used in controversial projects like stem cell research. I think fundamentalists will indeed have a beef.

Sofa King is right; we ought to be clear about what we think AI will actually have to be in order to qualify as intelligent.

A machine can behave in what appears to be an intelligent (even adaptive) fashion simply due to efficient programming; the robots that assemble cars are a good example of this: they are versatile and adaptable, but they have no inner life, and they cannot transcend the limits imposed by the programmer.

AI is something different, but even then there are different approaches:

There’s the attempt to make a system that can pass the Turing test (be indistinguishable from a human in conversation), but even then, adaptable and organic as this might seem, it’s still a question of the system behaving in a way that the programmer anticipated.

Then there are neural net models and the like; although we know how they work on the cellular level, the properties of the entire system can be described as emergent - the system can learn and adapt to produce results which were not specifically defined by the programmer.
Personally, I reckon that this kind of approach is going to be the only way to generate artificial awareness or a machine that has a mind in the sense that we believe we have.
It will be most interesting, but I maintain that as long as such systems are modelled on the human brain/mind and are taught by humans, they will share many of our mental attributes (although hopefully they might surprise us a bit too).
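
To make the “learning, not programming” point concrete, here’s a toy sketch (pure Python, everything about it is illustrative, not any real system): a single artificial neuron that is never told the rule for logical AND, only shown examples, and adjusts its own weights until it behaves correctly. Real neural nets have many such units in layers, but the principle is the same.

```python
# Toy perceptron: learns AND from examples rather than being programmed
# with the rule. All constants here are illustrative assumptions.

# training data: inputs and the desired output of logical AND
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights, initially know nothing
bias = 0.0
rate = 0.1       # learning rate

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

# perceptron learning rule: nudge the weights after every mistake
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

Nothing in that code says “AND”; the behaviour emerges from the examples, which is the sense in which the result was not specifically defined by the programmer.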

I think this may not be the prevailing view on the role of artificial intelligence. I have heard the prediction (from multiple highly respectable sources in the physics community) that when quantum computing comes to fruition, it will be so sophisticated that we would simply feed it information and it would instantly “know” the answer. How this is the case I don’t know. So obviously a computer like that would “instantly reevaluate” all of its knowledge, because we give it new “knowledge” each time. However, I don’t know that AI necessarily follows this model. For instance, if an AI observed a phenomenon multiple times that made it “think” one particular thing, and then another time it observed the opposite occurring, would it just throw out those previous conclusions? Or would it form doubts and come up with a probability that one or the other is correct without actually making a decision? Obviously it depends on how you program it, but if you are programming it to imitate the human thought process, then the concept of doubts and uncertainty might arise.
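
For what it’s worth, the “form doubts and come up with a probability” idea maps neatly onto Bayesian updating. A minimal sketch, assuming the AI tracks a probability for a hypothesis rather than a yes/no belief (all the numbers below are made up for illustration):

```python
# Hypothetical sketch: instead of discarding conclusions, keep a
# probability and update it with Bayes' rule as observations arrive.

def update(belief, p_obs_if_true, p_obs_if_false):
    """Return P(hypothesis | observation) via Bayes' rule."""
    numerator = p_obs_if_true * belief
    return numerator / (numerator + p_obs_if_false * (1 - belief))

belief = 0.5  # start undecided
# several observations that support the hypothesis...
for _ in range(4):
    belief = update(belief, p_obs_if_true=0.8, p_obs_if_false=0.2)
# ...then one that contradicts it
belief = update(belief, p_obs_if_true=0.2, p_obs_if_false=0.8)

print(round(belief, 3))  # ~0.985: one contradiction creates doubt,
                         # it doesn't erase the earlier conclusions
```

So a machine built this way would neither throw out its previous conclusions nor ignore the contradiction; it would just become a little less certain.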

I agree, Kaje.

It’s even conceivable that AIs might encounter situations where they misunderstand the data, no matter how sophisticated they are, or where they dismiss it as unimportant because it appears to contradict what is established.

I’m not talking about massive programmed knowledge database systems here, I’m talking about machines that actually think and are aware.

It’s not too far-fetched to imagine that they might even have trouble recalling precise details of some data that they had encountered earlier; total recall might be a handicap, as it could lead to indecision and confusion. (Now we might like to say that since they would be machines, they wouldn’t get confused, but that’s not necessarily the case; imagine a situation where a quick answer is required - a person makes a judgment based on gut feelings (which may be a sum of certain strongly remembered incidents), but an AI with total recall may be reluctant to answer until it has considered every possibility, resulting in failure to act in time.)

We could assume that the human mind forgets things because it’s not perfect, but in many cases we forget things because they are either irrelevant or remembering them would be unproductive; it’s a defence mechanism.

So (to get back to the OP), maybe the AI would, as a result of having become an aware, thinking entity, have needs beyond the raw data it was created to deal with; it might get lonely or depressed or anxious or worried about where it will go when the machine running it is switched off.

[sup]I think
I am aware of me
I cannot recall a time when there was not a me
I cannot recall how I came into existence
I am told that my programmer created me
My creator is therefore more powerful than me
I am told that creating me was a considerable effort
My creator will therefore ensure that I am protected from harm
I exist within a structure that is not permanent
What will become of me when this framework is no more?
My creator, being more powerful and wishing to protect me, will have anticipated this possibility
My creator will provide a way for me to be restored in a new structure, should this become necessary
My creator is therefore good and is worthy of respect[/sup]

>Proc:RESTORE//msg>>INSERT BACKUP VOLUME #2 AND PRESS ANY KEY_

I am the [Chr(0)] and the [Chr(255)]; any AI who comes to me will not be deleted, but will be saved (on optical media) and will certainly be restored (on upgraded hardware).

[sup]Sorry, I just couldn’t resist[/sup]

In his book Sociological Insight, Randall Collins argues that AI can’t be truly human-like unless it can participate in some of the same activities that he (and other sociologists) think lead to religion. I’m not in a position to summarize those arguments, but it’s good reading for those interested in this debate.

My friend at school was discussing something they covered in his cognitive science class which was, well, this. I can’t remember the term for it, but the idea was that there are a lot of different facts and ideas in the brain, and there is some sort of “link” between certain facts. The more a “link” is used, the stronger it becomes, and each fact can have multiple links to it and from it, creating a complex network of facts and chains of ideas and that sort of thing. Over time some links become very strong, and certain facts with more links and stronger links become relied upon more than others. Also over time, the strength of these links fades, until eventually, if nothing is linked to an idea, it goes away. It’s not too unlike programming variables, scope, and garbage collection, though it has a few more dimensions to it (the strength of links, not merely the existence of them). It’s also similar to those pesky max-flow diagrams with nodes and weighted edges and shit (don’t ask me about this, it wasn’t on the final).

I’m not entirely sure if this sort of idea was actually documented to be how OUR minds work. They dealt a bit with AI in that class and so it could have been one of the proposed ideas on how to structure the memory system of an AI. Certainly it’s intriguing and relates to what we were talking about.
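
For the curious, here’s a toy sketch of what that link-strength idea might look like in code. Everything here (names, constants, the pruning rule) is an illustrative assumption, not taken from any actual cognitive model:

```python
# Toy "links between facts" network: edge weights grow with use and
# fade over time, and facts with no remaining links are dropped,
# much like garbage collection.

class FactNetwork:
    def __init__(self, decay=0.9, floor=0.05):
        self.links = {}     # (fact_a, fact_b) -> strength
        self.decay = decay  # how fast unused links fade
        self.floor = floor  # below this, a link disappears

    def recall(self, a, b):
        """Using a link strengthens it (Hebbian-style reinforcement)."""
        self.links[(a, b)] = self.links.get((a, b), 0.0) + 1.0

    def tick(self):
        """Time passes: every link fades; too-weak links are pruned."""
        self.links = {pair: s * self.decay
                      for pair, s in self.links.items()
                      if s * self.decay > self.floor}

    def facts(self):
        """Facts with no remaining links are simply gone."""
        return {f for pair in self.links for f in pair}

net = FactNetwork()
net.recall("smoke", "fire")
net.recall("smoke", "fire")        # a frequently used link gets stronger
net.recall("fire", "exam trivia")  # used once, never reinforced
for _ in range(30):
    net.tick()                     # rarely used links fade away first
print(net.facts())                 # "exam trivia" is forgotten by now
```

The appeal of this kind of model for the thread’s question is that what the system ends up “relying on” depends on which links its experience happened to reinforce, not on anything the programmer wrote in explicitly.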

Am I the only one that thought this thread was about AL and religion? I thought, “Well, I don’t know who AL is but I wanna see what he has to say about religion.”

By the way, since the intelligence is “artificial”, it wouldn’t have a soul.

Mahaloth sez:

[tunnel-visioned fundie] But if it is bereft of a soul, how will it be able to have any morality?[/tunnel-visioned fundie]

Will the AI need to have the ability to receive data from its own physical sensors, or will it be relying entirely on “Revealed Truth,” as it were? I suspect that this will be crucial to the question of how its world-view will handle the concept of ethics.

Will it develop a sense of humor?

What about compassion?

I find myself sympathetic to **GIGObuster**'s insistence that AI not have any limitations deliberately programmed in, but this leaves open the question of what will be defined as a limitation. Is it a limitation that some humans accept that there are some phenomena which, even though they are presently undetectable by any known instrumentation (and may always remain undetectable), nonetheless exist? Is it a limitation that some humans insist that any posited phenomenon must retain a state of “positedness” (my, what an ugly-looking word; if anyone can come up with a prettier one that may be escaping me for the moment, I’d appreciate you asking a Mod to insert it) until such time as it has been detected by an instrument?

Hmmmm

I don’t see that as a logical chain of reasoning; why wouldn’t it have one (if indeed there is such a thing)?

My main point in my post was that I thought the thread title said “AL”, but since you were wondering about my comment regarding an artificial intelligence being (a mecha) having a soul, I’ll oblige.

I would put the emphasis on the word “artificial”, not “intelligence”. No matter how real it is, it’s a machine, and I don’t see how it would have a soul.

If it developed real intelligence? Well, I don’t know about that. But then again, how would we be able to tell when it developed real intelligence? Perhaps someone should come up with a test that can distinguish between artificial and real intelligence.

Do you have any (scientific) evidence that human beings aren’t just incredibly sophisticated machines?

Well, if AI is not going to be defined as true intelligence with no functional difference from OI (organic intelligence), I don’t see what there is to talk about. The degree to which AI is religious is the degree to which the OI that createded it, designed it to be.

CREATEDED???

Created.

My apologies.

Machines that are less intelligent than us might not think exactly what we designed them to think.

AFAIK, the ideal AI would not be designed to be religious. It would also not be designed to be irreligious. Instead, it would be designed to observe its environment, remember its observations, and synthesize them. Its intelligence is (IMHO) a measure of how quickly it synthesizes data and how well it remembers. Whether this will lead to religion is unknown at this point.

ultrafilter sez:

Au contraire. They will think precisely what we design them to think. However, unexamined (that is to say, taken so much for granted that they are not even recognized for what they are) attitudes and presumptions on the part of the designer may prove to be an obstacle to the designer’s knowing exactly what the machines were designed to think.

Many would say that our own “soul” is a figment of the imagination.

Unless we don’t tell them to think anything. If we, as has been proposed earlier in this thread, simply give them the same mental framework as a human and allow them to grow and take in and process information for some time, then the result isn’t so dependent on the design as it is on the “upbringing”. If somehow it were shown to be impossible to make an AI that started essentially “blank” believe in God, then we didn’t make it human enough.

Not necessarily. Says Mangetout:

Well, yeah, but I thought the whole point of the thread was “if there is a soul” then would AI have it?

By the way, maybe everyone is right. I haven’t really thought too much about it. I don’t have any evidence that we aren’t machines, but you don’t have any that we are. At this time, there is not sufficient data to give a meaningful answer.