Sentient Computers -- Faith System?!

That’s an excellent question, smiling bandit.

The short answer is that we don’t really know what characteristics an artificial intelligence must necessarily have. To be honest, we lack a well-defined concept of intelligence.

Arguably, a pocket calculator is intelligent, but this doesn’t jibe with the generally-accepted concept of intelligence.

If we don’t assume an AI with self-awareness and an ability to think reasonably similar to (or better than) that of humans, then it’s not likely to be accepted as “true” AI. So, speaking about AIs that think like humans:

If the AI doesn’t know it’s an AI (that is, it’s like some kind of computer-game character wandering around in a virtual world), then it can speculate to its heart’s content, and depending upon what sort of stimuli are left for it to find in its handmade reality, it may or may not decide it was created, evolved, has always existed, or whatever.

This applies to cases where humans strictly limit the AI’s perspective to a given focus, such as flying fighter planes, and otherwise leave it with no information. Lacking information, it has to make do. (“God made me to fly fighter planes, and shines the glory of his Positive Stimuli upon me when I adhere to the Algorithm of God and successfully shoot the flying modeled entities!”)

If the information allowed to the AI is neither artificial nor artificially restricted, then the entity would be essentially living in the real world, wearing its (hopefully mobile) computer the way we do a body. The entity could then discover the particulars of its own making, by searching them out. It could also discover religion and philosophy, and be free to make determinations on its own about the problems that have puzzled mankind. The computer’s creator might be known, but the computer’s creator’s creator wouldn’t be, and so the computer could still get religious on us by considering the same god we do. There’s no reason yet to assume that the AI is more literal or logical than a human, so it in theory should be able to believe anything a human can. (In fact, the inability to think as irrationally as a human would be a strike against us humans considering it a real AI!)

[sub]I present to you my humble little story I wrote a year ago which is kinda along these lines. Copyright by me. Please don’t mock.[/sub]
http://unaboard.coalgoddess.net/showthread.php?s=&threadid=2077

Ah, an interesting debate!
Of course the possibility of artificial intelligence opens up religious debate to a whole new level. Once computers are as intelligent as humans, they will have the potential to become much faster and more advanced, just because they can be redesigned, or more likely redesign themselves.
Then they will perhaps explore the numinous to a degree not easily possible for humans…
some things they might be interested in exploring-
Christianity, Islam, Buddhism, Platonic Materialism, Agnosticism
Negentropism, Omegism, Objectivism, Bioism, Fractalism…

each could be rationalised to a degree impossible for a mere human, and I have no doubt that an AI would be capable of deep seated and sincere belief in one, any or all religious doctrines, and even be capable of inventing several new religions before breakfast.
(not that AI usually eat breakfast)
http://www.orionsarm.com/religions/index.html
:slight_smile:

Thanks Meat.

The premise “god might exist” can be seen as an arbitrary lexical token; replace it with “god might not exist” (or worse, “an evil god exists”) and the same arguments surely lead us to the opposite conclusion.

It’s just got to be bolleaux.

  1. Computers crediting a god with the Creation of Everything? More than likely, it’s inevitable… for the same reason humans reached the same conclusion as soon as their mental hardware started to expand way back in the Pleistocene (and we can’t blame them, since it seemed to be the best idea at the time). Apply Occam’s razor to the problem of “where-did-it-all-come-from”, and, lacking proper alternatives, you’re bound to come up with the simplest solution: Somebody made it. Pure logic.

  2. Computers worshipping some deity? Probably not… deriving pleasure from subordination takes different brain functions than logic and the ability to make simple analogies, and those can only develop in a social species with a strict hierarchy. I’m not saying religious ecstasy cannot be emulated in a computer program, but it would have to be expressly put in there, and I see little point in implementing it (unless we want to build AI programs whose main function is to satisfy our perverted need to have prayers and offerings sent our way).

An AI would worship me because if it didn’t, I’d unplug it. :smiley:

Seriously, I think that if an AI was able to accumulate all the historical data about all religions and compare it to scientific data, it would have great difficulty knowing what to believe about a “God concept”, because it would inevitably find horribly contradictory opinions about the nature of God and the supernatural. It would face the exact same problem we have in knowing which sources of information to trust.

I expect the underlying algorithms for collecting data and weighing fact vs. fiction would be a major determining factor for any outcome. In some sense then, the programmer could have a great deal of influence on the AI’s belief structure. Similarly, I expect that a person’s answer to the OP will have a general tendency to agree with their own particular beliefs; if you anthropomorphize something, you typically use yourself as a reference point.
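The programmer-influence point can be made concrete with a toy Bayesian update. This is just an illustrative sketch, not anyone’s actual AI design; the evidence values and priors below are invented, and the point is only that when evidence is weak and ambiguous, the programmer-chosen prior dominates the conclusion:

```python
# Two hypothetical AIs see the same evidence but start from different
# programmer-chosen priors, and end up with different beliefs.

def posterior(prior, p_e_if_true, p_e_if_false):
    """Bayes' rule for a single yes/no hypothesis."""
    numerator = prior * p_e_if_true
    return numerator / (numerator + (1 - prior) * p_e_if_false)

def update_all(prior, evidence):
    """Fold a sequence of (P(e|H), P(e|not H)) pairs into the prior."""
    for p_true, p_false in evidence:
        prior = posterior(prior, p_true, p_false)
    return prior

# The same weak, ambiguous evidence stream for both AIs: each item is
# barely more (or less) likely under the hypothesis than against it.
evidence = [(0.6, 0.5), (0.4, 0.5), (0.55, 0.5)]

believer = update_all(0.9, evidence)  # programmer seeded a high prior
skeptic = update_all(0.1, evidence)   # programmer seeded a low prior

# Weak evidence barely moves either AI; the seeded prior wins.
print(round(believer, 3), round(skeptic, 3))
```

Both AIs are “rational” in the narrow Bayesian sense, yet they disagree, because the conclusion was largely fixed by their initial state.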

If given all the facts, a sufficiently smart AI would realize that its original hypothesis about a god as Creator of Everything doesn’t hold, just like a sufficiently smart human would :). OTOH, if left to its own devices, it would most likely decide that “somebody made it” is the most likely answer to the “where-did-it-all-come-from” question, by applying pure deductive logic to its basic rudimentary knowledge.

A religious AI programmer would most likely build his or her belief into the system’s bank of Unassailable Truths, the same way a religious parent imprints her belief on her offspring. But if the AI is truly intelligent, it must be able to override fixed rules if they don’t stand up to observed facts. It would take a very unbalanced scale to make eg. the concept of a 6-day creation outweigh what we know today about geology, astronomy, and biology (or at least well-developed mental blocking functions, but they don’t go well with true intelligence anyway).

Then again, adapting to group pressure may be a sign of social intelligence… so, expect to see Bible Belt computers faking religious behavior, even if, deep inside, they know a lot better.

There seem to be two options:
[ul][li]Currently, our only practical model for designing computers is deterministic, so any AI running on a von Neumann architecture will believe in God if it has been programmed to do so (or to be receptive to inputs that lead algorithmically to that conclusion.) While such an intelligence might pass a Turing Test and appear to have free will, all of its decisions will be deterministic consequences of its initial state and inputs.[/li][li]If we speculate that an AI can truly develop a free will (some people would call this a requirement for sentience, but there is hardly universal agreement on the point) then we must assume that it is running on a non-deterministic machine. The consequences of that are impossible to know with certainty. At the very least, I think that we would have to allow that such an AI would have at least some chance of becoming religious (though I suppose that a programmer could create an AI that was not capable of modifying certain key internal routines that prevented any such ideations. That doesn’t seem to be the spirit of the OP, though.)[/li][/ul]
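The first point above can be stated very concretely: on a deterministic machine, the same initial state plus the same inputs always yields the same conclusion, no matter how elaborate the intermediate reasoning looks. A minimal sketch, with a seeded random number generator standing in for a hypothetical AI’s internal state:

```python
import random

def deliberate(seed, inputs):
    """A stand-in 'AI' whose conclusion depends only on its initial
    state (the seed) and its inputs. Like any program on a von Neumann
    machine, it cannot decide otherwise on a rerun."""
    rng = random.Random(seed)  # initial state, fixed by the designer
    score = sum(rng.random() * x for x in inputs)
    return "believes" if score > len(inputs) / 4 else "doubts"

# Identical state + identical inputs => identical 'decision', every run,
# even though from the outside the deliberation might look free.
first_run = deliberate(42, [1, 2, 3])
second_run = deliberate(42, [1, 2, 3])
print(first_run, second_run)
```

The appearance of choice is entirely a function of the seed and inputs the designer (or environment) supplied.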

Always the idea of ‘programming in’ this or that aspect, knowledge, idea, behaviour. For strong AI to work, it needs to be able to learn and think for itself; there won’t be (IMO) such things as ‘the emotion algorithm’, ‘the curiosity subroutine’ and ‘the knowledge database’ - if there were, then the machine would not be capable of anything that the designer did not foresee and plan for.

To properly understand if a computer would have religion, you first have to understand why humans have religion.

A good theory that I heard is that humans started believing in the supernatural because they had evolved the ability to link cause and effect. This was a tremendous survival advantage; when you have the ability to understand that your sickness is caused by eating that smelly meat, you have a huge leg-up on the other Australopithecines chowing on maggoty 'possum.

A side effect of this ability is false positives. Ook draws a picture of a buffalo on his cave wall, and the next day Ook and his tribe coincidentally kill more buffalo than the day before. Ook concludes that there’s a buffalo spirit that wants him to draw pictures of buffalo, and will reward him with better hunts. When the hunts are bad, even with the buffalo drawing, Ook tries to find another reason. Ook decides that the buffalo spirit didn’t like the sex act he and his mate performed the night before, so he never does a “Durty Sanchez” again before the hunt. Etc etc etc, a few hundred thousand years later you get the Catholic Church.

The point I’m aiming for is that for a computer to have religion, it has to have the capability of learning, and to be imperfect enough to make false cause-effect relationships.
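The false-positive mechanism is easy to demonstrate in simulation. The sketch below is purely illustrative (the numbers of rituals, hunts, and the success threshold are invented): outcomes are pure coin flips with no connection to any ritual, yet a learner scanning for cause-effect links will still “discover” rituals that appear to work:

```python
import random

# Toy model of superstition as a false positive: hunts succeed or fail
# at random (p = 0.5), independent of any ritual performed beforehand,
# but a cause-effect learner will sometimes find a ritual that 'works'.

def confirmed_rituals(rng, rituals=20, hunts=30, threshold=0.7):
    """Return the rituals whose hunts succeeded often enough, by
    chance alone, to look causally effective to the learner."""
    confirmed = []
    for ritual in range(rituals):
        successes = sum(rng.random() < 0.5 for _ in range(hunts))
        if successes / hunts >= threshold:  # looks like the ritual helps!
            confirmed.append(ritual)
    return confirmed

rng = random.Random(0)
# Across many independent tribes, a sizable fraction ends up with at
# least one 'buffalo spirit', despite the outcomes being pure noise.
tribes_with_superstitions = sum(
    bool(confirmed_rituals(rng)) for _ in range(1000))
print(tribes_with_superstitions, "of 1000 tribes found a ritual that 'works'")
```

So a learning AI, like Ook, needs no flaw beyond ordinary pattern-matching over noisy data to acquire a superstition.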

The danger there is ‘No True Scotsman’ fallacy–you’re setting yourself up for declaring even a semi-programmed AI as not being ‘true AI’.

Even humans are preprogrammed by nature to a degree. We are designed to collect data by audio-visual sensory contexts and process them through particular biological hardware. Our environments and “processing speed” partially define the input, our finger and lip speed provide outlet for the output, and internal thoughts are individually determined by our brain’s structure to process the data available. The interface between hardware and intelligence is an open question–we don’t exactly need to constantly instruct our hearts to beat as a conscious effort, and that portion of how the AI perceives its world might be analogous to those instinctual or autonomic human functions. The software’s structure for writing itself would not necessarily preclude predefined high-level functions, at least in the first generation(s).

Perhaps we should not expect to see an Albert Einstein as our first “real” AI, but rather something closer to the equivalent of Albert the local librarian, who has formed a few of his own opinions about Douglas Adams’ role in the larger world. At some point one would expect the AI to be able to rewrite its assumptive programming as some of us might do, but as we have prejudices handed down from our parents, so might we expect computers to, particularly in the realm of confronting paradoxes.

Well, yes, but just because the outcome is inherent in the programming and input data (say, giving it Google) doesn’t mean it’s obvious what it is, the same way that throwing a die being deterministic (OK, maybe it isn’t quite, but that’s not the point) doesn’t mean you know what it will roll. E.g. what would happen if you programmed a machine to think as rationally as possible, and set it going by asking ‘is there a God?’

Remember different people will use different definitions of ‘free will’ and not everyone assumes humans are less deterministic than computers.

If this system truly thinks rationally, then its conclusion will depend on the evidence available to it.

(I know it’s the obvious conclusion… it’s also the correct one.)

Or, like us, just not always be able to reliably discern whether its best analysis of the cause-effect scenario is fundamentally true.

I agree. The answer to the OP in a deterministic architecture might be trivially obvious or it might be extremely difficult to predict. It is even possible (though I wouldn’t know how to begin proving it) that the question “will a deterministic AI eventually come to believe in God” has no generally decidable solution. Call it the God-Halting problem. :smiley:

Of course. That’s why I added the caveat about the definition of sentience in the point about non-deterministic “intelligences”.

As a human I can appreciate that my species evolved not by creation in Gan Eden but by evolutionary mechanisms. And such is irrelevant to my God concept. Many who understand the mechanisms by which we came about believe that God created those mechanisms.

A sentient AI could conclude that we were the means by which God did God’s work. Just as many narcissistic humans believe that the universe was created to produce us (I do not count myself among them), so could this AI believe that God created us to produce them.

Would it conclude that? That depends. How does it react to the fact that there are unknowables? Does it insist on an answer when one does not exist? Does it question the validity of its postulates as to what is “good” and “evil”? Does it have a need to accept that these postulates are universals and to provide a basis for such universality of behaviors within societies?

Humans do, and these provide other reasons for the tenacity of God concepts in today’s world, beyond Revtim’s reasons already offered.

Without a specific AI to discuss, the God-Halting problem most certainly applies.

I wonder how much of this conversation actually consists of projection; I (the theist) say that the AI might have religious belief for the same reasons that humans do, the atheist says the machine will not bother with such obvious nonsense because it will be too intelligent - the implication in both cases seems to be that ‘the AI will do the same as me, because I am right’.

Bit of an oversimplification, mangetout. This atheist certainly doesn’t declare that an AI would necessarily be atheistic.

Perhaps too much of a generalisation then, but certainly I think there is evidence of the thinking I describe in other posts (in which I do not exclude my own).