AI and religion

We have pretty much every reason to believe that we are just very complicated organic machines. Every behavior and every feature we have can plausibly be explained as the product of mechanical, evolutionary processes.

And the OP asked what religion would think of this AI. If it reached the level of sophistication that our mind possesses, then it could very well be convinced that it has a soul, except that by the time we reach that level of sophistication, we will hopefully realize that what we consider a soul or a consciousness is just electrochemical activity in the brain.

Regardless of whether there is such a thing as a soul, a proper AI may end up believing it has one.

I can’t stress enough how important it is to understand that we’re not talking about machines programmed to do certain things in response to certain stimuli (we have those machines already and they aren’t truly intelligent, just cleverly designed). We’re talking about systems that organise themselves and have to learn; in a sense they transcend their original design, although strictly speaking they don’t, because their design doesn’t formulate a fixed set of possible actions, or even anything at a higher level. Their design gives them the building blocks of a system that can contain intelligence - i.e. we design an electronic analogue of the physical brain, but we don’t define the thoughts that happen inside it.

It’s as incorrect to say “They will think precisely what we design them to think” as it is to say “My children will always do exactly as they are told” - if the machine only does exactly what someone defined for it, then it isn’t intelligent.

And as to whether we will be able to tell if they have any ‘real’ inner life: we won’t, of course, but this sort of assertion has been made about people by philosophers for ages (can you tell that I have the same kind of inner thought-life that you do? Can you tell if I see colours the same way you do? No, you can’t; you can only see the outward workings of ‘me’, but I could just be a convincing simulation of a real personality, with no inner identity at all).

I have played a little with neural net models; there are some interesting bits of freeware out there.

One that I played with was set up to do simple character recognition. It had to go through an extensive learning process for each letter: once it had learned to reliably recognise ‘A’, it went on to learn ‘B’ (the program comes with a ‘teacher’ program that isn’t intelligent, but merely rates the accuracy of the answers from the neural net program and provides feedback). In the process of learning ‘B’, the weightings on the nodes within the network are adjusted, and this causes it to ‘forget’ how to recognise ‘A’ (or rather, to become less accurate at recognising it), so it has to ‘revise’ (go through the learning cycle again). Eventually, it ends up with a set of weights on the nodes that enables it to recognise the whole alphabet.
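Just to make that concrete, here’s a very rough sketch (in Python with numpy, nothing to do with the actual freeware I used) of that learn-forget-revise cycle; the 5x5 letter bitmaps and all the numbers in it are made up purely for illustration:

```python
# A rough sketch of the "learn, forget, revise" behaviour, using a tiny
# one-layer net in plain numpy. The bitmaps and parameters are invented.
import numpy as np

rng = np.random.default_rng()

# Crude 5x5 bitmaps standing in for 'A' and 'B' (purely illustrative).
A = np.array([[0,1,1,1,0],
              [1,0,0,0,1],
              [1,1,1,1,1],
              [1,0,0,0,1],
              [1,0,0,0,1]]).ravel()
B = np.array([[1,1,1,1,0],
              [1,0,0,0,1],
              [1,1,1,1,0],
              [1,0,0,0,1],
              [1,1,1,1,0]]).ravel()

X = np.stack([A, B]).astype(float)          # the two training patterns
Y = np.eye(2)                               # target: one output node per letter

W = rng.normal(0, 0.1, size=(25, 2))        # random starting weights
b = np.zeros(2)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(x, y, steps=500, lr=0.5):
    """Nudge the weights toward producing target y for input x."""
    global W, b
    for _ in range(steps):
        out = sigmoid(x @ W + b)
        err = out - y
        W -= lr * np.outer(x, err)
        b -= lr * err

def score(x, y):
    return 1 - np.mean(np.abs(sigmoid(x @ W + b) - y))

train(A, Y[0]); print("after learning A: A score", round(score(A, Y[0]), 3))
train(B, Y[1]); print("after learning B: A score", round(score(A, Y[0]), 3),
                      "(it has partly 'forgotten' A)")
# 'Revision': go through both letters again, interleaved.
for _ in range(500):
    for x, y in zip(X, Y):
        train(x, y, steps=1)
print("after revision:   A score", round(score(A, Y[0]), 3),
      "B score", round(score(B, Y[1]), 3))
```

The point being that nobody writes the final weights; they just fall out of the feedback loop with the teacher.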

But here’s the important point: the original programmer only devised the neural net model; he has no idea what the node weight values will be once it can recognise the Roman typeface. Moreover, because the starting values for the node weights are random, the system isn’t deterministic; start it running on two different machines and it will likely end up with a different set of weights on each, even though it’s doing the same macroscopic function.
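Here’s a tiny illustration of that non-determinism (again a made-up toy in Python, not the real program): the same training task, started from two different sets of random weights, typically ends up with two different sets of learned weights even though both runs classify the patterns the same way:

```python
# Same task, two random starting points, two different final weight sets.
# The toy data and all values here are invented for illustration.
import numpy as np

def train_once(seed, steps=2000, lr=0.5):
    rng = np.random.default_rng(seed)
    # Toy task: say 'yes' when more than half the input bits are set.
    X = np.array([[1,1,1,0], [1,1,0,1], [0,0,1,0], [1,0,0,0]], dtype=float)
    y = np.array([1, 1, 0, 0], dtype=float)
    w = rng.normal(0, 1, size=4)             # random starting weights
    b = 0.0
    for _ in range(steps):
        out = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (out - y) / len(y)
        b -= lr * np.mean(out - y)
    return w, (out > 0.5).astype(int)

w1, preds1 = train_once(seed=1)
w2, preds2 = train_once(seed=2)
print("run 1 weights:", np.round(w1, 2), "predictions:", preds1)
print("run 2 weights:", np.round(w2, 2), "predictions:", preds2)
# Both runs get the answers right, but the learned weights (typically) differ.
```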

It isn’t possible to look at the network and understand how it’s coming to the right answers, and it isn’t possible to point to a certain group of nodes and say ‘that’s the bit that recognises this or that character’. The system is a truly open-ended design; with the very simplest modification to the teacher program, it could be set up to recognise different alphabets or shapes, etc.

Similar experiments have been done with shape recognition in other areas. A neural net model was ‘taught’ simply to give a boolean yes/no answer as to whether a shape was aesthetically pleasing or not (the teacher program had been supplied with two sets of shapes that had been defined as ‘nice’ and ‘nasty’ by real humans). Once the model was reliable in recognising the full set of shapes, it was shown new shapes and gave its ‘opinion’, and it turned out to be quite reliable in predicting how shapes it had never seen before would be received by human aesthetic judgement.
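If you wanted to play with the idea yourself, a very loose stand-in might look like this (using scikit-learn rather than a hand-rolled neural net, and with invented ‘symmetry’ and ‘jaggedness’ features instead of real shapes; none of this is the actual experiment):

```python
# Very loose sketch of the 'nice'/'nasty' idea: train on human-rated examples,
# then ask for an 'opinion' on shapes the model has never seen. Data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# features: [symmetry 0..1, jaggedness 0..1]; labels: 1 = 'nice', 0 = 'nasty'
train_shapes = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.1],   # rated 'nice'
                         [0.2, 0.9], [0.1, 0.8], [0.3, 0.7]])  # rated 'nasty'
ratings = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(train_shapes, ratings)

# Shapes the model has never seen before: it offers an 'opinion' anyway.
new_shapes = np.array([[0.85, 0.15], [0.25, 0.75]])
print(model.predict(new_shapes))   # likely [1 0]: 'nice', 'nasty'
```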

Both of these examples were built around neural net models that are minuscule in comparison to even a mouse’s brain, but if they could be scaled up to the same number of nodes (or more) as the human brain and given the right sort of inputs, the results could be very interesting indeed.

We’ll never know whether we have a machine that has any inner awareness of itself or whether it just looks that way; there will be no way to tell. Even watching the values of each and every one of the node weights in the entire model would only be like a minutely detailed brain scan on a human (which would only show processes happening, not ‘thoughts’).

Anyway, if there is such a thing as a soul, there are many different opinions as to how it comes into being. It could be argued that the process of creating an intelligent being gives rise to a soul, or that the process of creating a ‘mind’ establishes a suitable place for an eternally-living soul to come and inhabit, but let’s have that discussion in another thread sometime.

You’re right; it’s not known yet whether humans are machines or something more. Nonetheless, there are philosophical reasons to think that they are.

In computational theory, we use a model called a Turing machine. The exact details of the model aren’t terribly important, save that it can be finitely described. The curious are advised to do a Google search. The Turing machine is more powerful (in terms of what it can do; questions of efficiency are irrelevant) than any computer we could ever build.
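To give a feel for what ‘finitely described’ means: a whole Turing machine is nothing more than a small transition table plus a tape. Here’s a toy one in Python (my own illustration, not anything from the literature) that just inverts a string of bits and then halts:

```python
# transitions: (state, symbol_read) -> (symbol_to_write, head_move, next_state)
transitions = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # '_' is the blank symbol
}

def run(tape_string):
    tape = dict(enumerate(tape_string))   # unbounded tape, stored sparsely
    state, head = "scan", 0
    while state != "halt":
        symbol = tape.get(head, "_")
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

print(run("10110"))   # -> 01001
```

The finite transition table is the entire ‘program’; everything else is just the tape it works on.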

Now here’s the thing: the original Turing machine model was proposed back in the 1930s. In the decades since, people have proposed many modifications, and some have even created entirely different models (recursive functions, Markov algorithms, lambda calculus, circuits, neural nets, etc.). No one has created a finitely describable model that has more power than a Turing machine.

So, if we are more powerful than a Turing machine, it’s because we are fundamentally different from every model of computation proposed in all that time. Now I’m certainly not going to claim that this is proof that humans are merely sophisticated machines, but I’m not holding my breath waiting for a more powerful model of computation, either.

I’ve been pondering a bit more on how (and whether) we would know that the AI had any ‘inner’ life or was just putting up a very convincing facade. Provided that we’re not talking about cleverly programmed machines of the Eliza ilk, but truly self-organising learning systems, I think I’d start to be convinced that the machine had an inner life if it spontaneously asked ‘What am I?’ or ‘Why am I here?’ or ‘Is this all there is?’ (all potentially religious questions).

Oh, who cares? It’s all going to end at the Butlerian Jihad anyway. :slight_smile:

Probably another important difference between the human mind and any AI is that, due to the nature of data processing on digital computers, any process consists of discrete ‘frames’, whereas organic intelligence is analogue/continuous.

Organic intelligence is continuous? I find that a little hard to swallow, although I’d be hard-pressed to argue against it off the top of my head. Got a cite?

I can’t give much of a cite; I remember reading about it someplace, but a Google search seems to reveal that there is division on the subject. I should therefore like to revise my above statement to read:

Current neural net models are based on discrete computed ‘frames’, the frequency of which is synchronous across the whole model. The brain, not being a digital computer, probably isn’t exactly like that (synapses don’t have to wait for the next ‘clock cycle’ to transmit their stimulus).
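To show what I mean by ‘frames’, here’s a bare-bones sketch (made-up weights and inputs, in Python) in which every node in the net is recomputed in lockstep on a single global tick:

```python
# Every node updates once per global tick, from the previous frame's state.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.5, size=(8, 8))      # recurrent weights among 8 nodes
state = np.zeros(8)
external_input = rng.normal(0, 1, size=8)

for tick in range(5):                     # one 'frame' per iteration
    state = np.tanh(W @ state + external_input)
    print(f"frame {tick}:", np.round(state, 2))
```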

Maybe someone needs to start working on neural net models based on event-driven objects.

That sounds interesting.

Uh…hit the submit button too soon on that one.

What I meant to say next is that there would probably still be some need for synchronization, so that signals don’t get ignored. And the different possibilities there will lead to different models, none of which is necessarily like the brain.
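For what it’s worth, here’s one way an event-driven model might look, as a toy sketch (all the connections, delays and thresholds are invented): spikes sit in a priority queue ordered by arrival time, so no signal gets ignored, but there’s no global clock driving the whole net in lockstep:

```python
# Toy event-driven 'spiking' net: events are delivered whenever they arrive,
# ordered by a priority queue rather than a global clock. Values invented;
# a crude integrate-and-fire flavour with no leak or decay.
import heapq

connections = {                    # neuron -> list of (target, delay, weight)
    0: [(1, 1.0, 1.2), (2, 1.5, 0.5)],
    1: [(2, 0.5, 0.8)],
    2: [],
}
threshold = 1.0
potential = {n: 0.0 for n in connections}

# Each event is (arrival_time, target_neuron, weight_delivered).
events = [(0.0, 0, threshold)]     # kick neuron 0 at t=0 with a supra-threshold input

while events:
    time, neuron, weight = heapq.heappop(events)
    potential[neuron] += weight            # the spike arrives only now
    if potential[neuron] >= threshold:
        print(f"t={time:.1f}: neuron {neuron} fires")
        potential[neuron] = 0.0
        for target, delay, w in connections[neuron]:
            heapq.heappush(events, (time + delay, target, w))
```

Whether anything like that is closer to how the brain does it, I couldn’t say; it’s just one of the different possibilities mentioned above.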