I have played a little with neural net models; there are some interesting bits of freeware out there.
One that I played with was set up to do simple character recognition. It had to go through an extensive learning process for each letter: once it had learned to reliably recognise ‘A’, it went on to learn ‘B’ (the program comes with a ‘teacher’ program that isn’t intelligent; it merely rates the accuracy of the answers from the neural net program and provides feedback). In the process of learning ‘B’, the weights on the nodes within the network are adjusted, and this causes it to ‘forget’ how to recognise ‘A’ (or rather, become less accurate at recognising it), so it has to ‘revise’ (go through the learning cycle again). Eventually it ends up with a set of weights on the nodes that enables it to recognise the whole alphabet.
But here’s the important point: the original programmer only devised the neural net model; he has no idea what the node weight values will turn out to be for recognising the Roman typeface. Moreover, because the starting values for the node weights are random, the system isn’t deterministic: start it running on two different machines and it will likely end up with a different set of weights on each, even though each is performing the same macroscopic function.
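To make that concrete, here’s a minimal sketch of that kind of setup in Python/NumPy. The 5x5 letter bitmaps, the layer sizes and the learning rate are all things I’ve made up for illustration (the actual freeware will differ); the point is the shape of the process: random starting weights, a dumb ‘teacher’ that only marks answers, learning ‘B’ on its own clobbering ‘A’, and a ‘revision’ pass over everything learned so far. Run it twice and you’ll generally get two different sets of final weights doing the same job.

```python
import numpy as np

rng = np.random.default_rng()            # random starting weights each run

# Crude 5x5 stand-in bitmaps for 'A', 'B' and 'C' (my own, purely illustrative).
LETTERS = {
    "A": "01110 10001 11111 10001 10001",
    "B": "11110 10001 11110 10001 11110",
    "C": "01111 10000 10000 10000 01111",
}
X = np.array([[int(c) for c in s.replace(" ", "")] for s in LETTERS.values()], float)

# One hidden layer: 25 inputs -> 16 hidden nodes -> 3 outputs.
W1 = rng.normal(scale=0.5, size=(25, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 3));  b2 = np.zeros(3)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)   # softmax over the 3 letters

def train_on(idx, steps=500, lr=0.2):
    """One 'learning cycle': gradient descent on just the letters in idx."""
    global W1, b1, W2, b2
    x, t = X[idx], np.eye(3)[idx]
    for _ in range(steps):
        h, p = forward(x)
        dz = (p - t) / len(idx)                  # softmax + cross-entropy gradient
        dW2, db2 = h.T @ dz, dz.sum(0)
        dh = (dz @ W2.T) * (1 - h ** 2)
        dW1, db1 = x.T @ dh, dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

def teacher(idx):
    """The 'teacher': no intelligence, it just marks what fraction of answers are right."""
    _, p = forward(X[idx])
    return (p.argmax(axis=1) == np.array(idx)).mean()

train_on([0])                                    # learn 'A'
print("A after learning A:", teacher([0]))
train_on([1])                                    # learn 'B' on its own...
print("A after learning B:", teacher([0]))       # ...and 'A' is typically forgotten
train_on([0, 1, 2])                              # so 'revise' on everything so far
print("all after revision:", teacher([0, 1, 2]))
```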
It isn’t possible to look at the network and understand how it’s arriving at the right answers, and it isn’t possible to point to a certain group of nodes and say ‘that’s the bit that recognises this or that character’. The design is truly open-ended: with the very simplest modification to the teacher program, it could be set up to recognise different alphabets, shapes, etc.
Similar experiments have been done with shape recognition in other areas. One neural net model was ‘taught’ simply to give a boolean yes/no answer as to whether a shape was aesthetically pleasing or not (the teacher program had been supplied with two sets of shapes that had been classified as ‘nice’ and ‘nasty’ by real humans). Once the model was reliable on the full set of training shapes, it was shown new shapes and gave its ‘opinion’, and it turned out to be quite reliable at predicting how shapes it had never seen before would be received by human aesthetic taste.
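That aesthetic-judgement experiment can be sketched the same way. Here the four numeric ‘shape descriptors’ and the rule standing in for the human judges are entirely my invention; the structure is just: train a small net on shapes labelled nice/nasty, then ask its opinion of shapes it has never seen.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Each "shape" reduced to a few numbers: aspect ratio, symmetry, jaggedness, curviness.
shapes = rng.uniform(0, 1, size=(600, 4))

# Stand-in for the human judges: call a shape 'nice' if it is fairly
# symmetric and not too jagged (a made-up rule, of course).
nice = ((shapes[:, 1] > 0.6) & (shapes[:, 2] < 0.5)).astype(int)

# The net only ever sees the training shapes and the teacher's yes/no labels.
X_train, X_new, y_train, y_new = train_test_split(
    shapes, nice, test_size=0.5, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Now show it shapes it has never seen and compare its 'opinion' with the rule.
print("agreement with the 'human' labels on unseen shapes:",
      round(net.score(X_new, y_new), 2))
```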
Both of these examples were built around neural net models that are minuscule in comparison to even a mouse’s brain, but if they could be scaled up to the same number of nodes as the human brain (or more) and given the right sort of inputs, the results could be very interesting indeed.
We’ll never know whether we have a machine that has any inner awareness of itself or whether it just looks that way; there will be no way to tell. Even watching the values of each and every node weight in the entire model would only be like a minutely detailed brain scan on a human, which would only show processes happening, not ‘thoughts’.
Anyway, if there is such a thing as a soul, there are many different opinions as to how it comes into being. It could be argued that the process of creating an intelligent being gives rise to a soul, or that the process of creating a ‘mind’ establishes a suitable place for an eternally-living soul to come and inhabit, but let’s have that discussion in another thread sometime.