Dualism is not a valid concept.
Ask me why.
I’m inclined to agree with Dakravel - if it walks and talks like a duck, then it’s sensible to assume it’s thinking like a duck. Whether it is doing this with flesh-neurons or metal ones is irrelevant.
Also, going back to the moral question in the OP, isn’t it important to assume sentience until proved wrong, rather than vice versa? History seems to teach this - look at the justification of abuse of “inferior” races and of animals on the grounds that they “don’t have souls” (Descartes I think?) or that they are not “fully human”.
I wonder whether the problem with defining sentience or consciousness lies in the tendency to talk about it like a commodity. Minds don’t just spring into being fully formed, they develop in a complex interaction with the environment. So a plausible AI, as Mangetout suggested, would have to be some sort of learning machine that was introduced to human-type interactions with the environment (it would need a mother!).
I’m in two minds about it myself.
Well, this somehow seems to have become a full-on “consciousness discussion”, and so I will post only those points which I feel are most important.
Firstly, regarding the idea of gradually replacing neurons in order to ultimately turn an organic brain bionic. In an earlier post, I referred to encrypted memory, which is effectively what “our” brains hold: consider observing all of the synaptic activity in a normal human brain and trying to guess what was being “thought about”. How the heck could we know?! It would be like observing the activity in a Pentium chip and trying to guess which computer game was being played.
We would also need to know the sensory input which caused the “thought”, or at least the memory of such (since we start with zero memories), and only the “holder of the private key”, i.e. the sensory apparatus connected to the memory (“our body”), can release that information. In this way, copying the neuron, no matter how closely, ultimately discards vital information, which is why I doubt whether such a thing will ever be possible.
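To make the analogy concrete, here’s a toy Python sketch (purely illustrative - real neurons are obviously not literal XOR ciphers): the same stored pattern “decrypts” into completely different content depending on the key, just as the same pattern of synaptic activity could mean entirely different things depending on the sensory history that produced it.

```python
# Toy illustration of the "encrypted memory" analogy: the same stored
# pattern yields entirely different content under different keys, so
# observing the pattern alone tells you nothing about the "thought".

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# One fixed "pattern of synaptic activity" (the ciphertext).
thought = xor_bytes(b"playing ball with granddad", b"sensory-history-A")

# Without the right key (the body's own sensory history), decoding fails:
print(xor_bytes(thought, b"sensory-history-B"))  # gibberish
print(xor_bytes(thought, b"sensory-history-A"))  # b'playing ball with granddad'
```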
As to this “cut-off point”, then: again, it seems silly to propose a magic number of neuronal connections which suddenly sparks to conscious life. I cannot remember anything of my first year outside the womb, but I would balk at the contention that my “sentience” began with my very first long-term memory (playing ball with my granddad during my brother’s birth when I was two, if you must know!). I would suggest that as the synaptic pathways become more numerous, more interconnected and lower in resistance, whatever this illusion is becomes “stronger”.
Finally, the Chinese Room. I must say, I have never really understood why it is considered such a paradox in the first place. The problem is posed such that the prisoner himself does not accrue any memory, yet memory still exists in the entire “system” in order to respond to the inputs. I have always seen the prisoner as being “our larynx”, and none of us ascribe “understanding” or “consciousness” to mere vocal cords.
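If it helps, the point can be put in code. In this toy sketch (my own illustration, not Searle’s formulation) the rule-follower is a pure, stateless function - a larynx, if you like - while all the memory lives in the room’s ledger, so whatever “understanding” there is belongs to the system as a whole:

```python
# Toy Chinese Room: the operator is stateless (like a larynx), but the
# room's ledger carries memory, so the *system* can respond to history.

def operator(symbol: str, ledger: list[str]) -> str:
    """Blindly follows the rulebook; keeps no memory of its own."""
    if symbol in ledger:
        return f"You already said '{symbol}'."  # rulebook consults the ledger
    return f"Reply to '{symbol}'."

ledger: list[str] = []        # memory lives in the room, not the operator
for symbol in ["ni hao", "ni hao"]:
    print(operator(symbol, ledger))
    ledger.append(symbol)     # the system, not the operator, updates state
# -> Reply to 'ni hao'.  /  You already said 'ni hao'.
```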
What, then, if you were the person whose brain was being replaced? Presuming the mechanical-brained person was still you after the surgery, what would happen if they took the brain that was replaced (put it all back together and such), and put it in another body? To complicate matters, let’s say this body was an artificial replica of yours. When they wake both of these beings up at the same time, which one is you?
One way round the ‘encrypted memory’ problem (which I’d like to read up on, if you have any cites) could be training the brain to use direct neural interfacing; if you connect an interface directly to the nervous system, the brain can learn to communicate with external devices such as cursors and artificial limbs. Rather than trying to decrypt the brain code from scratch, we could perhaps make the brain do the hardest part of the work: by doing the thing it is best at and adapting to the direct neural interface, the individual’s brain itself could provide us with our mental Rosetta stone.
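As a very rough sketch of the calibration idea (synthetic data and plain least squares - real neural decoding is far messier, and every name here is made up for illustration): have the subject intend known cursor movements, record the accompanying signals, and fit a decoder from one to the other.

```python
import numpy as np

# Crude sketch of decoder calibration: the subject intends known cursor
# velocities while we record neural features, then we fit a linear map.
rng = np.random.default_rng(0)

true_map = rng.normal(size=(8, 2))   # the unknown neural->cursor mapping
neural = rng.normal(size=(200, 8))   # recorded "neural" features
cursor = neural @ true_map + 0.1 * rng.normal(size=(200, 2))  # intended moves

# Least-squares decoder: our "Rosetta stone" from brain code to movement.
decoder, *_ = np.linalg.lstsq(neural, cursor, rcond=None)

new_activity = rng.normal(size=(1, 8))  # fresh brain activity...
print(new_activity @ decoder)           # ...decoded into a cursor move
```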
which one is you?
You are the one which scratches your head over the problem.
The other one is very like you, but s/he is someone else.
Okay, “Why?”
Isn’t the validity of dualism exactly the whole crux of this debate? Are we conscious because of the physical configuration of our brains (in which case there seems to be no reason why we couldn’t create an artificial instantiation or emulation which would necessarily have all the consciousness we have), or, is there some other non-physical part of ourselves that is our essence?
Because it’s based on faulty reasoning.
The entire universe is one thing. There’s no fundamental difference between a ray of light and a leaf, or a bolt of lightning and a stone. The only difference is in the way they’re arranged.
If we propose something genuinely non-physical, it cannot interact with anything we consider physical, because that is precisely how we define what ‘physicalness’ is: interaction. Ergo, this non-physical thing is not a part of our universe, and cannot be responsible for any aspect of the universe, including ourselves.
If this thing can interact with the world, as it must to be our “essence”, it can affect and be affected by physical things.
No.
Most of us are assuming the possibility of AI. We see no reason why it shouldn’t be possible. This is a debate that follows from that assumption.
If strong AI is possible, what then? Is consciousness the trait that we possess that makes us human? From that, the question arises: what is consciousness?
At the moment of wakening, they are both me. But as soon as they (I?) get up and have different experiences, I become we. It’s not so different from identical twins, is it?
Precisely. (Although these copies would be far more similar than any “identical” twins could ever be.)
Actually, the only difference between the copies and identical twins is the time of separation. If you copied the brain patterns of a child just out of the womb, it would be as though the child had an identical twin, no more, no less.
Energy is non-physical, yet it exists and can interact with the world. Gravity and electromagnetism have no particular forms, although we can attribute the origin of the phenomena to physical things. Something does not need to be physically tangible to exist in the universe. In the universe, matter is not constant; it can be created and destroyed. It is the sum of matter and energy that is constant.
I’m aware that Descartes’ theory of dualism has huge flaws, but it isn’t impossible that there exists some sort of soul or spirit for each person.
That depends on the nature of the artificial body.
Energy is not non-physical.
Energy is non-physical in the sense that it is non-corporeal, intangible. Most of it anyway. We can’t touch it, see it, can it, stuff it in a bag, or anything like that. So what’s to say that there isn’t a soul of sorts in each of us that’s formed of an as yet unknown form of energy, or even an amalgamation of energy we already know about? Sorry to continue on this hijack, but the idea of a soul or a spirit shouldn’t be tossed aside so easily, since it can be interesting to think about consciousness in that way as well.
Now, assuming that we put a newborn into the Magical Cloning Machine™, and when we open it there are two of them, then those two will be as identical as any identical twins. The only difference between identical twins and the clones mentioned above will be that the clones will have some memories in common from when they were the same person.
The nature of the artificial body doesn’t really matter; if the new body was different, that would only serve to further differentiate the original and the copy. If it’s a perfect copy, then the pair is no different from a pair of identical twins; the copies just separated from each other later and therefore have more memories in common.
It matters a whole heaping lot - if it is merely simulating personality, there’s no reason to feel bad about mistreating it, however if it is genuinely capable of feeling anguish at the thought of being switched off, then it deserves rights.
Quite aside from that, I’m not interested in what it may as well be, I’m interested in what it is - The idea of speaking to a ‘real’ AI intrigues me; talking to something that is just a make-believe intelligence (however convincing) has appeal of a different (I think lesser) kind.
Although as everyone (including myself) admits, there’s no way to tell, so the issue is moot.
Everything you can see is energy. Everything you can’t see is also energy.
Identical twins aren’t perfect copies of each other. A perfect copy would be significantly more like an original than identical twins are alike.
If a machine was ever able to talk in such a way that it was indistinguishable from a human … that’s a huge “if”, isn’t it?
In order to do this, as far as I can see, it would have to have every attribute (memory, personality, imagination, empathy, language, self-concept, humour etc etc) that we consider essentially human. So where would the difference lie exactly?
“Simulating” sentience convincingly seems to require sentience. We are what we seem to be, to ourselves and others. I’m not convinced there would need to be anything extra to make the AI “real”.
Not necessarily - it would theoretically be possible to program a machine with nothing more than a very long list of appropriate responses to given named stimuli (this is all that Searle’s Chinese Room does). It might be tempting to think that we could differentiate between the machine and a real person by catching it out - presenting the same stimulus twice - but this eventuality could also be anticipated and catered for in the list of stimuli/responses. In the end, you have a machine that is not making any decisions or actually thinking, just switching on a response when required - no different from a room thermostat.
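For what it’s worth, here’s a minimal Python sketch of such a machine (an illustration of the lookup-table idea, not anything Searle actually wrote): it “converses” purely by looking up the whole conversation so far in a pre-written table, so even a repeated stimulus can be anticipated, yet nothing in it could be called thinking.

```python
# A pure lookup-table "conversationalist": every response, including
# responses to repeated stimuli, is pre-listed; nothing is ever computed.
RESPONSES = {
    ("Hello",): "Hi there!",
    ("Hello", "Hello"): "You said that already.",
    ("Hello", "How are you?"): "Fine, thanks. You?",
}

def respond(history: tuple[str, ...]) -> str:
    """Switch on the stored response for this exact history, thermostat-style."""
    return RESPONSES.get(history, "I don't follow.")

print(respond(("Hello",)))           # Hi there!
print(respond(("Hello", "Hello")))   # You said that already.
```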
It may well be argued that this is all that humans do - they just aren’t aware of the process - but I’d be skeptical if such a claim were applied completely.
I think maybe it is all we do. But calling it a “long list of appropriate responses to stimuli” makes it sound too simple. An “appropriate response”, for a human, is rarely predictable.
Think of the silliest thread you’ve ever seen on here - yes, you can predict some of the responses, but there are always surprises. My point is that the most banal conversation requires so many assumptions, mutual understandings and so on that it would be impossible to mimic convincingly without the full armoury of human capacities. For a start, unlike the thermostat, a conversationalist has to imagine the impact of their responses on the other person, then edit the responses accordingly. Something that gave “if x then y” answers would soon sound rigid and be exposed as a computer.
I still say you couldn’t simulate sentience without being sentient.