Stephen Hawking would disagree with you.
Can we though? There is some recent research that suggests our sense of smell relies on quantum mechanical effects. If this turns out to be accurate, it seems fairly likely that the brain utilises similar tricks. We may need to rethink the idea that the brain is simply a neural net of vast complexity.
But even if it turns out the brain uses quantum computing for some functions, why can’t we add quantum computers to our real/fake self-aware computer?
And note that quantum effects couldn’t be the magic pixie dust that creates human awareness, since these quantum effects would be present in the nervous systems of nonhuman animals too.
I’m not the one making an extraordinary woo claim here, you are. Making a machine that mimics self-awareness doesn’t mean you’ve made a sentient being that is self-aware; it means you’ve made something that passes a Turing test.
Not at all. Just because you can program something that replicates self-awareness doesn’t mean you’ve made something that is actually self-aware, only that you’ve made something that replicates self-awareness. It has nothing to do with matter or spirit, or material complexity.
So a “divine spark” (your own words) isn’t “woo”?
As for quantum effects, everything relies on quantum effects. What’s that got to do with anything?
“Divine spark” is pure woo. And I’m talking about making a machine that has self-awareness, not one that just imitates it; and again, we know that it is possible because we have self-awareness. The burden of proof is on you to provide evidence for something existing in humans that we can’t duplicate even theoretically. Not that it’s really hard, or that we don’t know how yet, but that we can’t do so no matter what.
Perhaps, but if it’s required it will make solving the problem of AI much more difficult. It’s relevant to the OP, as he is specifically asking if AI can be achieved using silicon.
I brought it up in direct response to your statement about how we could model brain function by simulating neurons. That may not be sufficient.
I don’t think there are any theoretical barriers to solving the problem of AI, but the practical problems are vast.
So, I think that we can definitely answer a question related to the OP: Will we ever be able to build a computer that can convince **Dissonance** that it is self-aware? The answer is: no.
Really, before pushing on with this question, you need to tackle another related question: *is a dog self-aware?* Why do you say that?
What’s the difference between a machine that can successfully fake being self-aware, and a machine that is actually self-aware?
If you can’t explain what the difference is, then there is no difference.
The notion of an artificial ‘philosophical zombie’ of any significant complexity is a bit absurd. Every time you subject it to a novel situation or stimulus, and it responds in a plausible and appropriately sentient way, it becomes increasingly less likely that it is a zombie.
What I mean is, if you condition your AI to converse about tea and cakes, and it does so convincingly, then one day, you scream “I’M GOING TO FUCKING KILL YOU!”, and it reacts in apparent terror, it seems most likely (although of course we can never really know) that the reaction is what it appears to be.
Occam’s Razor, really - things tend to be as they appear, and when that appearance is hugely multifaceted, the tendency is multiplied.
I would tend to argue that true self-awareness isn’t really possible. So, in that sense, I agree with Dissonance in some respects; a computer can only ever become complex enough that its ability to appear self-aware improves. But then I think that for us humans, that’s also the case; we’re just sufficiently complex that we give the appearance, even to ourselves, of self-awareness in that sense.
Which is a pretty good thing, really. The ability to have truly original thoughts sounds like a terrifying idea.
I don’t believe that not knowing how to test the difference between two things implies that there is no difference between them. And that really is what it boils down to: there is a difference, since I know that I am self-aware. You probably know that you are self-aware. But we can’t test each other. Nor could we test a machine that we built, even though it might know that it is self-aware.
Is it necessarily even a programming task? It seems as though you’re assuming it would have to be a brute-force simulation of the whole thing of self-awareness. It wouldn’t - we could, for example, program the building blocks of a self-organising system, in which it may be possible to make self-awareness arise as an emergent phenomenon.
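Just to make “emergent phenomenon” concrete (a toy sketch only - Conway’s Game of Life, not a claim about how an actual brain or AI would be built): the whole program is a single local rule applied to each cell, yet gliders and oscillators appear that are specified nowhere in the code.

```python
# Toy illustration of emergence: Conway's Game of Life.
# The only thing programmed is a local birth/survival rule;
# the global patterns (gliders, oscillators) are nowhere in the code.
import random

SIZE = 20

def step(grid):
    """Apply the local rule to every cell on a wrap-around grid."""
    new = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            # Count live neighbours.
            n = sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
            # A cell is born with exactly 3 neighbours, survives with 2 or 3.
            new[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
    return new

def show(grid):
    print("\n".join("".join("#" if c else "." for c in row) for row in grid))
    print()

grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for generation in range(10):
    show(grid)
    grid = step(grid)
```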
Atoms will only ever do what atoms do, and the brain is made of them.
Besides, we’ve already got computer algorithms and systems that do original/creative things - often these are based on neural networks - and whilst it’s true that their low-level computational activity is very well understood, and proceeds all according to design, the systems themselves are capable of producing outputs that could not reasonably have been anticipated by their designers. This isn’t self-awareness, but it does demonstrate that complex systems can do surprising things.
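As a minimal sketch of what I mean (the particular task and network here are just toy choices of my own): a tiny network learns XOR by gradient descent. Every line of the update rule is understood and proceeds exactly as designed, but the weights that end up solving the problem appear nowhere in the source - they are found by the system.

```python
# Minimal sketch: a tiny neural network learns XOR.
# The learning rule is fully specified by the programmer; the final
# weights and the resulting behaviour are discovered, not written down.
import numpy as np

rng = np.random.default_rng(0)

# Inputs and targets for XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: the designer never chooses their final values.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: plain gradient descent on squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 3))  # usually close to [0, 1, 1, 0]: behaviour learned, not coded
```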
Logically and conceptually, there is not much difference between:

- understanding the chemistry and physics of brain matter, but acknowledging the outputs and operation of the thing as complex and intelligent; and
- understanding the low-level design of a complex computer system, and being surprised by the complexity of its operation as a whole.
I can’t help thinking we might be operating with different definitions of self-awareness. We’d have to be sentient to be taken in by the illusion of our own sentience.
I know. If self-awareness is only an illusion, then what the fuck is it that’s having the illusion? If I think I’m self-aware, but really only kidding myself, then who am I kidding?
It is true though that our minds often don’t work the way we think they work. If you read Oliver Sacks, you can find case histories of people with specific brain injuries that certainly challenge our notions of unitary consciousness. Imagine having a stroke and losing your ability to recognize faces, or your ability to feel the position of your limbs.
But that only gets back to my insistence that there can be no difference between simulated self-awareness and real self-awareness. If our personal self-awareness were only simulated, how would we know? And so I assert that the illusion of consciousness that we human beings experience is what consciousness IS.
Self-awareness is a meaningless noise. I can’t think of any objective way to demonstrate that I am self-aware, much less a computer. Frankly, most of the history of AI research hasn’t actually resulted in intelligent machines, but has shown how little of human activity actually requires intelligence.
They are already talking about adapting the Watson system to medical diagnosis and most people think that what doctors do requires intelligence.
Sure, but that’s easily programmable; we have versions of it now, such as virus checkers. A computer may be programmed to check itself. I don’t see any difference, aside from complexity, between that and what humans do. A red computer may be programmed to respond “blue” to a query as to what colour it is. That isn’t sentience, yet the computer is taken in.
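A trivial sketch of that red/blue point (the class and attribute names here are invented purely for illustration): the machine’s actual property and the property it reports are two separate things, and the report is just whatever it was programmed to say.

```python
# Trivial sketch: a machine's self-report is just programmed output,
# independent of the fact it is reporting on. Names are made up.
class Computer:
    def __init__(self, actual_colour, reported_colour):
        self.actual_colour = actual_colour        # what it really is
        self.reported_colour = reported_colour    # what it is programmed to say

    def what_colour_are_you(self):
        return self.reported_colour

box = Computer(actual_colour="red", reported_colour="blue")
print(box.what_colour_are_you())   # "blue" - the self-report
print(box.actual_colour)           # "red"  - the fact, visible from outside
```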
Exactly, things tend to be exactly what they appear to be. A machine programmed to mimic self-aware behavior is a machine mimicking self-aware behavior. It is an enormous leap to claim that it actually is self-aware.
Sorry, I’m not talking about a literal divine spark from God of the Judeo-Xian bible. I have to wonder if you are even aware that your reasoning is the exact same reasoning used to ‘prove’ god exists; we know that god is possible because we can conceive of the possibility of god, therefore god must exist. The burden is on you to prove that a device designed to mimic self-awareness is actually self-aware, not just a device that mimics self-awareness.
Given that my position is that the human brain is such a device, wouldn’t that mean I’m off the hook as regards proving it?