That’s not what I’m saying at all. I’m not saying that we can create self awareness because we can conceive of it; I’m saying that we can create it because it already exists. Your comparison would only work if there were actual gods wandering around.
Presumably by the time we can build one, we’ll understand how self awareness works well enough that I’d (assuming I was an expert on it) be able to point out the self awareness programming, describe how it works, and explain how it compares to the human version. Again, your argument boils down to woo; the claim that there’s some mystical something inside of us that is innately beyond comprehension or duplication. You’ve yet to show that any such élan vital exists.
Except of course it wouldn’t appear to be mimicking, because that’s the very definition of mimicry. It would appear to be the real thing.
In any case, I’m not even talking about mimicry, really, or explicitly programming something to resemble self-awareness.
I know you said you’re a programmer, but it sounds to me as though you’re not familiar with computing in the field of AI, and how analogous some of the proposed approaches are to natural systems.
If your point is merely that of philosophical unknowability, then I would agree that I can never prove that a machine is self-aware in the sense that I am self-aware, but I can’t prove that of another person either, so it’s not an argument about computing.
If I were to create an electronic device that adequately simulated the behaviour of a single neuron, would it be reasonable to say that it’s doing the same as a neuron?
If I put two of them together, would it be reasonable to say they were interacting in the same way as two real neurons?
If I assembled a whole brain’s worth of these devices, and the resulting entity was capable of developing sentience (not ‘being programmed to simulate it’), why would that not be real sentience?
And if your argument is still that it would only be growing to mimic sentience, is it not true to say that a human baby also only grows to mimic sentience?
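To put a bit of flesh on that thought experiment, here’s a rough toy sketch in Python of what I mean by ‘a device that simulates the behaviour of a single neuron’, with two of them wired together. The thresholds, weights and leak rate are made-up illustration values, not a claim about biological accuracy:
[code]
# Toy leaky integrate-and-fire "neuron" and two of them interacting.
# Purely illustrative: parameters are arbitrary, not biologically derived.

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold   # fire when potential crosses this
        self.leak = leak             # fraction of potential retained each step
        self.outputs = []            # downstream neurons and synaptic weights

    def connect(self, other, weight=0.6):
        self.outputs.append((other, weight))

    def receive(self, current):
        self.potential += current

    def step(self):
        fired = self.potential >= self.threshold
        if fired:
            for target, weight in self.outputs:
                target.receive(weight)   # pass a weighted pulse downstream
            self.potential = 0.0         # reset after a spike
        else:
            self.potential *= self.leak  # otherwise decay toward rest
        return fired

# Two "neurons" interacting, as in the post above.
a, b = Neuron(), Neuron()
a.connect(b)
for t in range(10):
    a.receive(0.4)                       # constant input drive to A
    print(t, "A fired" if a.step() else "A quiet",
             "B fired" if b.step() else "B quiet")
[/code]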
I’m not sure if you have a high standard for a self aware computer, or you’ve just defined it as being impossible a priori. A self-aware computer would be able to report on its internal thought processes (at a higher level than a trace). But humans aren’t perfectly self aware, as you’d know if you ever had to deal with a tired, bratty kid who yells that she’s not tired and that she’s not bratty.
It truly is astounding. If I were to make the extraordinary claim that I could create sentience via
[ol]
[li]alchemy[/li]
[li]running electricity through dead flesh[/li]
[li]writing words on paper and putting them in a clay figurine[/li]
[/ol]
you would no doubt demand extraordinary proof that this was possible. Yet were I to make the claim that this was possible via a computer, the burden of proof drops to the level of science fiction, even for a machine whose express purpose is to feign self-awareness. As I said, people who are normally skeptical will accept an enormous amount of woo when it comes to AI.
What is it with you and “feign”? I’m talking about building a machine that is at least as self aware as we are and probably more so; “feigning” self awareness would be a different project.
And your rather silly examples don’t compare at all, since none of those things are information processors, while the computer, like the brain, is one.
It isn’t “woo”; you are the one who is claiming magic, not me. And repeatedly ignoring my requests that you provide actual evidence for your claim that there’s something impossible to duplicate in the brain won’t make the request vanish. Your refusal to address it just demonstrates how empty your claims of vitalism are.
Suggesting that a machine designed to develop intelligence is just feigning intelligence is the same as saying a pocket calculator performs feigned addition.
No, there isn’t. But at some point we’ll stop considering this a “computer with senses” and think of the gestalt as a “robot.”
I doubt it. Sight, hearing, etc. are more than just the reception of physical phenomena, and very few ordinary computers have many of the cognitive components, AFAIK. Maybe some face recognition tech and the like, but attaching a mike or a webcam to your computer doesn’t mean it sees or hears.
I’d also say that being able to interact is necessary, so not just passive sensing but an ability to (at least) move, if not touch and grab.
We *have* tests for self-awareness. That’s why I can say that a baby isn’t self-aware. It can’t pass a mirror test (neither can your dog, but a pigeon might and a chimpanzee definitely will).
Or are some of you using “self-awareness” where you mean “sapient”, “sophont” or something else?
Yes, and to future generations, denying that possibility will probably seem as silly as Lord Kelvin’s infamous statement about the impossibility of heavier than air flight seems to us now. I mean, did that man just never see birds?
Computers are perfectly capable of originality even now. Just think of genetic algorithms’ ability to find novel solutions to appropriately posed problems through self-selection, or think of computer-composed music, or hell, think of the Oral-B CrossAction toothbrush, which reportedly was designed by one of Stephen Thaler’s Creativity Machines…
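If you want to see how little machinery that self-selection actually takes, here’s a bare-bones toy genetic algorithm in Python. The fitness function and parameters are arbitrary stand-ins, not anyone’s real setup:
[code]
# Toy genetic algorithm: evolve a bit string toward a trivial objective.
# Everything here is an illustrative placeholder.
import random

GENOME_LEN = 20

def fitness(genome):
    return sum(genome)          # stand-in objective: count of 1-bits

def mutate(genome, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]

for generation in range(100):
    # Selection: keep the fitter half, breed replacements from it.
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(15)]
    population = parents + children

print("best genome:", population[0], "fitness:", fitness(population[0]))
[/code]
Nobody tells it which genome to produce; the solution it ends up with falls out of the selection loop.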
So could your brain, though, falsely reporting its own sentience to itself. Yes, that’s circular, but not any more vicious than, say, a river carving its own bed, which in turn directs its flow, or the gravitational field being a source of the gravitational field.
Just think of a p-zombie computer, a zombot, that outwardly has all the capacities of a human being, i.e. it would pass any test for consciousness you might confront it with, provided a human can, too. Now add a second zombot, and have the first zombot examine it for consciousness – which it of course can do as well as any human. Naturally, the first zombot will judge the second one conscious.
But now, just connect the first zombot’s output to its input, and have it examine itself for consciousness – what’s gonna happen? Well, since as a zombot, it is able to perfectly pass any test for consciousness, it will obviously judge itself to be conscious. And in the end, that’s all you’re doing, too – judging yourself conscious. You can model the process of introspection as asking yourself questions about being conscious; the mental content you experience is generated by and represented as the set of answers to these questions.
So, in fact, if you can build a zombot, you can build a ‘truly’ conscious computer just as well; or at least, something that’ll believe itself to be one. And in the mind, what you believe is what is: you can’t merely believe to experience, say, a headache, for that belief is indistinguishable from the experience.
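If it helps, the zombot argument can be caricatured in a few lines of Python. The tests and answers are placeholders, obviously; the only point is the wiring of output back to input:
[code]
# Caricature of the zombot thought experiment: "examining for consciousness"
# is reduced to running a battery of test questions and checking the answers.

TESTS = ["Do you have experiences?",
         "Can you report on your own thoughts?",
         "Do you believe you are conscious?"]

class Zombot:
    def answer(self, question):
        # By stipulation, a zombot answers any consciousness test as a human would.
        return "yes"

    def judge(self, subject):
        # Apply the same battery to any subject and tally the answers.
        return all(subject.answer(q) == "yes" for q in TESTS)

first, second = Zombot(), Zombot()
print(first.judge(second))  # True: it deems the other zombot conscious
print(first.judge(first))   # True: output wired to input, it deems itself conscious
[/code]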
Well, I’m not sure I can prove anything… I mean, I can’t prove to myself that I actually have the power to see, either. Maybe I’m just plugged into the Matrix or whatever. But I’m experiencing something that I perceive as the power to see. And I’m experiencing something that I perceive as being sentient.
It’s a bit different, though, since while I can see how I could perceive myself as seeing things without it actually being real vision, I don’t really see how I could perceive myself as being sentient without actually being sentient. That’s like saying “I only think that I’m able to think.”
But as far as whether a non-sentient being would know he wasn’t sentient… I’m not sure he’d know anything in the sense that I know things. I see no guarantee that a non-sentient being wouldn’t report himself to be sentient.
But it’s different. When I judge other things, I use my external senses: sight, hearing, etc. I’m judging you to be conscious based on how I observe you behaving. But I’m judging myself to be conscious based on my direct perceptions of my own thoughts – not even perceptions of the neural firings in my brain, but of the actual thoughts they represent. I don’t have direct awareness of that for anyone but myself.
I don’t see how I could be fooling myself. The whole essence of being “sentient” (as I’m using the term), is that I have this sort of direct awareness of my own thoughts. To repeat my reply of a few minutes ago, I can’t say “I only think that I think.” If I think it, then it must be so.
(reply to “The way a brain works and the way a computer works are fundamentally different.”)
The way a brain works and the way a computer works are different, that much is indisputable. Whether that difference is “fundamental” depends on what you consider “fundamental”.
But regardless, Stephen Hawking is not really an expert on computers, and even less so an expert on the brain. Just because he’s a really smart guy and a renowned scientist doesn’t make his opinion authoritative on every subject. (And that’s even assuming you’re right about what he would or wouldn’t disagree with.)
Well, yes, but whether I talk to you, or you see me act, or you ‘perceive your own thoughts’, it’s all really just information flowing through different channels. That there should be a fundamental difference between your self-perception and your sensory perception isn’t clear to me; in particular because your sensory perception is really just self-perception of a model you’ve built in your head of the external world, based on data (or rather, consistent with data) you received from your senses.
Self-perception is, ultimately, what makes you have a self; it’s not that you start out with a self, with which you then perceive yourself. That’s what’s sometimes called the homunculus problem, related to what Dennett calls the Cartesian theater: perception as modelled by some sort of being/system inside your mind/brain, before which is laid out the content of your mind, or the data of your senses, so that it can perceive it. But if that’s how perception works, then how does that being/system itself perceive? Is there another such sub-being inside the first one’s ‘mind’, and so on ad infinitum? Here, the regress is vicious, for in order for any perception at all to be made, first the ‘infinitieth’ homunculus would have to make its perception, collapsing the tower downwards.
Well, you’re not fooling yourself, really, or rather, fooling yourself and actually being conscious are the same thing, just like apparently having a headache and actually having a headache are. You actually do think (which I’m sure comes as a relief to you), but so does the zombot; in fact, he probably would have reacted in much the same way as you did to my post.
A piece of paper could have the phrase written on it “I have writing on me”. The paper is telling itself something about itself. Is the paper self-aware?
But then imagine an automaton that “mimics self-awareness”. In order to successfully mimic self-awareness, that automaton would have to have some sort of awareness of its own thoughts, wouldn’t it? Like, if it reports it likes the color red, and later you ask it if it likes the color red, it remembers saying that it likes red and then tells you that it likes red. In order to mimic self-awareness, it has to have a way of storing information (that is, memory) about itself, otherwise it wouldn’t be able to mimic self-awareness.
In what way is this different from you being aware of your own thoughts? In order to mimic self awareness, this automaton must at a minimum be able to process, store and retrieve information about itself. If it can do that at a level which mimics self-awareness, why isn’t it real self-awareness?
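Here’s a deliberately crude Python sketch of the mimic I’m describing, just to show that storing and retrieving information about yourself is ordinary bookkeeping (all the names are made up for illustration):
[code]
# Minimal "mimic" from the post above: an automaton that remembers what it
# has said about itself and consults that memory when asked again.

class Automaton:
    def __init__(self):
        self.self_model = {}   # memory about its own stated preferences

    def state_preference(self, topic, value):
        self.self_model[topic] = value
        return f"I like {value}."

    def asked(self, topic):
        if topic in self.self_model:
            return f"I remember saying I like {self.self_model[topic]}, and I still do."
        return "I haven't formed a view on that."

bot = Automaton()
print(bot.state_preference("color", "red"))  # "I like red."
print(bot.asked("color"))                    # recalls its own earlier report
[/code]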
But here you’re conflating two things. First, that you don’t know how to test the difference between the two. And next that there is no possible way to test the difference between the two.
I can accept the first position, that there is a difference, we just haven’t yet figured how to test it, but we could if we were clever enough. Just because a kid can’t tell the difference between a gold coin and a gilded lead coin doesn’t mean there isn’t a difference, and as we learn about the world we find there are lots of ways to distinguish between the two.
But I don’t accept the second position, that there can never be a test to distinguish the two, but there is a distinction nonetheless. If I hand you a gold coin and a Lemuralloy coin, and with every test you do down to the subatomic structure you can’t tell the difference between gold and Lemuralloy, then won’t you tell me that Lemuralloy is the same thing as gold? And if I assert that there’s a difference, you’re free to ignore me as a crank unless I can tell you at least one theoretically testable difference? It’s one thing for me to give you a test that you’re physically incapable of doing, like something that requires energies greater than the Large Hadron Collider can produce. It’s another if I say that no conceivable test could distinguish them, yet maintain that they are distinguishable.
Hence my re-framing of the OP: when will simulatedly-aware technology become so compelling within certain applications that we accept and inter-relate with the tech as if it was self-aware?
This is already happening, so the question just moves the ball further down that field…
[QUOTE=Half Man Half Wit]
You actually do think (which I’m sure comes as a relief to you), but so does the zombot; in fact, he probably would have reacted in much the same way as you did to my post.
[/QUOTE]
Concur. In order to perfectly resemble a thinking entity, the zombot has to do something that can only be properly described as ‘thinking’.