The reason a mosquito doesn’t want to be squashed is that mosquitoes that didn’t care if they got squashed didn’t go on to become mommy mosquitoes.
If you were to design (assuming you could) an artificial intelligence from the ground up, there’s absolutely no reason the computer would care whether you switched it off or not, unless you deliberately (or accidentally) programmed in that emotion.
As for the notion that a computer can only mimic self-awareness, well, I could say the same about you. The distinction between fake self awareness and real self awareness is meaningless. If you can fake being self aware, you are self aware.
If you maintain there’s a difference, then it’s up to you to explain what exactly that difference IS. If there’s no way to distinguish between fake and real self awareness then the two things are identical.
We don’t know exactly where we are going or how to get there, but you want to know when we will arrive? Have you considered a career in management?
I think intelligent computers are possible, but whether humans are smart enough to build them is an open question. It is possible that one of the steps on the PERT chart to AI is developing smarter humans.
Probably not.
I was merely speaking about the current stage of the process.
Reframing the question as whether a computer program might be able to pass a Turing test makes it more plausible.
We don’t know how what we call ‘self-awareness’ arises from the complexity of our neural networks. We don’t even have a reasonably exact model of how memory works.
This is not to say that these are intractable problems. They are, however, much more difficult than we thought they were back in the 1970s (for example).
The position of tim314 most closely represents my position.
The only thing I know is that I am self aware. I can only assume it is so for everyone else (and conceivably not every single person either; people’s brains get broken due to injury or defect all the time, and there is no specific reason why this couldn’t be the case with self awareness).
Given that we know almost nothing about what causes self awareness, and that we cannot even be sure that all or most humans are self aware, I don’t see how people can say with such certainty that we can build self aware machines. I’m certainly not ruling it out, just as I’m certainly not ruling out that we will one day understand the mechanism behind self awareness, but right now it is completely unexplained. And even if you thought you could build such a machine, how would you test that you had?!
You must not have been programming for very long, or have been programming anything very interesting. Any program that works by exploring a search space through some sort of heuristics will come up with answers not directly programmed in. There have been cases of programs finding interesting new proofs, designing electronic circuits no one has ever thought of, and I’ve personally seen programs creating tests I’d never have thought of. And this is still very primitive.
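To make that concrete, here’s a tiny sketch (my own toy example, not taken from any real research system) of the kind of heuristic search I mean: a hill climber that solves 8-queens. The program only knows how to score a board; the actual solutions it prints out were never typed in by anyone.

```python
import random

N = 8

def conflicts(board):
    """Count attacking queen pairs; board[col] is the row of the queen in that column."""
    total = 0
    for c1 in range(N):
        for c2 in range(c1 + 1, N):
            same_row = board[c1] == board[c2]
            same_diag = abs(board[c1] - board[c2]) == c2 - c1
            total += same_row or same_diag
    return total

def hill_climb():
    board = [random.randrange(N) for _ in range(N)]
    while True:
        current = conflicts(board)
        if current == 0:
            return board
        best, best_move = current, None
        # try every single-queen move and keep the one that reduces conflicts most
        for col in range(N):
            for row in range(N):
                if row == board[col]:
                    continue
                old, board[col] = board[col], row
                score = conflicts(board)
                board[col] = old
                if score < best:
                    best, best_move = score, (col, row)
        if best_move is None:
            # stuck at a local optimum: restart from a fresh random board
            board = [random.randrange(N) for _ in range(N)]
        else:
            col, row = best_move
            board[col] = row

print(hill_climb())  # e.g. [4, 6, 0, 2, 7, 5, 3, 1] -- a solution nobody hard-coded
```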
This is a particularly good example because we don’t fly by building airplanes that flap their wings. We might start by simulating the brain, but what we learn from doing that, and observing what goes on in a way we can’t do today, might lead in totally different directions.
But I stand on my initial point: What matters more, arguing some concept in abstract, absolute fashion or understanding how it spills out into everyday life? Is “self-awareness” some clearly defined point, or is what matters how technology will exhibit increasingly self-aware-like behavior?
Even as we speak, in very-clearly-delineated activities, we have interactions with simulated-intelligent agents. NOT artificially intelligent - just capable of managing the major discussion branches of a narrowly-branching activity.
How long will it be before we can replicate that in a brute-force way across much more complex activities? And, at that point, is the bright line of self-awareness really the point?
Providing sensory input to computers was among the first things AI researchers worked on - it was a topic when I took AI 40 years ago, and there has been a ton of progress since. This is the least of the issues involved. In any case, it is very easy to digitize a sample set of sensory inputs and apply them to your AI running on a network for training and testing purposes.
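For instance (a throwaway sketch of my own, not any particular lab’s setup; the data and the perceptron are purely illustrative), “digitizing a sample set” just means turning the sensor readings into vectors of numbers, splitting them into training and test sets, and fitting whatever learner you have on them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are digitized sensor readings: 200 samples, 64 features each,
# labeled 0/1 (say, "object present" vs. "not present").
X = rng.normal(size=(200, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(float)   # a simple hidden rule to learn

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# a single-layer perceptron, trained with the classic update rule
w = np.zeros(64)
b = 0.0
lr = 0.1
for epoch in range(50):
    for xi, yi in zip(X_train, y_train):
        pred = float(w @ xi + b > 0)
        w += lr * (yi - pred) * xi
        b += lr * (yi - pred)

accuracy = np.mean((X_test @ w + b > 0) == y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```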
Hell, my phone has a level of pattern recognition only dreamed of 40 years ago.
Well, let’s put it this way: Suppose ST:TNG matter-replicator technology were possible: If you were to replicate a human being, would you inevitably get a dead one who could not be “revived,” because there is some kind of soul or spirit or vital essence that is not replicated because it is not matter? Because that seems to be what you are saying. If not, then it’s purely a problem of material complexity, isn’t it? The closer we can model the natural brain, the closer we can get to something that works like it and shares its self-awareness – right?
Obviously a first step is to build a machine that plausibly seems at first glance to be self aware, we’re not talking about proving that a rock isn’t self aware.
But if there’s a difference between fake self awareness and real self awareness, there has to be a test you could do to tell the difference. Otherwise, I assert that there is no difference. If there is a difference, you should be able to tell me what that difference is. Since you can’t, there isn’t one.
Probably. Brain architecture evolved over time; computer architectures are designed. Fundamentally different technologies mean that it makes no sense to mimic the brain in hardware. We might simulate the brain, or do something completely different, but there are no multiple-core brains (besides Steve Martin’s).
Computers are self aware now. They aren’t aware of much aside from themselves (or other computers they are connected to in a network). Computers are much simpler than human brains, and self-awareness isn’t that big a deal for them. Humans are minimally self aware compared to computers. We can’t read our underlying ‘programming’ or even determine the line between the ‘hardware’ and the ‘software’.
The issue is I have a test that works only for myself. I have direct awareness of my own sentience. I can’t prove to you I’m sentient, but I know that I am. And therefore, I know that sentience (or “self-awareness” or whatever you want to call it) is something that really exists.
But for anyone outside my self, I don’t have direct awareness of their sentience, and I don’t have any way to distinguish a sentient being from a being who can fake it perfectly.
However, I agree with the point that we haven’t even yet managed to create a being who can fake it perfectly, so perhaps it’s premature to worry about how we can tell if they’re faking it.
What counts as “self-awareness”, though? Consider the following:
That sentence contains a complete description of itself (capitalization and punctuation aside), but does that make it “self-aware”? Not in the sense that I, a human being, am self-aware.
Of course, a computer can do more than tell you its source code. It can accept input and output, it can process information, etc. But I’m not sure that all of those things imply that it’s self-aware or sentient in the sense I am – in the sense I mean when I say “I have direct awareness of my sentience”.
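Here’s the same idea in runnable form (a toy example for illustration only): a two-line Python program that prints a complete description of itself, namely its own source code, with nothing we’d want to call self-awareness going on. It “knows” its own text in the same sense the sentence above does.

```python
# a classic quine: the output of this program is exactly its own source
s = 's = %r\nprint(s %% s)'
print(s % s)
```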
These discussions are complicated by the fact that I can’t really show you an example of what I mean by my own self-awareness. (If I could, I could probably use that example to construct a test of self-awareness.) All I can say is “You know, that awareness we all (presumably) have of thinking and feeling and existing and so forth. That first person perspective that we all (presumably) experience directly.” If you really were a “philosophical zombie”, I’m not sure you’d even understand what I was talking about (but if you were a perfectly deceptive zombie, you might claim that you did).
I think that a lot of people underestimate the size of this hurdle. It wasn’t too long ago that people assumed that glial cells were just glorified scaffolding for neurons. Now it appears that astrocytes play a significant role in cognition, and that “neural network” may be a misnomer.
I don’t disagree with what you are saying about humans, but I’ll point out that computers don’t have all those other abilities you have to be aware of. Computer self awareness is always going to be limited to the ‘self’ of the computer, which is right now very simple. Your sentence is also somewhat self aware, as far as the self of a sentence is concerned. I think the underlying question might be: will computers be self aware when they can demonstrate the high-level thought capacity of a human brain? I think the answer will be yes. When a computer can read sentences with the comprehension level and ability to reason that a human has, I’m pretty sure it will have better self awareness than we do now. But for current computer technology, they are very self aware.
Do you, though? Forget about proving it to us– Can you prove to yourself that you have self-awareness? If you were just a machine which processed inputs into outputs according to some rules, would you be able to tell the difference?
This circularity - which Descartes attempted to break through with Cogito Ergo Sum, and which Turing turned on its ear with his test - is exactly my point. Trying to define self-awareness in a bright-line sort of way is academic sophistry.
And will such a definition matter, if in the meantime we are surrounded by physical and online 'bots that provide us forms of interaction - to serve and to entertain - at levels of sophistication we find…adequate?
That’s an interesting point, TriPolar (re: judging “self-awareness” relative to the complexity of the “self” in question). That’s not how I would have gone about defining self-awareness, but it’s definitely an interesting way to think of it.
“Divine spark”? :rolleyes: Why not say that we can’t make self aware AI because Odin doesn’t will it? When has this sort of vitalism ever turned out to be true?
And also, where is the evidence that we have “original thoughts”, going by the definition that seems to be used here? We have our own biologically derived programming interacting with a great deal of outside input - I see no evidence that any magical force beyond that is necessary to explain human creativity.
I see this a lot in discussions about AI; people say that we can never create AI because computers lack some “X” quality that humans have, without ever bothering to demonstrate that humans really do have that “X” quality.
Because no one has designed it to resist being turned off, for obvious reasons. There are programs that try to resist destruction, in life simulations and such; there are living organisms that destroy themselves according to their biological programming. And of course there are humans who suicide or knowingly set out to their death. A survival drive isn’t what makes life alive, and has nothing directly to do with intelligence/self awareness.
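To put the mosquito point in concrete terms, here’s a minimal sketch (a made-up toy model, not anyone’s actual simulation): agents carry a heritable tendency to dodge the swatter, and nobody programs in a survival drive; it simply gets selected for, because the careless ones don’t leave offspring.

```python
import random

POP, GENERATIONS = 200, 40

# each agent is just a number in [0, 1]: probability of dodging the swatter
population = [random.random() * 0.1 for _ in range(POP)]   # start out mostly careless

for gen in range(GENERATIONS):
    # agents with a higher dodge tendency are more likely to survive the swatter
    survivors = [a for a in population if random.random() < 0.5 + 0.5 * a]
    # survivors reproduce (with a little mutation) until the population is refilled
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
        for _ in range(POP)
    ]

print(f"average dodge tendency after selection: {sum(population) / POP:.2f}")
```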