Will a human-built machine constructed of silicon, etc. (i.e., a traditionally built computer) ever achieve self-awareness? (I am just trying to differentiate it from some of the biological systems theorized about of late.)
Not necessarily sapience or reasoning, but self-awareness in particular.
Either way, why?
Of course it’s possible. There’s nothing a meat brain can do that’s inherently impossible for a silicon brain. There are a lot of things a meat brain does that we don’t understand how it does them, but both kinds of brains are based on the same principles of physics.
Will it happen? Probably, eventually, if we don’t stop building computers first. But who knows when.
Yes. It’s just like flight; the existence of birds demonstrated that heavier-than-air flight was possible centuries before we had any idea how to pull it off. By the same token, the existence of self-aware organic brains demonstrates that self-awareness is possible even if we can’t duplicate it yet. There’s nothing so physically exotic about the brain that there’s any reason to think its function can’t be artificially duplicated; it’s not made of neutronium or strange matter, just common elements.
Run a simulator that knows how to simulate basic physics and chemistry, model the womb, add some sperm, and just let it play out. We don’t need to understand how the brain works to make a brain; we just need enough RAM and CPU to model a sufficient number of atoms.
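(Just to put a toy-scale sketch on what “model a sufficient number of atoms” means computationally: below is a brute-force physics loop in Python with a made-up pairwise force and a few dozen particles. It’s purely illustrative; an atoms-up simulation of a womb would need something like 10^27 atoms plus quantum chemistry, which is exactly where the RAM-and-CPU problem lives.)

```python
import random

# Toy brute-force physics sketch: velocity-Verlet integration of N point
# particles under an invented pairwise repulsive force. Purely illustrative;
# a real atoms-up simulation of an organism would need on the order of 1e27
# atoms plus quantum chemistry, which is where the RAM/CPU argument bites.

N, DT, STEPS = 50, 1e-3, 1000
pos = [[random.uniform(0.0, 1.0) for _ in range(3)] for _ in range(N)]
vel = [[0.0, 0.0, 0.0] for _ in range(N)]

def forces(positions):
    """O(N^2) pairwise repulsion, roughly 1/r^2 (made up for illustration)."""
    f = [[0.0, 0.0, 0.0] for _ in positions]
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = [positions[i][k] - positions[j][k] for k in range(3)]
            r2 = sum(x * x for x in d) + 1e-9       # softened to avoid blow-ups
            for k in range(3):
                push = d[k] / (r2 ** 1.5)
                f[i][k] += push
                f[j][k] -= push
    return f

f = forces(pos)
for _ in range(STEPS):
    # velocity Verlet: advance positions, recompute forces, then velocities
    for i in range(N):
        for k in range(3):
            pos[i][k] += vel[i][k] * DT + 0.5 * f[i][k] * DT * DT
    f_new = forces(pos)
    for i in range(N):
        for k in range(3):
            vel[i][k] += 0.5 * (f[i][k] + f_new[i][k]) * DT
    f = f_new

print("final mean speed:", sum(sum(v * v for v in vi) ** 0.5 for vi in vel) / N)
```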
I’m not convinced it’s all in the modelling. I think that in order to be self-aware, a computer must have sensory input as well. It needs to learn to be self-aware, just as human babies, for instance, aren’t born self-aware but acquire it.
How do you define self-awareness? If a computer is designed to evaluate all of the brute-force alternatives to a given set of events and it comes up with responses that mimic self-aware behaviors, when does mimicking morph into actual self-awareness, if ever?
I guess my point is: plenty of situations are coming where the distinctions will be meaningless.
Given our current approach to computer design, we appear to be coming at more complex reasoning in computers in a way that is different from that of biological brains: we give computers the ability to “out brute force” the inductive approach of bio brains.
But we have so many examples where a digital technology is approaching an organic one that, in enough applications, the distinction is negligible.
So, to me, a better question is: what specific activities seem to cross the line into machines acting sufficiently self-aware?
If robot-companions (servants/pets) become sufficiently responsive to an individual’s needs?
If a “net bot construct” designed to facilitate chats and interactivity on a social website actually maintains a lead role vs. just getting the conversation started?
Before I worked in programming, computer self-awareness sounded plausible. I’ve since come to the conclusion that a computer is as likely to become sentient as a toaster is. The closest you could come is programming a computer to mimic self-awareness, but being able to do that doesn’t invoke the divine spark.
I disagree. I think a computer is as likely to become sentient as an abacus is.
A computer simply performs operations.
While it might be possible to artificially create something which is sentient, I seriously doubt that it is possible to accidentally create a device with the potential for sentience, since it is very, very hard to intentionally create a device which is sentient (so hard that humans haven’t managed it yet).
A brain simply performs operations, too. We have pretty much complete understanding of how a neuron works, and we can accurately simulate neurons with computers.
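To give one small, hedged illustration of what “simulate neurons” looks like in practice, here is a leaky integrate-and-fire model in Python. It’s the standard textbook simplification rather than a full Hodgkin-Huxley simulation, and the parameters are illustrative, not fitted to any real cell:

```python
# Minimal leaky integrate-and-fire neuron: a standard textbook simplification
# of how a membrane accumulates input current and fires a spike at threshold.
# Parameters are illustrative, not fitted to any real cell.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-65e-3,
                 v_reset=-70e-3, v_threshold=-50e-3, resistance=1e7):
    """Return spike times (s) for a list of input currents (A), one per time step."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # membrane potential leaks toward rest and is driven by the input current
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_threshold:              # threshold crossed: record spike, reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Example: a constant 2 nA input for 0.5 s produces a regular spike train
spikes = simulate_lif([2e-9] * 5000)
print(len(spikes), "spikes; first at", round(spikes[0], 4), "s")
```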
Quoth MrDibble:
OK, I’ll grant that. But there’s no reason a computer can’t be built with sensory input, of most of the same sorts that humans have. The laptop I’m sitting in front of right now can be reasonably said to have sight, hearing, and touch, and if taste and smell are actually essential, then there’s probably some way to arrange those, too.
No one’s proposing that a self-aware machine would be hosted on a Dell.
True AI might very well require things like (1) massive parallel processing, and (2) analog or non-discrete states and signals — the things that natural brains are made of. Also, some like Roger Penrose argue that the brain might depend on quantum mechanical effects, which ordinary computers don’t use (yet).
Yes. A computer is never going to have an original thought - or any thought for that matter. It will only ever do what it has been programmed to do, no matter how complex that may be.
How does any of this lead to sentience? Normally skeptical people accept a lot of woo when it comes to AI; it’s the same desire to create life from the non-living as a golem or Frankenstein’s monster.
Hmmm…has anyone managed to create a computer that gets upset at the idea that you might permanently turn it off?
I could see the possibility of a computer being self-aware in some way. But the true miracle of the animal kingdom seems to me to be this matter of volition, and it’s certainly not just a human thing. A mosquito, at least at some primitive level of mosquito consciousness and intellect, cares whether I swat it or not, and will try its damnedest to get a meal out of me without being swatted. The computer I’m sitting in front of, not so much.
Yes - hence my re-framing of the OP: at what point will a computer’s ability to replicate self-aware behavior become close enough, within the execution of some functions, that the distinction becomes negligible?
Even if you built a computer that responded exactly like a human being to every possible stimulus, I’m not sure that this would necessarily imply sentience. I.e., I think it’s possible in principle that a computer could be indistinguishable from a sentient being without actually being sentient – a philosophical zombie. I’m not saying it necessarily would be that, but it’s possible (or at least, I can’t say with certainty that it’s impossible).
Some will say “indistinguishable from sentience” = “sentience”, but I’m not convinced. I have direct awareness of my own sentience, but as far as I know everyone else in the world could merely be an automaton who both perfectly duplicates the appearance of sentience, and falsely reports having direct awareness of their own sentience, without actually having it. I assume everyone else is as truly sentient as me, because the alternative is to assume that I occupy a privileged position in the universe, which I think is unlikely. With a computer, whose origins and inner workings are drastically different than my own, I have less reason to trust this assumption.
If I knew the details of that, I could quit my day job.
If you believe that sentience requires some sort of “divine spark”, some non-physical thing in addition to the physical, then yes, you’d conclude man-made AI is impossible. The discussion is pretty much over then.
But aside from appealing to our ignorance about where exactly consciousness comes from, there’s no reason to think that our minds are anything other than the result of brain activity — purely physical brain activity. And if Mother Nature, through natural selection, could find out how to make a sentient being out of physical parts, it’s at least plausible we could figure it out too, especially since we’re going about it deliberately.
I’ll grant you that actually inventing AI might very well turn out to be a monstrous, Frankenstein-ish mistake. Of course that didn’t stop us from inventing nuclear weapons either. We as a species aren’t exactly known for our caution or long-term thinking. I was just addressing what’s possible, not what’s wise or useful.
I’m not so sure there’s any self-awareness in that mosquito. Insects are capable of some pretty elaborate behavior which, when tested, turns out to be very rigid and computer-like. (Witness the digger wasp, for example.) A mosquito might be no more aware than a chess program that “wants” to win, and acts in all ways to protect its king from checkmate, but which we can be pretty sure has no real volition.
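For what it’s worth, the chess analogy is easy to make concrete. Below is a toy minimax searcher in Python (over the simpler game of Nim, to keep it short): it plays “as if it wants” to win purely by scoring moves and taking the maximum, with nothing resembling volition anywhere in the loop:

```python
# Toy minimax on the game of Nim (take 1-3 stones; whoever takes the last stone
# wins). The program "wants" to win only in the sense that it mechanically
# returns the move with the highest score; nothing like volition is in the loop.

def minimax(stones, maximizing):
    """Score a position from the first player's viewpoint: +1 win, -1 loss."""
    if stones == 0:
        # the player who just moved took the last stone and won
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Choose how many stones to take by exhaustive look-ahead."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, maximizing=False))

# From 10 stones it plays the standard winning strategy (leave a multiple of 4),
# yet there is no sense in which it "cares" about the outcome.
print(best_move(10))  # -> 2
```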