Individual neurons function in a relatively straightforward way; you can look up the details but basically they combine signals from other neurons to produce an output signal to still other neurons.
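(To make that concrete, here's a toy sketch of the "combine inputs, produce an output" behavior: a simple threshold unit in the McCulloch-Pitts spirit, not a realistic neuron model. The weights and threshold are numbers I made up just for illustration.)

```python
# Toy "neuron": weighted sum of incoming signals, fire if it crosses a threshold.
# This is a cartoon of the behavior described above, not a biologically accurate model.

def neuron_output(inputs, weights, threshold=1.0):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Three upstream neurons: two excitatory, one inhibitory (made-up weights)
print(neuron_output([1, 1, 1], [0.6, 0.7, -0.5]))  # 0: does not fire
print(neuron_output([1, 1, 0], [0.6, 0.7, -0.5]))  # 1: fires
```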
Imagine a tiny silicon chip that duplicates the function of a neuron. Replace one of the 86 billion neurons in your brain with this chip. Does anybody think that their mental capacities or consciousness would be affected? OK, replace 2 neurons. Then 1,000. Then a billion. Is there some “threshold” at which consciousness ceases? Why? If not, why can’t a computer be fully conscious?
And then there’s the idea, popular among some researchers, that what we call consciousness is simply a story that we make up after the fact to explain things that have already happened. There is evidence that we only become conscious of our intention to do something AFTER the signals to the motor neurons have begun. Think of a baseball player taking a swing at a 90mph fastball.
But there certainly doesn’t seem to be anything about a neuron that can’t be duplicated by silicon, at least in principle.
OTOH, here’s the problem of consciousness in a nutshell: could (in principle) a programmer create a computer that simulated a human mind in every way but without it having the actual subjective experience of consciousness? IOW, it would pass the Turing test, being able to produce outputs indistinguishable from those of a conscious being, but only as a simulation, without anyone “on the inside” experiencing qualia or self-awareness. If it isn’t possible, why not? If it is, how would you know, and how would it be different from a truly conscious computer?
Well, the ‘chip’ is just a physical artifact; not itself inherently a computer. If it can duplicate the behavior of a neuron, then it can replace a neuron; but it wouldn’t necessarily compute. What it does, as you said, is produce a certain electrochemical response upon being given a certain electrochemical input.
This can be interpreted as a computation: say, as computing some logical function of a certain input. But therein lies the crux: interpretation is itself something that depends on consciousness (or at least, on intentionality). Nothing computes if it is not interpreted by a mind to compute; consequently, computation cannot lie at the foundation of mind—the idea is circular.
This doesn’t mean you can’t build a conscious brain out of your neuron-like chips; but it wouldn’t be conscious by virtue of the computation it performs, but rather, by virtue of its irreducibly physical properties.
The circularity here is even worse: nothing can make up a story, be deceived into having a false belief, and so on, if it isn’t capable of having beliefs and making up stories in the first place—that is, if it isn’t conscious. Or, as I’ve heard it put, ‘if consciousness is an illusion, then who is it that’s being fooled?’.
Sure, the behavior of neurons seems (in principle) mechanistic enough. But afaik no one has claimed that we can model a neuron perfectly; that we know exactly what the output will be for a given input in all circumstances. IOW, we haven’t “reverse-engineered” neurons yet.
Perhaps “chip” is the wrong word. IMO, in order to create AI with self-awareness we need to start from artificial life. We don’t need logic circuits, we need an analog, electro-chemical computer. Some matter that behaves “lifelike”, like the droplets of matter in this video, maybe.
(Discussions about whether an AI computer would become self-aware are pointless - that’s beyond our grasp. Not only because we don’t fully know how our brains work, but because of the sheer magnitude. Is it possible? My intuition tells me “yes”, but with big enough numbers anything is possible in the thought-experiment universe. We don’t need big computers. Just a pile of plastic bottles and 10^gazillion years.)
If you simulate the human mind in every way you have something that is the same as a human mind in every way except for its physical manifestation. The human mind is self-booting, so you might have to deal with the growth of the human mind starting from some clump of cells until maturity, but assuming you simulate that as well, there’s no way that the simulation won’t experience self-awareness as we do, and qualia, if that is even a thing in the manner it is typically described. Unless you believe in magic or that there is some unknown physical property to the human mind, a human mind is just one possible physical configuration for a self-aware machine.
I still can’t figure out the controversial nature of this concept. Why should it matter what the physical configuration of the machine is if it performs the same processes? The machine that believes it is self-aware because of its thought processes is just as self-aware as we are. I’ve said before that the simple computer I’m typing on now would be self-aware within its scope of processing if it were programmed in such a way. What it is aware of may be limited, it may not be able to expand its awareness in what we would consider a useful way, but its awareness wouldn’t be totally different in a qualitative way.
TriPolar, I agree with everything you wrote, but none of it answers my questions. Would it be possible to create a partial simulation that produces the same outputs as your perfect simulation, but *doesn’t* experience self-awareness?
I do not think consciousness is special. It is an emergent behavior of a large number of densely connected units (neurons) that can switch on or off. Neurons can be partially activated as well, but that is not a fundamental change. You can simulate a partially activated neuron using multiple binary neurons. Given enough computing power, we should be able to simulate the individual neurons, and the entire brain.
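(Rough illustration of that last point, with an averaging scheme I'm assuming just for the sketch: a neuron that is, say, 30% activated can be approximated by a population of purely on/off neurons that each fire with probability 0.3.)

```python
import random

def binary_neuron(p):
    # A purely on/off neuron that fires with probability p
    return 1 if random.random() < p else 0

activation = 0.3                      # the "partial activation" we want to mimic
population = [binary_neuron(activation) for _ in range(10_000)]
print(sum(population) / len(population))  # ~0.3: the population acts like one graded neuron
```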
One of the important features missing from the chip analogy, however, is the ability to form new connections. To the best of our knowledge, learning or remembering something is basically forming new connections and pathways. Artificial neural nets sidestep this by having all-to-all connectivity in each layer and associating a weight of 0 with the connections that aren’t (yet) in use. Forming a new connection then just means updating that weight. My understanding is that dealing with these O(n^2) or more connections is the limiting factor in simulating brains with current technology.
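(Here's roughly what I mean, as a sketch with arbitrary numbers: all possible connections in a layer sit in an n x n weight matrix that starts at zero, so "growing" a connection is just writing a nonzero weight, and both the storage and one propagation step cost O(n^2).)

```python
import numpy as np

n = 1000                      # neurons in the layer (arbitrary size)
W = np.zeros((n, n))          # all-to-all connectivity, every connection "absent" at first

W[42, 7] = 0.35               # "forming a new connection": just update a zero weight

activity = np.random.rand(n)  # current activity of the n neurons
next_activity = W @ activity  # one propagation step: O(n^2) work over O(n^2) weights

print(W.size)                 # 1_000_000 stored connections for n = 1000
```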
So when said computer/robot realizes it’s alive, will its “hello, I’m here” be Skynet raining down nukes? Or will it be helpful, like a RoboCop type of thing helping humanity out?
If you can program the simulation in a way that would prevent the process of self-awareness, then it would be very difficult to create a simulation of the human mind that can pass a Turing test without revealing a lack of self-awareness. I’d probably ask it something about a tortoise ;). I’m not sure quite what you’re asking, though: are you only asking about its active self-awareness, for instance the ability to respond as a human would in a Turing test when presented with a hypothetical question about its awareness?
Define “self-awareness”. For any definition you come up with, either a machine can have it, or a human can’t have it either. If you think it even makes sense that a human can have it but a machine can’t, then that’s just because you don’t even know what it is.
Can machines ever truly believe the mythologies they create?
We can build a machine that’s creative and we can program a computer to create mythologies … these created mythologies might even be believable … but can the machines themselves ever be said to believe, or must humans program into the machine this belief?
How would a star explain her existence? … how would she explain her essence? … how would she react if the harbor was closed and the town was quarantined with nothing but Elizabeth Olsen movies at the theater?
I’m sticking with trial and error … humans got lucky having consciousness … keep rolling the dice and the universe will get lucky again … we have all the time we need after all …
Excellent answer. Link this with the person who pointed out that we really DON’T know completely how neurons work, or even if they are the fundamental and discrete structure of consciousness, and you have the best complete answer.
In short, the question is interesting as far as we’ve gone in understanding ourselves, but it’s entirely spurious overall, because we don’t completely understand ourselves.
Something I like to include in musings like this, is the fact that there is huge disagreement about how conscious non-humans of any kind are. It’s an extension of the old arguments I used to hear about whether or not non-human animals have souls, in the religious context, or for that matter, whether or not a human embryo should count as a person or not.
As for substituting electronics for brain cells, last I heard, we have yet to be able to make electronics which can be directly accessed by existing brain cells. The most we can do is monitor and amplify some of the brains’ outputs, and train people to use them to control electronic circuits, artificially.
Humans have consciousness (we postulate!), and yet they are indisputably machine-like and animal-like, definitely not transcendent deva. Thus it seems evident that automata and such can be conscious, by our standards.
I take it for granted that kitty cats, pigs, snakes, and squirrels are conscious, that they have feelings (sensations and emotional content) and cognitions (they process things, they recognize patterns, they learn, they anticipate futures based on those learned patterns and they make choices accordingly). That also makes them creatures of free will. I’m less certain that mosquitos, ants, goldfish, earthworms, ticks, snails, and slugs are conscious, but I’m way far from certain that they are not. If I could try it out and “come back” immediately as myself, I would be very curious to experience what it is like to be one, to see what degree (or lack thereof) of awareness exists there.
To be clear, I’m a materialist. I have absolutely no doubt that a computer or other machine could be made to be conscious. That doesn’t mean that there is nothing weird about consciousness.
If you define consciousness (or self-awareness or whatever related term you want) as “able to pass a Turing test,” then the weirdness goes away, but that’s not what I mean by any of those terms. What I’m most interested in is experience, subjectivity. Many philosophers refer to consciousness as an “emergent property,” as if that actually answered any questions about it. It certainly appears that consciousness is an emergent property, which is to say that no amount of understanding of the substrate of consciousness, whether biological or silicon, will allow us to calculate the properties of consciousness: as AHunter3 points out, identifying consciousness always involves extrapolating from external observation.
That may not be significant, or even interesting, but it is … I hesitate to say, “mysterious,” but certainly weird. There are plenty of things in nature that we can’t observe directly, but we generally assume that, with the exception of quantum stuff like Heisenberg’s uncertainty principle, those phenomena are observable in principle, just not in practice. Likewise, there is nothing else (with the exception of some quantum weirdness) that is emergent in the same way, nothing that I know of that we think can’t be predicted and understood by calculating information about a more basic level. The common popular examples of emergence are things like the behavior of macroscopic quantities of water being unknowable based on knowledge of water molecules, or the behavior of complex systems like weather patterns and human society being unpredictable based solely on physics and chemistry (even though there is nothing else going on but physics and chemistry). But according to my understanding of chaos theory, while the universe may not have enough bits of information to do the calculation, that’s still a practical problem. Nothing but subjective experience currently appears to be unknowable in principle apart from indirect inference, even given complete knowledge of the physical system. Subjectivity appears to be so by definition! (If it weren’t, it would be objective, not subjective.) And consciousness does seem to involve subjective experience.
Now, some people say that that is so weird that it can’t be the case. I disagree. The things those people posit instead are all even weirder and far less parsimonious or compatible with the evidence. And if you start with the assumption that physical systems can have subjectivity, then the universe we observe seems perfectly consistent with what you would expect. Again, subjectivity by definition can’t be observed in another being, so if it exists, you would expect it to be “weird” in the way I’ve described, even if it is purely physical. The weirdness may be purely epistemic.
But still. We know subjectivity is possible only because we experience it ourselves. (Well, I do, anyway.) An alien with far greater intelligence but no experience of subjectivity would be perfectly justified in rejecting any claims by humans to having subjective experience. (Indeed, such claims would be incomprehensible to said alien.) And no amount of knowledge about the universe would allow the alien to conclude that it was wrong. And again, it seems only to be true of subjectivity. Weird, huh?
I don’t find consciousness or self-awareness weird. Not even all that complicated. Intelligence is extremely complicated, and emotions are often weird for me but not for everyone.
At some lowest level there are machines (biological or otherwise) that respond in some way to sensory inputs. I don’t know if there’s any point in saying that a moth circling a light bulb is acting consciously based on the sensory input, but get a little more complicated and some pretty simple animals and machines analyze sensory input and make ‘decisions’ on how to act. As brains get more advanced the process gets more complicated and moves away from simple deterministic programming. I would consider this lowest level to be consciousness, although I don’t know if that fits any strict definition.
Self-awareness is a bit more complex; it requires some degree of reflection, the ability of the machine to observe and analyze itself, and perhaps to understand its individuality in that the rest of what it can observe is not itself and not under its control. That is still not something all that complex.
Subjectivity adds a little more complexity, but only requires that the machine is not purely hard-wired in its processing. As long as its sensory input is processed in a way that affects future decisions based on experience, there will be subjectivity. And if the software that processes that input is self-modifying based on that input, the subjectivity becomes apparent.
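(A toy example of what I mean, with a made-up update rule: the same stimulus gets a different response once earlier inputs have modified the machine's internal state.)

```python
class ExperiencedAgent:
    def __init__(self):
        self.bias = 0.0                       # internal state shaped by past input

    def perceive(self, signal):
        # Processing an input also changes how future inputs will be judged
        self.bias = 0.5 * self.bias + 0.5 * signal

    def decide(self, signal):
        # The response depends on the machine's accumulated experience
        return "approach" if signal > self.bias else "avoid"

agent = ExperiencedAgent()
print(agent.decide(0.5))      # "approach": no history yet
for s in [0.8, 0.9, 0.7]:     # a run of strong stimuli raises its expectations
    agent.perceive(s)
print(agent.decide(0.5))      # "avoid": same input, different response, because of experience
```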
Emotion is much more difficult to consider. Emotions are that low-level kind of reaction to stimulus that lies below the conscious level, but at the human level they have evolved in very intricate ways that balance their low-level function of enabling survival with the conscious decision making that enables survival in a completely different way. Still, while simulating love in a machine may be a difficult thing to work out, simulating simple motivational emotion based on the need to make optimal decisions isn’t all that difficult. Animals have a hard-wired basis for choosing survival and a machine can be programmed in a similar way.
Intelligence is the real tough one to work out, and it’s what trips up machines taking Turing tests. If a machine is intelligent enough and has sufficient resources it can pass the Turing test without much complexity in consciousness, self-awareness, subjectivity, and/or emotion. It simply needs enough information and processing power to pretend that it has those qualities at a human level even if it doesn’t. Humans can do it: a sociopath can pretend to have empathy for other humans; he just needs to know how normal humans exhibit that characteristic.
What seems unlikely to me is that a machine could be intelligent enough to pass a Turing test without being able to use the indications of intelligence it displays to do more than pass a Turing test. Perhaps it could be designed merely for the function of passing a Turing test, having nothing like ‘free will’ to do any more than respond to questions from a remote human for the purpose of passing a Turing test, but even then it is merely lacking a motivational basis to do anything else.