Case 3 is slightly different from case 2 in that there need be no “cause” of our actions. But I tend to agree with you that this can’t be free will.
But in case 4 our brains are the same as they are in case 2. If there is some motivator of free will in case 4, it should also appear in case 2. If you consider case 4 to represent free will, and the other 3 cases not to, then free will simply represents the case that no entity in the universe can predict your future actions.
Is this enough? We can’t predict radioactive decay either, and I don’t think we can call that free will. Does a dog have free will? Is then free will tied to consciousness in some way?
I’m resisting offering a definition since the moment you define this term, you collapse the discussion down to a point where the answer is obvious - and we know it isn’t. I contend that the very term free will - the will part - assumes something is driving us, and that the question is whether this something has reins or not. The term doesn’t encompass the possibility that nothing is driving, that our neurons fire and that some program running on top of the brain reads will into the firings, just as we interpret the random firings of our neurons during sleep as very realistic dreams. Do we actively control our actions in dreams? I don’t. Maybe dreaming is like being an animal - observing, doing, but not doing it consciously.
Other than that’s not my definition of free will (which is some form of compatibilism, I gather), sure.
As has been noted, this all depends on the definition you choose for free will. By my definition, a wind-up toy could be argued to have free will. If you tie it to mere unpredictability, then if a human has free will so does a six-sided die (unless you claim that the randomness is limited to some specific place like a fictional soul, which would then enable you to arbitrarily decide which things had souls - dogs, dice, Caucasians only; whichever).
Well, not having a definition will indeed tend to prolong the debate - “Free will tastes great” “Say what?” “I was thinking it was a type of cheese, but now I’m thinking it’s a particular hairstyle.”
And, heh, I figure that if something that is “not I” is driving “I”, then “I” do not have free will. Which may be what you meant by ‘reins’ - but in that case I don’t think that the reins matter quite as much as deciding what qualifies as part of “I”. There’s no shortage of reins, after all; my nervous system pulls on my brain’s reins, which pull on my muscles’ reins, but they don’t count since they’re all part of “I”.
So, are you arbitrarily saying that you can’t accept that the complex interactions between firing neurons might be able to generate free will? Why not? I mean, clearly they can generate thoughts…
And this dream business confuses me. Aside from the fact that I occasionally do control my actions in dreams (though I don’t control anything besides my ‘avatar’), I have to wonder how you get from dreaming to a lack of consciousness. If you weren’t conscious in your dreams, you wouldn’t remember them, would you?
And how do you know animals aren’t conscious, by the way?
(How do you know that interactive computer games aren’t conscious?)
((How do you know that humans besides yourself are conscious?))
In my experience, people having two different definitions for the same term prolongs an argument, since they start talking at cross purposes, and don’t even understand what the other is saying because it makes no sense in the light of the definition they are using. If I define free will to involve intelligence, we could have a big fight about your wind up toy, but we’d really be arguing about our arbitrary definitions. At the moment this discussion is far more productive than that.
Good point. If you put your hand on a hot stove, you take it away by an order from your nervous system, and your “I” convinces itself that it decided to take it away. (For example.) Where is your “I”? If your subconscious solves a problem without conscious thought, did your “I” do it or not? If your answer is that your “I” didn’t solve the problem, then something besides your “I”, but still in your brain, is driving. External factors influence that, but don’t quite control it.
Damon Knight called his subconscious “Fred” and wrote about setting up story problems for Fred, who would then give an answer. He wrote as if “Fred” were outside of his “I”. I feel the same way - I set up a problem, ship it off to my subconscious, and wait for it to go “ding” and hand me the answer. This works very well.
Depends on your definition of free will.
I remember things I do while conscious; I don’t remember most of my dreams, and the few I do remember only fuzzily. Do you really control your dreams, or do you only dream that you do? If I controlled my dreams I’d be spending a lot less time wandering large hotel lobbies and more time with <name of hot starlet censored here>. I don’t know about you, but in my dreams I blindly accept very absurd situations. So no, I wouldn’t call myself conscious during them.
Some of them? My smart dog is pretty close. But they don’t have the verbal skills to discuss their inner thoughts, and so we’ll never know.
Well, I got no bones about discussing six or eight different definitions of the term at once either - so long as we’re clear which one we mean for any given contestable assertion.
I save myself the trouble and just include everything inside my skull and halfway down my spinal cord as being “I” when I say things like “I decided”, and everything inside and including my skin when I say things like “I did”. Makes it easier.
If I had a soul, it would only be me if it did not have an awareness of its own that included other things besides my consciousness - as in, if it has its own mind and/or personality, which yawns and goes “ho-hum” when my physical body is in pain for example, then it’s not “I” - it’s a puppeteer.
Fair enough - but if a free-willed consciousness cannot arise from complex interaction within a high-speed data processing system, then I would have to wonder what mechanism a soul could possibly use to generate such a thing. I will not accept that it’s just an indivisible ineffable thing that happens to have consciousness and self-awareness through the magical power of farts and fairies. I consider the human mind (at least my human mind) to be far too complex to be housed in a simple homogeneous blob-thing; where there is will, there must be an underlying mechanism with sufficient complexity to support that will. (Like, you know, the human brain.)
Hey now, I just told you I only control the avatar, not the entire dream world. So if my avatar is stuck in a hot-starlet-impaired hotel lobby, that’s where it’s stuck, and there’s not a thing I can do about it, besides wandering around and hoping my slumbering subconscious coughs up a more interesting environment to occupy.
Of course, I also can’t make hot starlets appear in waking reality through force of will either, so I don’t see where having this sort of limitation implies I’m less conscious in my dreams than when awake.
Well, I know they’re not inert. And they’re clearly ‘conscious’ of their surroundings. And self-aware (at least enough to know whose balls to lick). What were the requirements again?
So, what happens if we figure out a way to scan and interpret a living human’s brain waves? (Hmm, isn’t there a thread about that around here somewhere?)
And, what if I don’t let you see the code? Does that make a difference? I can read the binary of an executable, myself…
So… a baby isn’t conscious?
(And what happens if a computer program - whose code you can look at - ever passes one?)
Yeah, I would consider those to still be free will. I don’t consider free will to be the opposite of determinism, I consider it the opposite of being restricted or controlled.
You could argue that knowing what is going to happen in advance is some kind of restriction, but someone else knowing what you are going to do is a very different situation from you yourself knowing what you are going to do.
Anyway it’s a ridiculous question, along the lines of “can god make an object so heavy he can’t lift it”. If the person already knows what they are supposed to do then they can make a different choice. There cannot exist a situation where the person both has knowledge of their future and also an inability to change that future. In the situation where it is someone else who knows their future, this is also an absurd question. You can say the same thing about your own past self. You can look at a video of yourself making some choice. From your perspective, that choice has already been made. But that does not somehow invalidate your sense of being able to choose freely at the time the choice was made. And the freeness refers to your personal experience, not the perspective of the space-time continuum.
So again, strawman - people are redefining “free will” in such a way as to create a paradox where the original definition did not create one. And sophistry - creating a self contradictory logical situation and pretending there is some legitimate issue at stake where the question itself has no meaning.
I thought of another way to look at it - there are a couple of different ideas being snuck under the fence here:
“Free will” - I see the conventional meaning to simply be “ability to make decisions” plus “self-awareness”. A puppet is under the control of another entity which is making the decisions, so it doesn’t have free will. Computer programs have the ability to make decisions (if-then trees, etc.) but don’t currently have self-awareness, so they don’t have free will. Determinism is not in conflict with either decision making or self awareness. Of course, self awareness may just turn out to be a complicated series of decision trees with some kind of feedback loop, but it’s enough of an added complexity for it to be useful to differentiate it from simple decision making.
“Freedom of will” - the ability to act on decisions. Determinism controls what decision you will make, but it doesn’t really affect the ability to act on the decision so there is no conflict here. There of course are other restrictions on freedom of will - you can make the choice to levitate but you don’t have the ability to act on that choice. You can choose to walk through a door, but are limited by whether or not it’s locked. But this is obvious and does not create any kind of obstacle to making the decision itself.
“Hard determinism” - it’s possible to know the initial conditions perfectly despite any quantum issues, either for us, or for some God entity or sentient being outside of our space time. Again, this doesn’t violate decision making or self awareness, and whether it affects freedom of will is a non issue because no one will argue that humans have complete freedom of will.
“Nondeterminism” - some people are just defining free will as “whatever violates determinism”, which as far as anyone can tell can only be either “randomness” or “magic paradox which we cannot determine the properties of other than defining it as the ability to defeat determinism”. As for true randomness, I don’t think that exists. And I think that the personal experience of free will is sufficiently explained by decision making and self awareness, and there’s no need to add anything else to the equation. The sense that we are somehow changing fate is just an illusion created by our lack of complete knowledge about our fate. The apparent conflict between free will and determinism boils down to the idea that it is logically possible for a subset of a system (a person’s knowledge) to completely mirror the entire system (their fate). The problem is with this logical impossibility, not with free will or determinism.
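The decision-making half of that “free will” definition is trivially easy to mechanize, by the way. A toy if-then sketch in Python (names and thresholds made up, purely illustrative) - the program makes a decision, but has no awareness that it is the one deciding:

```python
# Toy if-then "decision tree" (illustrative only): a program that decides,
# with no model of itself doing the deciding.
def decide(temperature_c: float, raining: bool) -> str:
    if raining:
        return "take umbrella"
    if temperature_c < 5:
        return "wear coat"
    return "go as is"

print(decide(3, False))  # wear coat
```

Entirely deterministic, yet it is still making decisions in the if-then sense - which is why decision making alone isn’t enough for the definition.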
You can also look at it like a logic problem. Decisions are based on knowledge. In the case where knowing the future perfectly would not change the choice you would have made with imperfect knowledge, then the outcome is the same and there’s no issue. In the case where you would have made a different decision with perfect knowledge, again there is no issue because you act on this knowledge.
The only issue that arises is when you are determined to “prove” your free will by making a different decision than the one you would make with perfect knowledge of both the universe in general and of the decision you are going to make. This sets up an impossible feedback logic of “if N then ~N”. What happens there?
You could probably make some sort of mathematical analogy with limits, approaching it as a problem of time delay between iterations of knowledge. Iteration 1 - you make a choice. Iteration 2 - you know what choice you were going to make, so you choose the opposite. Iteration 3 - you know you were going to choose the opposite of your original choice, so you choose the original choice. Etc. When you decrease the time between iterations to zero, it’s the same thing as letting the iterations proceed to infinity. In either case what is the final result? You either have some kind of non-choice, a superposition of both choices, or “undetermined” (something like 1 divided by zero).
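That iteration loop is easy to play out in code. A toy sketch (Python, purely illustrative):

```python
# The "choose the opposite of what you know you'll choose" feedback loop:
# each iteration negates the previous iteration's predicted choice, so the
# sequence alternates forever and never settles on a final answer.
choice = True
history = []
for _ in range(6):
    history.append(choice)
    choice = not choice  # knowing the predicted choice, pick the opposite

print(history)  # [True, False, True, False, True, False]
```

Like the limit of (-1)^n as n goes to infinity, the sequence has no final value - exactly the “undetermined” non-answer described above.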
As you can see, the problem is not a conflict between free will and determinism. It’s in asserting a logically impossible scenario and pretending that the problem is with the things you threw into the paradox, and not with asserting the paradox in the first place.
I think the paradox is between our mental view of what we do (the appearance that we choose different courses of actions) and the notion that 1 state at time t can only result in 1 state at time t+1. (Note: random events are included in this, they are just input into calculating t+1)
Our gut feel says one thing is happening but our logical analysis says it’s not possible (based on what we know of the universe).
Okay. I say “I solved the puzzle” also, since to the external world me and my subconscious are one and the same. But often my conscious mind has nothing to do with it.
I would guess that people who believe in souls think it is their “I” - because if they didn’t, the survival of the soul after death wouldn’t do them much good.
Actually I see no reason why a consciousness just like ours can arise on a computer. Looking at AI for 35 years makes me think we’re not heading in the right direction to do this, but I see no reason it can’t be done. The question is whether this demonstrates making free will on a computer or that we don’t have free will. Those who believe in souls I guess also believe in some “spark of life” - in other words, fairy tales. I’m a pure materialist myself.
If you really controlled your avatar, you could get out. And reality has the disadvantage of not being totally contained in your mind.
I was thinking of your specific example. You can look at the “brain waves” of a chip through e-beam probing, or by doing a scan dump. I’ve done that stuff. That doesn’t give you the design. I do think at some point we’re going to be able to read neuronal connections, put them on a brain simulator, and boot a consciousness. I actually think this is going to happen long before we get an AI to pass a Turing test.
But we’ll still have the problem of deciding whether we have created a computer with free will or proven that humans don’t have free will.
As for reading binary, the interesting question is whether the code will be self modifying, the way the first machine language code I wrote was. The first assembler I ever used I wrote myself.
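Self-modification doesn’t even require machine language to demonstrate. A toy sketch in Python (illustrative only), where a function rebinds its own name after the first call - so reading the original source no longer tells you what the running program will do:

```python
# Toy self-modifying behavior (illustrative): after the first call, the
# function replaces itself with a new implementation.
def greet():
    global greet
    def greet_v2():
        return "hello again"
    greet = greet_v2  # the program rewrites its own behavior
    return "hello"

first = greet()
second = greet()
print(first, second)  # hello hello again
```

Real self-modifying machine code is hairier than this, of course, but the point stands: the static binary isn’t necessarily the behavior.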
Not really. A small enough baby won’t pass the mirror test, and isn’t really aware that her fingers are part of her. That’s why you don’t remember your earliest days.
Thank you for offering a definition. By your definition, I agree.
Who is “you”? At some point obviously you go from not knowing what you are going to do to knowing. It seems that this point is earlier for the subconscious than the conscious - at least some of the time. Does that count as you knowing what you are going to do? And when the decision got made, is it a result of purely physical, partially random processes, or the result of some sort of will? In either case, by your definition, it is free will.
You’re right, and that is exactly why a bi-omni god is logically impossible. It’s not quite the same as the weight too heavy to lift, since that just concerns the definition of omnipotent, while here either omni is logically possible, just not both.
So again, strawman - people are redefining “free will” in such a way as to create a paradox where the original definition did not create one. And sophistry - creating a self contradictory logical situation and pretending there is some legitimate issue at stake where the question itself has no meaning.
Your definition is fine, and has the advantage of letting us determine if something has free will or not. It doesn’t answer the question of where the decisions come from. I think a lot of people have a problem considering decisions made being solely a function of the environment to be free. That’s why they are uncomfortable with your definition.
Hang out for a while in a discussion about the Problem of Evil sometime.
(Assuming you meant “can**'t** arise” in your first sentence: ) Personally I don’t feel that we need to wait for somebody to create an AI to know that a 'free will’ed mind emerges from complex functionality of something - I don’t think there’s any other manner in which it could possibly arise and still have the properties it has. However there’s nothing saying that there can be only one way to generate free will - so all that a free-willed AI would demonstrate is that our minds might be operating in the same or similar way, not that they actually are. Though, the mere possibility would probably still be enough to disturb some of the faithers.
And yah, the typical religious type seems to be all into indivisible ineffable things with no working parts or functionality. It sort of comes with the genre, I think.
I guess I don’t really control my living body, then, because I can’t teleport myself out of the universe. :rolleyes: Don’t get so dedicated to being right that you lose sight of what you’re arguing.
And yeah, when I dream, I’m playing out my thoughts inside my mind. Of course, as you yourself state, the human brain contains more levels of thought than just the overtly conscious mind. If my subconscious builds little scenarios in my mind and locks me into them for the duration of my dream (perhaps to let the janitorial staff have a crack at the rest of the brain while I’m in there), does that suddenly mean that my otherwise fully functional conscious mind becomes somehow less conscious while so trapped? If so, why?
Er, wouldn’t the act of doing that create an AI that could pass a Turing test?
And when a machine AI does occur, I think you’ll see humanity divided into varying camps - those that believe this demonstrates that free will is compatible with determinism, those that believe this demonstrates that humans have no free will, and those that don’t think that the AI has ‘real’ free will - or at least not the same kind as ours.
This is partly a dodge and partly just beside the point, which is that a Turing test requires by its nature the ability to communicate with language. Are you seriously claiming that you don’t believe people have consciousnesses all the way until they learn to talk?
You’re way too optimistic. Here is what I think will happen once an AI passes a Turing test.
The religious will say that it doesn’t matter what it acts like, it has no soul and can’t be really human and so there. (And don’t ask me to show you this soul.)
Those believing in free will will assume the AI has free will, and thus whether our minds are deterministic or not does not matter - we have it also.
Those who don’t believe in free will will say that the development of an AI just like us, who cannot possibly have free will since we can look at the code, demonstrates that we don’t have free will either.
So absolutely nothing will be resolved. Like always.
If you were god, you could. Your mind is the creator of your dream world, so if you controlled your mind you would be able to control the entire thing, not just the avatar. The avatar has exactly the same problem as we do - do you really decide what it does, or are you fooled into thinking you decide what it does? The only test would be to write down what you were going to make it do before you go to sleep, and write what happened as soon as you woke up.
Because it’s asleep?
I don’t personally call that an AI, since it is a simulation of an existing brain and personality, not an artificial personality. It would certainly pass the Turing Test. YMMV.
No, babies become self aware while still babbling. But the very littlest babies don’t seem to be. It takes some additional brain development. Surely you don’t think embryos are conscious, do you? Why would they become so at birth?
Too optimistic? I actually agree with all of this! (With the caveat that there may be some shifting of people from one camp to another when the news comes out - not everyone is locked into their preconceived opinions.)
So yeah, I agree with this. I agree with this so much that it actually sounds sort of familiar.
If I were god, I could - and if my dreams were the creation of my conscious mind, I could consciously control them. But they’re not and so I can’t. So, the test you propose wouldn’t be useful or demonstrate anything. (Ask someone who claims to be able to ‘lucid dream’, maybe, but not me.)
It seems reasonable to say that I’m as conscious in my dreams with respect to the dreamed surroundings as I am while awake with respect to reality. Which doesn’t mean that I mightn’t be a puppet both when asleep and awake, but there seems to be no reason to believe that one mode of consciousness is a puppet and the other is not. Heck, the fact that you can even vaguely remember one mode from the other indicates that the same cognitive mechanisms are doing the remembering in both cases.
Chuckle chuckle. Now, I’ll freely admit that there are some states of sleep where the consciousness is deactivated - dreamless sleep for an obvious one. No sense of the passage of time, no memory of thoughts, no nothing - now, that’s unconsciousness. (Or at least, if there’s any sort of consciousness happening, it’s completely separate from our regular consciousness and memory and so we don’t know about it.) But come on - we are consciously aware of our dream while dreaming it. How’s that going to happen if no consciousness is occurring?
MMDoesV. It’s artificial. It’s an intelligence. Q.E.D. And additionally, for any computer-hosted brain-simulation, an identical one could theoretically have been designed and implemented from scratch, and you would call that one an AI - the identical thing, simply with a different origin.
Why would they become so after birth, as opposed to sometime before? I’m not arguing for a magical event at the moment the head pops out; there’s a good number of months between being a single fertilized cell and kicking your daddy in the head when he puts his ear to mommy’s belly.
Personally I’ve never seen a baby that appeared to lack a consciousness, and even with my fairly limited experience with newborn babies I simply can’t believe for an instant that humans are born mindless. Pretty much completely ignorant of everything, including their own bodies, sure. Stupid too, especially as compared to the average forty-year-old. But that’s not the same thing as being non-sentient.
I read your response after I had written mine already. Great minds, etc.
Why are the actions of your avatar any different from the environment? I think we can’t control either.
We certainly dream a lot without remembering it, and only remember the ones we dreamed just before we wake up, with the patterns still in our brain no doubt. Now, our subconscious mind certainly works while we sleep, monitoring for noises and working on problems.
The brains of babies continue to develop after birth - our heads are almost too big to get out as it is, and any more growth would be very disadvantageous.
Non-conscious is not the same thing as mindless. Neglecting the difference in mobility, my kids at a few weeks were no more aware than my dog. They started recognizing things, they had grasp and startle reactions, and could eat, but sentience? Not for a while. We tend to read sentience into them, of course.
I think I can consciously control my avatar because I’ve done it. I think the only way to undermine that would be to attempt to undefine “conscious control” so far that nobody awake or asleep has it. I don’t think I’d accept such definitions, myself.
We know for a hard fact that when we sleep our brains don’t shut down entirely into a corpselike inert state. At the very least subconscious mental processes of some kind continue to run. Objectively speaking is there anything preventing your brain from ‘reactivating’ the ‘conscious’ aspect of your mind and running it through dream-scenarios?
What, precisely, are the definitions of “conscious” and “sentient” you’re using? 'Cause by the ones I’m using, your dog certainly qualifies. (Unless it’s unconscious, in which case it’s not conscious.)
How do you know when “a while” has passed, and your kids have finally attained the lofty state of sentient-as-you-define-it? Apparently just acting sentient isn’t good enough, because that’s just us “reading sentience into them” - so if it’s not something that can be assessed behaviorally, how do you know when they have it?
Actually, when I’m halfway between sleeping and waking, my conscious mind is activated. I can tell since I start to do problem solving on whatever issue I’m dreaming about. That’s also about the time you wake up, realize that you don’t have to deal with the problem because it is in a dream, and are very relieved.
I suppose you can write down some generic action which you’ll do in your dream. If you are in a sleep lab, you can get woken up from a dream state and report if you remembered to do this. I don’t see any other way of showing that the “conscious” action of your dream was real.
Recognition of themselves in a mirror is a standard test. By conscious I mean self-aware - actively thinking about your thoughts. That’s the difference between my conscious and subconscious mind. When I solve a problem consciously, I plan every step and can relate them. When I do it subconsciously, I have no access to how the problem got solved. It’s the difference between having the code for a sort routine and watching the algorithm work step by step, and calling an object-only subroutine which just returns the sorted array.
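In code terms, that contrast might look something like this (a sketch with made-up names, just to make the analogy concrete):

```python
# "Conscious" problem solving: a sort where every intermediate step is
# observable as the algorithm runs.
def traced_insertion_sort(items):
    data = list(items)
    steps = []
    for i in range(1, len(data)):
        j = i
        while j > 0 and data[j - 1] > data[j]:
            data[j - 1], data[j] = data[j], data[j - 1]
            steps.append(list(data))  # record each intermediate state
            j -= 1
    return data, steps

result, trace = traced_insertion_sort([3, 1, 2])

# "Subconscious" problem solving: an opaque call that only hands back the answer.
opaque = sorted([3, 1, 2])

print(result == opaque)  # True
print(trace)
```

Same answer either way - but only one path shows its work.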
This doesn’t align with my dream experiences - there are no points in my dreams where I suddenly acquire an alternate perspective on the dream experience, even as I am waking. The closest I get is when the dream stops entirely, and my alarm is ringing, and I’m like, ‘dangit! I wanna go back to the dream!’ (My dreams tend to be pleasant, or at least entertaining.)
I also recognize and react to the situation in the dream throughout the entire dream - is this what you mean by ‘problem-solving’? I don’t always assume real-world rules apply while I’m dreaming, though; sometimes I seem to be operating with a limited or altered memory or awareness of the rules, though I have also on occasion recognized that it was a dream and continued on dreaming, playing out the dream within the limits and constraints of the dream world.
Couldn’t this test fail if you simply didn’t remember the generic action once you got into the dream? I forget stuff in minutes in real life, and dreams are a somewhat altered state beyond that (though still a conscious state, at least in my dreams.)
A standard test? By whose standard? “Self awareness” doesn’t mean “being aware of what your physical self looks like.” Recognition of yourself in a mirror requires not just self-awareness, but also the ability to extrapolate that another visually-perceived entity is related to yourself despite not actually being yourself and being inaccessible to all your senses but one - and until they actually learn what they look like, they have to make this association based on nothing but the fact it mimics their movements.
Seems like a rather steep test - but then, you’re looking for something that you can only actually test by verbally asking them what they’re thinking. (If they can answer coherently, they can “actively think about their thoughts” and therefore pass by your definition.) And of course, I don’t think this is the correct definition of ‘self-awareness’ anyway; doesn’t self-awareness really just mean that the entity, itself, has awareness? Not awareness of the self, but awareness as a self?
I would think that any sort of complicated reaction to the world at all that isn’t driven by puppet strings or an external driver would be enough to demonstrate self-awareness by that definition. Something like turning the head to track a moving object would probably be more than enough, I’d think.
Most of the time I just wake up. I spend a lot of my time dreaming about running or attending conferences, and in my dream there is often some kind of problem with the conference, or the environment, or something. I never solve these in the dream, but do in the semi-waking state; and then realize the effort was pointless. I am also often naked in my dream, which, while a bit disturbing, certainly doesn’t cause the reaction it would in real life (I’m not even close to being a nudist.)
Yeah. This test could never disprove your contention, but might give strong evidence for it.
I think that they use the mirror test for chimps or other animals. The point is that unless you have a sense of self you can never imagine that the thing in the mirror is you.
I disagree with your definition. Turning one’s head to track something is a simple response to stimuli, which even quite simple animals can do. Though it’s a matter of definition again, I think self awareness is definitely awareness of self. Something without that can learn by what happens when they act in response to stimuli. A really smart non-self aware animal might try new things, and see how they work. But only a creature with self awareness can build models of the world inside their heads, put themselves inside, and experiment without doing anything external. Only a self aware creature can critique their own thought processes. That seems to me to be fundamentally different.
Err, maybe you should cut back on the conferences a little. They may be getting to you.
My dreams tend to be fairly cinematic, and my influence on them is sometimes quite limited; in my dream last night (which I’ve already forgotten most of the content of) I did not seem to have a physical avatar I could control. I was moved from ‘scene’ to ‘scene’ independently of my will, able to observe what I was shown, and with only the ability to speak to use to interact with the other characters of the scenario.
I did not remember our discussion while actually dreaming, (or much of anything else about my waking identity), and so did not conduct any tests.
So, you’re actually defining “self awareness” as “can build models of the world inside their heads, put themselves inside, and experiment without doing anything external.” To avoid us running in circles chasing our tails on this, I will state that:
1) I reject that as a definition of “self awareness”, since that’s nothing like what I understand the word to mean (I think it has something to do with the existence of a self with awareness, debatably a self with awareness of the self, as could be demonstrated by knowing that they can move the self’s thumb into the self’s mouth for something to suck on),
2) computers can do that already, for loose values of “put the self inside” (they can include the machine they’re running on in the simulation),
3) nothing about the mirror test suggests this is occurring,
4) there is no way to test for this without being able to simply interrogate the subject, and
5) there is no way to show that infants do not do this.
Ever been the general chair of a big conference? It is PTSD.
Self aware of self is the critical factor. Building internal models falls out of that ability.
1) is purely definitional. Under your definition, dogs are self aware; under mine they are not.
2) Computer simulations I’ve written, and ones I’m aware of, just do the hardware, and very little software, since running software on a simulator is too darn slow. Getting a chunk of the kernel simulated takes weeks. (I’m talking a detailed simulation, not instruction level.) Animals certainly do have awareness of their bodies, which would correspond to a computer simulating its own hardware. For self awareness, the computer would have to simulate its software, including the simulator, and have the ability to modify the code and simulate the results of the modification. It would also have some chunks of code not visible in the simulation, corresponding to the subconscious. I’m not aware of anyone who has tried such a trick, but that might teach us more about real AI than all the heuristics written in the past 30 years.
The mirror test only detects the possibility of self awareness, and says nothing about internal model building, which would be very primitive at this stage. Interrogating the subject is preferable, but the mirror test is to explore the region between non-self aware and self aware and talking - in other words to find where the self aware and not talking region begins. There are other signs of self awareness long before speech, but the mirror test is for the minimal amount.