I think that purely abstract modeling and imaginative thought are a step or two beyond the minimal cognitive awareness that “that thing there (arm) has things on it that I can jam in my mouth (fingers, thumb), and I can move it to my mouth, and then I can suck on it” requires. I do think that thumb-sucking is demonstrative of my definition of ‘self-awareness’, which is more basic, but I do not think that it is indicative of the higher-level abstract modeling definition, not quite.
Block-stacking probably is, though.
Thanks, but I think I’ll pass.
Correct. I believe that the term “self-aware” is a useful term to describe the difference between being alive but mindlessly insensate (like your average plant) and being alive with that little ‘spark’ of active internal awareness that we humans have. I do not think the term was ever meant or commonly used to describe “at least this clever, and not a hair less clever than that” - that’s a much less generally useful distinction.
You’re moving the goalposts - a moment ago you only required “can build models of the world inside their heads, put themselves inside, and experiment without doing anything external.” This does not include awareness or modeling of the self in any sort of intricate detail such as simulating hardware - we humans don’t even have conscious awareness of any but the most prominent of our internal workings. We do not simulate at the cellular level - why should a computer have to?
All that a computer in, say, the space shuttle would have to do to meet your initial definition would be to model the gross aerodynamic properties of the entire ship, with only the controls that relate to the ship’s physical movement, and then run simulations of various different flight paths and approaches it could make (and then presumably select the ‘best’ one to carry out by some selection criteria). Do you seriously claim that computers of today couldn’t be programmed to do that?
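To make it concrete, here is a minimal sketch in Python of the simulate-then-select loop I’m describing. The “aerodynamics,” the numbers, and the scoring are all stand-ins I invented for illustration - real flight software would be vastly richer - but the structure (internal model, imagined experiments, pick the best) is the whole trick:

```python
import math

# Hypothetical, drastically simplified "model of the ship": two made-up
# parameters standing in for a real aerodynamic model.
GLIDE_RATIO = 4.5       # meters forward per meter of descent (invented)
APPROACH_SPEED = 150.0  # meters per second (invented)

def simulate_approach(bank_angle_deg, flare_altitude_m):
    """Run one imagined approach; return (touchdown error, sink rate)."""
    # Toy physics: banking costs glide ratio; flaring trades speed for sink rate.
    effective_glide = GLIDE_RATIO * math.cos(math.radians(bank_angle_deg))
    sink_rate = max(APPROACH_SPEED / effective_glide - 0.02 * flare_altitude_m, 0.0)
    touchdown_error_m = abs(effective_glide * 1000 - 4200)  # 4200 m = aim point
    return touchdown_error_m, sink_rate

def score(touchdown_error_m, sink_rate):
    # Selection criterion: land near the aim point with a gentle sink rate.
    return touchdown_error_m + 100.0 * sink_rate

# "Experiment without doing anything external": try each candidate plan
# purely inside the model...
candidates = [(bank, flare) for bank in range(0, 45, 5)
                            for flare in range(10, 60, 10)]
best = min(candidates, key=lambda plan: score(*simulate_approach(*plan)))

# ...and only then act on the winner.
print(f"Chosen plan: bank {best[0]} degrees, flare at {best[1]} m")
```

Nothing in that loop requires exotic hardware.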
The mirror test may be used for something like this in practice, but since passing it relies on a great deal more than self-awareness, I hope that any actual scientists who use it have a better idea of what they might be able to learn from it. As previously mentioned, you have to learn what you look like for this to work - a full adult presented with a simulated mirror reflection of an unfamiliar appearance would likely take several seconds to figure it out. Expecting an infant to do equally well means that this test will still be failed long after actual self-awareness (even by your overzealous definition) is in place.
I’m going to have to read about that when I have a chance. I studied computational complexity decades before anyone thought of quantum computing. Interesting - thanks for the cite.
I’d prefer “aware” to refer to the awareness animals have, and “self-aware” for that special case where the ego is in the model. And I don’t think cleverness has anything to do with it. It is quite possible for non-self-aware animals to be more clever than self-aware people.
My bad. I implicitly assumed that the ego was involved in the self-model, not just the body. Animals are certainly aware of their positions in space. For example, my border collie mix can unwind himself when he wraps his leash around a tree, and do it with purpose. My dogs, at least, have object permanence also.
I’d expect that seeing the results of one’s movements in the mirror has something to do with it - like Groucho and Harpo’s mirror scene in Duck Soup. I am certainly not claiming that a mirror test can be passed at the exact moment of self-awareness. I’m not an expert, but it seems like the minimal known test at the moment, though not the minimal possible test.
I’m thinking about the origin of self-awareness in humanity. Did it spring out of nothing? Doesn’t seem likely. Perhaps the internal modeling of the world, without the modeling of awareness, was more advanced than in any other animal. Then, by a small advance, thought processes got included in the model. That would be the kind of incremental change you’d expect to see. Perhaps the modeling of others’ fairly rich emotional lives, which would be advantageous, led to the extension of the same ability to internal thoughts. If you were guessing whether the chief (or a potential mate) was in a good or a bad mood, you might apply that ability to yourself.
Just speculating.
What peculiar definitions. Are you arguing for some functional difference between the mental processes of humans and animals? (Other animals, that is.) I have always thought that the difference was basically a matter of capability and capacity, not a significant difference in type. We have grey matter, they have grey matter, after all…
You’re not making much of a case that the difference between human and animal cognition is of a categorical nature rather than a difference in degree, here.
Given that humans don’t actively simulate their own cognition to any level of detail either (if we did, we’d know how it worked!), I think it’s safe to say that a computer could model “what would I do with these inputs” at least as well as we do - by re-using much of the same actual hardware and just faking the inputs, if necessary. It would probably be more accurate than when humans say “If you got me mad I’d kick your butt” - humans tend to overlook a lot of incidental details in their simulations.
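As a toy sketch of what I mean (the decision rule and the inputs are invented for illustration, not taken from any real control system):

```python
# A system "simulating itself" by running its own real decision code on
# faked inputs. The policy below is invented purely for illustration.
def decide(temperature, pressure):
    """The actual decision routine used in live operation."""
    if pressure > 100:
        return "vent"
    if temperature > 80:
        return "cool"
    return "idle"

def what_would_i_do(hypothetical_temperature, hypothetical_pressure):
    # No separate model of the decision-making machinery is needed:
    # re-use the identical code path and just fake the inputs.
    return decide(hypothetical_temperature, hypothetical_pressure)

print(decide(72, 95))            # live sensor readings -> "idle"
print(what_would_i_do(72, 120))  # imagined inputs      -> "vent"
```

Unlike the human bar-fight braggart, this self-prediction is exact, because the simulation and the real thing share the same machinery.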
Given that you seem to be trying to test for something I’m not sure even exists, I withdraw any speculation over whether the ability to rapidly recognize a mirror-entity as being an extension of yourself is the minimal proof of anything.
Out of curiosity, do all animals fail the mirror test? Given unlimited time, I mean. Will a dog never get used to a ground-level mirror in a hallway?
(Trying to wrap my mind around your peculiar definitions) Presuming for a moment that animals don’t have two or three brain cells devoted to wondering if they’re irritated enough to bite you yet (in which case we would merely have greatly increased and expanded our mental metaprocessing capabilities), I would assume that the ability to speculate about your own behaviors would develop concurrently with the ability to speculate about others of your own species, and probably at a similar pace to your ability to speculate about things that are not in your species. Humans are a kind of tribal bunch, after all; being able to anticipate and react to the actions of the others in your herd would be at least as necessary a survival trait as recognizing whether it was likely to rain or whatever.