Part of the problem here is that it’s very difficult to come up with definitions for many of the terms surrounding these topics that hold up objectively. For instance, with regard to intelligence, it used to be thought that if a computer could perform certain tasks, like playing chess, as well as or better than a human, it was intelligent. But now we have computers that can consistently defeat the best chess players in the world, and “intelligent” is not how we’d describe them.
By the same token, what does it mean to be self-aware? By certain definitions, my computer or my car is self-aware. It seems like there’s some line that humans have crossed and that, depending on which characteristics you use, perhaps a few other mammals are treading around, but that’s it. Yet even though dolphins can, say, recognize their own reflections, can we meaningfully say whether they are or are not self-aware? So, in the end, we’re stuck with something along the lines of “like us” or “we know it when we see it”. But how do we even judge those?
There’s always the Turing Test: if we cannot linguistically differentiate a person from a computer, then we ought to assume they’re qualitatively identical, that both are conscious. I’m not sure I buy that, though. We’ve got some systems that can model language at some level, but they’re still far from passing that test, and all they’re really doing is recognizing patterns: certain patterns of language call for certain patterns in response. They don’t actually “understand” the language.
Anyway, there are a few different ideas I’ve heard that seem to have some scientific support. The one that probably has the most behind it, though it isn’t something we can reasonably test yet, is that consciousness is an emergent property of certain types of complex systems, of which the human brain is one. This fits with how animals with more complex brains seem closer to what we’d describe as conscious than others, and it also goes some way toward explaining how our own consciousness evolved, both as a species and over our individual lives: as our neural connections become more complex, our memories and personalities come into focus. This approach would also seem to imply that we could eventually create artificial consciousness if we could develop the right kind of complexity in hardware and software.
Unfortunately, I’m not really sure how we’d go about proving this sort of theory. We can attempt to create artificial consciousness, but unless we succeed, we won’t know whether we failed because our modeling of the brain is wrong or because the underlying theory itself is wrong.
Another theory I’ve heard recently, though only at a high level, is that the brain is at least partially a quantum computer and that consciousness is somehow related to quantum non-locality. It was very fascinating and seemed to fit with a lot of my own ponderings on the topic, but without more depth on the theory or much knowledge of quantum physics, I can’t say anything about its validity beyond what was presented.
I first heard about it on this season’s premiere episode of Through the Wormhole, “Is There Life After Death?”, in which they were trying to figure out what consciousness is. I sort of assume that because it’s a new episode, the information is unlikely to be outdated, but maybe someone else who knows more and saw it can comment on the topic.