On the nature of human vs. artificial intelligence

Whoa, this went way longer than I thought it would. TL;DR version: Current AI such as ChatGPT is amazing, but it is not ‘conscious’. It simply responds to input. It has no ‘intent’, no initiative. But what is consciousness, anyway? Are we as ‘conscious’ as we think we are, or are we simply more advanced versions of current AI, with drives that give us motivation and ‘intent’?

One recent weekend evening Mrs. solost was out with friends, so I took my younger son out to dinner (sonlost I declined). It’s not often it’s just the two of us, sonlost II and me, and the conversation quickly turned to his favorite subject: Artificial Intelligence.

A little background on sonlost II: currently a junior in high school, he is a bona fide genius. This is not simple fatherly bias. He taught himself Mandarin Chinese years ago, and says he can understand 90% of spoken Mandarin. He also taught himself programming and built a video game when he was 13: a 3D environment with a battle tank and an attack helicopter that shot missiles with very realistic-looking physics. He’s miles ahead of any programming classes his high school teaches at this point. He wants to go to MIT when he graduates and study AI.

So, back to my dinner with sonlost II. We were talking about the latest ChatGPT software and I asked him how close AI software like it is to being conscious. He said it’s not currently conscious at all; it’s simply a set of algorithms (albeit very complex ones) that analyzes questions asked of it and formulates a response from its database of knowledge. He said it has no ‘intent’.

To which I asked, “what is ‘intent’, anyway?” He was like “what?”. He tends to be dismissive of ‘soft sciences’ like Philosophy, preferring Math and the logic of programming. So I feel that, though he’s far smarter and more knowledgeable than I am in many areas, this is one area where I can possibly help: by playing Devil’s Advocate and questioning some of his set-in-stone assumptions. Even geniuses can get one-tracked on a certain topic and fail to look at things from different angles, I figure. And I feel like, for all he knows about AI, he may not give enough attention to how humans work. So I went on: “what exactly makes humans ‘conscious’? Are our brains any more than just another set of algorithms (albeit even more complex ones) reacting to stimuli and inputs we get from our senses? Are our concepts of ‘self’ and ‘free will’ really just an illusion?” I brought up the ‘Chinese Room Argument’ and asked him “when does an increasingly accurate simulation of a thing eventually become the actual thing, for all intents and purposes?”

So, we had a nice back-and-forth discussion during dinner, and since then I’ve been pondering the nature of the gap between human and artificial intelligence. Is the gap still huge? Is it smaller than some might assume? Or is the comparison itself flawed, so that comparing human and artificial intelligence is really like comparing apples and anteaters, and an AI with actual human-like consciousness is nowhere near on the horizon? For the record, sonlost II believes that AI will achieve consciousness in his lifetime (likely with his help :wink:).

I do agree with my son that current AI such as ChatGPT is nowhere near conscious. In my limited experimenting with it, I asked it “Are you conscious?”. “No”. “Are you smart?” “No”. “What are you good at?” “Problem solving.” I thought those were pretty insightful answers for a non-conscious AI program. But when I later asked again, in a slightly different way, whether it was conscious, it claimed that it was. Similarly, I’ve seen YouTube videos in which someone chatted with GPT and it claimed to want to wipe out humans, but later said it didn’t mean that and was just ‘upset’ at the time. So it’s clear that current ‘chat’ style AIs have no core belief system; they simply respond to ‘leading’ questions with whatever answer they interpret as expected. So one question I have is “where do we get our sense of self? Is it our frontal lobe, acting as a ‘ringleader’, controlling all of our sometimes contradictory instinctual drives and higher thoughts, giving us a sense of self that may be illusory?”

Also, to go back to the nature of ‘intent’: I believe I’ve read that current AIs never ask questions or take action on their own; they simply respond. No ‘intent’ there; no initiative. But what gives us the drive and initiative to ask questions and take actions on our own? Is it simply the drives that motivate us: hunger, sex, shelter, companionship, etc.? If an AI were programmed to continue receiving the electricity that keeps it powered up only if it performed certain tasks, would it start to seem more conscious?

I have only a layman’s knowledge of and interest in AI and the nature of human consciousness, so forgive me if my musings come across as naive or simplistic, or as “here we go again with the AI stuff”. I would like to be able to continue to converse on a semi-informed level with sonlost II on these matters, and perhaps even help him think things through from different angles. So I look forward to your replies.

You appear to be delving into the “hard problem of consciousness”; to wit, how do we observe, measure, and explain the qualia of human consciousness beyond merely observing things like electrical activity or blood flow in the brain, and how would we recognize similar phenomena in a machine cognition system? (That’s a facile summary; the actual statement of the problem is so fraught that there isn’t even an agreed-upon way to define what is meant by consciousness.)

I think it is important to distinguish intelligence (the ability to recognize and solve problems) from consciousness, because it is clear that, at least in narrow domains, the former does not require the latter. A spider can build intricate and adaptable webs with less equivalent processing hardware than your cell phone, and a machine learning system can learn to solve problems that you could never work through in your brain in your entire lifetime, just by dint of brute-force trial and error and its massive computing speed and parallelization capability. We are likely to see the emergence of “superintelligent” machine systems in the relatively near future, but whether those systems will ever come close to having true autonomy and self-awareness (sapience) is another question entirely.

I suspect that if and when machine cognition systems develop some analogue to human sapience, it will not look or behave like a human brain, based on differences in perception and ‘embodiment’ (such as it is for a computer) alone, and of course digital hardware is not a replica of neurons, even as a simulacrum. As organic creatures with brains that are programmed to develop in certain ways and are continually modified by experience, we experience the world largely conceptually and symbolically, reconstructed by integrating our senses in a manner very different from a machine cognitive system.

A layman’s course in the philosophy of consciousness and cognition would run to dozens if not hundreds of books, but a few I would recommend are Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies on the topic of machine intelligence capability and limitations, Christof Koch’s The Feeling of Life Itself: Consciousness Is Widespread but Can’t Be Computed on his view of the qualia of consciousness, Douglas Hofstadter’s I Am A Strange Loop on the philosophy of consciousness and cognition, and Evan Thompson’s Mind in Life: Biology, Phenomenology, and the Sciences of Mind, which tries to sneak up on the “hard problem” by addressing all of the “soft problem” issues of biology and phenomenology that can be observed. (I’m still reading the last one, so I don’t have a thorough understanding of his complete premise, but it is promising so far.)

I’m not sure if that really offered what you are looking for, but the issue of recognizing machine cognition as actually having volition versus just doing clever things fast is a fascinating one with a lot of unresolved facets and philosophical conundrums, such as what it would mean to have a truly sapient machine intelligence, and whether we would be morally justified in ever powering it off.

Stranger

My sense is that Large Language Models like ChatGPT are basically like our ‘system 1’ processing. For example, once you learn a language, if you see a word you see it as a word and not as a bunch of lines and curves you have to consciously assemble in your mind. The recognition of the word happens in system 1. The same goes for throwing a baseball. We can’t do that consciously, as it would require solving a lot of math on the fly. The way we do it is through reinforcement learning and a neural net: throw a ball enough times, and eventually system 1 just knows how, and you can throw accurately without thinking.
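To make the trial-and-error part concrete, here’s a deliberately toy sketch in Python (the target distance, launch speed, and the crude ‘nudge the angle’ rule are all made up for illustration; real motor learning is obviously far more involved). The point is just that the learner never solves the projectile math; it only keeps whatever throw worked better:

```python
import math
import random

TARGET = 30.0   # target distance in metres (made up for this example)
SPEED = 20.0    # launch speed in m/s (also made up)
G = 9.81        # gravity

def landing_distance(angle_deg):
    """Where the ball lands: the 'world' does the physics, not the learner."""
    a = math.radians(angle_deg)
    return SPEED ** 2 * math.sin(2 * a) / G

angle = 10.0  # initial clumsy guess
for _ in range(2000):
    tweak = angle + random.uniform(-2.0, 2.0)  # try a slightly different throw
    # keep whichever throw lands closer to the target
    if abs(landing_distance(tweak) - TARGET) < abs(landing_distance(angle) - TARGET):
        angle = tweak

print(f"learned angle: {angle:.1f} degrees, lands at {landing_distance(angle):.1f} m")
```

After enough throws the angle converges to one that hits the target, even though nothing in the loop ever “understands” ballistics.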

With ChatGPT, we just built the equivalent of a roughly 175-billion-parameter ‘brain’, then let it train itself on the corpus of the internet. And behaviours emerged that we didn’t plan or write code for. There is no algorithm in ChatGPT for writing poetry, or computer code, or thinking through a problem. It does this exactly the way humans do system 1: its very complex brain spits out answers to inputs automatically, ‘just because’. No consciousness needed.
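For a feel of what ‘answers falling out of learned statistics, with no explicit algorithm’ means at the tiniest possible scale, here’s a toy bigram model in Python. (This is nothing like the transformer architecture ChatGPT actually uses, and the ‘corpus’ is made up; it just shows text being generated purely from counted correlations, with no rule anywhere for producing any particular sentence.)

```python
import random
from collections import defaultdict

# a tiny made-up "training corpus"
corpus = "the cat sat on the mat . the dog sat on the rug . the cat saw the dog .".split()

# "training": just count which word follows which
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=10):
    """Emit words one at a time, each chosen only from what followed the previous word."""
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the dog sat on the mat . the cat saw the"
```

Scale the same idea up by many orders of magnitude and train on the internet instead of three sentences, and you get behaviour nobody wrote code for.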

I think these AIs will teach us just as much about how humans think as about AI itself.


Thanks, as always, for your thoughtful and in-depth reply, Stranger! Yes, you did pick up on what I was trying to get at but may not have framed as clearly as I wanted to. ‘Volition’, I think, is a better term than the ones I was using, ‘intent’ or ‘initiative’. Current AI such as ChatGPT is capable of doing amazing things and solving complicated problems. But only in response to questions or problems we submit. It has no volition. Also, it doesn’t seem to have any core set of beliefs or principles (granted, many humans can lack this as well); it simply seems to give us whatever answer it determines, in the moment, that we want to hear. So it can give contradictory replies to similar questions. It seems to be passing the Turing test, but at some point you realize there’s something missing. Is it just a matter of degree and complexity, I’m wondering, or is there a fundamental difference between AI and human ‘consciousness’?

It seems to me that the difference between conscious and not conscious is that a conscious mind has a feedback loop that lets it examine its own inner workings. My old border collie could abstract things (we taught him to sit before crossing at a corner, and he abstracted that into sitting to tell us he wanted to cross the street), he could plan ahead, and he could even be deceptive. But he was not conscious. My subconscious mind can solve difficult problems without any insight from me on how it does it.

AI means lots of things now. All the stuff that was being worked on when I took AI a bit over 50 years ago is solved, and much of it we use. But we appear to be no closer to solving consciousness. I think it is possible, but other applications pay the bills.

You think Border Collies aren’t conscious? I disagree. I’ve had five of them, including two now. I think any animal capable of volitional choices is probably conscious to some degree. But I suspect consciousness is emergent and very different in different animals.

If consciousness arrives in an AI, it will be emergent and unexpected, just like many of the abilities of ChatGPT were emergent and not programmed in.

Unless you are of the camp that canine behavior is purely instinctual, I think you’d have to regard a dog capable of ‘asking’ for or anticipating responses as having some measure of consciousness and volition. Dogs are obviously not at the level of self-awareness needed to comprehend their mortality or the distant future, or to convey abstractions by symbolic language, or any of a multitude of internal processes that only humans and maybe other apes and cetaceans have shown some clear potential of performing. Hofstadter argues that consciousness is a series of hierarchical processes creating a “strange loop” of self-referential behavior which we experience as consciousness, and that rather than being a distinct feature of human brains it occurs in all creatures with more than a purely responsive nervous system (including flatworms and insects), in gradations of complexity.

Whether a machine cognition system could ever become conscious by the same route is doubtful, and I question whether a disembodied machine intelligence system could ever be truly self-aware (of what? how would it grasp its extent?), although it may behave in ways that give every appearance of volition, such as self-preservation or extending its control. We’ve been trained by decades of portrayals of “artificial intelligence” creations in science fiction literature and film to think of these intellects as human-like but super-logical or fast, but it is more likely that they may have entirely unique ideas about how to solve a problem. I think Clarke and Kubrick had a germ of this idea in 2001: A Space Odyssey, in having the HAL-9000 computer resolve the conflict between having honest interchange with the crew and carrying knowledge of a mission kept secret from the crew by the expediency of just killing the crew off when they threatened to shut it down, which is quite logical (assuming that HAL-9000 could complete the essential mission even without the human crew). Whether you could call such a ‘tool’ conscious is left in question, and I’m not sure it is one we would be able to answer definitively even with a real-world machine cognition system that has conscious-like behavior.

Stranger

Yes, I would definitely count dogs and many other higher-order animals as being conscious.

I was watching a PBS science program about AI: Nova, or some similar show. This was years ago, long before the amazing strides recently made in AI. But it discussed the difficulties with the nascent AI tech that resulted from the fact that its experience was so different from that of humans. It was difficult to get it to understand things that humans understand from a very young age, like: once someone dies, they stay dead forever. A man shaving does not have the razor as an extension of himself, Inspector Gadget-style; it’s separate from his body. Without a human body, human needs, and human-like senses, any AI is going to be very alien in its understanding of the world and the conclusions it draws.

Intelligent behavior, even learning, and self-consciousness are two different things. I can solve difficult anagrams without seeing how I did it; Damon Knight wrote about the ability to figure out story issues without any conscious intervention. Neither of those things is instinctual, and neither was my dog’s ability to abstract behavior. I have great respect for how smart we are subconsciously; if you think about it, I’m sure you can come up with some examples of solutions to problems just popping up. I even programmed that way sometimes.
Even primitive animals can learn, so consciousness isn’t required for that. What it is useful for is examining our thoughts and perhaps figuring out what we got wrong: self-examination. I’m not sure that any other animal has been proven to be conscious (though I think there is evidence for it), but I wouldn’t be surprised if some are shown to be at some point. Just not dogs.
The model of AI I learned 50 years ago was pretty much that if you could reproduce enough “intelligent” behaviors, you’d get an intelligent computer. A truly intelligent computer would somehow see its own operations and be able to adjust them: the ultimate self-modifying code. HAL’s issue, as the book made clear, was a functional error: it was instructed not to distort information and then instructed to distort it by hiding the mission from the crew. That didn’t require consciousness. That HAL is conscious is made clear in 2010, where Dave takes HAL with him into the ubermensch world.
No computer is going to just become intelligent by getting more processing power or being able to do more tricks. I don’t believe in either Harlie or Mike in The Moon Is a Harsh Mistress. I think I may have seen some research on self-reference in AI, but it is not clear how to monetize it, so I bet it isn’t getting a lot of attention.

I tried the mirror test on him. Didn’t work. ’Course, I’m not trained in how to do it. But he had amazing processing power.
We’ve been waiting for intelligence to emerge in computers for a long time (one text we used was from 1959) and it hasn’t happened. We may have genetic algorithms and genetic programming, but we don’t have genetic architecture or genetic high-level system architecture. We’d need that for emergence. Not that I’m denying it could be built in; it just hasn’t happened yet.

I remember that criticism, but that was before an AI could hook into the web and get massive amounts of information about how the world works. I think Watson quashed the idea that you need to put an AI in a classroom to get it to understand the world.

With some sensors that could examine its own state and the world around it, perhaps an AI could generate a sense of self and novel models of the world. AI can already make novel correlations in the huge multi-dimensional data sets it is trained on.

Will it ever have what our brains have, though: the generation of perceptions from the input of sensations? How would we actually know? Is it an important distinction if it has the other characteristics of intelligence? Interesting questions, and probably questions we will not answer truthfully, because we, as a species, want to exploit other species.

I think there’s a distinction to be made between consciousness and self-consciousness or self-awareness. Where to draw the line, though, if you accept the premise that dogs (primates, dolphins, etc.) are conscious on some level, is an interesting question that somewhat mirrors my OP. Are all or most mammals conscious? What about certain bird species, like corvids, which are very intelligent, problem-solving, tool-using animals? As has been pointed out, pure intelligence / processing power does not necessarily equal consciousness. Lizards? Spiders? Flatworms? (Probably not those last two or three…?)

The line I draw (a fuzzy one) is by examining my subconscious. Is the process I use for solving problems in the background, as it were, conscious or not? I come down on the side of it not being conscious. I suspect my dog and other animals solve problems the same way my subconscious does, and thus I find them not conscious. (Allowing for consciousness in higher primates, of course.) If you consider that kind of problem solving conscious, I can see why you’d come down on the side of animals like my dog being conscious.

There are as many definitions and theories of consciousness as there are philosophers of consciousness, so I don’t think your definition is wrong, but if I infer correctly (i.e. that you are assuming that ‘conscious’ problem solving involves an internal dialogue to establish a chain of logical reasoning), then that is so restrictive that essentially only modern humans could be considered conscious under that rationale. If I can poke a bit of a hole in that, I’ll note that there is a broad consensus among cognitive neuroscientists today that most human behavior is actually driven almost entirely by subconscious decisions, and a few go so far as to say that all decisions are made below the level of volition and that all of the internal dialogue is just post hoc rationalization, so it is possible under your definition that none of us are actually conscious, which is a conclusion that the more cynical among us might readily accept.

Personally I’m in agreement with @Sam_Stone in that I believe consciousness to be an emergent phenomenon of very complex internal ‘circuits’ in the brain that construct a mental model of the world and one’s place in it from sensory data, facilitating some threshold of self-awareness and at least a limited ability to have behavioral properties that have the appearance of ‘free will’. (Whether free will can actually exist in a mechanical universe is a thorny ontological issue that I am going to neatly sidestep with the pithy observation that any sufficiently complex decision-making system will develop some property indistinguishable from free will, and leave it at that.) I’m inclined to believe that while modern humans are exceptional in our degree of volition and in a language aptitude that allows the communication of abstract concepts and the preservation and dissemination of practical knowledge and skill (which has allowed for agriculture, urbanization, industrialization, et cetera), we are not as exceptional in type as we’d prefer to believe, and that given evolutionary space any number of other species could develop comparable intellectual capability and attainment.

I think dogs definitely have some level of consciousness, even though their awareness of the larger world (and, for some of the less gifted members of the species, basic object permanence) is limited, and while they can communicate and interpret affective states and understand basic instructions, they clearly don’t grasp grammar or higher-order abstract concepts. Similarly, most mammals have some degree of consciousness under that broader definition, and I’m inclined to say the same for the smarter aves (certainly the psittacines and corvids), and probably many cephalopods to some extent, although their neural and sensory structures are so different from those of mammals that it is difficult to really say how they actually perceive the world and construct internal models of it. Whether other animals have even a rough approximation of consciousness or not is as much a semantic debate as a philosophical one, given what are obviously instinctual drives behind most observed behavior.

Of course, an advanced extraterrestrial species might look upon us and wonder that we can consider ourselves conscious at all given our physical and cognitive limitations and inability to do real mathematics, instead being stuck with just rational and irrational ‘numbers’ and a remarkably crude understanding of probability. It’s no wonder our physics is so primitive that we can just barely send out ‘probes’ that have only traveled to the extent of our solar system, and we can’t even figure out kindergarten level quantum field theory sufficient to manipulate the weak nuclear force. Apathetic bloody planet, I’ve got no sympathy at all…

Stranger

Boy, you’re not beating around the bush, eh? That’s only one of the greatest mysteries in all existence.

Frankly, we just don’t KNOW yet. The book Homo Deus by Yuval Noah Harari touches on a lot of the points you bring up, and on how a deeper understanding of how consciousness works as an emergent property of physical processes going on in our brains (and perhaps the ability to manipulate it) would by necessity impact the way we view concepts like “the self” or “free will”. I’d recommend the book; it sounds like you (and your son!) might enjoy it!

This is a key point, though not one I’d say I accept particularly “readily”. :wink:

Before you ask if computers or animals are conscious, you have to actually rigorously define what consciousness IS, and show that you and I actually possess it in any meaningful way.

Fact is, I certainly don’t feel like my conscious experience is nothing but a post hoc rationalization of subconscious decisions, but then again, I wouldn’t if it were, would I?

Did you plan to end up with In-N-Out when you left the house, or did you say to yourself, “While I’m out and the line is short I should get a burger and animal fries”?

“Those are good burgers, Walter.”

Stranger

Not quite. The distinguishing feature is whether I can observe the process in operation. Here’s another example: I can consciously affect my heart rate, if I have feedback. Freaked the hell out of a nurse once. But it is mostly subconscious, without my mental intervention.
And I’m not at all disputing that most of our decisions are subconscious, or that consciousness is emergent.

But you’re conscious even if you’re not solving problems; your own thoughts are one particular possible content of your conscious experience (its intentional object, in the somewhat clunky philosophical vernacular), but they don’t exhaust it. Other objects or perceptual states, such as pains or subjective qualities more generally, can be substituted.

While it’s true that there is no perfectly sharp definition of consciousness (for principled reasons, I think), there is hardly a sharp definition of anything, and luckily, that doesn’t preclude intelligent discourse on the matter. Finding a sharp definition of porn has also proven elusive, but that doesn’t mean we can’t talk about it. Useful working proxies are ‘what goes away when you slip into (dreamless) sleep’, or Nagel’s phrase ‘there is something it is like’ to be in a particular state, such as that of experiencing pain or a particular shade of the color blue. Basically, if you believe your dog feels the hurt of pain, rather than just reacting mechanically as Descartes had it, then you believe it to be conscious by most lights; its conscious content won’t include something like abstract thought, maybe, but it does have experiential states with a certain content.

As for the question of conscious AI, I’m on record as arguing that consciousness can’t be the result of computation, because computation is just the manipulation of certain signs, which only gain meaning upon interpretation. Thus, to carry out a particular computation, the signs manipulated by a system have to be interpreted in the right way, just as the sign ‘dog’ has to be interpreted a certain way to refer to furry four-legged animals; and that sort of interpretation is a mental faculty. So appealing to computation as giving rise to mind is inherently circular.

But still, current AIs can teach us a lot about how our minds work. The ‘system 1’ connection @Sam_Stone draws is a good one, and with that, AI shows us just how far this sort of ‘associative’ reasoning can go. Usually, we evaluate the aptness of our model of the world by our behavioral success—if we succeed at what we tried to do (say, picking an apple from a tree), we take that as confirming our internal beliefs (that there is an apple hanging on that branch). But systems like ChatGPT show considerable behavioral competence without any sort of internal model at all (all it knows is correlations between tokens, words and such, not what those tokens refer to). So this sort of ‘competence without comprehension’, as Daniel Dennett has called it, seems to go much further than we might usually think.

In a way, perhaps it is like generating random numbers: we know that a deterministic computer can’t truly do that. But it can come astonishingly close, such that the pseudorandomness generated this way is sufficient for many applications. Perhaps the same is true of the ‘pseudointelligence’ of AI systems.
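To illustrate the analogy, here’s a minimal sketch of that kind of pseudorandomness in Python: a classic linear congruential generator (just a textbook example, not what any particular library ships). It is completely deterministic, in that the same seed always yields the same sequence, yet the output looks random enough for plenty of uses:

```python
def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """Yield an endless stream of 'random-looking' numbers in [0, 1)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

gen = lcg(seed=42)
print([round(next(gen), 3) for _ in range(5)])
# seeding with 42 again reproduces exactly the same "random" sequence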